.. _glossary:
============================
Glossary
============================
.. glossary::
Setuptools
`Setuptools <http://peak.telecommunity.com/DevCenter/setuptools>`_
builds on Python's ``distutils`` to provide easier building,
distribution, and installation of packages.
Interface
An attribute of a model object that determines its type. It is an
instance of a ``zope.interface`` Interface class.
Zope
`The Z Object Publishing Framework <http://zope.org>`_. The granddaddy
of Python web frameworks.
ZODB
`The Zope Object Database <http://wiki.zope.org/ZODB/FrontPage>`_
which is a persistent object store for Python.
Field index
A type of index that is optimized to index single simple tokenized
values. When a field index is searched, it can be searched for
one or more values, and it will return a result set that includes
these values exactly.
Text index
A type of index which indexes a value in such a way that parts of
it can be searched in a non-exact manner. When a text index is
searched, it returns results for values that match based on
various properties of the text indexed, such as omitting
"stopwords" the text might have.
Facet index
A type of index which can be used for faceted search.
Path index
A type of index that keeps track of documents within a graph;
documents can be searched for by their position in the graph.
zope.index
The `underlying indexing machinery
<http://pypi.python.org/pypi/zope.index>`_ that
:mod:`repoze.catalog` uses.
zope.app.catalog
The `cataloging implementation
<http://pypi.python.org/pypi/zope.app.catalog>`_ on which
:mod:`repoze.catalog` is based (although it doesn't use any of
its code).
Virtualenv
An isolated Python environment. Allows you to control which
packages are used on a particular project by cloning your main
Python. `virtualenv <http://pypi.python.org/pypi/virtualenv>`_
was created by Ian Bicking.
CQE
A string representing a Python-like domain-specific-language
expression which is used to generate a query object.
Query Object
An object used as an argument to the :meth:`repoze.catalog.Catalog.query`
method's ``queryobject`` parameter.
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/glossary.rst | glossary.rst |
.. _usage:
Using :mod:`repoze.catalog`
===========================
:mod:`repoze.catalog` is an indexing and search system for Python. It
is inspired by (and uses much code from) Zope's
:term:`zope.app.catalog`, and uses other :term:`Zope` libraries to do
much of its work. It manages its own persistence: it stores catalog
information into a :term:`ZODB` database.
In order to make use of :mod:`repoze.catalog`, your application must
create objects that are willing to be indexed; it is responsible for
assigning each of these objects a unique integer identifier and for
maintaining the association between each object and its identifier for
the lifetime of your application.
Objects which are willing to be indexed must either have a particular
attribute which is guaranteed to have a value *or* you must provide a
callback that is willing to inspect the content for a value.
The result of searching a catalog is a sequence of integers that
represent all the document ids that match the query. Your application
is responsible for being able to (re-) resolve these integers into
content objects.
Indexing
--------
Here's a simple example of indexing data within your application.
This example sets up two indexes.
The first index, ``flavors``, is a :term:`field index`. The second
index, ``texts``, is a :term:`text index`.
.. literalinclude:: code/index_attributes.py
:linenos:
:language: python
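In outline, such an attribute-based setup might look like the following
sketch (the index and attribute names used here are assumptions)::

   from repoze.catalog.catalog import Catalog
   from repoze.catalog.indexes.field import CatalogFieldIndex
   from repoze.catalog.indexes.text import CatalogTextIndex

   # Each index is handed the name of the attribute it should read from
   # objects passed to index_doc.
   catalog = Catalog()
   catalog['flavors'] = CatalogFieldIndex('flavor')
   catalog['texts'] = CatalogTextIndex('text')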
Here's a more complicated example. It uses callbacks to adapt
cataloged objects to values rather than directly inspecting attributes
of the content object. We use the same types of indexes as the
previous example, but we set up callbacks that allow us to adapt
content to a result instead of examining the object for an attribute
directly. This is useful in the case that your content objects don't
have attributes that match exactly what you want to index:
.. literalinclude:: code/index_callbacks.py
:linenos:
:language: python
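In outline, a callback-based setup might look like the following sketch
(the callback logic and attribute names are illustrative)::

   from repoze.catalog.catalog import Catalog
   from repoze.catalog.indexes.field import CatalogFieldIndex
   from repoze.catalog.indexes.text import CatalogTextIndex

   # A callback discriminator receives the object being indexed and a
   # default; it returns the value to index, or the default when the
   # object has no suitable value.
   def get_flavor(obj, default):
       return getattr(obj, 'taste', default)

   def get_text(obj, default):
       return getattr(obj, 'about', default)

   catalog = Catalog()
   catalog['flavors'] = CatalogFieldIndex(get_flavor)
   catalog['texts'] = CatalogTextIndex(get_text)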
Searching
---------
Searching for values from a previously indexed corpus of content is
significantly easier than indexing. There are a number of ways to
perform searches.
Search Using the :meth:`repoze.catalog.Catalog.query` Method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The suggested way to perform searches is to use the
:meth:`repoze.catalog.Catalog.query` method. This method accepts a
number of arguments:
``queryobject``
A query object or a string representing the query.
``sort_index``
The name of the index used to sort the results.
``limit``
Limit the number of results returned to this argument, which should be
an integer. This is only used if ``sort_index`` is also specified.
``reverse``
Reverse the order of the result sequence if this is ``True``. Only used
if ``sort_index`` is also specified.
For example::
from repoze.catalog.catalog import FileStorageCatalogFactory
from repoze.catalog.catalog import ConnectionManager
from repoze.catalog.query import Eq
factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
manager = ConnectionManager()
catalog = factory(manager)
numdocs, results = catalog.query(Eq('flavors', 'peach'))
print (numdocs, [ x for x in results ])
The above search will find the documents in the corpus which have a
value in the ``flavors`` index matching ``peach``.
The :meth:`repoze.catalog.Catalog.query` method will return a
two-tuple, with the first element in the sequence being the length of
the result set, and the second element being the result set
itself. Our above example will print::
(1, [1])
The first element in the tuple is the length of the result set (the
integer ``1``, in this case).
The second element in the tuple is the result set. It has one item.
This item is the document id for the content we indexed. Your
application is responsible for resolving this document identifier back
to its constituent content.
.. warning:: The result set is only guaranteed to be an iterable. It
   will not always be of any particular type, nor will it always be
   sliceable; for example, it may be a generator.
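Resolving the returned document ids back to content objects is the
application's job; a minimal sketch (the ``objects`` mapping and
``peach_object`` are hypothetical names maintained by the application)::

   objects = {1: peach_object}  # kept up to date when index_doc is called

   numdocs, results = catalog.query(Eq('flavors', 'peach'))
   matching = [objects[docid] for docid in results]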
You can also combine query objects, using boolean operations, to search
multiple indexes:
.. code-block:: python
:linenos:
from repoze.catalog.catalog import FileStorageCatalogFactory
from repoze.catalog.catalog import ConnectionManager
from repoze.catalog.query import Eq
factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
manager = ConnectionManager()
catalog = factory(manager)
numdocs, results = catalog.query(
Eq('flavors', 'peach') & Eq('texts', 'nutty'))
print (numdocs, [ x for x in results ])
The above search will return the following::
(0, [])
This is because no document in our index has both a flavor of
``peach`` and text containing the word ``nutty``.
You can sort the result set using ``sort_index``. The value of
``sort_index`` should be the name of an index which supports being
used as a sort index::
from repoze.catalog.query import Range
numdocs, results = catalog.query(
Range('flavors', 'peach', 'pistachio'),
sort_index='flavors')
print (numdocs, [ x for x in results ])
Would result in::
(2, [1, 2])
The default sort order is ascending. You can reverse the sort using
``reverse``::
from repoze.catalog.query import Range
numdocs, results = catalog.query(
Range('flavors', 'peach', 'pistachio'),
sort_index='flavors',
reverse=True)
print (numdocs, [ x for x in results ])
Would result in::
(2, [2, 1])
Query Objects
!!!!!!!!!!!!!
The value passed as the ``queryobject`` argument to
:meth:`repoze.catalog.Catalog.query` may be one of two distinct types:
- a "raw" :term:`query object`
- a "CQE" string representing a domain-specific-language expression
which will be used to *generate* a :term:`query object`. "CQE"
stands for "catalog query expression".
For example, you can construct a raw query object using Python, and
pass it as ``queryobject`` to the :meth:`repoze.catalog.Catalog.query`
method:
.. code-block:: python
:linenos:
from repoze.catalog.query import Eq
results = catalog.query(Eq('index_name', 'value'))
Or you can allow repoze.catalog to construct a query object on your
behalf by passing a *string* as ``queryobject``.
.. code-block:: python
:linenos:
from repoze.catalog.query import Eq
catalog.query('index_name == "value"')
The above string is a CQE. A "CQE" is a string representing a Python
expression which uses index names and values. It is parsed by the
catalog to create a query object.
.. warning:: CQE strings are not supported on Python versions < 2.6.
Whether a query object is used directly or query objects are generated
as the result of a CQE, an individual query object will be one of two
types: a comparator or a boolean operator. A comparator performs a single
query on a single index. A boolean operator allows results from
individual queries to be combined using boolean operations. For example:
.. code-block:: python
:linenos:
from repoze.catalog.query import And, Eq, Contains
query = And(Eq('author', 'crossi'), Contains('body', 'biscuits'))
In the above example, ``And`` is a boolean operator, and both ``Eq`` and
``Contains`` are comparison operators. The resulting query will search two
indexes, ``author`` and ``body``. Because the individual comparators are
passed as arguments to the ``And`` set operator, the result becomes all
documents which satisfy *both* comparators.
All query objects overload the bitwise and (``&``) and or (``|``) operators
and can be combined using these. The above query could also have been written
as follows:
.. code-block:: python
:linenos:
query = Eq('author', 'crossi') & Contains('body', 'biscuits')
.. note:: Although it would be more intuitive to use the boolean operators,
``or`` and ``and`` for this rather than bitwise operators, Python does not
allow overloading boolean operators.
Query objects may also be created by parsing a :term:`CQE` string.
The query parser uses Python's internal code parser to parse CQE query
expression strings, so the syntax is just like Python::
mycatalog.query("author == 'crossi' and 'biscuits' in body")
The query parser allows name substitution in expressions. Names are
resolved using a dict passed into
:meth:`repoze.catalog.Catalog.query`::
author = request.params.get("author")
word = request.params.get("search_term")
query = mycatalog.query("author == author and word in body",
names=locals())
Unlike true Python expressions, ordering of the terms in a CQE
expression is important for comparators. For most comparators the
``index_name`` must be written on the left. The following, for
example, would raise an exception::
query = mycatalog.query("'crossi' == author")
Note that not all index types support all comparators. An attempt to
perform a query using a comparator that is not supported by the index
being queried will result in a NotImplementedError being raised when
the query is performed.
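A defensive sketch of handling that case (whether a given comparator is
supported depends on the type of the index being queried)::

   from repoze.catalog.query import Gt

   try:
       numdocs, results = catalog.query(Gt('body', 'biscuits'))
   except NotImplementedError:
       numdocs, results = 0, []  # the 'body' index does not support Gt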
Comparators
!!!!!!!!!!!
The supported comparator operators are as follows:
Equal To
########
Python::
from repoze.catalog.query import Eq
Eq(index_name, value)
CQE::
index_name == value
Not Equal To
############
Python::
from repoze.catalog.query import NotEq
NotEq(index_name, value)
CQE::
index_name != value
Greater Than
############
Python::
from repoze.catalog.query import Gt
Gt(index_name, value)
CQE::
index_name > value
Less Than
#########
Python::
from repoze.catalog.query import Lt
Lt(index_name, value)
CQE::
index_name < value
Greater Than Or Equal To
########################
Python::
from repoze.catalog.query import Ge
Ge(index_name, value)
CQE::
index_name >= value
Less Than Or Equal To
#####################
Python::
from repoze.catalog.query import Le
Le(index_name, value)
CQE::
index_name <= value
Contains
########
Python::
from repoze.catalog.query import Contains
Contains(index_name, value)
CQE::
value in index_name
Does Not Contain
################
Python::
from repoze.catalog.query import DoesNotContain
DoesNotContain(index_name, value)
CQE::
value not in index_name
Any
###
Python::
from repoze.catalog.query import Any
Any(index_name, [value1, value2, ...])
CQE::
index_name == value1 or index_name == value2 or etc...
index_name in any([value1, value2, ...])
index_name in any(values)
Not Any (aka None Of)
#####################
Python::
from repoze.catalog.query import NotAny
NotAny(index_name, [value1, value2, ...])
CQE::
index_name != value1 and index_name != value2 and etc...
index_name not in any([value1, value2, ...])
index_name not in any(values)
All
###
Python::
from repoze.catalog.query import All
All(index_name, [value1, value2, ...])
CQE::
index_name == value1 and index_name == value2 and etc...
index_name in all([value1, value2, ...])
index_name in all(values)
Not All
#######
Python::
from repoze.catalog.query import NotAll
NotAll(index_name, [value1, value2, ...])
CQE::
index_name != value1 or index_name != value2 or etc...
index_name not in all([value1, value2, ...])
index_name not in all(values)
Within Range
############
Python::
from repoze.catalog.query import InRange
InRange(index_name, start, end,
start_exclusive=False, end_exclusive=False)
CQE::
index_name >= start and index_name <= end
start < index_name < end
Not Within Range
################
Python::
from repoze.catalog.query import NotInRange
NotInRange(index_name, start, end,
start_exclusive=False, end_exclusive=False)
CQE::
index_name <= start or index_name >= end
not(start < index_name < end)
Boolean Operators
!!!!!!!!!!!!!!!!!
The following set operators are allowed in queries:
And
###
Python (explicit)::
from repoze.catalog.query import And
And(query1, query2)
Python (implicit)::
query1 & query2
CQE::
query1 and query2
query1 & query2
Or
##
Python (explicit)::
from repoze.catalog.query import Or
Or(query1, query2)
Python (implicit)::
query1 | query2
CQE::
query1 or query2
query1 | query2
Not
###
Python (explicit)::
from repoze.catalog.query import Not
Not(query1)
CQE::
not query1
Search Using the :meth:`repoze.catalog.Catalog.search` Method (Deprecated)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning::
The :meth:`repoze.catalog.Catalog.search` method is deprecated as of
:mod:`repoze.catalog` 0.8. Use :meth:`repoze.catalog.Catalog.query`
instead.
We can pass a query into our catalog's ``search`` method, which is
composed of the name of our index and a value we'd like to find a
document for.
.. code-block:: python
:linenos:
from repoze.catalog.catalog import FileStorageCatalogFactory
from repoze.catalog.catalog import ConnectionManager
factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
manager = ConnectionManager()
catalog = factory(manager)
numdocs, results = catalog.search(flavors=('peach', 'peach'))
print (numdocs, [ x for x in results ])
The above search will find the documents in the corpus which have a
value in the ``flavors`` index matching ``peach``. Since the index is a
"field" index, its query arguments
are a "range" search: you can read ``('peach', 'peach')`` as "from
peach to peach". You could say ``('peach', 'pistachio')`` to find all
documents that are in the "range" from peach to pistachio.
The :meth:`repoze.catalog.Catalog.search` method will return a
two-tuple, with the first element in the sequence being the length of
the result set, and the second element being the result set itself.
Our above example will print::

  (1, [1])
The first element in the tuple is the length of the result set (the
integer ``1``, in this case).
The second element in the tuple is the result set. It has one item.
This item is the document id for the content we indexed. Your
application is responsible for resolving this document identifier back
to its constituent content.
You can also pass compound search parameters for multiple indexes.
The results are intersected to provide a result:
.. code-block:: python
:linenos:
from repoze.catalog.catalog import FileStorageCatalogFactory
from repoze.catalog.catalog import ConnectionManager
factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
manager = ConnectionManager()
catalog = factory(manager)
numdocs, results = catalog.search(flavors=('peach', 'peach'), texts='nutty')
print (numdocs, [ x for x in results ])
The above search will return the following::

  (0, [])
This is because no document in our index has both a flavor of
``peach`` and text containing the word ``nutty``.
See the :term:`zope.index` documentation and implementation for more
information about what specific index types expect for query
parameters.
You can also use a field index as a ``sort_index``, which sorts the
document ids based on the values for that docid present in that index::
numdocs, results = catalog.search(flavors=('peach', 'pistachio'),
sort_index='flavors')
print (numdocs, [ x for x in results ])
(2, [1, 2])
The default sort order is ascending. You can reverse the sort using
``reverse``::
numdocs, results = catalog.search(flavors=('peach', 'pistachio'),
sort_index='flavors',
reverse=True)
print (numdocs, [ x for x in results ])
(2, [2, 1])
If you use a sort index, you may choose to limit the number of results
returned. Do this by passing ``limit`` with an integer value of the
number of results you want. Note that this parameter has no effect if
you do not supply a ``sort_index``::
numdocs, results = catalog.search(flavors=('peach', 'pistachio'),
sort_index='flavors',
limit=1)
print (numdocs, [ x for x in results ])
(1, [1])
You may combine ``reverse`` and ``limit`` as necessary.
If a sort_index is used, and the sort index you're using does not
contain all the documents returned by the search, the ``numdocs``
value returned by ``search`` may be incorrect. There will be fewer
results than those indicated by ``numdocs`` in this circumstance.
When querying a text index, to sort the results by relevance, specify
the name of the text index as the sort index. The most relevant
results will be provided first, unless you specify reverse=True, in
which case the least relevant will be provided first.
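For example, reusing the ``texts`` index from the earlier examples::

   numdocs, results = catalog.search(texts='nutty', sort_index='texts')
   print (numdocs, [ x for x in results ])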
Document Map
------------
An implementation of a "document map" suitable for ZODB applications
exists within the ``repoze.catalog.document.DocumentMap`` class. A
document map allows you to map document ids to "addresses" (e.g. paths
or unique identifiers). See :ref:`api_document_section` in the API
documentation chapter for more information.
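A brief sketch of how a document map might be used alongside a catalog
(the address string and the ``peach_object`` content object are
illustrative)::

   from repoze.catalog.document import DocumentMap

   document_map = DocumentMap()
   docid = document_map.add('/ice_cream/peach')  # map an address to a new docid
   catalog.index_doc(docid, peach_object)        # index content under that docid

   # later, turn a query result back into an address
   address = document_map.address_for_docid(docid)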
Restrictions
------------
Values indexed by a :mod:`repoze.catalog` catalog cannot subclass from the
ZODB ``Persistent`` class. This is a safeguard to ensure that
irresolvable cross-database references aren't put into the catalog's
(separate) database.
Gotchas
-------
When the ``ConnectionManager`` 's ``commit`` method is called, it will
commit a transaction for all databases participating in Zope
transaction management. Don't use this method if you already have
transaction management enabled in another way.
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/usage.rst | usage.rst |
.. _index:
==============
repoze.catalog
==============
:mod:`repoze.catalog` is a Python indexing and searching framework.
It relies on :term:`zope.index` and most of its internals are taken
from :term:`zope.app.catalog`. Unlike ``zope.app.catalog``, however,
it is meant to be useful outside of the larger Zope framework within
arbitrary Python applications.
Narrative documentation
-----------------------
Narrative documentation explaining how to use :mod:`repoze.catalog`.
.. toctree::
:maxdepth: 2
overview
install
upgrade
usage
genealogy
glossary
changes
API documentation
-----------------
API documentation for :mod:`repoze.catalog`.
.. toctree::
:maxdepth: 2
api
Source Code and Issue Tracking
------------------------------
Source code is available from https://github.com/repoze/repoze.catalog
File bugs via https://github.com/repoze/repoze.catalog/issues
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
* :ref:`glossary`
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/index.rst | index.rst |
.. _api_catalog_section:
:mod:`repoze.catalog.catalog`
-----------------------------
.. automodule:: repoze.catalog.catalog
.. autoclass:: Catalog
:members:
.. automethod:: __setitem__
.. automethod:: __getitem__
Retrieve an index.
.. automethod:: get
Retrieve an index or return failobj.
:mod:`repoze.catalog.query`
---------------------------
Comparators
~~~~~~~~~~~
.. automodule:: repoze.catalog.query
.. autoclass:: Eq
.. autoclass:: NotEq
.. autoclass:: Gt
.. autoclass:: Lt
.. autoclass:: Ge
.. autoclass:: Le
.. autoclass:: Contains
.. autoclass:: DoesNotContain
.. autoclass:: Any
.. autoclass:: NotAny
.. autoclass:: All
.. autoclass:: NotAll
.. autoclass:: InRange
.. autoclass:: NotInRange
Boolean Operators
~~~~~~~~~~~~~~~~~
.. automodule:: repoze.catalog.query
.. autoclass:: Or
.. autoclass:: And
.. autoclass:: Not
Other Helpers
~~~~~~~~~~~~~
.. automodule:: repoze.catalog.query
.. autoclass:: Name
.. autofunction:: parse_query
.. _api_fieldindex_section:
:mod:`repoze.catalog.indexes.field`
-----------------------------------
.. automodule:: repoze.catalog.indexes.field
.. autoclass:: CatalogFieldIndex
:members:
.. _api_keywordindex_section:
:mod:`repoze.catalog.indexes.keyword`
-------------------------------------
.. automodule:: repoze.catalog.indexes.keyword
.. autoclass:: CatalogKeywordIndex
:members:
.. _api_textindex_section:
:mod:`repoze.catalog.indexes.text`
-----------------------------------
.. automodule:: repoze.catalog.indexes.text
.. autoclass:: CatalogTextIndex
:members:
.. _api_facetindex_section:
:mod:`repoze.catalog.indexes.facet`
-------------------------------------
.. automodule:: repoze.catalog.indexes.facet
.. autoclass:: CatalogFacetIndex
:members:
:mod:`repoze.catalog.indexes.path`
----------------------------------
.. automodule:: repoze.catalog.indexes.path
.. autoclass:: CatalogPathIndex
:members:
.. _api_document_section:
:mod:`repoze.catalog.document`
------------------------------
.. automodule:: repoze.catalog.document
.. autoclass:: DocumentMap
:members:
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/api.rst | api.rst |
Genealogy of :mod:`repoze.catalog`
==================================
All versions of Zope depend heavily on the Zope Object Database (ZODB).
Because ZODB is less a database and more a persistent object store (it
doesn't possess a query language; Python *is* its query language), it
has been necessary to create indexing and searching facilities for data
stored in ZODB.
The first iteration of searching and indexing for ZODB-based
applications (at least post-Principia, which had something named
Tabula, which I never actually used) was the ZCatalog. The ZCatalog
was entirely tied to Zope2, and still remains in heavy use today
within Zope 2 applications such as Plone.
The second iteration was ``zope.app.catalog``, which was a ZCatalog
do-over for Zope 3 applications.
Neither of these searching and indexing packages is particularly easy
to use outside of a Zope application. Each makes various assumptions
about the content objects that need indexing or the environment that
aren't appropriate for arbitrary applications. For instance, ZCatalog
wants objects you want to catalog to have a ``getPhysicalPath`` method
which returns a "path". An instance of ``zope.app.catalog`` makes the
assumption that it's located within a Zope 3 "site" object within
a ZODB, and assumes that you want query result sets to be sets of
Python references to the original object you indexed. In other words,
these packages assume too much to be maximally useful outside the
context in which they were developed. `Repoze <http://repoze.org>`_ is
a project which has as a stated goal making it easier for non-Zope
Python developers to use Zope technologies outside Zope, so this
seemed like a natural thing to do under the Repoze flag.
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/genealogy.rst | genealogy.rst |
A Tour of :mod:`repoze.catalog`
===============================
:mod:`repoze.catalog` borrows heavily from ``zope.app.catalog`` and
depends wholly on the ``zope.index`` package for its index
implementations, but assumes less about how you want your indexing and
querying to behave. In this spirit, you can index any Python object;
it needn't implement any particular interface except perhaps one you
define yourself conventionally. :mod:`repoze.catalog` does less than
any of its predecessors, in order to make it more useful for arbitrary
Python applications. It's implemented in terms of ZODB objects, and
the ZODB will store the derived index data, but it assumes little
else. You should be able to use it in any Python application. The
fact that it uses ZODB is ancillary: it's akin to Xapian using "flint"
or "quartz" backends.
Indexing
--------
To perform indexing of objects, you set up a catalog with some number
of indexes, each of which is capable of calling a callback function to
obtain data about an object being cataloged::
from repoze.catalog.indexes.field import CatalogFieldIndex
from repoze.catalog.indexes.text import CatalogTextIndex
from repoze.catalog.catalog import Catalog
def get_flavor(object, default):
return getattr(object, 'flavor', default)
def get_description(object, default):
return getattr(object, 'description', default)
catalog = Catalog()
catalog['flavors'] = CatalogFieldIndex(get_flavor)
catalog['description'] = CatalogTextIndex(get_description)
Note that ``get_flavor`` and ``get_description`` will be called for each
object you attempt to index. Each of them attempts to grab an
attribute from the object being indexed, and returns a default if no
such attribute exists.
Once you've got a catalog set up, you can begin to index Python
objects (aka "documents")::
class IceCream(object):
def __init__(self, flavor, description):
self.flavor = flavor
self.description = description
peach = IceCream('peach', 'This ice cream has a peachy flavor')
catalog.index_doc(1, peach)
pistachio = IceCream('pistachio', 'This ice cream tastes like pistachio nuts')
catalog.index_doc(2, pistachio)
Note that when you call ``index_doc``, you pass in a ``docid`` as the
first argument, and the object you want to index as the second
argument. When we index the ``peach`` object above we index it with
the docid ``1``. Each docid must be unique within a catalog; when you
query a :mod:`repoze.catalog` catalog, you'll get back a sequence of
document ids that match the query you supplied, which you'll
presumably need to map back to the content object in order to make
sense of the response; you're responsible for keeping track of which
objects map to which document id yourself.
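One simple way to keep that association, continuing the example above
(the ``contents`` dictionary is purely illustrative)::

   contents = {}
   contents[1] = peach
   contents[2] = pistachio
   # after a query, each returned docid can be resolved via contents[docid]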
Querying
--------
Once you've got some number of documents indexed, you can perform queries
against an existing catalog. A query is performed by passing a query argument
and optional keyword arguments to the ``query`` method of the catalog object::
from repoze.catalog.query import Eq
catalog.query(Eq('flavors', 'peach'))
The argument passed to ``query`` above is a :term:`query object`.
This particular query object is a :class:`repoze.catalog.query.Eq`
object, which is a *comparator* meaning "equals". The first argument
to the ``Eq`` object is an index name, the second argument is a value.
In English, this query represents "a document indexed in the
``flavors`` index with the value ``peach``". Other arguments to
:meth:`repoze.catalog.Catalog.query` may be special values that
specify sort ordering and query limiting.
In the above example, we specified no particular sort ordering or
limit, and we're essentially asking the catalog to return us all the
documents that match the word ``peach`` as a field within the field
index named ``flavors``. Other types of indexes can be queried
similarly::
from repoze.catalog.query import Contains
catalog.query(Contains('description', 'nuts'))
The result of calling the ``query`` method is a two tuple. The first
element of the tuple is the number of document ids in the catalog
which match the query. The second element is an iterable: each
iteration over this iterable returns a document id. The results of
``catalog.query(Contains('description', 'nuts'))`` might return::
(1, [2])
The first element in the tuple indicates that there is one document in
the catalog that matches the description 'nuts'. The second element
in the tuple (here represented as a list, although it's more typically
a generator) is a sequence of document ids that match the query.
You can combine search parameters to further limit a query::
from repoze.catalog.query import Contains, Eq
catalog.query(Eq('flavors', 'peach') & Contains('description', 'nuts'))
This would return a result representing all the documents indexed
within the catalog with the flavor of peach and a description of nuts.
Index Types
-----------
Out of the box, ``repoze.catalog`` supports five index types: field indexes,
keyword indexes, text indexes, facet indexes, and path indexes. Field indexes
are meant to index single discrete values. Keys are stored in order, allowing
for the full suite of range and comparison operators to be used. Keyword
indexes index sequences of values which can be queried for any of the values
in each sequence indexed. Text indexes index text using the
``zope.index.text`` index type, and can be queried with arbitrary textual
terms. Text indexes can use various splitting and normalizing strategies to
collapse indexed texts for better querying. Facet indexes are much like
keyword indexes, but also allow for "faceted" indexing and searching, useful
for performing narrowing searches when there is a well-known set of allowable
values (the "facets"). Path indexes allow you to index documents as part of a
graph, and return documents that are contained in a portion of the graph.
.. note:: The existing facet index implementation's narrowing support is
   naive. For performance reasons, it is not meant to be used to compute
   facet counts over more than, say, 30,000 documents.
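A sketch of constructing the keyword and facet index types, assuming each
takes a discriminator (an attribute name or a callback) as its first
argument, as the field and text indexes above do; the discriminator names
and the taxonomy below are illustrative::

   from repoze.catalog.indexes.keyword import CatalogKeywordIndex
   from repoze.catalog.indexes.facet import CatalogFacetIndex

   # keyword index: indexes a sequence of values per document
   catalog['topics'] = CatalogKeywordIndex('topics')

   # facet index: a keyword index plus a known taxonomy of facet values
   taxonomy = set(['style', 'style:gelato', 'style:sorbet'])
   catalog['facets'] = CatalogFacetIndex('facets', taxonomy)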
Helper Facilities
-----------------
:mod:`repoze.catalog` provides some helper facilities which help you
integrate a catalog into an arbitrary Python application. The most
obvious is a ``FileStorageCatalogFactory``, which makes it reasonably
easy to create a Catalog object within an arbitrary Python
application. Using this facility, you don't have to know anything
about ZODB to use :mod:`repoze.catalog`. If you have an existing ZODB
application, however, you can ignore this facility entirely and use
the Catalog implementation directly.
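Using the factory looks like the other examples in this documentation::

   from repoze.catalog.catalog import FileStorageCatalogFactory
   from repoze.catalog.catalog import ConnectionManager

   factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
   manager = ConnectionManager()
   catalog = factory(manager)
   # ... index and query the catalog here ...
   manager.commit()
   manager.close()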
:mod:`repoze.catalog` provides a ``DocumentMap`` object which can be
used to map document ids to "addresses". An address is any value that
can be used to resolve the document id back into a Python object.
In Zope, an address is typically a traversal path. This facility
exists in :mod:`repoze.catalog.document.DocumentMap`.
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/docs/overview.rst | overview.rst |
import glob
import math
import os
import random
import sys
import time
from zerodbext.catalog.catalog import ConnectionManager
from zerodbext.catalog.catalog import FileStorageCatalogFactory
from zerodbext.catalog.indexes.field import CatalogFieldIndex
from zerodbext.catalog.query import Eq
from zerodbext.catalog.query import BoolOp
_marker = object()
random.seed()
class Intersection1(BoolOp):
"""
Total cost: O(log(nk2, oob) + max(n1, nd2/nk2))
"""
def apply(self, catalog):
left = self.left.apply(catalog)
if len(left) == 0:
results = self.family.IF.Set()
else:
right = self.right.apply(catalog)
if len(right) == 0:
results = self.family.IF.Set()
else:
_, results = self.family.IF.weightedIntersection(left, right)
return results
class Intersection2(BoolOp):
"""
Implements algorithm2 above. In real life we wouldn't do this in the
Intersection operator--we'd do it in the apply_intersection() of the index
and wire the Intersection operator to use that.
Total cost: O(n1 * log(nd2, iob))
"""
def apply(self, catalog):
left = self.left.apply(catalog)
rev_index = catalog[self.right.index_name]._rev_index
value = self.right.value
results = self.family.IF.Set()
for docid in left:
if rev_index.get(docid, _marker) == value:
results.add(docid)
return results
def predictions(nd, nk1, nk2):
FUDGE = 17.0
## oob = 250 # OOBTree DEFAULT_MAX_BTREE_SIZE
## iob = 500 # IOBTree DEFAULT_MAX_BTREE_SIZE
oob = 125 # OOBTree wag avg bucket size
iob = 250 # IOBTree wag avg bucket size
L_FWD_LOOKUP_COST = FUDGE * math.log(nk1, oob)
R_FWD_LOOKUP_COST = FUDGE * math.log(nk2, oob)
L_REV_LOOKUP_COST = math.log(nd, iob)
AVG_L_RESULT_SIZE = float(nd)/nk1
AVG_R_RESULT_SIZE = float(nd)/nk2
MAX_INTERSECT_COST = max(AVG_L_RESULT_SIZE, AVG_R_RESULT_SIZE)
AVG_INTERSECT_COST = MAX_INTERSECT_COST / 2.0 #max(nk1/2, nk2/2) / 2
# Total cost: O(log(nk2, oob) + max(n1, nd2/nk2))
cost1 = L_FWD_LOOKUP_COST + R_FWD_LOOKUP_COST + AVG_INTERSECT_COST
# Total cost: O(n1 * log(nd2, iob))
cost2 = L_FWD_LOOKUP_COST + (AVG_L_RESULT_SIZE * L_REV_LOOKUP_COST)
return cost1, cost2
##def predictions(nd, nk1, nk2):
## s1 = nd / nk1
## s2 = nd / nk2
## if s1 <= s2 / 2:
## return 2.0, 1.0
## return 1.0, 2.0
def do_benchmark(fname, nd, nk1, nk2, out=sys.stdout):
cumulative1 = 0.0
cumulative2 = 0.0
print >>out, "Index 1:"
print >>out, "\t# docs: %d" % nd
print >>out, "\t# distinct keys: %d" % nk1
print >>out, "Index 2:"
print >>out, "\t# docs: %d" % nd
print >>out, "\t# distinct keys: %d" % nk2
print >>out, ""
cost1, cost2 = predictions(nd, nk1, nk2)
print >>out, 'Cost1: %0.2f' % cost1
print >>out, 'Cost2: %0.2f' % cost2
print >>out
print >>out, "Prediction:"
if cost1 > cost2:
print >>out, "Algorithm 2 %0.2f times faster than Algorithm 1" % (
cost1/cost2)
else:
print >>out, "Algorithm 1 %0.2f times faster than Algorithm 2" % (
cost2/cost1)
print >>out, ""
print >>out, "Setting up indexes..."
for fn in glob.glob(fname + "*"):
os.remove(fn)
manager = ConnectionManager()
factory = FileStorageCatalogFactory(fname, 'intersection')
catalog = factory(manager)
catalog['one'] = CatalogFieldIndex('one')
catalog['two'] = CatalogFieldIndex('two')
class Document(object):
def __init__(self, docid):
self.one = str(docid % nk1)
self.two = str(docid % nk2)
for docid in xrange(nd):
catalog.index_doc(docid, Document(docid))
manager.commit()
manager.close()
N_QUERIES = 1000
print >>out, "Running %d queries for each algorithm..." % N_QUERIES
catalog = factory(manager)
for _ in xrange(N_QUERIES):
key1 = random.randrange(nk1)
key2 = random.randrange(nk2)
query1 = Intersection1(Eq('one', str(key1)), Eq('two', str(key2)))
query2 = Intersection2(Eq('one', str(key1)), Eq('two', str(key2)))
start = time.time()
result1 = query1.apply(catalog)
cumulative1 += time.time() - start
start = time.time()
result2 = query2.apply(catalog)
cumulative2 += time.time() - start
s1 = sorted(list(result1))
s2 = sorted(list(result2))
assert s1==s2, (s1, s2)
manager.close()
for fn in glob.glob(fname + "*"):
os.remove(fn)
print >>out, ""
print >>out, "Result:"
print >>out, "Time for algorithm1: %0.3f s" % cumulative1
print >>out, "Time for algorithm2: %0.3f s" % cumulative2
if cumulative1 > cumulative2:
print >>out, "Algorithm 2 %0.2f times faster than Algorithm 1" % (
cumulative1/cumulative2)
else:
print >>out, "Algorithm 1 %0.2f times faster than Algorithm 2" % (
cumulative2/cumulative1)
return cost1 / cost2, cumulative1 / cumulative2
class Null(object):
def write(self, s):
pass
def _range_order_of_magnitude(n):
# Iterate over (at most) 3 orders of magnitude
n_magnitude = int(math.ceil(math.log10(n)))
lowest_magnitude = max(0, n_magnitude - 3)
for magnitude in xrange(lowest_magnitude, n_magnitude):
for i in xrange(1,10):
value = i * 10**magnitude
if value >= n:
break
yield value
def do_benchmarks(fname):
null = Null()
print "Cost of algorithm 1 / Cost of algorithm 2"
print "N Docs | N Keys 1 | N Keys 2 | Predicted | Actual | Correct"
for nd in [100, 1000, 10000, 100000, 1000000]:
for nk1 in _range_order_of_magnitude(nd / 2):
for nk2 in _range_order_of_magnitude(nd):
predicted, actual = do_benchmark(fname, nd, nk1, nk2, out=null)
correct = ((predicted >= 1 and actual >= 1) or
(predicted < 1 and actual < 1))
print "%6d | %8d | %8d | %9.2f | %6.2f | %s" % (
nd, nk1, nk2, predicted, actual, correct)
sys.stdout.flush()
# profile (unused right now)
def profile(cmd, globals, locals, sort_order, callers):
import profile
import pstats
import tempfile
fd, fn = tempfile.mkstemp()
try:
if hasattr(profile, 'runctx'):
profile.runctx(cmd, globals, locals, fn)
else:
raise NotImplementedError('No profiling support under Python 2.3')
stats = pstats.Stats(fn)
stats.strip_dirs()
# calls,time,cumulative and cumulative,calls,time are useful
stats.sort_stats(*sort_order or ('cumulative', 'calls', 'time'))
if callers:
stats.print_callers(.3)
else:
stats.print_stats(.3)
finally:
os.remove(fn)
def rerun_predictions(fname):
benchmarks = open(fname).xreadlines()
benchmarks.next(); benchmarks.next() # skip header lines
print "Cost of algorithm 1 / Cost of algorithm 2"
print "nd | nd/nk1 | nd/nk2 | Predicted | Actual | Correct"
gain = count = n_correct = 0
for line in benchmarks:
line = line.split('|')
nd = int(line[0].strip())
nk1 = int(line[1].strip())
nk2 = int(line[2].strip())
actual = float(line[4].strip())
cost1, cost2 = predictions(nd, nk1, nk2)
predicted = cost1 / cost2
correct = ((predicted >= 1 and actual >= 1) or
(predicted < 1 and actual < 1))
print "%6d | %8d | %8d | %9.2f | %6.2f | %s" % (
nd, nd/nk1, nd/nk2, predicted, actual, correct)
count += 1
if correct:
n_correct += 1
if cost1 < cost2:
# I picked algorithm1, so no net loss or gain
gain += 1.0
else:
# I picked algorith2, so note difference in performance
gain += actual
print "-" * 79
print "%% correct: %0.1f" % (n_correct * 100.0 / count)
print "%% performance gain: %0.1f" % ((gain / count - 1.0) * 100.0)
if __name__ == '__main__':
#do_benchmark('benchmark.db', 10000, 1000, 1000)
#do_benchmarks('/dev/shm/benchmark.db')
rerun_predictions('benchmarks.txt')
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/benchmark/intersection.py | intersection.py |
import cPickle
import math
import os
import random
import sys
import time
from pychart import theme
from pychart import canvas
from pychart import axis
from pychart import area
from pychart import line_plot
from pychart import legend
from pychart import text_box
from BTrees.IFBTree import IFSet
from zerodbext.catalog.indexes.field import fwscan_wins
from zerodbext.catalog.indexes.field import nbest_ascending_wins
theme.get_options()
theme.use_color = True
theme.scale_factor = 2
# db keys:
# 64
# 512
# 1024
# 2048
# 4096
# 8192
# 16384
# 32768
# 65536
class FieldIndexForwardSort:
""" Benchmark and compare the field index forward sort algorithms """
def __init__(self, limitbase=2, rlenbase=2, dbfn='sort.db', dbkey='65536'):
self.limitbase = limitbase
self.rlenbase = rlenbase
self.dbfn = dbfn
self.dbkey = dbkey
self.index = self.get_index()
self.numdocs = self.index._num_docs.value
self.dbkey = dbkey
# the set of rlens and limits are series generated via
# exponents of the base, e.g. [4, 16, 64,
# 256, 1024, 4096, 16384, 65536 ] if numdocs = 65708 and the
# base is 4
self.rlens = series(self.numdocs+1, self.rlenbase)
self.limits = series(self.numdocs+1, self.limitbase)
self.sorts = (
('nbest', self.index.nbest_ascending),
('fwscan', self.index.scan_forward),
('timsort', self.index.timsort_ascending)
)
def get_index(self):
if not os.path.exists(self.dbfn):
raise NotImplementedError # XXX create index-creation code
from ZODB.FileStorage.FileStorage import FileStorage
from ZODB.DB import DB
s = FileStorage(self.dbfn)
db = DB(s, cache_size=300000)
c = db.open()
root = c.root()
return root[self.dbkey]
def __call__(self):
if not os.path.exists('%s.pck' % self.dbkey):
self.bench()
self.chart()
def bench(self):
tf = open('%s.txt' % self.dbkey, 'w')
def output(msg):
tf.write(msg + '\n')
tf.flush()
print msg
all_docids = list(self.index._rev_index.keys())
random.shuffle(all_docids)
main = []
for rlen in self.rlens:
docids = IFSet(random.sample(all_docids,
min(rlen, len(all_docids))))
output('for %s' % rlen)
output('-----------------------------------')
control = []
for k, s in self.index._fwd_index.items():
for docid in s:
if docid in docids:
control.append(docid)
capture = {}
result = None
for name, fn in self.sorts:
data = capture.setdefault(name, [])
for limit in self.limits:
t, result = timer(fn, docids, limit)
result = list(result)
if control[:limit] != result:
raise AssertionError((control[:limit], result))
data.append(t)
output('%0.6f %s at limit %s' % (t, name, limit))
main.append({'rlen':rlen, 'capture':capture})
cPickle.dump(main, open('%s.pck' % self.dbkey, 'w'))
def chart(self):
self.main = cPickle.load(open('%s.pck' % self.dbkey))
for chartable in self.main:
self.detailchart(chartable)
sortnames = [ x[0] for x in self.sorts ]
for sortname1, sortname2 in product(sortnames, sortnames):
if sortname1 == sortname2:
continue
self.comparisonchart(sortname1, sortname2)
def detailchart(self, chartable):
theme.reinitialize()
min_y = 0
max_y = 0
capture = chartable.get('capture')
for sortname, sortfn in self.sorts:
data = capture[sortname]
m = median(data)
if m > max_y:
max_y = m
max_x = max(self.limits)
min_x = min(self.limits)
ipoints = 10.0
x_interval = (max_x - min_x) / ipoints
y_interval = (max_y - min_y) / ipoints
xaxis = axis.X(label='Limit',
tic_interval = x_interval,
format='/4{}%d')
yaxis = axis.Y(label='Seconds',
tic_interval = y_interval,
format='/4{}%0.3f')
ar = area.T(
x_range = (min_x, max_x),
y_range = (min_y, max_y),
x_axis = xaxis,
y_axis = yaxis,
legend = legend.T(),
)
tb = text_box.T(loc=(140,90), text='Rlen\n%s' % chartable['rlen'])
for sortname, sortfn in self.sorts:
data = capture[sortname]
linedata = [ (self.limits[x], data[x]) for x in range(len(data)) ]
ar.add_plot(
line_plot.T(label="%s" % sortname, data=linedata)
)
fd = open('detail-%s-%s.pdf' % (self.dbkey, chartable['rlen']), 'w')
can = canvas.init(fd, 'pdf')
ar.draw(can)
tb.draw(can)
can.close()
def comparisonchart(self, sortname1, sortname2):
linedata = []
test_total = 0
test_wrong = 0
for rlendata in self.main:
rlen = rlendata['rlen']
capture = rlendata['capture']
values1 = capture[sortname1]
values2 = capture[sortname2]
doc_ratio = rlen / float(self.numdocs)
cutoff = None
wins = []
#test = sortname1 == 'fwscan' and sortname2 in ('nbest', 'timsort')
#test_fn = fwscan_wins
test = sortname1 == 'nbest' and sortname2 == 'timsort'
test_fn = nbest_ascending_wins
for x in xrange(0, min(len(values1), len(values2))):
t1 = values1[x]
t2 = values2[x]
limit = self.limits[x]
limitratio = limit / float(self.numdocs)
won = t1 < t2
if won:
wins.append(limit)
wrongmsg = "wrong %s? rlen %s, limit %s (%0.5f > %0.5f)%s"
if test:
test_total += 1
curvewin = test_fn(limit, rlen, self.numdocs)
if won and (not curvewin):
extra = ''
if (t1 / t2) < .90: # more than 10% difference
extra = " * (%0.2f)" % (t1/t2)
print wrongmsg % ('curvelose', rlen, limit, t2, t1,
extra)
test_wrong +=1
elif (not won) and curvewin:
extra = ''
if (t2 / t1) < .90: # more than 10% difference
extra = " * (%0.2f)" % (t2/t1)
print wrongmsg % ('curvewin', rlen, limit, t1, t2,
extra)
test_wrong +=1
for limit in wins:
limitratio = limit / float(self.numdocs)
linedata.append((doc_ratio, limitratio))
if test:
if test_total:
test_right = test_total - test_wrong
test_percent = test_right / float(test_total)
print "test percentage %0.2f: (%s wrong out of %s)" % (
test_percent, test_wrong, test_total)
comparename = 'compare-%s-%s-beats-%s' % (self.dbkey,
sortname1, sortname2)
xaxis=axis.X(label='Doc Ratio (rlen//numdocs)',
tic_interval=.1,
format='/4{}%0.2f')
yaxis=axis.Y(label='Limit Ratio (limit//numdocs)',
tic_interval=.1,
format='/4{}%0.2f')
ar = area.T(
x_range = (0, 1),
y_range = (0, 1),
x_axis = xaxis,
y_axis = yaxis,
legend = legend.T(),
)
ar.add_plot(
line_plot.T(label="%s \nbeats \n%s" % (sortname1, sortname2),
data=linedata),
)
tb = text_box.T(loc=(140,90), text='Numdocs\n%s' % self.numdocs)
fd = open('%s.pdf' % comparename, 'w')
can = canvas.init(fd, 'pdf')
ar.draw(can)
tb.draw(can)
can.close()
def timer(fn, *args, **kw):
times = []
for x in xrange(7):
start = time.time()
result = fn(*args, **kw)
if not hasattr(result, '__len__'):
result = list(result)
end = time.time()
times.append(end-start)
return median(times), result
def isect(seq1, seq2):
res = [] # start empty
for x in seq1: # scan seq1
if x in seq2: # common item?
res.append(x) # add to end
return res
def product(*args):
# product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
# product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
pools = map(tuple, args)
result = [[]]
for pool in pools:
result = [x+[y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
def avg(numbers):
return sum(numbers) / len(numbers)
def median(numbers):
"Return the median of the list of numbers."
# Sort the list and take the middle element.
n = len(numbers)
copy = numbers[:] # So that "numbers" keeps its original order
copy.sort()
if n & 1: # There is an odd number of elements
return copy[n // 2]
else:
return (copy[n // 2 - 1] + copy[n // 2]) / 2
def series(numdocs, base):
exp = int(math.ceil(math.log(numdocs) / math.log(base)))
return [ pow(base, x) for x in range(1, exp) ]
def main(argv=sys.argv):
bench = FieldIndexForwardSort()
bench()
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/benchmark/sortbench.py | sortbench.py |
import os
import datetime
BENCHMARK_DATA_DIR='benchmark_data'
MAILLIST_INDEX='http://mail.python.org/pipermail/python-list/'
from zerodbext.catalog.catalog import FileStorageCatalogFactory
from zerodbext.catalog.catalog import ConnectionManager
from zerodbext.catalog.indexes.field import CatalogFieldIndex
from zerodbext.catalog.indexes.facet import CatalogFacetIndex
from zerodbext.catalog.indexes.text import CatalogTextIndex
from email.Parser import Parser
from rfc822 import parsedate_tz
import gzip,time
from urllib2 import urlopen
from HTMLParser import HTMLParser
from urlparse import urljoin
class Profiler(object):
"""This is a 'profiler' of sorts intended to let us find out how
long particular actions, of programmer interest, take to run,
total. Actions can be arbitrarily nested. This doesn't do
anything like the actual python profiler, which tells how much
time you end up spending in particular function calls
aggregated across all calls to particular functions. Both
kinds of data are useful and recommended for getting a handle
on where performance bottlenecks might be.
"""
def __init__(self):
self.action_root = TimedAction('Total')
self.action_stack = [ self.action_root, ]
def start(self, name):
action = TimedAction(name)
self.action_stack[-1].children.append(action)
self.action_stack.append(action)
print name
def stop(self, name=None):
if name is None:
self.action_root.stop()
return
action = self.action_stack.pop()
if action.name != name:
raise Exception( "Profiler action stopped out of sequence. "
"Expecting: %s" % action.name )
action.stop()
def print_stack(self):
self.action_root.print_action()
class TimedAction(object):
def __init__(self, name):
self.name = name
self.start_time = time.time()
self.end_time = None
self.children = []
def stop(self):
self.end_time = time.time()
def print_action(self,level=0):
indent = " ".join( [ "" for i in xrange(level+1) ] ) # Hacky, sorry
if self.end_time:
print "%s%s: %0.3f" % ( indent, self.name,
self.end_time - self.start_time )
else:
print "%s%s:" % ( indent, self.name )
for child in self.children:
child.print_action( level + 1 )
# Start profiling
profiler = Profiler()
def prep_catalog():
"""Download python mailing list, create new catalog and catalog
messages, if not done already.
"""
if not os.path.exists(BENCHMARK_DATA_DIR):
os.makedirs(BENCHMARK_DATA_DIR)
# Check to see if mailing list data already present
if len(get_mailbox_filenames()) == 0:
MailListSucker(MAILLIST_INDEX,BENCHMARK_DATA_DIR).suck()
# Create ZODB and index maillist messages, if not yet done
zodb_file = os.path.join(BENCHMARK_DATA_DIR, 'test.zodb')
if not os.path.exists(zodb_file):
# Create a catalog
manager = ConnectionManager()
factory = FileStorageCatalogFactory(
os.path.join(BENCHMARK_DATA_DIR,
'test.zodb'), 'benchmark' )
c = factory(manager)
# Create some indices
c['subject'] = CatalogFieldIndex(get_subject)
c['date'] = CatalogFieldIndex(get_date)
c['sender_email'] = CatalogFieldIndex(get_sender_email)
c['topics'] = CatalogFacetIndex(get_topics, topic_taxonomy)
c['text'] = CatalogTextIndex(get_text)
manager.commit()
# Loop over messages to get base line
profiler.start( "Loop over messages without indexing" )
for _ in MessageIterator():
pass
profiler.stop( "Loop over messages without indexing" )
profiler.start( "Index messages" )
id = 1
for msg in MessageIterator():
c.index_doc(id,msg)
id += 1
if id % 100 == 0:
manager.commit()
manager.commit()
manager.close()
profiler.stop( "Index messages" )
print "Indexed %d messages" % id
def get_mailbox_filenames():
return [ dir for dir in
os.listdir(BENCHMARK_DATA_DIR) if dir[-7:] == '.txt.gz' ]
# Adapter methods for indexing messages
def get_subject(msg,default):
subject = msg.get('Subject',default)
return subject
def get_date(msg,default):
date = msg.get('Date',default)
return date
def get_sender_email(msg,default):
sender_email = msg.get('From', default)
return sender_email
topic_taxonomy = set(['year'])
for year in (2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009):
topic_taxonomy.add('year:%s' % year)
for num in range(1, 13):
monthname = datetime.date(year, num, 1).strftime('%B')
topic_taxonomy.add('year:%s:%s' % (year, monthname))
def get_topics(msg, default):
date = msg.get('Date', default)
if date is default:
return default
try:
tt = parsedate_tz(date)
except:
return default
if not tt:
return default
year, mon, _, _, _, _, _, _, _, _ = tt
try:
year = int(year)
except:
return default
if year < 1900:
year = year + 1900
monthname = datetime.date(year, mon, 1).strftime('%B')
return ['year:%s:%s' % (year, monthname)]
def get_text(msg, default):
return msg.as_string()
class MailListSucker(HTMLParser):
BUFFER_SIZE = 64 * 1024
def __init__(self,url,out_dir):
HTMLParser.__init__(self)
self.url = url
self.out_dir = out_dir
def suck(self):
self.feed(urlopen(self.url).read())
self.close()
def blow(self):
raise NotImplementedError
def handle_starttag(self,name,attrs):
if name == 'a':
for name, href in attrs:
if name == 'href' and href and href[-7:] == '.txt.gz':
# Download file
href = urljoin( self.url, href )
print "Downloading %s..." % href
fname = href[href.rindex('/')+1:]
down = urlopen(href)
out = open( os.path.join( BENCHMARK_DATA_DIR,
fname ), "wb" )
buf = down.read(self.BUFFER_SIZE)
while len(buf):
out.write(buf)
buf = down.read(self.BUFFER_SIZE)
out.close()
down.close()
class MessageIterator(object):
"""Iterates over a messages in a series of gzipped mailboxes in the
benchmark data directory. Conveniently aggregates all messages
in all mailboxes into a single iterable.
"""
email_parser = Parser()
def __init__(self):
self.file_list = get_mailbox_filenames()
self._next_file()
def _next_file(self):
if self.file_list:
fname = self.file_list.pop(0)
# Read whole thing into memory and manipulate it.
# Not the most efficient but good enough for testing
print "load %s" % fname
self.messages = gzip.open(
os.path.join(BENCHMARK_DATA_DIR,fname)).read().split('\nFrom ')
else:
raise StopIteration
def next(self):
if not self.messages:
self._next_file()
message = self.messages.pop(0)
return self.email_parser.parsestr( message[message.index('\n')+1:] )
def __iter__(self):
return self
def run():
# Download mailbox archive of python mailing list and build
# catalog if needed
prep_catalog()
# Open a catalog
manager = ConnectionManager()
factory = FileStorageCatalogFactory(
os.path.join(BENCHMARK_DATA_DIR,'test.zodb'), 'benchmark' )
c = factory(manager)
# Do some searches
profiler.start( "unsorted retrieval" )
n, results = c.search(date=('0', 'Z'))
print '%d results ' % n
# Force generator to marshall brains
for result in results:
pass
profiler.stop( "unsorted retrieval" )
profiler.start( "repeat unsorted retrieval" )
n, results = c.search(date=('0', 'Z'))
print '%d results ' % n
# Force generator to marshall brains
for result in results:
pass
profiler.stop( "repeat unsorted retrieval" )
profiler.start( "sorted retrieval" )
n, results = c.search( date=('0', 'Z'), sort_index='subject' )
print '%d results ' % n
for result in results:
pass
profiler.stop( "sorted retrieval" )
profiler.start( "reverse sorted retrieval" )
n, results = c.search( date=('0', 'Z'), sort_index='subject', reverse=True )
print '%d results ' % n
for result in results:
pass
profiler.stop( "reverse sorted retrieval" )
profiler.start('limit to topic=year:2000')
n, results = c.search( topics=['year:2000'] )
print '%d results' % n
L = []
for result in results:
L.append(result)
profiler.stop( "limit to topic=year:2000" )
profiler.start('count limited to topic=year:2000')
print c['topics'].counts(L, ['year:2000'])
profiler.stop('count limited to topic=year:2000')
profiler.stop()
profiler.print_stack()
| zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/benchmark/benchmark.py | benchmark.py |
import random
from persistent import Persistent
from BTrees.IOBTree import IOBTree
from BTrees.OIBTree import OIBTree
from BTrees.OOBTree import OOBTree
import BTrees
_marker = ()
class DocumentMap(Persistent):
""" A two-way map between addresses (e.g. location paths) and document ids.
The map is a persistent object meant to live in a ZODB storage.
Additionally, the map is capable of mapping 'metadata' to docids.
"""
_v_nextid = None
family = BTrees.family32
_randrange = random.randrange
docid_to_metadata = None # latch for b/c
def __init__(self):
self.docid_to_address = IOBTree()
self.address_to_docid = OIBTree()
self.docid_to_metadata = IOBTree()
def docid_for_address(self, address):
""" Retrieve a document id for a given address.
``address`` is a string or other hashable object which represents
a token known by the application.
Return the integer document id corresponding to ``address``.
If ``address`` doesn't exist in the document map, return None.
"""
return self.address_to_docid.get(address)
def address_for_docid(self, docid):
""" Retrieve an address for a given document id.
``docid`` is an integer document id.
Return the address corresponding to ``docid``.
If ``docid`` doesn't exist in the document map, return None.
"""
return self.docid_to_address.get(docid)
def add(self, address, docid=_marker):
""" Add a new document to the document map.
``address`` is a string or other hashable object which represents
a token known by the application.
``docid``, if passed, must be an int. In this case, remove
any previous address stored for it before mapping it to the
new address. Passing an explicit ``docid`` also removes any
metadata associated with that docid.
If ``docid`` is not passed, generate a new docid.
Return the integer document id mapped to ``address``.
"""
if docid is _marker:
docid = self.new_docid()
self.remove_docid(docid)
self.remove_address(address)
self.docid_to_address[docid] = address
self.address_to_docid[address] = docid
return docid
def remove_docid(self, docid):
""" Remove a document from the document map for the given document ID.
``docid`` is an integer document id.
Remove any corresponding metadata for ``docid`` as well.
Return a True if ``docid`` existed in the map, else return False.
"""
# It should be an invariant that if one entry exists in
# docid_to_address for a docid/address pair, exactly one
# corresponding entry exists in address_to_docid for the same
# docid/address pair. However, versions of this code before
# r.catalog 0.7.3 had a bug which, if this method was called
# multiple times, each time with the same address but a
# different docid, the ``docid_to_address`` mapping could
# contain multiple entries for the same address each with a
# different docid, causing this invariant to be violated. The
# symptom: in systems that used r.catalog 0.7.2 and lower,
# there might be more entries in docid_to_address than there
# are in address_to_docid. The conditional fuzziness in the
# code directly below is a runtime kindness to systems in that
# state. Technically, the administrator of a system in such a
# state should normalize the two data structures by running a
# script after upgrading to 0.7.3. If we made the admin do
# this, some of the code fuzziness below could go away,
# replaced with something simpler. But there's no sense in
# breaking systems at runtime through being a hardass about
# consistency if an unsuspecting upgrader has not yet run the
# data fixer script. The "fix the data" mantra rings a
# little hollow when you weren't the one who broke the data in
# the first place ;-)
self._check_metadata()
address = self.docid_to_address.get(docid, _marker)
if address is _marker:
return False
old_docid = self.address_to_docid.get(address, _marker)
if (old_docid is not _marker) and (old_docid != docid):
self.remove_docid(old_docid)
if docid in self.docid_to_address:
del self.docid_to_address[docid]
if address in self.address_to_docid:
del self.address_to_docid[address]
if docid in self.docid_to_metadata:
del self.docid_to_metadata[docid]
return True
def remove_address(self, address):
""" Remove a document from the document map using an address.
``address`` is a string or other hashable object which represents
a token known by the application.
Remove any corresponding metadata for ``address`` as well.
        Return True if ``address`` existed in the map, else return False.
"""
# See the comment in remove_docid for complexity rationalization
self._check_metadata()
docid = self.address_to_docid.get(address, _marker)
if docid is _marker:
return False
old_address = self.docid_to_address.get(docid, _marker)
if (old_address is not _marker) and (old_address != address):
self.remove_address(old_address)
if docid in self.docid_to_address:
del self.docid_to_address[docid]
if address in self.address_to_docid:
del self.address_to_docid[address]
if docid in self.docid_to_metadata:
del self.docid_to_metadata[docid]
return True
def _check_metadata(self):
# backwards compatibility
if self.docid_to_metadata is None:
self.docid_to_metadata = IOBTree()
def add_metadata(self, docid, data):
""" Add metadata related to a given document id.
``data`` must be a mapping, such as a dictionary.
For each key/value pair in ``data`` insert a metadata key/value pair
into the metadata stored for ``docid``.
Overwrite any existing values for the keys in ``data``, leaving values
unchanged for other existing keys.
        Raise a KeyError if ``docid`` doesn't relate to an address in the
document map.
"""
if not docid in self.docid_to_address:
raise KeyError(docid)
if len(data.keys()) == 0:
return
self._check_metadata()
meta = self.docid_to_metadata.setdefault(docid, OOBTree())
for k in data:
meta[k] = data[k]
def remove_metadata(self, docid, *keys):
""" Remove metadata related to a given document id.
If ``docid`` doesn't exist in the metadata map, raise a KeyError.
For each key in ``keys``, remove the metadata value for the
docid related to that key.
Do not raise any error if no value exists for a given key.
If no keys are specified, remove all metadata related to the docid.
"""
self._check_metadata()
if keys:
meta = self.docid_to_metadata.get(docid, _marker)
if meta is _marker:
raise KeyError(docid)
for k in keys:
if k in meta:
del meta[k]
if not meta:
del self.docid_to_metadata[docid]
else:
if not (docid in self.docid_to_metadata):
raise KeyError(docid)
del self.docid_to_metadata[docid]
def get_metadata(self, docid):
""" Return the metadata for ``docid``.
Return a mapping of the keys and values set using ``add_metadata``.
        Raise a KeyError if metadata does not exist for ``docid``.
"""
if self.docid_to_metadata is None:
raise KeyError(docid)
meta = self.docid_to_metadata[docid]
return meta
def new_docid(self):
""" Return a new document id.
The returned value is guaranteed not to be used already in this
document map.
"""
while True:
if self._v_nextid is None:
self._v_nextid = self._randrange(self.family.minint,
self.family.maxint)
uid = self._v_nextid
self._v_nextid += 1
if uid not in self.docid_to_address:
return uid
self._v_nextid = None | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/document.py | document.py |
import six
import BTrees
from persistent.mapping import PersistentMapping
import transaction
from zope.interface import implementer
from zerodbext.catalog.interfaces import ICatalog
from zerodbext.catalog.interfaces import ICatalogIndex
@implementer(ICatalog)
class Catalog(PersistentMapping):
family = BTrees.family32
def __init__(self, family=None):
PersistentMapping.__init__(self)
if family is not None:
self.family = family
def clear(self):
""" Clear all indexes in this catalog. """
for index in self.values():
index.clear()
def index_doc(self, docid, obj):
"""Register the document represented by ``obj`` in indexes of
this catalog using docid ``docid``."""
assertint(docid)
for index in self.values():
index.index_doc(docid, obj)
def unindex_doc(self, docid):
"""Unregister the document id from indexes of this catalog."""
assertint(docid)
for index in self.values():
index.unindex_doc(docid)
def reindex_doc(self, docid, obj):
""" Reindex the document referenced by docid using the object
passed in as ``obj`` (typically just does the equivalent of
``unindex_doc``, then ``index_doc``, but specialized indexes
        can override the method that this API calls to do less work). """
assertint(docid)
for index in self.values():
index.reindex_doc(docid, obj)
def __setitem__(self, name, index):
""" Add an object which implements
``zerodbext.catalog.interfaces.ICatalogIndex`` to the catalog.
No other type of object may be added to a catalog."""
if not ICatalogIndex.providedBy(index):
            raise ValueError('%s does not provide ICatalogIndex' % index)
return PersistentMapping.__setitem__(self, name, index)
def search(self, **query):
""" Use the query terms to perform a query. Return a tuple of
(num, resultseq) based on the merging of results from
individual indexes.
.. note::
this method is deprecated as of :mod:`zerodbext.catalog`
version 0.8. Use :meth:`zerodbext.catalog.Catalog.query`
instead.
"""
sort_index = None
reverse = False
limit = None
sort_type = None
index_query_order = None
if 'sort_index' in query:
sort_index = query.pop('sort_index')
if 'reverse' in query:
reverse = query.pop('reverse')
if 'limit' in query:
limit = query.pop('limit')
if 'sort_type' in query:
sort_type = query.pop('sort_type')
if 'index_query_order' in query:
index_query_order = query.pop('index_query_order')
if index_query_order is None:
# unordered query (use apply)
results = []
for index_name, index_query in query.items():
index = self.get(index_name)
if index is None:
raise ValueError('No such index %s' % index_name)
r = index.apply(index_query)
if not r:
# empty results, bail early; intersect will be null
return EMPTY_RESULT
results.append((len(r), r))
if not results:
return EMPTY_RESULT
results.sort(key=len) # order from smallest to largest
_, result = results.pop(0)
for _, r in results:
_, result = self.family.IF.weightedIntersection(result, r)
if not result:
                return EMPTY_RESULT
else:
# ordered query (use apply_intersect)
result = None
_marker = object()
for index_name in index_query_order:
index_query = query.get(index_name, _marker)
if index_query is _marker:
continue
index = self.get(index_name)
if index is None:
raise ValueError('No such index %s' % index_name)
result = index.apply_intersect(index_query, result)
if not result:
# empty results
return EMPTY_RESULT
return self.sort_result(result, sort_index, limit, sort_type, reverse)
def sort_result(self, result, sort_index=None, limit=None, sort_type=None,
reverse=False):
numdocs = total = len(result)
if sort_index:
index = self[sort_index]
result = index.sort(result, reverse=reverse, limit=limit,
sort_type=sort_type)
if limit:
numdocs = min(numdocs, limit)
return ResultSetSize(numdocs, total), result
def query(self, queryobject, sort_index=None, limit=None, sort_type=None,
reverse=False, names=None):
""" Use the arguments to perform a query. Return a tuple of
(num, resultseq)."""
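        # Illustrative usage (assumes a field index named 'title' has been
        # registered in this catalog):
        #
        #   from zerodbext.catalog.query import Eq
        #   numdocs, docids = catalog.query(Eq('title', 'foo'))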
try:
from zerodbext.catalog.query import parse_query
if isinstance(queryobject, six.string_types):
queryobject = parse_query(queryobject)
except ImportError: #pragma NO COVERAGE
pass
results = queryobject._apply(self, names)
return self.sort_result(results, sort_index, limit, sort_type, reverse)
def apply(self, query):
return self.search(**query)
def assertint(docid):
if not isinstance(docid, int):
raise ValueError('%r is not an integer value; document ids must be '
'integers' % docid)
class CatalogFactory(object):
def __call__(self, connection_handler=None):
conn = self.db.open()
if connection_handler:
connection_handler(conn)
root = conn.root()
if root.get(self.appname) is None:
root[self.appname] = Catalog()
return root[self.appname]
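
# Illustrative factory usage (the filename and appname below are examples
# only):
#
#   factory = FileStorageCatalogFactory('catalog.db', 'mycatalog')
#   manager = ConnectionManager()
#   catalog = factory(manager)
#   ... index and query using ``catalog`` ...
#   manager.commit()
#   manager.close()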
class FileStorageCatalogFactory(CatalogFactory):
def __init__(self, filename, appname, **kw):
""" ``filename`` is a filename to the FileStorage storage,
``appname`` is a key name in the root of the FileStorage in
which to store the catalog, and ``**kw`` is passed as extra
keyword arguments to :class:`ZODB.DB.DB` when creating a
database. Note that when we create a :class:`ZODB.DB.DB`
        instance, if a ``cache_size`` is not passed in ``**kw``, we
        override the default ``cache_size`` value with ``50000`` in
        order to provide a more realistic cache size for modern apps."""
cache_size = kw.get('cache_size')
if cache_size is None:
kw['cache_size'] = 50000
from ZODB.FileStorage.FileStorage import FileStorage
from ZODB.DB import DB
f = FileStorage(filename)
self.db = DB(f, **kw)
self.appname = appname
def __del__(self):
self.db.close()
class ConnectionManager(object):
def __call__(self, conn):
self.conn = conn
def close(self):
self.conn.close()
def __del__(self):
self.close()
def commit(self, transaction=transaction):
transaction.commit()
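
# A ResultSetSize behaves like a plain int (the possibly limit-truncated
# result count) while also carrying the pre-limit total on its ``total``
# attribute.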
class ResultSetSizeClass(int):
def __repr__(self):
return 'ResultSetSize(%d, %d)' % (self, self.total)
def ResultSetSize(i, total):
size = ResultSetSizeClass(i)
size.total = total
return size
EMPTY_RESULT = ResultSetSize(0, 0), () | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/catalog.py | catalog.py |
import six
import BTrees
import sys
try:
import ast
ast_support = True
except ImportError: # pragma NO COVERAGE
ast_support = False
_marker = object()
class Query(object):
"""
Base class for all elements that make up queries.
"""
__parent__ = None
__name__ = None
def __and__(self, right):
self._check_type("and", right)
return And(self, right)
def __or__(self, right):
self._check_type("or", right)
return Or(self, right)
def _check_type(self, setop, operand):
if not isinstance(operand, Query):
raise TypeError(
"TypeError: unsupported operand types for %s: %s %s" %
(setop, type(self), type(operand)))
def iter_children(self):
return ()
def print_tree(self, out=sys.stdout, level=0):
six.print_(' ' * level + str(self), file=out)
for child in self.iter_children():
child.print_tree(out, level + 1)
def _optimize(self):
"""
        If the subtree represented by this node can be transformed into a more
optimal subtree, return the transformed subtree, otherwise return self.
"""
return self
class Comparator(Query):
"""
Base class for all comparators used in queries.
"""
def __init__(self, index_name, value):
self.index_name = index_name
self._value = value
def _get_index(self, catalog):
return catalog[self.index_name]
def _get_value(self, names, value=_marker):
if value is _marker:
value = self._value
if isinstance(value, list):
return [self._get_value(names, child) for child in value]
elif isinstance(value, tuple):
return tuple(self._get_value(names, child) for child in value)
elif isinstance(value, Name):
name = value.name
if not names or name not in names:
raise NameError("No value passed in for name: %s" % name)
return names[name]
return value
def __str__(self):
return ' '.join((self.index_name, self.operator, repr(self._value)))
def __eq__(self, other):
return (self.index_name == other.index_name and
self._value == other._value)
class Contains(Comparator):
"""Contains query.
CQE equivalent: 'foo' in index
"""
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyContains(self._get_value(names))
def __str__(self):
return '%s in %s' % (repr(self._value), self.index_name)
def negate(self):
return DoesNotContain(self.index_name, self._value)
class DoesNotContain(Comparator):
"""CQE equivalent: 'foo' not in index
"""
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyDoesNotContain(self._get_value(names))
def __str__(self):
return '%s not in %s' % (repr(self._value), self.index_name)
def negate(self):
return Contains(self.index_name, self._value)
class Eq(Comparator):
"""Equals query.
CQE equivalent: index == 'foo'
"""
operator = '=='
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyEq(self._get_value(names))
def negate(self):
return NotEq(self.index_name, self._value)
class NotEq(Comparator):
"""Not equal query.
    CQE equivalent: index != 'foo'
"""
operator = '!='
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyNotEq(self._get_value(names))
def negate(self):
return Eq(self.index_name, self._value)
class Gt(Comparator):
""" Greater than query.
CQE equivalent: index > 'foo'
"""
operator = '>'
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyGt(self._get_value(names))
def negate(self):
return Le(self.index_name, self._value)
class Lt(Comparator):
""" Less than query.
CQE equivalent: index < 'foo'
"""
operator = '<'
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyLt(self._get_value(names))
def negate(self):
return Ge(self.index_name, self._value)
class Ge(Comparator):
"""Greater (or equal) query.
CQE equivalent: index >= 'foo'
"""
operator = '>='
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyGe(self._get_value(names))
def negate(self):
return Lt(self.index_name, self._value)
class Le(Comparator):
"""Less (or equal) query.
    CQE equivalent: index <= 'foo'
"""
operator = '<='
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyLe(self._get_value(names))
def negate(self):
return Gt(self.index_name, self._value)
class Any(Comparator):
"""Any of query.
CQE equivalent: index in any(['foo', 'bar'])
"""
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyAny(self._get_value(names))
def negate(self):
return NotAny(self.index_name, self._value)
def __str__(self):
return '%s in any(%s)' % (self.index_name, repr(self._value))
class NotAny(Comparator):
"""Not any of query (ie, None of query)
CQE equivalent: index not in any(['foo', 'bar'])
"""
operator = 'not any'
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyNotAny(self._get_value(names))
def negate(self):
return Any(self.index_name, self._value)
def __str__(self):
return '%s not in any(%s)' % (self.index_name, repr(self._value))
class All(Comparator):
"""All query.
CQE equivalent: index in all(['foo', 'bar'])
"""
operator = 'all'
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyAll(self._get_value(names))
def negate(self):
return NotAll(self.index_name, self._value)
def __str__(self):
return '%s in all(%s)' % (self.index_name, repr(self._value))
class NotAll(Comparator):
"""NotAll query.
CQE equivalent: index not in all(['foo', 'bar'])
"""
operator = 'not all'
def _apply(self, catalog, names):
index = self._get_index(catalog)
        return index.applyNotAll(self._get_value(names))
def negate(self):
return All(self.index_name, self._value)
def __str__(self):
return '%s not in all(%s)' % (self.index_name, repr(self._value))
class _Range(Comparator):
@classmethod
def fromGTLT(cls, start, end):
assert isinstance(start, (Gt, Ge))
if isinstance(start, Gt):
start_exclusive = True
else:
start_exclusive = False
assert isinstance(end, (Lt, Le))
if isinstance(end, Lt):
end_exclusive = True
else:
end_exclusive = False
assert start.index_name == end.index_name
return cls(start.index_name, start._value, end._value,
start_exclusive, end_exclusive)
def __init__(self, index_name, start, end,
start_exclusive=False, end_exclusive=False):
self.index_name = index_name
self._start = start
self._end = end
self.start_exclusive = start_exclusive
self.end_exclusive = end_exclusive
def _get_start(self, names):
value = self._start
if isinstance(value, Name):
name = value.name
if name not in names:
raise NameError("No value passed in for name: %s" % name)
return names[name]
return value
def _get_end(self, names):
value = self._end
if isinstance(value, Name):
name = value.name
if name not in names:
raise NameError("No value passed in for name: %s" % name)
return names[name]
return value
def __str__(self):
s = [repr(self._start)]
if self.start_exclusive:
s.append('<')
else:
s.append('<=')
s.append(self.index_name)
if self.end_exclusive:
s.append('<')
else:
s.append('<=')
s.append(repr(self._end))
return ' '.join(s)
def __eq__(self, other):
if not isinstance(other, type(self)):
return False
return (self.index_name == other.index_name and
self._start == other._start and
self._end == other._end and
self.start_exclusive == other.start_exclusive and
self.end_exclusive == other.end_exclusive)
class InRange(_Range):
""" Index value falls within a range.
    CQE equivalent: lower < index < upper
                    lower <= index <= upper
"""
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyInRange(
self._get_start(names), self._get_end(names),
self.start_exclusive, self.end_exclusive)
def negate(self):
return NotInRange(self.index_name, self._start, self._end,
self.start_exclusive, self.end_exclusive)
class NotInRange(_Range):
""" Index value falls outside a range.
    CQE equivalent: not(lower < index < upper)
                    not(lower <= index <= upper)
"""
def _apply(self, catalog, names):
index = self._get_index(catalog)
return index.applyNotInRange(
self._get_start(names), self._get_end(names),
self.start_exclusive, self.end_exclusive)
def __str__(self):
return 'not(%s)' % _Range.__str__(self)
def negate(self):
return InRange(self.index_name, self._start, self._end,
self.start_exclusive, self.end_exclusive)
class BoolOp(Query):
"""
Base class for Or and And operators.
"""
family = BTrees.family32
def __init__(self, *queries):
arguments = []
for query in queries:
# If argument is of the same type, can promote its arguments up
# to here.
if type(query) == type(self):
arguments.extend(query.queries)
else:
arguments.append(query)
self.queries = arguments
def __str__(self):
return type(self).__name__
def iter_children(self):
for query in self.queries:
yield query
def _optimize(self):
self.queries = [query._optimize() for query in self.queries]
new_me = self._optimize_eq()
if new_me is not None:
return new_me
new_me = self._optimize_not_eq()
if new_me is not None:
return new_me
return self
def _optimize_eq(self):
# If all queries are Eq operators for the same index, we can replace
# this And or Or with an All or Any node.
queries = list(self.queries)
query = queries.pop(0)
if type(query) != Eq:
return None
index_name = query.index_name
values = [query._value]
while queries:
query = queries.pop(0)
if type(query) != Eq or query.index_name != index_name:
return None
values.append(query._value)
# All queries are Eq operators for the same index.
if type(self) == Or:
return Any(index_name, values)
return All(index_name, values)
def _optimize_not_eq(self):
# If all queries are NotEq operators for the same index, we can
# replace this And or Or with a NotAll or NotAny node.
queries = list(self.queries)
query = queries.pop(0)
if type(query) != NotEq:
return None
index_name = query.index_name
values = [query._value]
while queries:
query = queries.pop(0)
if type(query) != NotEq or query.index_name != index_name:
return None
values.append(query._value)
        # All queries are NotEq operators for the same index.
if type(self) == Or:
return NotAll(index_name, values)
return NotAny(index_name, values)
class Or(BoolOp):
"""Boolean Or of multiple queries."""
def _apply(self, catalog, names):
# XXX Try to figure out when we need weightedOr and when we can
# just use union or multiunion.
queries = self.queries
result = queries[0]._apply(catalog, names)
for query in queries[1:]:
next_result = query._apply(catalog, names)
if len(result) == 0:
result = next_result
elif len(next_result) > 0:
_, result = self.family.IF.weightedUnion(result, next_result)
return result
def negate(self):
neg_queries = [query.negate() for query in self.queries]
return And(*neg_queries)
def _optimize(self):
new_self = BoolOp._optimize(self)
if self is not new_self:
return new_self
# There might be a combination of Gt/Ge and Lt/Le operators for the
# same index that could be used to compose a NotInRange.
uppers = {}
lowers = {}
queries = list(self.queries)
def process_range(i_lower, query_lower, i_upper, query_upper):
queries[i_lower] = NotInRange.fromGTLT(
query_lower.negate(), query_upper.negate())
queries[i_upper] = None
for i in six.moves.xrange(len(queries)):
query = queries[i]
if type(query) in (Lt, Le):
match = uppers.get(query.index_name)
if match is not None:
i_upper, query_upper = match
process_range(i, query, i_upper, query_upper)
else:
lowers[query.index_name] = (i, query)
elif type(query) in (Gt, Ge):
match = lowers.get(query.index_name)
if match is not None:
i_lower, query_lower = match
process_range(i_lower, query_lower, i, query)
else:
uppers[query.index_name] = (i, query)
queries = [q for q in queries if q]
if len(queries) == 1:
return queries[0]
self.queries = queries
return self
class And(BoolOp):
"""Boolean And of multiple queries."""
def _apply(self, catalog, names):
# XXX Try to figure out when we need weightedIntersection and when we
# can just use intersection.
IF = self.family.IF
queries = self.queries
result = queries[0]._apply(catalog, names)
for query in queries[1:]:
if len(result) == 0:
return IF.Set()
next_result = query._apply(catalog, names)
if len(next_result) == 0:
return IF.Set()
_, result = IF.weightedIntersection(result, next_result)
return result
def negate(self):
neg_queries = [query.negate() for query in self.queries]
return Or(*neg_queries)
def _optimize(self):
new_self = BoolOp._optimize(self)
if self is not new_self:
return new_self
# There might be a combination of Gt/Ge and Lt/Le operators for the
# same index that could be used to compose an InRange.
uppers = {}
lowers = {}
queries = list(self.queries)
def process_range(i_lower, query_lower, i_upper, query_upper):
queries[i_lower] = InRange.fromGTLT(query_lower, query_upper)
queries[i_upper] = None
for i in six.moves.xrange(len(queries)):
query = queries[i]
if type(query) in (Gt, Ge):
match = uppers.get(query.index_name)
if match is not None:
i_upper, query_upper = match
process_range(i, query, i_upper, query_upper)
else:
lowers[query.index_name] = (i, query)
elif type(query) in (Lt, Le):
match = lowers.get(query.index_name)
if match is not None:
i_lower, query_lower = match
process_range(i_lower, query_lower, i, query)
else:
uppers[query.index_name] = (i, query)
queries = [q for q in queries if q]
if len(queries) == 1:
return queries[0]
self.queries = queries
return self
class Not(Query):
"""Negation of a query."""
def __init__(self, query):
self.query = query
def __str__(self):
return 'Not'
def iter_children(self):
yield self.query
def negate(self):
return self.query
def _apply(self, catalog, names):
return self.query.negate()._apply(catalog, names)
def _optimize(self):
return self.query.negate()._optimize()
class Name(object):
"""
A variable name in an expression, evaluated at query time. Can be used
to defer evaluation of variables used inside of expressions until query
time.
Example::
from zerodbext.catalog.query import Eq
from zerodbext.catalog.query import Name
# Define query at module scope
find_cats = Eq('color', Name('color')) & Eq('sex', Name('sex'))
# Use query in a search function, evaluating color and sex at the
# time of the query
def search_cats(catalog, resolver, color='tabby', sex='female'):
# Let resolver be some function which can retrieve a cat object
# from your application given a docid.
params = dict(color=color, sex=sex)
            count, docids = catalog.query(find_cats, names=params)
for docid in docids:
yield resolver(docid)
"""
def __init__(self, name):
self.name = name
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self.name)
__str__ = __repr__
def __eq__(self, right):
if isinstance(right, Name):
return right.name == self.name
return False
class _AstParser(object):
"""
Uses Python's ast module to parse an expression into an abstract syntax
tree. It then walks the tree and constructs a tree of Query objects. Take
the following query:
>>> expr = "a == 1 or (b == 2 or b == 3)"
The ast for this expression looks like this:
>>> _print_ast(expr)
<_ast.Module object at 0xb7377d4c>
<_ast.Expr object at 0xb737e44c>
<_ast.BoolOp object at 0x88f13cc>
<_ast.And object at 0x88ec0ac>
<_ast.Compare object at 0x88f13ac>
<_ast.Name object at 0x88f13ec>
<_ast.Load object at 0x88ea86c>
<_ast.Eq object at 0x88ece2c>
<_ast.Num object at 0x88f14cc>
<_ast.BoolOp object at 0x88f14ec>
<_ast.Or object at 0x88ec14c>
<_ast.Compare object at 0x88f154c>
<_ast.Name object at 0x88f15ac>
<_ast.Load object at 0x88ea86c>
<_ast.Eq object at 0x88ece2c>
<_ast.Num object at 0x88f15cc>
<_ast.Compare object at 0x88f162c>
<_ast.Name object at 0x88f168c>
<_ast.Load object at 0x88ea86c>
<_ast.Eq object at 0x88ece2c>
<_ast.Num object at 0x88f16ac>
_ast.Module is always the root of any tree returned by the ast parser. It
is a requirement for _AstParser that the _ast.Module node contain only a
single child node of type _ast.Expr, which represents the expression we
are trying to transform into a query. The _ast.Expr node will always only
have a single child which is the root of the expression tree,
_ast.BoolOp in the above example.
The walk method is the driver for constructing the query tree. It performs
a depth first traversal of the ast. For each node in the ast it checks to
see if we have a method for processing that node type. Node processors are
all named 'process_NodeType' where NodeType is the name of the class of the
ast node, ie type(node).__name__. Each processor method is passed the
current node and its children which have already been processed. In this
way the query tree is built from the ast from the bottom up.
"""
def __init__(self, expr):
self.expr = expr
def parse(self):
statements = ast.parse(self.expr).body
if len(statements) > 1:
raise ValueError("Can only process single expression.")
expr_tree = statements[0]
if not isinstance(expr_tree, ast.Expr):
raise ValueError("Not an expression.")
result = self.walk(expr_tree.value)
return result
def walk(self, tree):
def visit(node):
children = [visit(child) for child in ast.iter_child_nodes(node)]
name = 'process_%s' % type(node).__name__
processor = getattr(self, name, None)
if processor is None:
raise ValueError(
"Unable to parse expression. Unhandled expression "
"element: %s" % type(node).__name__)
return processor(node, children)
return visit(tree)
def process_Load(self, node, children):
pass
def process_Name(self, node, children):
return node
def process_Attribute(self, node, children):
name = children[0]
dotted_name = ast.Name()
dotted_name.id = '.'.join((name.id, node.attr))
return dotted_name
def process_Str(self, node, children):
return node.s
def process_Num(self, node, children):
return node.n
def process_List(self, node, children):
l = list(children[:-1])
for i in six.moves.xrange(len(l)):
if isinstance(l[i], ast.Name):
l[i] = self._value(l[i])
return l
def process_Tuple(self, node, children):
return tuple(self.process_List(node, children))
def process_Eq(self, node, children):
return self.process_comparator(Eq)
def process_NotEq(self, node, children):
return self.process_comparator(NotEq)
def process_Lt(self, node, children):
return self.process_comparator(Lt)
def process_LtE(self, node, children):
return self.process_comparator(Le)
def process_Gt(self, node, children):
return self.process_comparator(Gt)
def process_GtE(self, node, children):
return self.process_comparator(Ge)
def process_comparator(self, cls):
def factory(left, right):
return cls(self._index_name(left), self._value(right))
factory.type = cls
return factory
def process_In(self, node, children):
def factory(left, right):
if callable(right): # any or all, see process_Call
return right(self._index_name(left))
return Contains(self._index_name(right), self._value(left))
factory.type = Contains
return factory
def process_NotIn(self, node, children):
def factory(left, right):
if callable(right): # any or all, see process_Call
return right(self._index_name(left)).negate()
return DoesNotContain(self._index_name(right), self._value(left))
factory.type = DoesNotContain
return factory
def process_Not(self, node, children):
return Not
def process_UnaryOp(self, node, children):
operator, query = children
return operator(query)
def process_USub(self, node, children):
return Not
def process_Compare(self, node, children):
# Python allows arbitrary chaining of comparisons, ie:
# x == y == z != abc
# x < y >= z
#
# For our purposes, though, we are only interested in two basic forms:
# index_name <comparison_operator> value
# or
# start [<|<=] index_name [<|<=] end
#
# Where the second form maps to an InRange comparator and the first
# form matches any of the other comparators. Arbitrary chaining as
# shown above is not supported.
if len(children) == 3:
# Simple binary form
left, factory, right = children
return factory(left, right)
elif len(children) == 5:
# Range expression
start, f1, f2, index_name, end = children
op1, op2 = f1.type, f2.type
if op1 in (Lt, Le) and op2 in (Lt, Le):
if op1 is Lt:
start_exclusive = True
else:
start_exclusive = False
if op2 is Lt:
end_exclusive = True
else:
end_exclusive = False
return InRange(self._index_name(index_name),
self._value(start),
self._value(end),
start_exclusive,
end_exclusive)
raise ValueError(
"Bad expression: unsupported chaining of comparators.")
def process_BitOr(self, node, children):
return Or
def process_BitAnd(self, node, children):
return And
def process_BinOp(self, node, children):
left, operator, right = children
if not isinstance(left, Query):
raise ValueError(
"Bad expression: left operand for %s must be a result set." %
operator.__name__)
if not isinstance(right, Query):
raise ValueError(
"Bad expression: right operand for %s must be a result set." %
operator.__name__)
return operator(left, right)
def process_Or(self, node, children):
return Or
def process_And(self, node, children):
return And
def process_BoolOp(self, node, children):
operator = children.pop(0)
for child in children:
if not isinstance(child, Query):
raise ValueError(
"Bad expression: All operands for %s must be result sets."
% operator.__name__)
return operator(*children)
def process_Call(self, node, children):
func = children.pop(0)
name = getattr(func, 'id', str(node.func))
if name not in ('any', 'all'):
raise ValueError(
"Bad expression: Illegal function call in expression: %s" %
name)
if len(children) != 1:
raise ValueError(
"Bad expression: Wrong number of arguments to %s" % name)
values = children[0]
if name == 'any':
comparator = Any
else:
comparator = All
def factory(index_name):
return comparator(index_name, self._value(values))
return factory
def _index_name(self, node):
if not isinstance(node, ast.Name):
raise ValueError("Index name must be a name.")
return node.id
def _value(self, node):
if isinstance(node, ast.Name):
return Name(node.id)
return node
def optimize(query):
if isinstance(query, Query):
return query._optimize()
return query
def parse_query(expr, optimize_query=True):
"""
Parses the given expression string and returns a query object. Requires
Python >= 2.6.
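
    For example (illustrative)::

        query = parse_query("age >= 18 and name == 'joe'")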
"""
if not ast_support:
raise NotImplementedError("Parsing of CQEs requires Python >= 2.6")
query = _AstParser(expr).parse()
if optimize_query:
query = optimize(query)
return query
def _print_ast(expr): # pragma NO COVERAGE
"""
Useful method for visualizing AST trees while debugging.
"""
tree = ast.parse(expr)
def visit(node, level):
print(' ' * level + str(node))
for child in ast.iter_child_nodes(node):
visit(child, level + 1)
visit(tree, 0) | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/query.py | query.py |
import six
from persistent import Persistent
from ZODB.broken import Broken
import BTrees
_marker = ()
class CatalogIndex(object):
""" Abstract class for interface-based lookup """
family = BTrees.family32
def __init__(self, discriminator):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
def index_doc(self, docid, object):
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
super(CatalogIndex, self).unindex_doc(docid)
# Store docid in set of unindexed docids
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if isinstance(value, Broken):
raise ValueError('Catalog cannot index broken object %s' %
value)
if docid in self._not_indexed:
# Remove from set of unindexed docs if it was in there.
self._not_indexed.remove(docid)
return super(CatalogIndex, self).index_doc(docid, value)
def unindex_doc(self, docid):
_not_indexed = self._not_indexed
if docid in _not_indexed:
_not_indexed.remove(docid)
super(CatalogIndex, self).unindex_doc(docid)
def reindex_doc(self, docid, object):
""" Default reindex_doc implementation """
self.unindex_doc(docid)
self.index_doc(docid, object)
def docids(self):
not_indexed = self._not_indexed
indexed = self._indexed()
if len(not_indexed) == 0:
return self.family.IF.Set(indexed)
elif len(indexed) == 0:
return not_indexed
indexed = self.family.IF.Set(indexed)
return self.family.IF.union(not_indexed, indexed)
def apply_intersect(self, query, docids):
""" Default apply_intersect implementation """
result = self.apply(query)
if docids is None:
return result
return self.family.IF.weightedIntersection(result, docids)[1]
def _negate(self, assertion, *args, **kw):
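        # Negate ``assertion`` by subtracting its result from the set of all
        # docids known to this index (both indexed and unindexed docids).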
positive = assertion(*args, **kw)
all = self.docids()
if len(positive) == 0:
return all
return self.family.IF.difference(all, positive)
def applyContains(self, *args, **kw):
raise NotImplementedError(
"Contains is not supported for %s" % type(self).__name__)
def applyDoesNotContain(self, *args, **kw):
return self._negate(self.applyContains, *args, **kw)
def applyEq(self, *args, **kw):
raise NotImplementedError(
"Eq is not supported for %s" % type(self).__name__)
def applyNotEq(self, *args, **kw):
return self._negate(self.applyEq, *args, **kw)
def applyGt(self, *args, **kw):
raise NotImplementedError(
"Gt is not supported for %s" % type(self).__name__)
def applyLt(self, *args, **kw):
raise NotImplementedError(
"Lt is not supported for %s" % type(self).__name__)
def applyGe(self, *args, **kw):
raise NotImplementedError(
"Ge is not supported for %s" % type(self).__name__)
def applyLe(self, *args, **kw):
raise NotImplementedError(
"Le is not supported for %s" % type(self).__name__)
def applyAny(self, *args, **kw):
raise NotImplementedError(
"Any is not supported for %s" % type(self).__name__)
def applyNotAny(self, *args, **kw):
return self._negate(self.applyAny, *args, **kw)
def applyAll(self, *args, **kw):
raise NotImplementedError(
"All is not supported for %s" % type(self).__name__)
def applyNotAll(self, *args, **kw):
return self._negate(self.applyAll, *args, **kw)
def applyInRange(self, *args, **kw):
raise NotImplementedError(
"InRange is not supported for %s" % type(self).__name__)
def applyNotInRange(self, *args, **kw):
return self._negate(self.applyInRange, *args, **kw)
def _migrate_to_0_8_0(self, docids):
"""
I'm sorry.
"""
docids = self.family.IF.Set(docids)
indexed = self.family.IF.Set(self._indexed())
self._not_indexed = self.family.IF.difference(docids, indexed) | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/common.py | common.py |
import bisect
import heapq
from itertools import islice
import six
from zope.interface import implementer
from zope.index.field import FieldIndex
from zerodbext.catalog.interfaces import ICatalogIndex
from zerodbext.catalog.indexes.common import CatalogIndex
from zerodbext.catalog import RangeValue
_marker = []
FWSCAN = 'fwscan'
NBEST = 'nbest'
TIMSORT = 'timsort'
@implementer(ICatalogIndex)
class CatalogFieldIndex(CatalogIndex, FieldIndex):
""" Field indexing.
Query types supported:
- Eq
- NotEq
- Gt
- Ge
- Lt
- Le
- In
- NotIn
- Any
- NotAny
- InRange
- NotInRange
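
    For example (an illustrative sketch; assumes content objects expose a
    ``title`` attribute)::

        index = CatalogFieldIndex('title')
        index.index_doc(1, content)
        docids = index.applyEq('Some Title')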
"""
def __init__(self, discriminator):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
self.clear()
def reindex_doc(self, docid, value):
# the base index's index_doc method special-cases a reindex
return self.index_doc(docid, value)
def unindex_doc(self, docid):
"""See interface IInjection.
Base class overridden to be able to unindex None values.
"""
_not_indexed = self._not_indexed
if docid in _not_indexed:
_not_indexed.remove(docid)
rev_index = self._rev_index
value = rev_index.get(docid, _marker)
if value is _marker:
return # not in index
del rev_index[docid]
try:
set = self._fwd_index[value]
set.remove(docid)
except KeyError: #pragma NO COVERAGE
# This is fishy, but we don't want to raise an error.
# We should probably log something.
# but keep it from throwing a dirty exception
set = 1
if not set:
del self._fwd_index[value]
self._num_docs.change(-1)
def _indexed(self):
return self._rev_index.keys()
def sort(self, docids, reverse=False, limit=None, sort_type=None):
if not docids:
return []
numdocs = self._num_docs.value
if not numdocs:
return []
if limit is not None:
limit = int(limit)
if limit < 1:
raise ValueError('limit must be 1 or greater')
if reverse:
return self.sort_reverse(docids, limit, numdocs, sort_type)
else:
return self.sort_forward(docids, limit, numdocs, sort_type)
def sort_forward(self, docids, limit, numdocs, sort_type=None):
rlen = len(docids)
# See http://www.zope.org/Members/Caseman/ZCatalog_for_2.6.1
# for an overview of why we bother doing all this work to
# choose the right sort algorithm.
if sort_type is None:
if fwscan_wins(limit, rlen, numdocs):
# forward scan beats both n-best and timsort reliably
# if this is true
sort_type = FWSCAN
elif limit and nbest_ascending_wins(limit, rlen, numdocs):
# nbest beats timsort reliably if this is true
sort_type = NBEST
else:
sort_type = TIMSORT
if sort_type == FWSCAN:
return self.scan_forward(docids, limit)
elif sort_type == NBEST:
if limit is None:
raise ValueError('nbest requires a limit')
return self.nbest_ascending(docids, limit)
elif sort_type == TIMSORT:
return self.timsort_ascending(docids, limit)
else:
raise ValueError('Unknown sort type %s' % sort_type)
def sort_reverse(self, docids, limit, numdocs, sort_type=None):
if sort_type is None:
# XXX this needs work.
rlen = len(docids)
if limit:
if (limit < 300) or (limit/float(rlen) > 0.09):
sort_type = NBEST
else:
sort_type = TIMSORT
else:
sort_type = TIMSORT
if sort_type == NBEST:
if limit is None:
raise ValueError('nbest requires a limit')
return self.nbest_descending(docids, limit)
elif sort_type == TIMSORT:
return self.timsort_descending(docids, limit)
else:
raise ValueError('Unknown sort type %s' % sort_type)
def scan_forward(self, docids, limit=None):
fwd_index = self._fwd_index
n = 0
for set in fwd_index.values():
for docid in set:
if docid in docids:
n+=1
yield docid
if limit and n >= limit:
return
def nbest_ascending(self, docids, limit):
if limit is None: #pragma NO COVERAGE
raise RuntimeError('n-best used without limit')
# lifted from heapq.nsmallest
h = nsort(docids, self._rev_index)
it = iter(h)
result = sorted(islice(it, 0, limit))
if not result: #pragma NO COVERAGE
return
insort = bisect.insort
pop = result.pop
los = result[-1] # los --> Largest of the nsmallest
for elem in it:
if los <= elem:
continue
insort(result, elem)
pop()
los = result[-1]
for value, docid in result:
yield docid
def nbest_descending(self, docids, limit):
if limit is None: #pragma NO COVERAGE
raise RuntimeError('N-Best used without limit')
iterable = nsort(docids, self._rev_index)
for value, docid in heapq.nlargest(limit, iterable):
yield docid
def timsort_ascending(self, docids, limit):
return self._timsort(docids, limit, reverse=False)
def timsort_descending(self, docids, limit):
return self._timsort(docids, limit, reverse=True)
def _timsort(self, docids, limit=None, reverse=False):
n = 0
marker = _marker
_missing = []
pairs = []
for docid in docids:
v = self._rev_index.get(docid, marker)
if v is not marker:
pairs.append((docid, v))
for (docid, _) in sorted(pairs, key=lambda p: p[1], reverse=reverse):
n += 1
yield docid
if limit and n >= limit:
return
def search(self, queries, operator='or'):
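        # Each query is either a RangeValue or a single value (treated as an
        # exact match); the per-query docid sets are combined with union for
        # 'or' or intersection for 'and'.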
sets = []
for query in queries:
if isinstance(query, RangeValue):
query = query.as_tuple()
else:
query = (query, query)
set = self.family.IF.multiunion(self._fwd_index.values(*query))
sets.append(set)
result = None
if len(sets) == 1:
result = sets[0]
elif operator == 'and':
sets.sort(key=len)
for set in sets:
result = self.family.IF.intersection(set, result)
else:
result = self.family.IF.multiunion(sets)
return result
def apply(self, query):
if isinstance(query, dict):
val = query['query']
if isinstance(val, RangeValue):
val = [val]
elif not isinstance(val, (list, tuple)):
val = [val]
operator = query.get('operator', 'or')
result = self.search(val, operator)
else:
if isinstance(query, tuple) and len(query) == 2:
# b/w compat stupidity; this needs to die
query = RangeValue(*query)
query = [query]
elif not isinstance(query, (list, tuple)):
query = [query]
result = self.search(query, 'or')
return result
def applyEq(self, value):
return self.apply(value)
def applyGe(self, min_value):
return self.applyInRange(min_value, None)
def applyLe(self, max_value):
return self.applyInRange(None, max_value)
def applyGt(self, min_value):
return self.applyInRange(min_value, None, excludemin=True)
def applyLt(self, max_value):
return self.applyInRange(None, max_value, excludemax=True)
def applyAny(self, values):
queries = list(values)
return self.search(queries, operator='or')
def applyInRange(self, start, end, excludemin=False, excludemax=False):
return self.family.IF.multiunion(
self._fwd_index.values(
start, end, excludemin=excludemin, excludemax=excludemax)
)
def nsort(docids, rev_index):
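    # Yield (indexed value, docid) pairs for the given docids, silently
    # skipping docids that have no entry in the reverse index.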
for docid in docids:
try:
yield (rev_index[docid], docid)
except KeyError:
continue
def fwscan_wins(limit, rlen, numdocs):
"""
Primitive curve-fitting to see if forward scan will beat both
nbest and timsort for a particular limit/rlen/numdocs tuple. In
sortbench tests up to 'numdocs' sizes of 65536, this curve fit had
a 95%+ accuracy rate, except when 'numdocs' is < 64, then its
lowest accuracy percentage was 83%. Thus, it could still use some
work, but accuracy at very small index sizes is not terribly
important for the author.
"""
docratio = rlen / float(numdocs)
if limit:
limitratio = limit / float(numdocs)
else:
limitratio = 1
div = 65536.0
if docratio >= 16384/div:
# forward scan tends to beat nbest or timsort reliably when
# the rlen is greater than a quarter of the number of
# documents in the index
return True
if docratio >= 256/div:
# depending on the limit ratio, forward scan still has a
# chance to win over nbest or timsort even if the rlen is
# smaller than a quarter of the number of documents in the
# index, beginning reliably at a docratio of 512/65536.0. XXX
# It'd be nice to figure out a more concise way to express
# this.
if 512/div <= docratio < 1024/div and limitratio <= 4/div:
return True
elif 1024/div <= docratio < 2048/div and limitratio <= 32/div:
return True
elif 2048/div <= docratio < 4096/div and limitratio <= 128/div:
return True
elif 4096/div <= docratio < 8192/div and limitratio <= 512/div:
return True
elif 8192/div <= docratio < 16384/div and limitratio <= 4096/div:
return True
return False
def nbest_ascending_wins(limit, rlen, numdocs):
"""
Primitive curve-fitting to see if nbest ascending will beat
timsort for a particular limit/rlen/numdocs tuple. XXX This needs
work, particularly at small index sizes. It is currently
optimized for an index size of about 32768 (98% accuracy); it gets
about 93% accuracy at index size 65536.
"""
if not limit:
# n-best can't be used without a limit
return False
limitratio = limit / float(numdocs)
if numdocs <= 768:
return True
docratio = rlen / float(numdocs)
div = 65536.0
if docratio < 4096/div:
# nbest tends to win when the rlen is less than about 6% of the
# numdocs
return True
if docratio == 1 and limitratio <= 8192/div:
return True
elif 1 > docratio >= 32768/div and limitratio <= 4096/div:
return True
elif 32768/div > docratio >= 4096/div and limitratio <= 2048/div:
return True
return False | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/field.py | field.py |
try:
from hashlib import md5
except: # pragma no cover
from md5 import new as md5
import six
from persistent import Persistent
from zope.interface import implementer
from zerodbext.catalog.indexes.keyword import CatalogKeywordIndex
from zerodbext.catalog.interfaces import ICatalogIndex
_marker = ()
@implementer(ICatalogIndex)
class CatalogFacetIndex(CatalogKeywordIndex):
"""Facet index.
Query types supported:
- Eq
- NotEq
- In
- NotIn
- Any
- NotAny
- All
- NotAll
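
    Facet values are colon-delimited category paths.  An illustrative setup
    (the ``facets`` attribute and facet names are examples only)::

        facets = ['style', 'style:gucci', 'style:gucci:handbag']
        index = CatalogFacetIndex('facets', facets)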
"""
def __init__(self, discriminator, facets, family=None):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
if family is not None:
self.family = family
self.facets = self.family.OO.Set(facets)
self._not_indexed = self.family.IF.Set()
self.clear()
def index_doc(self, docid, object):
""" Pass in an integer document id and an object supporting a
sequence of facet specifiers ala ['style:gucci:handbag'] via
the discriminator"""
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
self.unindex_doc(docid)
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if docid in self._not_indexed:
self._not_indexed.remove(docid)
old = self._rev_index.get(docid)
if old is not None:
self.unindex_doc(docid)
changed = False
for facet in value:
L = []
categories = facet.split(':')
for category in categories:
L.append(category)
facet_candidate = ':'.join(L)
for fac in self.facets:
if fac == facet_candidate:
changed = True
fwset = self._fwd_index.get(fac)
if fwset is None:
fwset = self.family.IF.Set()
self._fwd_index[fac] = fwset
fwset.insert(docid)
revset = self._rev_index.get(docid)
if revset is None:
revset = self.family.OO.Set()
self._rev_index[docid] = revset
revset.insert(fac)
if changed:
self._num_docs.change(1)
return value
def counts(self, docids, omit_facets=()):
""" Given a set of docids (usually returned from query),
provide count information for further facet narrowing.
Optionally omit count information for facets and their
ancestors that are in 'omit_facets' (a sequence of facets)"""
effective_omits = self.family.OO.Set()
for omit_facet in omit_facets:
L = []
categories = omit_facet.split(':')
for category in categories:
L.append(category)
effective_omits.insert(':'.join(L))
include_facets = self.family.OO.difference(self.facets,
effective_omits)
counts = {}
isect_cache = {}
for docid in docids:
available_facets = self._rev_index.get(docid)
ck = cachekey(available_facets)
appropriate_facets = isect_cache.get(ck)
if appropriate_facets is None:
appropriate_facets = self.family.OO.intersection(
include_facets, available_facets)
isect_cache[ck] = appropriate_facets
for facet in appropriate_facets:
count = counts.get(facet, 0)
count += 1
counts[facet] = count
return counts
def cachekey(set):
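    # Produce a stable digest for a set of facets so that ``counts`` can
    # memoize the intersection computed for each distinct facet combination.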
h = md5()
for item in sorted(list(set)):
h.update(item.encode())
return h.hexdigest() | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/facet.py | facet.py |
import six
from zope.interface import implementer
import BTrees
from zerodbext.catalog.interfaces import ICatalogIndex
from zerodbext.catalog.indexes.common import CatalogIndex
_marker = object()
@implementer(ICatalogIndex)
class CatalogPathIndex2(CatalogIndex): #pragma NO COVERAGE
"""
DEPRECATED
Index for model paths (tokens separated by '/' characters or
tuples representing a model path).
A path index may be queried to obtain all subobjects (optionally
limited by depth) of a certain path.
This index differs from the original
``zerodbext.catalog.indexes.path.CatalogPath`` index inasmuch as it
actually retains a graph representation of the objects in the path
space instead of relying on 'level' information; query results
relying on this level information may or may not be correct for
any given tree. Use of this index is suggested rather than the
``path`` index.
Query types supported:
Eq
"""
attr_discriminator = None # b/w compat
family = BTrees.family32
def __init__(self, discriminator, attr_discriminator=None):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
if attr_discriminator is not None and not callable(attr_discriminator):
if not isinstance(attr_discriminator, six.string_types):
raise ValueError('attr_discriminator value must be callable '
'or a string')
self.attr_discriminator = attr_discriminator
self.clear()
def clear(self):
self.docid_to_path = self.family.IO.BTree()
self.path_to_docid = self.family.OI.BTree()
self.adjacency = self.family.IO.BTree()
self.disjoint = self.family.OO.BTree()
self.docid_to_attr = self.family.IO.BTree()
def __len__(self):
return len(self.docid_to_path)
def __nonzero__(self):
return True
def _getPathTuple(self, path):
if not path:
raise ValueError('path must be nonempty (not %s)' % str(path))
if isinstance(path, six.string_types):
path = path.rstrip('/')
path = tuple(path.split('/'))
if path[0] != '':
raise ValueError('Path must be absolute (not %s)' % str(path))
return tuple(path)
def _getObjectPath(self, object):
if callable(self.discriminator):
path = self.discriminator(object, _marker)
else:
path = getattr(object, self.discriminator, _marker)
return path
def _getObjectAttr(self, object):
if callable(self.attr_discriminator):
attr = self.attr_discriminator(object, _marker)
else:
attr = getattr(object, self.attr_discriminator, _marker)
return attr
def index_doc(self, docid, object):
path = self._getObjectPath(object)
if path is _marker:
self.unindex_doc(docid)
return None
path = self._getPathTuple(path)
if self.attr_discriminator is not None:
attr = self._getObjectAttr(object)
if attr is not _marker:
self.docid_to_attr[docid] = attr
self.docid_to_path[docid] = path
self.path_to_docid[path] = docid
if path in self.disjoint:
self.adjacency[docid] = self.disjoint[path]
del self.disjoint[path]
if len(path) > 1:
parent_path = tuple(path[:-1])
parent_docid = self.path_to_docid.get(parent_path)
if parent_docid is None:
theset = self.disjoint.get(parent_path)
if theset is None:
theset = self.family.IF.Set()
self.disjoint[parent_path] = theset
else:
theset = self.adjacency.get(parent_docid)
if theset is None:
theset = self.family.IF.Set()
self.adjacency[parent_docid] = theset
theset.insert(docid)
def unindex_doc(self, docid):
path = self.docid_to_path.get(docid)
if path is None:
return
if len(path) > 1:
parent_path = tuple(path[:-1])
parent_docid = self.path_to_docid.get(parent_path)
if parent_docid is not None: # might be disjoint
self.adjacency[parent_docid].remove(docid)
if not self.adjacency[parent_docid]:
del self.adjacency[parent_docid]
else:
self.disjoint[parent_path].remove(docid)
if not self.disjoint[parent_path]:
del self.disjoint[parent_path]
stack = [docid]
while stack:
docid = stack.pop()
path = self.docid_to_path[docid]
del self.path_to_docid[path]
del self.docid_to_path[docid]
if docid in self.docid_to_attr:
del self.docid_to_attr[docid]
next_docids = self.adjacency.get(docid)
if next_docids is None:
next_docids = self.disjoint.get(path)
if next_docids is not None:
del self.disjoint[path]
stack.extend(next_docids)
else:
del self.adjacency[docid]
stack.extend(next_docids)
def reindex_doc(self, docid, object):
path = self._getPathTuple(self._getObjectPath(object))
if self.docid_to_path.get(docid) != path:
self.unindex_doc(docid)
self.index_doc(docid, object)
return True
else:
if self.attr_discriminator is not None:
attr = self._getObjectAttr(object)
if docid in self.docid_to_attr:
if attr is _marker:
del self.docid_to_attr[docid]
return True
elif attr != self.docid_to_attr[docid]:
self.docid_to_attr[docid] = attr
return True
else:
if attr is not _marker:
self.docid_to_attr[docid] = attr
return True
return False
def _indexed(self):
return self.docid_to_path.keys()
def search(self, path, depth=None, include_path=False, attr_checker=None):
""" Provided a path string (e.g. ``/path/to/object``) or a
path tuple (e.g. ``('', 'path', 'to', 'object')``, or a path
list (e.g. ``['', 'path', 'to' object'])``), search the index
for document ids representing subelements of the path
specified by the path argument.
        If the ``path`` argument is specified as a tuple or list, its
first element must be the empty string. If the ``path``
argument is specified as a string, it must begin with a ``/``
character. In other words, paths passed to the ``search``
method must be absolute.
If the ``depth`` argument is specified, return only documents
        at this depth and below. Depth ``0`` will return the empty
set (or only the docid for the ``path`` specified if
``include_path`` is also True). Depth ``1`` will return
docids related to direct subobjects of the path (plus the
docid for the ``path`` specified if ``include_path`` is also
True). Depth ``2`` will return docids related to direct
subobjects and the docids of the children of those subobjects,
and so on.
If ``include_path`` is False, the docid of the object
specified by the ``path`` argument is *not* returned as part
of the search results. If ``include_path`` is True, the
object specified by the ``path`` argument *is* returned as
part of the search results.
If ``attr_checker`` is not None, it must be a callback that
accepts two arguments: the first argument will be the
attribute value found, the second argument is a sequence of
all previous attributes encountered during this search (in
path order). If ``attr_checker`` returns True, traversal will
continue; otherwise, traversal will cease.
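
        For example (illustrative)::

            # docids for everything up to two levels below /a/b, plus /a/b
            results = index.search('/a/b', depth=2, include_path=True)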
"""
if attr_checker is None:
return self._simple_search(path, depth, include_path)
else:
return self._attr_search(path, depth, include_path, attr_checker)
def _simple_search(self, path, depth, include_path):
""" Codepath taken when no attr checker is used """
path = self._getPathTuple(path)
sets = []
if include_path:
try:
docid = self.path_to_docid[path]
except KeyError:
pass # XXX should we just return an empty set?
else:
sets.append(self.family.IF.Set([docid]))
stack = [path]
plen = len(path)
while stack:
nextpath = stack.pop()
if depth is not None and len(nextpath) - plen >= depth:
continue
try:
docid = self.path_to_docid[nextpath]
except KeyError:
continue # XXX we can't search from an unindexed root path?
try:
theset = self.adjacency[docid]
except KeyError:
pass
else:
sets.append(theset)
for docid in theset:
try:
newpath = self.docid_to_path[docid]
except KeyError:
continue
stack.append(newpath)
return self.family.IF.multiunion(sets)
def _attr_search(self, path, depth, include_path, attr_checker):
""" Codepath taken when an attr checker is used """
path = self._getPathTuple(path)
leading_attrs = []
result = {}
plen = len(path)
# make sure we get "leading" attrs
for p in range(plen-1):
subpath = path[:p+1]
try:
docid = self.path_to_docid[subpath]
except KeyError:
continue # XXX should we just return an empty set?
attr = self.docid_to_attr.get(docid, _marker)
if attr is not _marker:
remove_from_closest(result, subpath, docid)
leading_attrs.append(attr)
result[subpath] = ((docid, leading_attrs[:]),
self.family.IF.Set())
stack = [(path, leading_attrs)]
attrset = self.family.IF.Set()
while stack:
nextpath, attrs = stack.pop()
try:
docid = self.path_to_docid[nextpath]
except KeyError:
continue # XXX we can't search from an unindexed root path?
attr = self.docid_to_attr.get(docid, _marker)
if attr is _marker:
if include_path and nextpath == path:
add_to_closest(
result, nextpath, self.family.IF.Set([docid]))
if depth is not None and len(nextpath) - plen >= depth:
continue
else:
remove_from_closest(result, nextpath, docid)
attrs.append(attr)
if nextpath == path:
if include_path:
attrset = self.family.IF.Set([docid])
else:
attrset = self.family.IF.Set()
else:
attrset = self.family.IF.Set([docid])
result[nextpath] = ((docid, attrs), attrset)
if depth is not None and len(nextpath) - plen >= depth:
continue
try:
theset = self.adjacency[docid]
except KeyError:
pass
else:
add_to_closest(result, nextpath, theset)
for docid in theset:
try:
newpath = self.docid_to_path[docid]
except KeyError:
continue
stack.append((newpath, attrs[:]))
return attr_checker(result.values())
def apply_intersect(self, query, docids):
""" Default apply_intersect implementation """
result = self.apply(query)
if docids is None:
return result
return self.family.IF.weightedIntersection(result, docids)[1]
def apply(self, query):
""" Search the path index using the query. If ``query`` is a
string, a tuple, or a list, it is treated as the ``path``
argument to use to search. If it is any other object, it is
assumed to be a dictionary with at least a value for the
``query`` key, which is treated as a path. The dictionary can
also optionally specify the ``depth`` and whether to include
the docid referenced by the path argument (the ``query`` key)
in the set of docids returned (``include_path``). See the
documentation for the ``search`` method of this class to
understand paths, depths, and the ``include_path`` argument.
"""
if isinstance(query, tuple(list(six.string_types) + [tuple, list])):
path = query
depth = None
include_path = False
attr_checker = None
else:
path = query['query']
depth = query.get('depth', None)
include_path = query.get('include_path', False)
attr_checker = query.get('attr_checker', None)
return self.search(path, depth, include_path, attr_checker)
def applyEq(self, query):
return self.apply(query)
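    # Hedged usage sketch (not part of the original module).  The
    # constructor argument and the ``Doc`` objects below are illustrative
    # assumptions only; they mirror the attribute-discriminator style used
    # by the other catalog indexes.
    #
    #     index = CatalogPathIndex2('path')
    #     index.index_doc(1, Doc(path='/a'))
    #     index.index_doc(2, Doc(path='/a/b'))
    #     index.index_doc(3, Doc(path='/a/b/c'))
    #     index.apply('/a')                      # -> docids 2 and 3
    #     index.apply({'query': '/a', 'depth': 1,
    #                  'include_path': True})    # -> docids 1 and 2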
def add_to_closest(sofar, thispath, theset):
paths = sorted(sofar.keys(), reverse=True)
for path in paths:
pathlen = len(path)
if thispath[:pathlen] == path:
sofar[path][1].update(theset)
break
def remove_from_closest(sofar, thispath, docid):
paths = sorted(sofar.keys(), reverse=True)
for path in paths:
pathlen = len(path)
if thispath[:pathlen] == path:
theset = sofar[path][1]
if docid in theset:
theset.remove(docid)
break | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/path2.py | path2.py |
import six
from zope.interface import implementer
from zope.index.interfaces import IIndexSort
from zope.index.text import TextIndex
from zerodbext.catalog.interfaces import ICatalogIndex
from zerodbext.catalog.indexes.common import CatalogIndex
@implementer(ICatalogIndex, IIndexSort)
class CatalogTextIndex(CatalogIndex, TextIndex):
""" Full-text index.
Query types supported:
- Contains
- DoesNotContain
- Eq
- NotEq
"""
def __init__(self, discriminator, lexicon=None, index=None):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
TextIndex.__init__(self, lexicon, index)
self.clear()
def reindex_doc(self, docid, object):
# index_doc knows enough about reindexing to do the right thing
return self.index_doc(docid, object)
def _indexed(self):
return self.index._docwords.keys()
def sort(self, result, reverse=False, limit=None, sort_type=None):
"""Sort by text relevance.
This only works if the query includes at least one text query,
leading to a weighted result. This method raises TypeError
if the result is not weighted.
A weighted result is a dictionary-ish object that has docids
as keys and floating point weights as values. This method
sorts the dictionary by weight and returns the sorted
docids as a list.
"""
if not result:
return result
if not hasattr(result, 'items'):
raise TypeError(
"Unable to sort by relevance because the search "
"result does not contain weights. To produce a weighted "
"result, include a text search in the query.")
items = [(weight, docid) for (docid, weight) in result.items()]
# when reverse is false, output largest weight first.
# when reverse is true, output smallest weight first.
items.sort(reverse=not reverse)
result = [docid for (weight, docid) in items]
if limit:
result = result[:limit]
return result
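    # Hedged usage sketch (not part of the original module): sorting a
    # weighted text result by relevance.  The index construction and the
    # ``Doc`` objects are illustrative assumptions only.
    #
    #     index = CatalogTextIndex('text')
    #     index.index_doc(1, Doc(text='cheese and crackers'))
    #     index.index_doc(2, Doc(text='cheese cheese cheese'))
    #     weighted = index.apply('cheese')        # docid -> relevance weight
    #     index.sort(weighted)                    # e.g. [2, 1], best match first
    #     index.sort(weighted, reverse=True)      # worst match first
    #     index.sort(weighted, limit=1)           # e.g. [2]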
def applyContains(self, value):
return self.apply(value)
applyEq = applyContains | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/text.py | text.py |
import six
from zope.interface import implementer
from persistent import Persistent
import BTrees
from BTrees.Length import Length
from zerodbext.catalog.interfaces import ICatalogIndex
from zerodbext.catalog.indexes.common import CatalogIndex
_marker = ()
@implementer(ICatalogIndex)
class CatalogPathIndex(CatalogIndex):
"""Index for model paths (tokens separated by '/' characters)
A path index stores all path components of the physical path of an object.
    Internal data structure:
    - a physical path of an object is split into its components
    - every component is kept as a key of an OOBTree in self._index
- the value is a mapping 'level of the path component' to
'all docids with this path component on this level'
Query types supported:
- Eq
- NotEq
"""
useOperator = 'or'
family = BTrees.family32
def __init__(self, discriminator):
if not callable(discriminator):
if not isinstance(discriminator, six.string_types):
raise ValueError('discriminator value must be callable or a '
'string')
self.discriminator = discriminator
self._not_indexed = self.family.IF.Set()
self.clear()
def clear(self):
self._depth = 0
self._index = self.family.OO.BTree()
self._unindex = self.family.IO.BTree()
self._length = Length(0)
def insertEntry(self, comp, id, level):
"""Insert an entry.
comp is a path component
id is the docid
level is the level of the component inside the path
"""
if comp not in self._index:
self._index[comp] = self.family.IO.BTree()
if level not in self._index[comp]:
self._index[comp][level] = self.family.IF.TreeSet()
self._index[comp][level].insert(id)
if level > self._depth:
self._depth = level
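    # Illustrative note (not part of the original module): indexing the
    # path '/aa/bb/cc' for docid 7 calls insertEntry('aa', 7, 0),
    # insertEntry('bb', 7, 1) and insertEntry('cc', 7, 2), so the internal
    # mapping ends up roughly as
    #
    #     self._index == {'aa': {0: TreeSet([7])},
    #                     'bb': {1: TreeSet([7])},
    #                     'cc': {2: TreeSet([7])}}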
def index_doc(self, docid, object):
if callable(self.discriminator):
value = self.discriminator(object, _marker)
else:
value = getattr(object, self.discriminator, _marker)
if value is _marker:
# unindex the previous value
self.unindex_doc(docid)
# Store docid in set of unindexed docids
self._not_indexed.add(docid)
return None
if isinstance(value, Persistent):
raise ValueError('Catalog cannot index persistent object %s' %
value)
if docid in self._not_indexed:
# Remove from set of unindexed docs if it was in there.
self._not_indexed.remove(docid)
path = value
if isinstance(path, (list, tuple)):
path = '/'+ '/'.join(path[1:])
comps = [c for c in path.split('/') if c]
if docid not in self._unindex:
self._length.change(1)
for i in range(len(comps)):
self.insertEntry(comps[i], docid, i)
self._unindex[docid] = path
return 1
def unindex_doc(self, docid):
_not_indexed = self._not_indexed
if docid in _not_indexed:
_not_indexed.remove(docid)
if docid not in self._unindex:
return
comps = self._unindex[docid].split('/')
for level in range(len(comps[1:])):
comp = comps[level+1]
try:
self._index[comp][level].remove(docid)
if not self._index[comp][level]:
del self._index[comp][level]
if not self._index[comp]:
del self._index[comp]
except KeyError:
pass
self._length.change(-1)
del self._unindex[docid]
def _indexed(self):
return self._unindex.keys()
def search(self, path, default_level=0):
"""
        path is either a string representing a
        relative URL, a part of a relative URL, or
        a tuple (path, level).
        level >= 0 starts searching at the given level
        level < 0 searches for the path components at any level
"""
if isinstance(path, six.string_types):
level = default_level
else:
level = int(path[1])
path = path[0]
comps = [c for c in path.split('/') if c]
if len(comps) == 0:
return self.family.IF.Set(self._unindex.keys())
results = None
if level >= 0:
for i, comp in enumerate(comps):
if comp not in self._index:
return self.family.IF.Set()
if level+i not in self._index[comp]:
return self.family.IF.Set()
results = self.family.IF.intersection(
results, self._index[comp][level+i])
else:
for level in range(self._depth + 1):
ids = None
for i, comp in enumerate(comps):
try:
ids = self.family.IF.intersection(
ids, self._index[comp][level+i])
except KeyError:
break
else:
results = self.family.IF.union(results, ids)
return results
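    # Hedged usage sketch (not part of the original module).  The ``Doc``
    # objects below are illustrative assumptions only.
    #
    #     index = CatalogPathIndex('path')
    #     index.index_doc(1, Doc(path='/aa/bb'))
    #     index.index_doc(2, Doc(path='/aa/bb/cc'))
    #     index.search('/aa/bb')       # -> both docids ('aa' at level 0, 'bb' at 1)
    #     index.search(('bb/cc', 1))   # -> docid 2 ('bb' at level 1, 'cc' at 2)
    #     index.apply({'query': ['/aa/bb', '/zz'], 'operator': 'or'})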
def numObjects(self):
""" return the number distinct values """
return len(self._unindex)
def getEntryForObject(self, docid):
""" Takes a document ID and returns all the information
we have on that specific object.
"""
return self._unindex.get(docid)
def apply(self, query):
"""
"""
level = 0
operator = self.useOperator
if isinstance(query, six.string_types):
paths = [query]
elif isinstance(query, (tuple, list)):
paths = query
else:
paths = query.get('query', [])
if isinstance(paths, six.string_types):
paths = [ paths ]
level = query.get('level', 0)
operator = query.get('operator', self.useOperator).lower()
sets = []
for path in paths:
sets.append(self.search(path, level))
if operator == 'or':
rs = self.family.IF.multiunion(sets)
else:
rs = None
sets.sort(key=len)
for set in sets:
rs = self.family.IF.intersection(rs, set)
if not rs:
break
if rs:
return rs
else:
return self.family.IF.Set()
applyEq = apply | zerodbext.catalog | /zerodbext.catalog-0.8.4.tar.gz/zerodbext.catalog-0.8.4/zerodbext/catalog/indexes/path.py | path.py |
from kiteconnect import KiteTicker
from tickersaver.utils.log import logger_instance
from urllib.parse import quote_plus
from tickersaver.cache.sqllite_cache import Sqllite
from tickersaver.fetcher.kite.orders import Order
import os, csv, datetime, json, argparse
logger = logger_instance
class KT(KiteTicker):
def init_db(self, config):
sql = Sqllite()
sql.init_ltp_db(config.get("dbpath"))
self.sql = sql
def _create_connection(self, url, **kwargs):
wsstoken = os.getenv("zwsstoken") or self.config.get("wsstoken")
wsstoken = quote_plus(wsstoken)
username = os.getenv("ZUSERNAME") or self.config.get("username")
url = 'wss://ws.zerodha.com/?api_key=kitefront&user_id={}&enctoken={}&uid=1&user-agent=kite3-web&version=2.9.1'.format(
username, wsstoken)
super(KT, self)._create_connection(url, **kwargs)
def on_ticks(ws, ticks):
config = ws.config
filename = config.get("tickerfile_path")
# exit on 15:24
dt = datetime.datetime.now()
# if dt.time() >= datetime.time(15, 24, 0):
# logger.info("Exiting as Indian MIS market hours have closed")
# ws.close(code=4000, reason="market_close_time")
# If ticker file has changed then refresh the instrument list from the ticker file
dynamic_config_mod_time = os.stat(filename).st_mtime
if hasattr(ws, 'file_mod_time') and dynamic_config_mod_time > ws.file_mod_time:
logger.info("File Changed - resubscribing")
with open(filename, 'r') as fp:
csvreader = csv.reader(fp)
existing_position_list_file = list(csvreader)
newsublist = [int(x[0]) for x in existing_position_list_file]
logger.info("File Changed Unsubscribing {}".format(ws.instrument_list))
ws.unsubscribe(ws.instrument_list)
sub_list = ws.always_on_instrument_list + newsublist
logger.info("File Changed Subscribing {}".format(sub_list))
ws.subscribe(sub_list)
ws.instrument_list = sub_list
ws.file_mod_time = dynamic_config_mod_time
# touch the below file to refresh the sub list dynamically
if os.path.exists(config.get("instrument_touch_path")) and config.get("subscribe_current_positions"):
pos = ws.order.get_positions().json()
ws.order.write_positions_tofile(pos, filename)
os.remove(config.get("instrument_touch_path"))
logger.debug("Sample Data: {}".format(ticks))
logger.info("Tick received")
for i in ticks:
key = str(i['instrument_token'])
ws.sql.set_ltp(key, i['last_price'])
def on_close(ws, code, reason):
logger.info("Close received with the code, reason - {}, {}".format(code, reason))
if code == 4000:
logger.info("Exiting as market hours have ended, in on_close - {}".format(code))
ws.stop()
def on_connect(ws, response): # noqa
config = ws.config
filename = config.get("tickerfile_path")
if not os.path.exists(filename):
with open(filename, 'w') as fp:
logger.info("Creating empty file - {}".format(filename))
# Callback on successful connect.
if config.get("subscribe_current_positions"):
pos = ws.order.get_positions().json()
ws.order.write_positions_tofile(pos, filename)
dynamic_config_mod_time = os.stat(filename).st_mtime
with open(filename, 'r') as fp:
csvreader = csv.reader(fp)
existing_position_list_file = list(csvreader)
newsublist = [int(x[0]) for x in existing_position_list_file]
ws.file_mod_time = dynamic_config_mod_time
sub_list = ws.always_on_instrument_list + newsublist
ws.instrument_list = sub_list
logger.info("Subscribe list : {}".format(sub_list))
ws.subscribe(sub_list)
ws.set_mode(ws.MODE_LTP, sub_list)
def start_stream(config):
order = Order(config)
kws = KT("", "")
kws.init_db(config)
kws.config = config
# Assign the callbacks.
kws.on_ticks = on_ticks
kws.on_connect = on_connect
kws.on_close = on_close
kws.instrument_list = []
kws.always_on_instrument_list = config.get("default_instruments")
kws.order = order
kws.connect()
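# Hedged example (not part of the original module) of the JSON config file
# that main()/start_stream() expect; paths and instrument tokens below are
# illustrative assumptions only:
#
#     {
#         "dbpath": "/tmp/ltp.db",
#         "tickerfile_path": "/tmp/positions.csv",
#         "instrument_touch_path": "/tmp/refresh_positions",
#         "subscribe_current_positions": true,
#         "default_instruments": [256265, 260105],
#         "zusername": "AB1234",
#         "zwsstoken": "<enctoken>"
#     }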
def main():
parser = argparse.ArgumentParser(description='Zerodha Ticker Saver')
parser.add_argument('-c', '--config', help='Configuration file path', required=True)
args = parser.parse_args()
config_filepath = args.config
with open(config_filepath) as fp:
config = fp.read()
config = json.loads(config)
username = os.getenv("ZUSERNAME")
wsstoken = os.getenv("ZWSSTOKEN")
# If not set in env variable then check if value is set in the config file
if not username:
username = config.get("zusername", "")
if not wsstoken:
wsstoken = config.get("zwsstoken", "")
if not username or not wsstoken:
logger.error("Auth information not set in environment variable or config, exiting!!")
exit(5)
config["username"] = username
config["wsstoken"] = wsstoken
start_stream(config)
if __name__ == '__main__':
main() | zerodha-tickersaver | /zerodha_tickersaver-1.1.2-py3-none-any.whl/tickersaver/fetcher/kite/ws_tick_fetcher.py | ws_tick_fetcher.py |
import requests, os
from tickersaver.utils.log import logger_instance
from kiteconnect import KiteConnect
from http import HTTPStatus
logger = logger_instance
class Order(object):
def __init__(self, config):
self.config = config
self.initiate_buffer = 0
self.stoploss_buffer = 0
self.headers = {
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': 'application/json, text/plain, */*',
'Authorization': 'enctoken {}'.format(os.getenv("ZWSSTOKEN") or self.config.get("zwsstoken")),
'Accept-Language': 'en-us',
'Host': 'kite.zerodha.com',
'Origin': 'https://kite.zerodha.com',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.4 Safari/605.1.15',
'Referer': 'https://kite.zerodha.com/positions',
'X-Kite-Version': '3.0.4',
'X-Kite-Userid': os.getenv("ZUSERNAME") or self.config.get("zusername")
}
def place_order(self, trading_symbol, transaction_type=KiteConnect.TRANSACTION_TYPE_BUY, quantity=0,
order_type=KiteConnect.ORDER_TYPE_MARKET, trigger_price=0,
exchange=KiteConnect.EXCHANGE_NSE, product=KiteConnect.PRODUCT_MIS):
data = {
'variety': 'regular',
'exchange': exchange,
'tradingsymbol': trading_symbol,
'transaction_type': transaction_type,
'order_type': order_type,
'quantity': quantity,
'price': '0',
'product': product,
'validity': 'DAY',
'disclosed_quantity': '0',
'trigger_price': trigger_price,
'squareoff': '0',
'stoploss': '0',
'trailing_stoploss': '0',
'user_id': os.getenv("ZUSERNAME") or self.config.get("zusername")
}
logger.info(
"Firing {} Position for {} for {} quantity ".format(
transaction_type,
trading_symbol,
quantity))
response = requests.post('https://kite.zerodha.com/oms/orders/regular', headers=self.headers, cookies={},
data=data)
logger.debug("Position attempted Status:{}, Response:{}".format(response.status_code, response.json()))
return response
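    # Hedged usage sketch (not part of the original module): config values
    # and order parameters below are illustrative assumptions only.
    #
    #     order = Order({"zusername": "AB1234", "zwsstoken": "<enctoken>"})
    #     order.place_order("SBIN",
    #                       transaction_type=KiteConnect.TRANSACTION_TYPE_BUY,
    #                       quantity=1,
    #                       order_type=KiteConnect.ORDER_TYPE_MARKET)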
def get_positions(self):
logger.info("Getting position details")
url = 'https://kite.zerodha.com/oms/portfolio/positions'
response = requests.get(url, headers=self.headers)
logger.debug("Position Details Status:{}, Response:{}".format(response.status_code, response.json()))
if response.status_code == HTTPStatus.OK:
return response
def truncate_file(self, filename):
with open(filename, 'a') as fp:
logger.info("Truncating the file - {} to 0 bytes".format(filename))
fp.truncate(0)
def write_positions_tofile(self, pos, filename, temp_write_seconds=-1):
import csv, copy, time
existing_position_list = []
existing_position_list_file = []
with open(filename, 'r') as fp:
csvreader = csv.reader(fp)
existing_position_list_file = list(csvreader)
existing_position_list = copy.deepcopy(existing_position_list_file)
for i in pos['data']['net']:
tmp_list = [str(i['instrument_token']), i['tradingsymbol']]
if tmp_list not in existing_position_list_file:
existing_position_list_file.append(tmp_list)
with open(filename, 'w') as fp:
csvwriter = csv.writer(fp)
csvwriter.writerows(existing_position_list_file)
        # this logic is to write instruments temporarily in the csv file used by the ticker to fetch ltp
if temp_write_seconds > 0:
logger.info("Sleeping for {} seconds for subscribe to complete of all strikes".format(temp_write_seconds))
time.sleep(temp_write_seconds)
with open(filename, 'a') as fp1:
logger.info("Truncating the file - {} to 0 bytes".format(filename))
fp1.truncate(0)
with open(filename, 'w') as fp2:
csvwriter = csv.writer(fp2)
csvwriter.writerows(existing_position_list)
def get_orders(self):
response = requests.get('https://kite.zerodha.com/oms/orders', headers=self.headers)
logger.debug("Order Details Status:{}, Response:{}".format(response.status_code, response.json()))
if response.status_code == 200:
return response
def square_off_positions_level(self, open_buy_positions, open_sell_positions, level,
exchange=KiteConnect.EXCHANGE_NFO):
# squares of Sell first followed by Buy due to margin issue
for pos in open_sell_positions:
logger.info("Closing all open SELL positions as level of {} is hit".format(level))
tradingsymbol = pos['tradingsymbol']
transaction_type = KiteConnect.TRANSACTION_TYPE_BUY
quantity = abs(pos['quantity'])
product = pos['product']
self.place_order(tradingsymbol, transaction_type=transaction_type, quantity=quantity,
exchange=exchange, product=product)
for pos in open_buy_positions:
logger.info("Closing all open BUY positions as level of {} is hit".format(level))
tradingsymbol = pos['tradingsymbol']
transaction_type = KiteConnect.TRANSACTION_TYPE_SELL
quantity = abs(pos['quantity'])
product = pos['product']
self.place_order(tradingsymbol, transaction_type=transaction_type, quantity=quantity,
exchange=exchange, product=product) | zerodha-tickersaver | /zerodha_tickersaver-1.1.2-py3-none-any.whl/tickersaver/fetcher/kite/orders.py | orders.py |
import sqlite3, datetime, json
from tickersaver.utils.log import logger_instance as logging
class Sqllite(object):
def init_ltp_db(self, dbpath):
self.con = sqlite3.connect(dbpath)
self.cursor = self.con.cursor()
self.cursor.execute(
"CREATE TABLE IF NOT EXISTS price (name text PRIMARY KEY ,ltp integer,time_stamp DATE DEFAULT (datetime('now','localtime')))")
self.cursor.execute("CREATE INDEX IF NOT EXISTS price_index on price (name)")
self.con.commit()
def init_option_chain_db(self):
dbpath = 'option_chain.db'
self.con = sqlite3.connect(dbpath)
self.cursor = self.con.cursor()
self.cursor.execute(
"CREATE TABLE IF NOT EXISTS option_chain (chain_date text PRIMARY KEY ,data text,time_stamp DATE DEFAULT (datetime('now','localtime')))")
self.cursor.execute("CREATE INDEX IF NOT EXISTS date_index on option_chain (chain_date)")
self.con.commit()
def get_ltp(self, name, time_window=1000):
d = datetime.datetime.now() - datetime.timedelta(seconds=time_window)
result = self.cursor.execute("select ltp from price where name = ? and time_stamp > ?", (name, d))
result = result.fetchone()
return result[0] if result else None
def set_ltp(self, key, price):
self.cursor.execute(
"INSERT INTO price (name,ltp) VALUES(?,?) ON CONFLICT(name) DO UPDATE SET ltp= ?, time_stamp=?",
(key, price, price, datetime.datetime.now()))
self.con.commit()
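    # Hedged usage sketch (not part of the original module): the file path
    # and instrument token are illustrative assumptions only.
    #
    #     cache = Sqllite()
    #     cache.init_ltp_db("/tmp/ltp.db")
    #     cache.set_ltp("256265", 22150.5)
    #     cache.get_ltp("256265")                  # -> 22150.5
    #     cache.get_ltp("256265", time_window=60)  # None if older than 60 seconds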
def get_chain(self, chain_date, time_window=1000):
try:
d = datetime.datetime.now() - datetime.timedelta(seconds=time_window)
chain_date = str(chain_date)
result = self.cursor.execute("select data from option_chain where chain_date = ? and time_stamp > ?", (chain_date, d))
result = result.fetchone()
logging.info("Getting Chain from cache - {}".format(result))
return json.loads(result[0]) if result else None
except Exception as e:
logging.exception("Error during getting chain from cache")
def set_chain(self, chain_date, data):
chain_date = str(chain_date)
data = json.dumps(data)
self.cursor.execute(
"INSERT INTO option_chain (chain_date,data) VALUES(?,?) ON CONFLICT(chain_date) DO UPDATE SET data= ?, "
"time_stamp=?",
(chain_date, data, data, datetime.datetime.now()))
logging.info("Setting Chain in cache - {}".format(data))
self.con.commit() | zerodha-tickersaver | /zerodha_tickersaver-1.1.2-py3-none-any.whl/tickersaver/cache/sqllite_cache.py | sqllite_cache.py |
import requests
def login(user_id,password,pin):
try:
r = requests.get('https://kite.zerodha.com/')
session_cookies = r.cookies
kf_session = session_cookies.get_dict()['kf_session']
headers = {
'cookie': f'_ga=GA1.2.1118120826.1632217744; signup_csrftoken=UxL0mcRzSKeIuwLqyQhMm95do2aELzoZI9Zz2NLaJ5b0igV90oyG8yHukHyXOIJ6; kf_session={kf_session}', }
data = {'user_id': user_id, 'password': password, }
rs = requests.post('https://kite.zerodha.com/api/login', headers=headers, data=data)
request_id = rs.json()['data']['request_id']
data = {
'user_id': user_id,
'request_id': request_id,
'twofa_value': pin,
'skip_session': '',
}
r = requests.post('https://kite.zerodha.com/api/twofa', headers=headers, data=data)
enctoken = str(r.cookies).split('enctoken=')[1].split(' for kite')[0]
headers = {'authorization': f'enctoken {enctoken}'}
return headers
except:
return 'NaN'
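# Hedged usage sketch (not part of the original module): credentials and
# order parameters below are illustrative assumptions only.
#
#     enctoken = login("AB1234", "password", "123456")  # returns auth headers
#     if enctoken != 'NaN':
#         profile(enctoken)
#         place_order(enctoken, "NSE", "SBIN", "BUY", "MARKET",
#                     quantity=1, price=0, product="MIS",
#                     trigger_price=0, squareoff=0, stoploss=0)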
def profile(enctoken):
r = requests.get('https://kite.zerodha.com/oms/user/profile/full', headers=enctoken)
data = r.json()['data']
return data
def fund(enctoken):
r = requests.get('https://kite.zerodha.com/oms/user/margins', headers=enctoken)
data = r.json()['data']
return data
def order_history(enctoken):
r = requests.get('https://kite.zerodha.com/oms/orders', headers=enctoken)
data = r.json()['data']
return data
def gtt_history(enctoken):
r = requests.get('https://kite.zerodha.com/oms/gtt/triggers', headers=enctoken)
data = r.json()['data']
return data
def holdings(enctoken):
r = requests.get('https://kite.zerodha.com/oms/portfolio/holdings', headers=enctoken)
data = r.json()['data']
return data
def place_order(enctoken,exchange,tradingsymbol,transaction_type,order_type,quantity,price,product,trigger_price,squareoff,stoploss):
data = {
'variety': 'regular',
'exchange': exchange,
'tradingsymbol': tradingsymbol,
'transaction_type': transaction_type,
'order_type': order_type,
'quantity': quantity,
'price': price,
'product': product,
'validity': 'DAY',
'disclosed_quantity': 0,
'trigger_price':trigger_price,
'squareoff':squareoff,
'stoploss': stoploss,
'trailing_stoploss': 0,
'user_id': '0'
}
r = requests.post('https://kite.zerodha.com/oms/orders/regular', headers=enctoken, data=data)
data = r.json()
return data | zerodha-without-api | /zerodha_without_api-0.0.1-py3-none-any.whl/zerodha_without_api/zerodha.py | zerodha.py |
import requests
def login(clientid,Pass,YOB):
try:
json_data = {
'login_id': clientid,
'password': Pass,
'device': 'WEB',
}
response = requests.post('https://ant.aliceblueonline.com/api/v1/user/login', json=json_data)
twofa_token = response.json()['data']['twofa']['twofa_token']
json_data = {
'login_id': clientid,
'twofa': [{'question_id': '1', 'answer': YOB, }, ],
'twofa_token': twofa_token,
'type': 'GENERAL_QUESTIONS',
}
response = requests.post('https://ant.aliceblueonline.com/api/v1/user/twofa', json=json_data)
auth_token = response.json()['data']['auth_token']
headers = {'x-authorization-token': auth_token}
        print('Login successful')
return headers
except:
print('Login failed Please check your id and password')
auth_token = login('MN38002','Gaurav@123','1990')
def fund(auth_token):
params = {'client_id': '0', 'type':'all'}
response = requests.get('https://ant.aliceblueonline.com/api/v1/funds/view', params=params, headers=auth_token)
fnd = response.json()['data']
return fnd
def order_history(auth_token):
params = {'type': 'completed','client_id': '0',}
response = requests.get('https://ant.aliceblueonline.com/api/v1/orders', params=params, headers=auth_token)
orders = response.json()['data']['orders']
return orders
def trade_history(auth_token):
params = {'type': 'Trades','client_id': '0',}
response = requests.get('https://ant.aliceblueonline.com/api/v1/orders', params=params, headers=auth_token)
trades = response.json()['data']['orders']
return trades
def pending_order_history(auth_token):
params = {'type': 'pending','client_id': '0',}
response = requests.get('https://ant.aliceblueonline.com/api/v1/orders', params=params, headers=auth_token)
pendings = response.json()['data']['orders']
return pendings
def holdings(auth_token):
params = {'product_code': '','client_id': '0',}
r = requests.get('https://ant.aliceblueonline.com/api/v1/holdings', params=params,headers=auth_token)
holdings = r.json()['data']['holdings']
return holdings
def place_order(auth_token,exchange,instrument_token,order_type,price,quantity,product,order_side,trigger_price,stop_loss_value,square_off_value,trailing_stop_loss):
json_data = {
'exchange': exchange,
'instrument_token':int(instrument_token),
'client_id': '',
'order_type': order_type,
'price': float(price),
'quantity': int(quantity),
'disclosed_quantity': 0,
'validity': 'DAY',
'product': product,
'order_side': order_side,
'device': 'WEB',
'user_order_id': 0,
'trigger_price': float(trigger_price),
'stop_loss_value': float(stop_loss_value),
'square_off_value': float(square_off_value),
'trailing_stop_loss': float(trailing_stop_loss),
'is_trailing': False,
}
r = requests.post('https://ant.aliceblueonline.com/api/v1/orders',headers=auth_token,json=json_data)
return r.json()
def cancel_pending_order(auth_token,client_order_id):
params = {'client_id': '0',}
r = requests.delete(f'https://ant.aliceblueonline.com/api/v1/orders/{client_order_id}',params=params, headers=auth_token)
return r.json() | zerodha-without-api | /zerodha_without_api-0.0.1-py3-none-any.whl/alicelue_brocker_wihtoutapi/alice.py | alice.py |
import requests
def login(id,password,pin):
data = {"fy_id": id,"password": password,"app_id": "2","imei": "","recaptcha_token": ""}
r = requests.post('https://api.fyers.in/vagator/v1/login', json=data)
request_key = r.json()['request_key']
data2 = {"request_key": request_key, "identity_type": "pin", "identifier": pin, "recaptcha_token": ""}
r2 = requests.post('https://api.fyers.in/vagator/v1/verify_pin', json=data2)
refresh_token = r2.json()['data']['refresh_token']
access_token = r2.json()['data']['access_token']
headers = {'authorization': access_token, }
return headers
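# Hedged usage sketch (not part of the original module): credentials and
# order parameters below are illustrative assumptions only.
#
#     token = login("XA00000", "1234", "4321")   # returns auth headers
#     margin_used, available_margin, ledger_balance = fund(token)
#     place_order(token, productType="INTRADAY", side="BUY",
#                 exchange="NSE", symbol="SBIN-EQ", qty=1, type=2,
#                 filledQty=0, limitPrice=0, stopPrice=0)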
def fund(access_token):
r = requests.get('https://api.fyers.in/fydev/v1/funds', headers=access_token)
margin_used = r.json()['fund_limit'][6]['equityAmount']
available_margin = r.json()['fund_limit'][9]['equityAmount']
ledger_balance = r.json()['fund_limit'][0]['equityAmount']
return margin_used,available_margin,ledger_balance
def order_history(access_token):
r = requests.get('https://api.fyers.in/fydev/v1/orders', headers=access_token)
orders = r.json()['orderBook']
return orders
def trade_history(access_token):
r = requests.get('https://api.fyers.in/fydev/v1/tradebook', headers=access_token)
trades = r.json()['tradeBook']
return trades
def holding(access_token):
r = requests.get('https://api.fyers.in/fydev/v1/holdings', headers=access_token)
    # assumption: the holdings endpoint keys its payload under 'holdings'
    holdings = r.json()['holdings']
return holdings
def place_order(access_token,productType,side,exchange,symbol,qty,type,filledQty,limitPrice,stopPrice):
json_data = {
'noConfirm': True,
'productType': productType,
'side': 1 if side =="BUY" else -1,
'symbol': f'{exchange}:{symbol}',
'qty': int(qty),
'disclosedQty': 0,
'type': int(type),
'LTP': 0,
'validity': 'DAY',
'filledQty': int(filledQty),
'limitPrice': float(limitPrice),
'stopPrice': float(stopPrice),
'offlineOrder': False,
}
r = requests.post('https://api.fyers.in/fydev/v1/orders', headers=access_token, json=json_data)
data = r.json()
return data | zerodha-without-api | /zerodha_without_api-0.0.1-py3-none-any.whl/fyers_withoutapi/fyers.py | fyers.py |
=======
Zerodoc
=======
Version 0.2.3 Last updated 2014-08-17 [email protected]
Zerodoc is a "plain text format" in the spirit of `asciidoc <http://www.methods.co.nz/asciidoc/>`_, `POD <http://search.cpan.org/dist/perl/pod/perlpod.pod>`_,
`reStructuredText <http://docutils.sourceforge.net/rst.html>`_ or `markdown <http://daringfireball.net/projects/markdown/>`_, with an emphasis on simplicity and
extensibility. Very few formatting options are available, both to
keep the parser simple and to make it easy to write new generators
for the whole format.
Included are a Python library that can be used to translate an input
file or buffer into a tree for which generators can be easily
written, and a command line tool to call existing generators for
HTML, reStructuredText (which can then be converted or integrated
with other tools like Sphinx) and a JSON intermediate representation.
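A minimal sketch of the library side follows; the module and function
names in it are hypothetical, used only to illustrate the
parse-then-generate flow, and are not the documented API (check the
package sources for the real entry points):

::

    # hypothetical entry points: zerodoc.parse.parse / zerodoc.html.write
    import zerodoc.parse
    import zerodoc.html

    with open('README', 'r') as f:
        doc = zerodoc.parse.parse(f.read())   # text -> document tree
    html = zerodoc.html.write(doc, [])        # tree + options -> HTML string
    with open('README.html', 'w') as f:
        f.write(html)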
1. The zerodoc format
=====================
1.1 Paragraphs and lines
------------------------
Zerodoc files are simple text files organized in paragraphs. A
paragraph is a group of text lines separated from other paragraphs
by blank lines. Lists and source code copied *verbatim* can be
defined. An unintrusive format for links is used, based on
a reference system.
Lines are limited to 72 characters for regular (not code
or diagrams) text. If you need to put something more into a line
(for example, a long URL), divide it in two and put a backslash (\)
with no spaces at the end.
Example:
::
This is a very long url that needs to be splitted in three:
http://www.reallyreallyreallylonguniformresourcelocatorredir\
ection.com/redirectionator.php?theredirectioncode=d72a565ab8\
7dedf7b5fa84b3ec4b9f11
Renders into:
This is a very long url that needs to be splitted in three:
http://www.reallyreallyreallylonguniformresourcelocatorredirection.com/redirectionator.php?theredirectioncode=d72a565ab87dedf7b5fa84b3ec4b9f11
1.2 Lists
---------
Lists are defined as paragraphs prefixed with a dash, and can be
nested. Example:
::
- The first element in a list
- A nested element into the first consisting of two lines
that are joined on output
- Another nested element
- The third element in a list
Renders into:
- The first element in a list
- A nested element into the first consisting of two lines that are joined on output
- Another nested element
- The third element in a list
Backslash joining also occur inside list elements:
::
- The first element in a list. as it have two lines
with no backslash, an space is inserted between 'lines' and 'with'
- To join the two lines without adding a space a back\
slash is used. Note that the two spaces formatting the listline are
removed
renders into:
- The first element in a list. as it have two lines with no backslash, an space is inserted between 'lines' and 'with'
- To join the two lines without adding a space a backslash is used. Note that the two spaces formatting the listline are removed after the backslash
NOTE: There are no numbered lists. In the "philosophy" of zerodoc,
numbers can not be omitted from the original text nor 'computed',
because that would make the text less readable than its processed
output.
1.3 Formatting attributes
-------------------------
Some attributes for the text inherited from other common formats and
email conventions are supported:
::
- This is an *emphasis*
- This is an _underline_ (cursive on certain displays or formats,
as in manual pages)
- This is a 'cursive'
Renders into:
- This is an *emphasis*
- This is an _underline_ (cursive on certain displays or formats, as in manual pages)
- This is a 'cursive'
1.4 Links
---------
Links can be included directly in the text along with their destination,
or referenced first in the text and then 'resolved' in another line.
Source of a link:
::
This `link`:http://www.google.com will redirect to google
Will render as:
This `link <http://www.google.com>`_ will redirect to google
Referenced links are 'resolved' in lists of links. These lists of links
will be removed from the output directly. If the list is contained in a
section alone, the section is also removed from the output. See the
'References' section at the end of the source code of this document
for an example. A self-contained example could be:
::
This line contains two referenced links: `firstlink` and `secondlink`
- `firstlink`:http://www.google.com
- `secondlink`:http://www.google.com
Wich renders into:
This line contains two referenced links: `firstlink <http://www.google.com>`_ and `secondlink <http://www.google.com>`_
1.5 Source code
---------------
Source code is text that will be included verbatim in the output. In
source code, newlines are meaningful and no limits on line-length are
imposed. An example:
::
#include <stdio.h>
int main() {
// print hello world 100 times
for (int i = 0; i < 100; i++) {
printf("Hello, world!\n");
}
}
Source code is identified by one space before the content of
the first line and one or more spaces in the rest. No tabs can
be used, so either transform tabs-only source code before pasting
or use a tool like expand(1) to do it for you. Blank lines are also
included verbatim, up to the one delimiting the next 'regular'
paragraph (one that contains text and starts on the first column)
To illustrate source code, I am going to paste the source code (yo
dawg) of the example above, along with the regular paragraph-lines
surrounding it:
::
source code, newlines are meaningful and no limits on line-length are
imposed. An example:
#include <stdio.h>
int main() {
// print hello world 100 times
for (int i = 0; i < 100; i++) {
printf("Hello, world!\n");
}
}
Source code is identified by one space before the content of
the first line and one or more spaces in the rest. No tabs can
When pygmentize is used, the default language for syntax highlighting
can be specified in options.
1.6 Diagrams and images
-----------------------
Diagrams can be either included directly in the output, just as
source code, or optionally converted to images (when this is
possible, for example in a manual page it does not make sense to
include images). Diagrams are converted using ditaa, aafigure,
ascii2svg or tikz depending on the options parsed to the renderer.
Refer to the `aafigure manual page <http://packages.python.org/aafigure/manual.html>`_ or to the `ditaa website <http://ditaa.sourceforge.net/>`_ for
help on this formats.
Diagrams are recognized by using TWO or more spaces before the
first line of them. Anything up to the next 'regular' paragraph
is considered part of the diagram.
Source-code paragraphs and diagrams can not be adjacent; they need
a 'regular' text paragraph (starting on the first column) between
them. This makes sense since no diagram can follow source code or
viceversa without at least an introduction of what the reader is
seeing.
1.6.1 ASCIIToSVG ascii-art diagrams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The default for ascii art diagrams is `asciitosvg <https://9vx.org/~dho/a2s/>`_. As its name implies,
it converts text to SVG, which is quite convenient. It is written in PHP.
Example diagram: (asciitosvg)
.. image:: https://raw.githubusercontent.com/odkq/zerodoc/master/sphinx-config/images/zero6QR3h1.svg
1.6.2 aafigure ascii-art diagrams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another format to convert ascii art diagrams to graphics is aafigure. It
is written in Python and has quite convenient idioms for things like
sequence diagrams:
.. image:: https://raw.githubusercontent.com/odkq/zerodoc/master/sphinx-config/images/zeroSufppO.png
1.6.3 ditaa ascii-art diagrams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another common format for ascii art diagrams is ditaa. It does not
support svg output.
::
This is the source code of the following paragraph
(diagram taken from the `ditaa website`:
Example diagram: (ditaa)
+--------+ +-------+ +-------+
| | --+ ditaa +--> | |
| Text | +-------+ |diagram|
|Document| |!magic!| | |
| {d}| | | | |
+---+----+ +-------+ +-------+
: ^
| Lots of work |
+-------------------------+
This is the source code of the following paragraph
(diagram taken from the `ditaa website <http://ditaa.sourceforge.net/>`_
.. image:: https://raw.githubusercontent.com/odkq/zerodoc/master/sphinx-config/images/zeroSOJzdB.png
Note that there are two spaces before the first +---
1.6.4 TikZ diagrams
~~~~~~~~~~~~~~~~~~~
A Tikz diagram (from the Tikz examples)
.. image:: https://raw.githubusercontent.com/odkq/zerodoc/master/sphinx-config/images/zeroniXuwJ.png
LaTeX source code for that Tikz chunk:
::
\begin{tikzpicture}[auto,node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,
font=\sffamily\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=2] {3};
\node[main node] (4) [below right of=1] {4};
\path[every node/.style={font=\sffamily\small}]
(1) edge node [left] {0.6} (4)
edge [bend right] node[left] {0.3} (2)
edge [loop above] node {0.1} (1)
(2) edge node [right] {0.4} (1)
edge node {0.3} (4)
edge [loop left] node {0.4} (2)
edge [bend right] node[left] {0.1} (3)
(3) edge node [right] {0.8} (2)
edge [bend right] node[right] {0.2} (4)
(4) edge node [left] {0.2} (3)
edge [loop right] node {0.6} (4)
edge [bend right] node[right] {0.2} (1);
\end{tikzpicture}
1.6.6 Diagram tagging and autodetection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As with source code, the type of diagram is autodetected for Tikz and
gnuplot diagrams. This detection can be overridden by specifying it in
the first line of the diagram, between parentheses.
1.7 Definition lists
--------------------
A definition list is a list of terms and corresponding definitions.
It usually renders (in HTML, man pages, ReST) in the text of the
definition indented with respect to the title. It is useful for
documenting functions and command line parameters.
Following is an example:
::
man ls
Display the manual page for the item (program) ls.
man -a intro
Display, in succession, all of the available intro manual
pages contained within the manual. It is possible
to quit between successive displays or skip any of them.
that renders into:
man ls
Display the manual page for the item (program) ls.
man -a intro
Display, in succession, all of the available intro manual
pages contained within the manual. It is possible
to quit between successive displays or skip any of them.
1.8 The default zerodoc structure
---------------------------------
1.8.1 Header
~~~~~~~~~~~~
The header in a zerodoc document contains the title, an optional
abstract and a table of contents. The table of contents needs to
be updated by hand (this is different from other well known text
formats) but allows zerodoc to have free-form titles (no --- nor
~~~ nor any other form of markup is needed):
::
This is the title, that can spawn several
lines
This are one or several paragraphs of abstract
1. Title 1
2. Title 2
1.8.1.1 Title
~~~~~~~~~~~~~
The title can span several lines (a whole paragraph) that will be
joined together on output.
The table of contents can be prefixed by a 'Table of contents' line
that will be recognized automatically as the TOC title. If that line
is not present, it will also be ommited on the transformed output.
1.8.1.2 Abstract
~~~~~~~~~~~~~~~~
The abstract is a group of paragraphs that appear before the table
of contents.
1.8.1.3 Table of contents
~~~~~~~~~~~~~~~~~~~~~~~~~
The table of contents is a list of the titles of the different
sections, for example
::
- 1. Section one
- 2. Section two
- 3. Third section
Will define the table of contents of a document, if found in the
header (after the abstract). If a title listed here is not found
in the document, an error is yielded.
1.8.2 Body
~~~~~~~~~~
The body is formed by several paragraphs. Paragraphs are divided
into sections by lines with titles. The lines with titles should
appear in the TOC and should have the same content as the TOC.
Optionally they can be in uppercase for clarity. As the transformed
document usually will have better ways to emphasize the title,
the lowercase format used in the TOC will be used regardless of
uppercase being used. For example, the next section of this document
starts with
::
2. INSTALLING ZERODOC
2.1 Prerrequisites
And in the TOC the pertinent lines appear as:
::
-- toc fragment --
- 1.7.1.3 Table of contents
- 1.7.2 Body
- 2. Installing zerodoc
- 2.1 Prerrequisites
As you can see on the start of the next section, the title appears
in lowercase (as in the TOC above)
2. Installing zerodoc
=====================
2.1 Prerrequisites
------------------
Zerodoc needs Python (2.6 or newer), the Python PLY 'lex and yacc'
utilities (2.5 or newer) and distutils for installation. Additionally,
when generating diagrams, the programs to parse them need to be
installed as well.
As an example, in a GNU/Linux Debian 6.0 'Squeeze' system, the
requirements can be installed using:
::
# apt-get install python-ply python-aafigure ditaa
To generate diagrams with gnuplot or tikz, install the pertinent
packages
::
# apt-get install gnuplot
# apt-get install texlive-picture
2.2 Installing the library and interpreter
------------------------------------------
2.2.1 Using a git snapshot
~~~~~~~~~~~~~~~~~~~~~~~~~~
Clone the github repository using
::
$ git clone git://github.com/odkq/zerodoc.git
Change to the zerodoc dir and call setup.py as root
::
$ cd zerodoc/
$ sudo ./setup.py install
2.2.2 Using pypi
~~~~~~~~~~~~~~~~
3. Using the command line converter
===================================
zerodoc - converts a zerodoc text file to HTML and many
other formats
3.1 SYNOPSIS
------------
Usage: zerodoc [options]
Options:
-h, --help show this help message and exit
-f FORMAT, --format=FORMAT Output format. If ommited, 'html'
-o OPTIONS, --options=OPTIONS Options for format renderer
-i FILE, --input=FILE Use <filename> as input file. If ommited, use stdin.
-O FILE, --output=FILE Use <filename> as output file. If ommited,use stdout.
3.2 HTML output options
-----------------------
ditaa
Use ditaa to format diagrams. When this option
is used, you can specify the path of the ditaa
.jar file with jarpath:<path>. If jarpath is
ommited, 'ditta' will be called (you can install
a command-line ditta wraper in Debian and others
with apt-get install ditaa)
jarpath:<path>
Location of the .jar path (there is no default,
'java' must be in the $PATH)
aafigure
Use aafigure to format diagrams
svg
Prefer svg in output when applicable (when the
converter outputs it and when the rendered format allows
for scalable graphics)
datauri
Do not generate image files, embbed the images
directly in the HTML using `DataURIscheme`
3.3 reStructuredText output options
-----------------------------------
notoc
Usually `reStructuredText` processors attach their own index in the
side (`sphinx-doc`, for example). In that case, you better do not
output the toc (it is still used to get section titles)
3.4 JSON output options
-----------------------
JSON output has no options. Its output is the JSON rendering of the
parsed tree with no interpretation whatsoever.
3.5 Confluence output options
-----------------------------
ditaa, jarpath, aafigure, datauri
With the same meaning as in the HTML output options
You can specify an output file and paste it by hand into
the Confluence edit form, or you can have the zerodoc client
upload it directly with these options:
folder:<folder>
Folder (path) for the uploaded document
user:<user>
User to use
passwd:<passwd>
Password
host:<host>
Host
| zerodoc | /zerodoc-0.2.3.tar.gz/zerodoc-0.2.3/README.rst | README.rst |
Zerodoc
=======
Version 0.2.3 Last updated 2014-08-17 [email protected]
Zerodoc is a "plain text format" in the spirit of `asciidoc`, `POD`,
`reStructuredText` or `markdown`, with an emphasis on simplicity and
extensibility. Very few formatting options are available, both to
keep the parser simple and to make it easy to write new generators
for the whole format.
Included are a Python library that can be used to translate an input
file or buffer into a tree for which generators can be easily
written, and a command line tool to call existing generators for
HTML, reStructuredText (which can then be converted or integrated
with other tools like Sphinx) and a JSON intermediate representation.
Table of contents
- 1. The zerodoc format
- 1.1 Paragraphs and lines
- 1.2 Lists
- 1.3 Formatting attributes
- 1.4 Links
- 1.5 Source code
- 1.6 Diagrams and images
- 1.6.1 ASCIIToSVG ascii-art diagrams
- 1.6.2 aafigure ascii-art diagrams
- 1.6.3 ditaa ascii-art diagrams
- 1.6.4 TikZ diagrams
- 1.7 Definition lists
- 1.8 The default zerodoc structure
- 1.8.1 Header
- 1.8.1.1 Title
- 1.8.1.2 Abstract
- 1.8.1.3 Table of contents
- 1.8.2 Body
- 2. Installing zerodoc
- 2.1 Prerrequisites
- 2.2 Installing the library and interpreter
- 2.2.1 Using a git snapshot
- 2.2.2 Using pypi
- 3. Using the command line converter
- 3.1 SYNOPSIS
- 3.2 HTML output options
- 3.3 reStructuredText output options
- 3.4 JSON output options
- 3.5 Confluence output options
1. THE ZERODOC FORMAT
1.1 PARAGRAPHS AND LINES
Zerodoc files are simple text files organized in paragraphs. A
paragraph is a group of text lines separated from other paragraphs
by blank lines. Lists and source code copied *verbatim* can be
defined. An unintrusive format for links is used, based on
a reference system.
Lines are limited to 72 characters for regular (not code
or diagrams) text. If you need to put something more into a line
(for example, a long URL), divide it in two and put a backslash (\)
with no spaces at the end.
Example:
This is a very long url that needs to be splitted in three:
http://www.reallyreallyreallylonguniformresourcelocatorredir\
ection.com/redirectionator.php?theredirectioncode=d72a565ab8\
7dedf7b5fa84b3ec4b9f11
Renders into:
This is a very long url that needs to be splitted in three:
http://www.reallyreallyreallylonguniformresourcelocatorredir\
ection.com/redirectionator.php?theredirectioncode=d72a565ab8\
7dedf7b5fa84b3ec4b9f11
1.2 LISTS
Lists are defined as paragraphs prefixed with a dash, and can be
nested. Example:
- The first element in a list
- A nested element into the first consisting of two lines
that are joined on output
- Another nested element
- The third element in a list
Renders into:
- The first element in a list
- A nested element into the first consisting of two lines
that are joined on output
- Another nested element
- The third element in a list
Backslash joining also occur inside list elements:
- The first element in a list. as it have two lines
with no backslash, an space is inserted between 'lines' and 'with'
- To join the two lines without adding a space a back\
slash is used. Note that the two spaces formatting the listline are
removed
renders into:
- The first element in a list. as it have two lines
with no backslash, an space is inserted between 'lines' and 'with'
- To join the two lines without adding a space a back\
slash is used. Note that the two spaces formatting the listline are
removed after the backslash
NOTE: There are no numbered lists. In the "philosophy" of zerodoc,
numbers can not be omitted from the original text nor 'computed',
because that would make the text less readable than its processed
output.
1.3 FORMATTING ATTRIBUTES
Some attributes for the text inherited from other common formats and
email conventions are supported:
- This is an *emphasis*
- This is an _underline_ (cursive on certain displays or formats,
as in manual pages)
- This is a 'cursive'
Renders into:
- This is an *emphasis*
- This is an _underline_ (cursive on certain displays or formats,
as in manual pages)
- This is a 'cursive'
1.4 LINKS
Links can be included directly in the text along with their destination,
or referenced first in the text and then 'resolved' in another line.
Source of a link:
This `link`:http://www.google.com will redirect to google
Will render as:
This `link`:http://www.google.com will redirect to google
Referenced links are 'resolved' in lists of links. These lists of links
will be removed from the output directly. If the list is contained in a
section alone, the section is also removed from the output. See the
'References' section at the end of the source code of this document
for an example. A self-contained example could be:
This line contains two referenced links: `firstlink` and `secondlink`
- `firstlink`:http://www.google.com
- `secondlink`:http://www.google.com
Wich renders into:
This line contains two referenced links: `firstlink` and `secondlink`
- `firstlink`:http://www.google.com
- `secondlink`:http://www.google.com
1.5 SOURCE CODE
Source code is text that will be included verbatim in the output. In
source code, newlines are meaningful and no limits on line-length are
imposed. An example:
#include <stdio.h>
int main() {
// print hello world 100 times
for (int i = 0; i < 100; i++) {
printf("Hello, world!\n");
}
}
Source code is identified by one space before the content of
the first line and one or more spaces in the rest. No tabs can
be used, so either transform tabs-only source code before pasting
or use a tool like expand(1) to do it for you. Blank lines are also
included verbatim, up to the one delimiting the next 'regular'
paragraph (one that contains text and starts on the first column)
To illustrate source code, I am going to paste the source code (yo
dawg) of the example above, along with the regular paragraph-lines
surrounding it:
source code, newlines are meaningful and no limits on line-length are
imposed. An example:
#include <stdio.h>
int main() {
// print hello world 100 times
for (int i = 0; i < 100; i++) {
printf("Hello, world!\n");
}
}
Source code is identified by one space before the content of
the first line and one or more spaces in the rest. No tabs can
When pygmentize is used, the default language for syntax highlighting
can be specified in options.
1.6 DIAGRAMS AND IMAGES
Diagrams can be either included directly in the output, just as
source code, or optionally converted to images (when this is
possible, for example in a manual page it does not make sense to
include images). Diagrams are converted using ditaa, aafigure,
ascii2svg or tikz depending on the options parsed to the renderer.
Refer to the `aafigure manual page` or to the `ditaa website` for
help on this formats.
Diagrams are recognized by using TWO or more spaces before the
first line of them. Anything up to the next 'regular' paragraph
is considered part of the diagram.
Source-code paragraphs and diagrams can not be adjacent; they need
a 'regular' text paragraph (starting on the first column) between
them. This makes sense since no diagram can follow source code or
viceversa without at least an introduction of what the reader is
seeing.
1.6.1 ASCIITOSVG ASCII-ART DIAGRAMS
The default for ascii art diagrams is `asciitosvg`. As its name implies,
it converts text to SVG, which is quite convenient. It is written in PHP.
Example diagram: (asciitosvg)
This is an asciitosvg diagram
.-------------------------.
|[Logo] |
| .---.-. .-----. .-----. |
| | .-. | +--> | | <--| |
| | '-' | | <--| +--> | |
| '---'-' '-----' '-----' |
| ascii 2 svg |
| |
'-------------------------'
https://9vx.org/~dho/a2s/
[Logo]: {"fill":"#88d","a2s:delref":true}
1.6.2 AAFIGURE ASCII-ART DIAGRAMS
Another format to convert ascii art diagrams to graphics is aafigure. It
is written in Python and has quite convenient idioms for things like
sequence diagrams:
Example diagram: (aafigure)
+---------+ +---------+ +---------+
|Object 1 | |Object 2 | |Object 3 |
+----+----+ +----+----+ +----+----+
| | |
| | |
X Example | |
X----------->X |
X X |
X<-----------X |
X | |
X Example | |
X------------------------>X
| | X
X----------->X X---+
X X X |
| | X<--+
X<------------------------X
X | |
| | |
| | |
1.6.3 DITAA ASCII-ART DIAGRAMS
Another common format for ascii art diagrams is ditaa. It does not
support svg output.
This is the source code of the following paragraph
(diagram taken from the `ditaa website`:
Example diagram: (ditaa)
+--------+ +-------+ +-------+
| | --+ ditaa +--> | |
| Text | +-------+ |diagram|
|Document| |!magic!| | |
| {d}| | | | |
+---+----+ +-------+ +-------+
: ^
| Lots of work |
+-------------------------+
This is the source code of the following paragraph
(diagram taken from the `ditaa website`:
Example diagram: (ditaa)
+--------+ +-------+ +-------+
| | --+ ditaa +--> | |
| Text | +-------+ |diagram|
|Document| |!magic!| | |
| {d}| | | | |
+---+----+ +-------+ +-------+
: ^
| Lots of work |
+-------------------------+
Note that there are two spaces before the first +---
1.6.4 TIKZ DIAGRAMS
A Tikz diagram (from the Tikz examples)
\begin{tikzpicture}[auto,node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,
font=\sffamily\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=2] {3};
\node[main node] (4) [below right of=1] {4};
\path[every node/.style={font=\sffamily\small}]
(1) edge node [left] {0.6} (4)
edge [bend right] node[left] {0.3} (2)
edge [loop above] node {0.1} (1)
(2) edge node [right] {0.4} (1)
edge node {0.3} (4)
edge [loop left] node {0.4} (2)
edge [bend right] node[left] {0.1} (3)
(3) edge node [right] {0.8} (2)
edge [bend right] node[right] {0.2} (4)
(4) edge node [left] {0.2} (3)
edge [loop right] node {0.6} (4)
edge [bend right] node[right] {0.2} (1);
\end{tikzpicture}
LaTeX source code for that Tikz chunk:
\begin{tikzpicture}[auto,node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,
font=\sffamily\Large\bfseries}]
\node[main node] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=2] {3};
\node[main node] (4) [below right of=1] {4};
\path[every node/.style={font=\sffamily\small}]
(1) edge node [left] {0.6} (4)
edge [bend right] node[left] {0.3} (2)
edge [loop above] node {0.1} (1)
(2) edge node [right] {0.4} (1)
edge node {0.3} (4)
edge [loop left] node {0.4} (2)
edge [bend right] node[left] {0.1} (3)
(3) edge node [right] {0.8} (2)
edge [bend right] node[right] {0.2} (4)
(4) edge node [left] {0.2} (3)
edge [loop right] node {0.6} (4)
edge [bend right] node[right] {0.2} (1);
\end{tikzpicture}
1.6.6 Diagram tagging and autodetection
As with source code, the type of diagram is autodetected for Tikz and
gnuplot diagrams. This detection can be overridden by specifying it in
the first line of the diagram, between parentheses.
1.7 DEFINITION LISTS
A definition list is a list of terms and corresponding definitions.
It usually renders (in HTML, man pages, ReST) in the text of the
definition indented with respect to the title. It is useful for
documenting functions and command line parameters.
Following is an example:
man ls
Display the manual page for the item (program) ls.
man -a intro
Display, in succession, all of the available intro manual
pages contained within the manual. It is possible
to quit between successive displays or skip any of them.
that renders into:
man ls
Display the manual page for the item (program) ls.
man -a intro
Display, in succession, all of the available intro manual
pages contained within the manual. It is possible
to quit between successive displays or skip any of them.
1.8 THE DEFAULT ZERODOC STRUCTURE
1.8.1 HEADER
The header in a zerodoc document contains the title, an optional
abstract and a table of contents. The table of contents needs to
be updated by hand (this is different from other well known text
formats) but allows zerodoc to have free-form titles (no --- nor
~~~ nor any other form of markup is needed):
This is the title, that can spawn several
lines
This are one or several paragraphs of abstract
1. Title 1
2. Title 2
1.8.1.1 TITLE
The title can span several lines (a whole paragraph) that will be
joined together on output.
The table of contents can be prefixed by a 'Table of contents' line
that will be recognized automatically as the TOC title. If that line
is not present, it will also be ommited on the transformed output.
1.8.1.2 ABSTRACT
The abstract is a group of paragraphs that appear before the table
of contents.
1.8.1.3 TABLE OF CONTENTS
The table of contents is a list of the titles of the different
sections, for example
- 1. Section one
- 2. Section two
- 3. Third section
will define the table of contents of a document, if found in the
header (after the abstract). If a title listed here is not found
in the document, an error is reported.
1.8.2 BODY
The body is formed by several paragraphs. Paragraphs are divided
into sections by lines with titles. The lines with titles should
appear in the TOC and should have the same content as the
corresponding TOC entries. Optionally they can be written in
uppercase for clarity. As the transformed document will usually
have better ways to emphasize the title, the lowercase form used
in the TOC is used regardless of uppercase being used in the body.
For example, the next section of this document starts with
2. INSTALLING ZERODOC
2.1 Prerequisites
And in the TOC the pertinent lines appear as:
-- toc fragment --
- 1.8.1.3 Table of contents
- 1.8.2 Body
- 2. Installing zerodoc
- 2.1 Prerequisites
As you can see at the start of the next section, the title appears
in lowercase (as in the TOC above).
2. INSTALLING ZERODOC
2.1 PREREQUISITES
Zerodoc needs Python (2.6 or newer), the Python PLY 'lex and yacc'
utilities (2.5 or newer) and distutils for installation. Additionally,
when generating diagrams, the programs that render them need to be
installed as well.
As an example, in a GNU/Linux Debian 6.0 'Squeeze' system, the
requirements can be installed using:
# apt-get install python-ply python-aafigure ditaa
To generate diagrams with gnuplot or tikz, install the pertinent
packages
# apt-get install gnuplot
# apt-get install texlive-picture
2.2 INSTALLING THE LIBRARY AND INTERPRETER
2.2.1 USING A GIT SNAPSHOT
Clone the GitHub repository using:
$ git clone git://github.com/odkq/zerodoc.git
Change to the zerodoc directory and run setup.py as root:
$ cd zerodoc/
$ sudo ./setup.py install
2.2.2 USING PYPI
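Assuming the package is published on PyPI under the name 'zerodoc',
it can be installed with pip, either system-wide as root or for the
current user:
# pip install zerodoc
$ pip install zerodoc --user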
3. USING THE COMMAND LINE CONVERTER
zerodoc - converts a zerodoc text file to HTML and many
other formats
3.1 SYNOPSIS
Usage: zerodoc [options]
Options:
-h, --help
show this help message and exit
-f FORMAT, --format=FORMAT
Output format. If omitted, 'html'
-o OPTIONS, --options=OPTIONS
Options for format renderer
-i FILE, --input=FILE
Use <filename> as input file. If omitted, use
stdin.
-O FILE, --output=FILE
Use <filename> as output file. If omitted, use
stdout.
3.2 HTML OUTPUT OPTIONS
ditaa
Use ditaa to format diagrams. When this option
is used, you can specify the path of the ditaa
.jar file with jarpath:<path>. If jarpath is
omitted, 'ditaa' will be called (you can install
a command-line ditaa wrapper in Debian and others
with apt-get install ditaa)
jarpath:<path>
Location of the .jar file (there is no default,
'java' must be in the $PATH)
aafigure
Use aafigure to format diagrams
svg
Prefer svg in output when applicable (when the
converter outputs it and when the rendered format allows
for scalable graphics)
datauri
Do not generate image files; embed the images
directly in the HTML using `DataURIscheme`
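A plausible invocation combining the HTML options above (joining
several options with commas after -o is an assumption, not taken
from the zerodoc manual) could be:
$ zerodoc -f html -o ditaa,datauri -i document.zd -O document.html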
3.3 reStructuredText output options
notoc
Usually `reStructuredText` processors attach their own index at the
side (`sphinx-doc`, for example). In that case, it is better not to
output the TOC (it is still used to get section titles).
3.4 JSON OUTPUT OPTIONS
JSON output has no options. Its output is the JSON rendering of the
parsed tree with no interpretation whatsoever.
3.5 CONFLUENCE OUTPUT OPTIONS
ditaa, jarpath, aafigure, datauri
With the same meaning as in the HTML output options
You can specify an output file and paste it by hand into
the Confluence edit form, or you can have the zerodoc client
upload it directly with these options:
folder:<folder>
Folder (path) for the uploaded document
user:<user>
User to use
passwd:<passwd>
Password
host:<host>
Host
- `DataURIscheme`:http://en.wikipedia.org/wiki/Data_URI_scheme
- `git`:http://git-scm.com/
- `POD`:http://search.cpan.org/dist/perl/pod/perlpod.pod
- `markdown`:http://daringfireball.net/projects/markdown/
- `orgmode`:http://orgmode.org/
- `creole`:http://wikicreole.org/
- `mediawiki`:http://www.mediawiki.org/wiki/Help:Formatting
- `reST`:http://docutils.sourceforge.net/rst.html
- `asciidoc`:http://www.methods.co.nz/asciidoc/
- `rdoc`:http://rdoc.sourceforge.net/
- `textile`:http://www.textism.com/tools/textile/index.php
- `aafigure manual page`:http://packages.python.org/aafigure/manual.html
- `asciitosvg`:https://9vx.org/~dho/a2s/
- `ditaa website`:http://ditaa.sourceforge.net/
- `reStructuredText`:http://docutils.sourceforge.net/rst.html
- `sphinx-doc`:http://sphinx-doc.org/
# Zerodose

A tool to assist in personalized abnormality investigation in combined FDG-PET/MRI imaging.
Created by the department of [Clinically Applied Artificial Intelligence](http://caai.dk/) at [Copenhagen University Hospital](https://www.rigshospitalet.dk/)
## Installation
Note that a python3 installation is required for _Zerodose_ to work.
You can install _Zerodose_ via [pip] from [PyPI]:
```console
$ pip install zerodose
```
## Usage
### Synthesize baseline PET
```console
$ zerodose syn -i mr.nii.gz -m brain_mask.nii.gz -o sb_pet.nii.gz
```
### Create abnormality map
```console
$ zerodose abn -p pet.nii.gz -s sb_pet.nii.gz -m brain_mask.nii.gz -o abn.nii.gz
```
Please see the [Command-line Reference] for details.
## Hardware requirements
- TODO
## Issues and contributing
Contributions are very welcome.
If you encounter any problems,
please [file an issue] along with a description.
[pypi]: https://pypi.org/
[file an issue]: https://github.com/ChristianHinge/zerodose/issues
[pip]: https://pip.pypa.io/
<!-- github-only -->
[license]: https://github.com/ChristianHinge/zerodose/blob/main/LICENSE
[contributor guide]: https://github.com/ChristianHinge/zerodose/blob/main/CONTRIBUTING.md
[command-line reference]: https://zerodose.readthedocs.io/en/latest/usage.html
.. image:: https://github.com/snakypy/zeroed/workflows/Python%20package/badge.svg
:target: https://github.com/snakypy/zeroed
.. image:: https://img.shields.io/pypi/v/zeroed.svg
:target: https://pypi.python.org/pypi/zeroed
.. image:: https://travis-ci.com/snakypy/zeroed.svg?branch=master
:target: https://travis-ci.com/snakypy/zeroed
.. image:: https://img.shields.io/pypi/wheel/zeroed
:alt: PyPI - Wheel
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
.. image:: https://pyup.io/repos/github/snakypy/zeroed/shield.svg
:target: https://pyup.io/repos/github/snakypy/zeroed/
:alt: Updates
.. image:: https://img.shields.io/github/issues-raw/snakypy/zeroed
:alt: GitHub issues
.. image:: https://img.shields.io/github/license/snakypy/zeroed
:alt: GitHub license
:target: https://github.com/snakypy/zeroed/blob/master/LICENSE
Requirements
------------
To work correctly, you will first need:
* `python`_ (v3.8 or recent) must be installed.
* `pip`_ (v20.0 or recent) must be installed.
Installing
----------
Globally:
.. code-block:: shell
$ sudo pip install zeroed
For the user:
.. code-block:: shell
$ pip install zeroed --user
Using
-----
Access the official page of the project, where you can find a description of use.
This package is available as open source under the terms of the `MIT License`_ ©
Credits
-------
See, `AUTHORS`_.
Links
-----
* Code: https://github.com/snakypy/zeroed
* Documentation: https://github.com/snakypy/zeroed/blob/master/README.md
* Releases: https://pypi.org/project/zeroed/#history
* Issue tracker: https://github.com/snakypy/zeroed/issues
.. _AUTHORS: https://github.com/snakypy/zeroed/blob/master/AUTHORS.rst
.. _python: https://python.org
.. _pip: https://pip.pypa.io/en/stable/quickstart/
.. _MIT License: https://github.com/snakypy/zeroed/blob/master/LICENSE
# ZeroEventHub
This README file contains information specific to the Python port of the ZeroEventHub.
Please see the [main readme file](../../README.md) for an overview of what this project is about.
## Client
We recommend that you store the latest checkpoint/cursor for each partition in the client's
database. Example of simple single-partition consumption. *Note about the example*:
* Things starting with "my" are supplied by you
* Things starting with "their" are supplied by the service you connect to
```python
# Step 1: Setup
their_partition_count = 1 # documented contract with server
zeh_session = requests.Session() # you can setup the authentication on the session
client = zeroeventhub.Client(their_service_url, their_partition_count, zeh_session)
# Step 2: Load the cursors from last time we ran
cursors = my_get_cursors_from_db()
if not cursors:
# we have never run before, so we can get all events with FIRST_CURSOR
# (if we just want to receive new events from now, we would use LAST_CURSOR)
cursors = [
zeroeventhub.Cursor(partition_id, zeroeventhub.FIRST_CURSOR)
for partition_id in range(their_partition_count)
]
# Step 3: Enter listening loop...
page_of_events = zeroeventhub.PageEventReceiver()
while myStillWantToReadEvents:
# Step 4: Use ZeroEventHub client to fetch the next page of events.
client.fetch_events(
cursors,
my_page_size_hint,
page_of_events
)
# Step 5: Write the effect of changes to our own database and the updated
# cursor value in the same transaction.
with db.begin_transaction() as tx:
my_write_effect_of_events_to_db(tx, page_of_events.events)
my_write_cursors_to_db(tx, page_of_events.latest_checkpoints)
tx.commit()
cursors = page_of_events.latest_checkpoints
page_of_events.clear()
```
## Development
To run the test suite, assuming you already have Python 3.10 or later installed and on your `PATH`:
```sh
pip install poetry==1.5.1
poetry config virtualenvs.in-project true
poetry install --sync
poetry run coverage run --branch -m pytest
poetry run coverage html
```
Then, you can open the `htmlcov/index.html` file in your browser to look at the code coverage report.
Also, to pass the CI checks, you may want to run the following before pushing your changes:
```sh
poetry run black tests/ zeroeventhub/
poetry run pylint ./zeroeventhub/
poetry run flake8
poetry run mypy
```
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-arguments
# pylint: disable=line-too-long
from asyncio import Future
import urllib.request
import threading
import time
import json
import sys
import re
import websocket
from pkg_resources import get_distribution, DistributionNotFound
try:
__version__ = get_distribution(__name__).version
except DistributionNotFound:
__version__ = '0.0.0'
CMD_RESPONSE = 'response'
CMD_PING = 'ping'
CMD_PONG = 'pong'
class ZeroFrame:
"""Class for ZeroFrame WebSocket API."""
# Initialization & Connection
def __init__(
self,
site,
*,
multiuser_master_address=None,
multiuser_master_seed=None,
instance_host='127.0.0.1',
instance_port=43110,
instance_secure=False,
show_log=False,
show_error=False,
reconnect_attempts=-1,
reconnect_delay=5000
):
"""
Construct the class and set up a connection to the WebSocket server.
:param str site: target ZeroNet site address
:param str multiuser_master_address: master address for multiuser ZeroNet instance, defaults to `None`
:param str multiuser_master_seed: master seed for multiuser ZeroNet instance, defaults to `None`
:param str instance_host: host of ZeroNet instance, defaults to `127.0.0.1`
:param int instance_port: port of ZeroNet instance, defaults to `43110`
:param bool instance_secure: secure connection of ZeroNet instance, defaults to `False`
        :param bool show_log: show log messages in console, defaults to `False`
        :param bool show_error: show error messages in console, defaults to `False`
        :param int reconnect_attempts: number of reconnection attempts, defaults to `-1`, no limit with `-1`, no reconnect with `0`
        :param int reconnect_delay: reconnection delay in milliseconds, defaults to `5000`
"""
self.site = site
self.multiuser = {'master_address': multiuser_master_address, 'master_seed': multiuser_master_seed}
self.instance = {'host': instance_host, 'port': instance_port, 'secure': instance_secure}
self.show = {'log': show_log, 'error': show_error}
self.reconnect = {'attempts': reconnect_attempts, 'delay': reconnect_delay}
self.websocket_connected = False
self.websocket_closing = False
self.waiting_callbacks = {}
self.waiting_messages = []
self.next_message_id = 1
self.next_attempt_id = 1
self.wrapper_key = None
self.websocket = None
self._connect()
self._start()
def __getattr__(self, name):
"""
Proxy for accessing ZeroFrame commands.
Command name is accepted as an object's property and parameters are accepted as
a method's arguments. Command returns `asyncio.Future` with the result.
* Command with no arguments can be accessed with `zeroframe.cmdName()`.
* Command with keyword arguments can be accessed with `zeroframe.cmdName(key1=value1, key2=value2)`.
* Command with normal arguments can be accessed with `zeroframe.cmdName(value1, value2)`.
"""
return lambda *args, **kwargs: self.cmdp(name, *args, **kwargs)
def init(self):
"""
User-based initialization code.
:rtype: ZeroFrame
"""
return self
def _connect(self):
"""
Get wrapper key and connect to WebSocket.
:rtype: ZeroFrame
"""
wrapper_headers, wrapper_body = self._create_wrapper_request()
self.wrapper_user = self._get_wrapper_user(wrapper_headers)
self.wrapper_key = self._get_wrapper_key(wrapper_body)
self.websocket = self._get_websocket()
return self.init()
def _start(self):
"""
Start WebSocket in thread.
"""
wst = threading.Thread(target=self.websocket.run_forever)
wst.daemon = True
wst.start()
@staticmethod
def _create_instance_user(ws_url, old_user, new_user):
"""
Create user on multiuser ZeroNet instance.
"""
conn = websocket.create_connection(ws_url, cookie='master_address=' + old_user)
conn.send('{"cmd":"userLoginForm","params":[],"id":-1}')
conn.recv()
payload = {
'cmd': 'response',
'to': 1,
'result': new_user,
'id': 1
}
conn.send(json.dumps(payload))
conn.close()
def _create_wrapper_request(self):
"""
Create and return wrapper request.
:return: wrapper headers and body
:rtype: (str, str)
"""
site_url = 'http' + ('s' if self.instance['secure'] else '') + '://' + self.instance['host'] + ':' + str(self.instance['port']) + '/' + self.site
wrapper_request = urllib.request.Request(site_url, headers={'Accept': 'text/html', 'User-Agent': 'ZeroFramePy/' + __version__})
wrapper_response = urllib.request.urlopen(wrapper_request)
wrapper_headers = wrapper_response.info()
wrapper_body = wrapper_response.read()
return (wrapper_headers, wrapper_body)
@staticmethod
def _get_wrapper_user(wrapper_headers):
"""
Get and return wrapper user.
:return: wrapper user
:rtype: (str|None)
"""
try:
return re.search(r'master_address=([^;]+)', str(wrapper_headers)).group(1)
except AttributeError:
return None
@staticmethod
def _get_wrapper_key(wrapper_body):
"""
Get and return wrapper key.
:return: wrapper key
:rtype: str
"""
return re.search(r'wrapper_key = "(.*?)"', str(wrapper_body)).group(1)
def _get_websocket(self):
"""
Connect and return WebSocket.
:return: WebSocket connection
:rtype: object
"""
ws_url = 'ws' + ('s' if self.instance['secure'] else '') + '://' + self.instance['host'] + ':' + str(self.instance['port']) + '/Websocket?wrapper_key=' + self.wrapper_key
# Connection to instance without Multiuser plugin
if not self.wrapper_user and not self.multiuser['master_address']:
ws_client = websocket.WebSocketApp(ws_url)
# Connection to Multiuser instance with stored master seed
elif self.multiuser['master_address'] and not self.multiuser['master_seed']:
ws_client = websocket.WebSocketApp(ws_url, cookie='master_address=' + self.multiuser['master_address'])
# Connection to Multiuser instance without stored master seed
elif self.multiuser['master_address'] and self.multiuser['master_seed']:
self._create_instance_user(ws_url, self.wrapper_user, self.multiuser['master_seed'])
ws_client = websocket.WebSocketApp(ws_url, cookie='master_address=' + self.multiuser['master_address'])
# Connection to Multiuser instance with instance-provided account
else:
ws_client = websocket.WebSocketApp(ws_url, cookie='master_address=' + self.wrapper_user)
ws_client.on_message = self._on_request
ws_client.on_open = self._on_open_websocket
ws_client.on_error = self._on_error_websocket
ws_client.on_close = self._on_close_websocket
return ws_client
# Internal handlers
def _on_request(self, message):
"""
Internal on request handler.
It is triggered on every message from the WebSocket server.
It handles built-in commands and forwards others
to the user-based handler.
:func:`~zeroframe_ws_client.ZeroFrame.on_request`
:param str message: WebSocket message
"""
message = json.loads(message)
cmd = message['cmd']
if cmd == CMD_RESPONSE:
if message['to'] in self.waiting_callbacks:
self.waiting_callbacks[message['to']](message['result'])
del self.waiting_callbacks[message['to']]
elif cmd == CMD_PING:
self.response(message['id'], CMD_PONG)
else:
self.on_request(cmd, message)
def _on_open_websocket(self):
"""
Internal on open websocket handler.
It is triggered when the WebSocket connection is opened.
It sends waiting message and calls the user-based handler.
:func:`~zeroframe_ws_client.ZeroFrame.on_open_websocket`
"""
self.websocket_connected = True
for message in self.waiting_messages:
            if 'processed' not in message:
self.websocket.send(json.dumps(message))
message['processed'] = True
self.on_open_websocket()
def _on_error_websocket(self, error):
"""
Internal on error websocket handler.
It is triggered on the WebSocket error. It calls the user-based client.
:func:`~zeroframe_ws_client.ZeroFrame.on_error_websocket`
:param object error: WebSocket exception
"""
self.on_error_websocket(error)
def _on_close_websocket(self):
"""
Internal on close websocket handler.
It is triggered when the WebSocket connection is closed.
It tries to reconnect if enabled and calls the user-based handler.
:func:`~zeroframe_ws_client.ZeroFrame.on_close_websocket`
"""
self.websocket_connected = False
self.on_close_websocket()
# Don't attempt reconnection if user closes socket
if self.websocket_closing:
return
# Don't attempt reconnection if reconnection is disabled
if self.reconnect['attempts'] == 0:
return
# Don't attempt reconnection if attempts has exceeded maximum number
if self.reconnect['attempts'] != -1 and self.next_attempt_id > self.reconnect['attempts']:
return
time.sleep(self.reconnect['delay'] / 1000)
self.websocket = self._get_websocket()
self._start()
# External handlers
def on_request(self, cmd, message): # pylint: disable=unused-argument
"""
User-based on request handler.
It is triggered on every message from the WebSocket server.
It can be used to add additional functionalities to
the client or handle received messages.
:param str cmd: name of received command
:param object message: message of received command
"""
self.log('Unknown request', message)
def on_open_websocket(self):
"""
User-based on open websocket handler.
It is triggered when the WebSocket connection is opened.
It can be used to notify user or check for server details.
"""
self.log('Websocket open')
def on_error_websocket(self, error):
"""
User-based on error websocket handler.
It is triggered on the WebSocket error.
It can be used to notify user or display errors.
:param object error: WebSocket error
"""
self.error('Websocket error', error)
def on_close_websocket(self):
"""
User-based on close websocket handler.
It is triggered when the WebSocket connection is closed.
It can be used to notify user or display connection error.
"""
self.log('Websocket close')
# Logging functions
def log(self, *args):
"""
Add log to console if enabled.
:param * *args: logs to add to console
"""
if self.show['log']:
print('[ZeroFrame]', *args, file=sys.stdout)
def error(self, *args):
"""
Add error to console if enabled.
:param * *args: errors to add to console
"""
if self.show['error']:
print('[ZeroFrame]', *args, file=sys.stderr)
# Command functions
def _send(self, message, cb=None): # pylint: disable=invalid-name
"""
Internally send raw message to ZeroFrame server and call callback.
If the connection is available, it directly sends a message. If the
connection is not available, it adds message to waiting message queue.
:func:`~zeroframe_ws_client.ZeroFrame.cmd`
:func:`~zeroframe_ws_client.ZeroFrame.cmdp`
:func:`~zeroframe_ws_client.ZeroFrame.response`
:param dict message: message to send
:param callable cb: message callback
"""
        if 'id' not in message:
message['id'] = self.next_message_id
self.next_message_id += 1
if self.websocket_connected:
self.websocket.send(json.dumps(message))
else:
self.waiting_messages.append(message)
if cb:
self.waiting_callbacks[message['id']] = cb
def cmd(self, cmd, params=None, cb=None): # pylint: disable=invalid-name
"""
Send command to ZeroFrame server and call callback.
:param str cmd: name of command to send
:param any params: parameters of command to send
:param callable cb: command callback
"""
if not params:
params = {}
self._send({
'cmd': cmd,
'params': params
}, cb)
def cmdp(self, cmd, params=None):
"""
Send command to ZeroFrame server and return the result as asyncio future.
In most cases, the result will be dictionary which contains data.
Some commands don't have any result. In this case, the result
will probably be string `ok`.
`ZeroFrame API Reference <https://zeronet.io/docs/site_development/zeroframe_api_reference/>`_
:param str cmd: name of command to send
:param any params: parameters of command to send
:return: command response
:rtype: asyncio.Future<(dict|str)>
"""
future = Future()
self.cmd(cmd, params, future.set_result)
return future
def response(self, to, result): # pylint: disable=invalid-name
"""
Response to ZeroFrame message.
        :param to: message ID to respond to
        :param result: result to send
"""
self._send({
'cmd': CMD_RESPONSE,
'to': to,
'result': result
})
def close(self):
"""
Close websocket connection.
"""
self.websocket_closing = True
self.websocket.close()
        self.on_close_websocket()
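# Minimal usage sketch (added for illustration; not part of the original
# module). It assumes a local ZeroNet instance on 127.0.0.1:43110 and uses
# the ZeroHello site address as an example:
#
#   zf = ZeroFrame('1HeLLo4uzjaLetFx6NH3PMwFP3qbRbTf3D', show_log=True)
#   # cmd() takes a command name, a params dict and an optional callback
#   zf.cmd('siteInfo', {}, lambda result: print(result))
#   # ...or use the attribute proxy, which returns an asyncio.Future
#   future = zf.serverInfo()
#   zf.close()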
# 🙅 Zerofun
Remote function calls for array data using [ZMQ](https://zeromq.org/).
## Overview
Zerofun provides a `Server` that you can bind functions to and a `Client` that
can call the messages and receive their results. The function inputs and
results are both flat **dicts of Numpy arrays**. The data is sent efficiently
without serialization to maximize throughput.
## Installation
```sh
pip install zerofun
```
## Example
This example runs the server and client in the same Python program using
subprocesses, but they could also be separate Python scripts running on
different machines.
```python
def server():
import zerofun
server = zerofun.Server('tcp://*:2222')
server.bind('add', lambda data: {'result': data['foo'] + data['bar']})
server.bind('msg', lambda data: print('Message from client:', data['msg']))
server.run()
def client():
import zerofun
client = zerofun.Client('tcp://localhost:2222')
client.connect()
future = client.add({'foo': 1, 'bar': 1})
result = future.result()
print(result) # {'result': 2}
client.msg({'msg': 'Hello World'})
if __name__ == '__main__':
import zerofun
server_proc = zerofun.Process(server, start=True)
client_proc = zerofun.Process(client, start=True)
client_proc.join()
server_proc.terminate()
```
## Features
Several productivity and performance features are available:
- **Request batching:** The server can batch requests together so that the user
function receives a dict of stacked arrays and the function result will be
split and sent back to the corresponding clients.
- **Multithreading:** Servers can use a thread pool to process multiple
requests in parallel. Optionally, each function can also request its own
thread pool to allow functions to block (e.g. for rate limiting) without
blocking other functions.
- **Async clients:** Clients can send multiple overlapping requests and wait
on the results when needed using `Future` objects. The maximum number of
inflight requests can be limited to avoid requests building up when the
  server is slower than the client (see the sketch after this list).
- **Error handling:** Exceptions raised in server functions are reported to the
client and raised in `future.result()` or, if the user did not store the
future object, on the next request. Worker exception can also be reraised in
the server application using `server.check()`.
- **Heartbeating:** Clients can send ping requests when they have not received
a result from the server for a while, allowing to wait for results that take
a long time to compute without assuming connection loss.
- **Concurrency:** `Thread` and `Process` implementations with exception
forwarding that can be forcefully terminated by the parent, which Python
threads do not natively support. Stoppable threads and processes are also
  available for cooperative shutdown.
- **GIL load reduction:** The `ProcServer` behaves just like the normal
`Server` but uses a background process to batch requests and fan out results,
substantially reducing GIL load for the server workers in the main process.
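As a minimal illustration of the async client feature referenced above, the
sketch below sends several overlapping requests and only blocks when the
results are needed. It reuses the server address and the bound `add` function
from the example earlier in this README; both are assumptions carried over
from that example rather than defaults of the library.

```python
import zerofun

client = zerofun.Client('tcp://localhost:2222')
client.connect()

# Send several requests without waiting for each result.
futures = [client.add({'foo': i, 'bar': i}) for i in range(8)]

# Block only when the results are actually needed.
results = [future.result() for future in futures]
print([r['result'] for r in results])  # [0, 2, 4, ..., 14]
```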
## Questions
Please open a [GitHub issue](https://github.com/danijar/zerofun/issues) for
each question. Over time, we will add common questions to the README.
<p align="center">
<a href="https://github.com/pyrogram/pyrogram">
<img src="https://docs.pyrogram.org/_static/pyrogram.png" alt="Pyrogram" width="128">
</a>
<br>
<b>Telegram MTProto API Framework for Python</b>
<br>
<a href="https://docs.pyrogram.org">
Documentation
</a>
•
<a href="https://docs.pyrogram.org/releases">
Releases
</a>
•
<a href="https://t.me/pyrogram">
News
</a>
</p>
## Pyrogram
> Elegant, modern and asynchronous Telegram MTProto API framework in Python for users and bots
``` python
from pyrogram import Client, filters
app = Client("my_account")
@app.on_message(filters.private)
async def hello(client, message):
await message.reply("Hello from Pyrogram!")
app.run()
```
**Pyrogram** is a modern, elegant and asynchronous [MTProto API](https://docs.pyrogram.org/topics/mtproto-vs-botapi)
framework. It enables you to easily interact with the main Telegram API through a user account (custom client) or a bot
identity (bot API alternative) using Python.
### Support
If you'd like to support Pyrogram, you can consider:
- [Become a GitHub sponsor](https://github.com/sponsors/delivrance).
- [Become a LiberaPay patron](https://liberapay.com/delivrance).
- [Become an OpenCollective backer](https://opencollective.com/pyrogram).
### Key Features
- **Ready**: Install Pyrogram with pip and start building your applications right away.
- **Easy**: Makes the Telegram API simple and intuitive, while still allowing advanced usages.
- **Elegant**: Low-level details are abstracted and re-presented in a more convenient way.
- **Fast**: Boosted up by [TgCrypto](https://github.com/pyrogram/tgcrypto), a high-performance cryptography library written in C.
- **Type-hinted**: Types and methods are all type-hinted, enabling excellent editor support.
- **Async**: Fully asynchronous (also usable synchronously if wanted, for convenience).
- **Powerful**: Full access to Telegram's API to execute any official client action and more.
### Installing
``` bash
pip3 install pyrogram
```
### Resources
- Check out the docs at https://docs.pyrogram.org to learn more about Pyrogram, get started right
away and discover more in-depth material for building your client applications.
- Join the official channel at https://t.me/pyrogram and stay tuned for news, updates and announcements.
import os
import re
import shutil
from functools import partial
from pathlib import Path
from typing import NamedTuple, List, Tuple
# from autoflake import fix_code
# from black import format_str, FileMode
HOME_PATH = Path("compiler/api")
DESTINATION_PATH = Path("zerogram/raw")
NOTICE_PATH = "NOTICE"
SECTION_RE = re.compile(r"---(\w+)---")
LAYER_RE = re.compile(r"//\sLAYER\s(\d+)")
COMBINATOR_RE = re.compile(r"^([\w.]+)#([0-9a-f]+)\s(?:.*)=\s([\w<>.]+);$", re.MULTILINE)
ARGS_RE = re.compile(r"[^{](\w+):([\w?!.<>#]+)")
FLAGS_RE = re.compile(r"flags(\d?)\.(\d+)\?")
FLAGS_RE_2 = re.compile(r"flags(\d?)\.(\d+)\?([\w<>.]+)")
FLAGS_RE_3 = re.compile(r"flags(\d?):#")
INT_RE = re.compile(r"int(\d+)")
CORE_TYPES = ["int", "long", "int128", "int256", "double", "bytes", "string", "Bool", "true"]
WARNING = """
# # # # # # # # # # # # # # # # # # # # # # # #
# !!! WARNING !!! #
# This is a generated file! #
# All changes made in this file will be lost! #
# # # # # # # # # # # # # # # # # # # # # # # #
""".strip()
# noinspection PyShadowingBuiltins
open = partial(open, encoding="utf-8")
types_to_constructors = {}
types_to_functions = {}
constructors_to_functions = {}
namespaces_to_types = {}
namespaces_to_constructors = {}
namespaces_to_functions = {}
class Combinator(NamedTuple):
section: str
qualname: str
namespace: str
name: str
id: str
has_flags: bool
args: List[Tuple[str, str]]
qualtype: str
typespace: str
type: str
def snake(s: str):
# https://stackoverflow.com/q/1175208
s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", s)
return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()
def camel(s: str):
return "".join([i[0].upper() + i[1:] for i in s.split("_")])
# noinspection PyShadowingBuiltins, PyShadowingNames
def get_type_hint(type: str) -> str:
is_flag = FLAGS_RE.match(type)
is_core = False
if is_flag:
type = type.split("?")[1]
if type in CORE_TYPES:
is_core = True
if type == "long" or "int" in type:
type = "int"
elif type == "double":
type = "float"
elif type == "string":
type = "str"
elif type in ["Bool", "true"]:
type = "bool"
else: # bytes and object
type = "bytes"
if type in ["Object", "!X"]:
return "TLObject"
if re.match("^vector", type, re.I):
is_core = True
sub_type = type.split("<")[1][:-1]
type = f"List[{get_type_hint(sub_type)}]"
if is_core:
return f"Optional[{type}] = None" if is_flag else type
else:
ns, name = type.split(".") if "." in type else ("", type)
type = f'"raw.base.' + ".".join([ns, name]).strip(".") + '"'
return f'{type}{" = None" if is_flag else ""}'
def sort_args(args):
"""Put flags at the end"""
args = args.copy()
flags = [i for i in args if FLAGS_RE.match(i[1])]
for i in flags:
args.remove(i)
for i in args[:]:
if re.match(r"flags\d?", i[0]) and i[1] == "#":
args.remove(i)
return args + flags
def remove_whitespaces(source: str) -> str:
"""Remove whitespaces from blank lines"""
lines = source.split("\n")
for i, _ in enumerate(lines):
if re.match(r"^\s+$", lines[i]):
lines[i] = ""
return "\n".join(lines)
def get_docstring_arg_type(t: str, is_list: bool = False, is_zerogram_type: bool = False):
if t in CORE_TYPES:
if t == "long":
return "``int`` ``64-bit``"
elif "int" in t:
size = INT_RE.match(t)
return f"``int`` ``{size.group(1)}-bit``" if size else "``int`` ``32-bit``"
elif t == "double":
return "``float`` ``64-bit``"
elif t == "string":
return "``str``"
elif t == "true":
return "``bool``"
else:
return f"``{t.lower()}``"
elif t == "TLObject" or t == "X":
return "Any object from :obj:`~zerogram.raw.types`"
elif t == "!X":
return "Any method from :obj:`~zerogram.raw.functions`"
elif t.lower().startswith("vector"):
return "List of " + get_docstring_arg_type(t.split("<", 1)[1][:-1], True)
else:
return f":obj:`{t} <zerogram.raw.base.{t}>`"
def get_references(t: str, kind: str):
if kind == "constructors":
t = constructors_to_functions.get(t)
elif kind == "types":
t = types_to_functions.get(t)
else:
raise ValueError("Invalid kind")
if t:
return "\n ".join(
f"- :obj:`{i} <zerogram.raw.functions.{i}>`"
for i in t
), len(t)
return None, 0
# noinspection PyShadowingBuiltins
def start(format: bool = False):
shutil.rmtree(DESTINATION_PATH / "types", ignore_errors=True)
shutil.rmtree(DESTINATION_PATH / "functions", ignore_errors=True)
shutil.rmtree(DESTINATION_PATH / "base", ignore_errors=True)
with open(HOME_PATH / "source/auth_key.tl") as f1, \
open(HOME_PATH / "source/sys_msgs.tl") as f2, \
open(HOME_PATH / "source/main_api.tl") as f3:
schema = (f1.read() + f2.read() + f3.read()).splitlines()
with open(HOME_PATH / "template/type.txt") as f1, \
open(HOME_PATH / "template/combinator.txt") as f2:
type_tmpl = f1.read()
combinator_tmpl = f2.read()
with open(NOTICE_PATH, encoding="utf-8") as f:
notice = []
for line in f.readlines():
notice.append(f"# {line}".strip())
notice = "\n".join(notice)
section = None
layer = None
combinators = []
for line in schema:
# Check for section changer lines
section_match = SECTION_RE.match(line)
if section_match:
section = section_match.group(1)
continue
# Save the layer version
layer_match = LAYER_RE.match(line)
if layer_match:
layer = layer_match.group(1)
continue
combinator_match = COMBINATOR_RE.match(line)
if combinator_match:
# noinspection PyShadowingBuiltins
qualname, id, qualtype = combinator_match.groups()
namespace, name = qualname.split(".") if "." in qualname else ("", qualname)
name = camel(name)
qualname = ".".join([namespace, name]).lstrip(".")
typespace, type = qualtype.split(".") if "." in qualtype else ("", qualtype)
type = camel(type)
qualtype = ".".join([typespace, type]).lstrip(".")
# Pingu!
has_flags = not not FLAGS_RE_3.findall(line)
args = ARGS_RE.findall(line)
# Fix arg name being "self" (reserved python keyword)
for i, item in enumerate(args):
if item[0] == "self":
args[i] = ("is_self", item[1])
combinator = Combinator(
section=section,
qualname=qualname,
namespace=namespace,
name=name,
id=f"0x{id}",
has_flags=has_flags,
args=args,
qualtype=qualtype,
typespace=typespace,
type=type
)
combinators.append(combinator)
for c in combinators:
qualtype = c.qualtype
if qualtype.startswith("Vector"):
qualtype = qualtype.split("<")[1][:-1]
d = types_to_constructors if c.section == "types" else types_to_functions
if qualtype not in d:
d[qualtype] = []
d[qualtype].append(c.qualname)
if c.section == "types":
key = c.namespace
if key not in namespaces_to_types:
namespaces_to_types[key] = []
if c.type not in namespaces_to_types[key]:
namespaces_to_types[key].append(c.type)
for k, v in types_to_constructors.items():
for i in v:
try:
constructors_to_functions[i] = types_to_functions[k]
except KeyError:
pass
# import json
# print(json.dumps(namespaces_to_types, indent=2))
for qualtype in types_to_constructors:
typespace, type = qualtype.split(".") if "." in qualtype else ("", qualtype)
dir_path = DESTINATION_PATH / "base" / typespace
module = type
if module == "Updates":
module = "UpdatesT"
os.makedirs(dir_path, exist_ok=True)
constructors = sorted(types_to_constructors[qualtype])
constr_count = len(constructors)
items = "\n ".join([f"- :obj:`{c} <zerogram.raw.types.{c}>`" for c in constructors])
docstring = f"This base type has {constr_count} constructor{'s' if constr_count > 1 else ''} available.\n\n"
docstring += f" Constructors:\n .. hlist::\n :columns: 2\n\n {items}"
references, ref_count = get_references(qualtype, "types")
if references:
docstring += f"\n\n See Also:\n This object can be returned by " \
f"{ref_count} method{'s' if ref_count > 1 else ''}:" \
f"\n\n .. hlist::\n :columns: 2\n\n " + references
with open(dir_path / f"{snake(module)}.py", "w") as f:
f.write(
type_tmpl.format(
notice=notice,
warning=WARNING,
docstring=docstring,
name=type,
qualname=qualtype,
types=", ".join([f"raw.types.{c}" for c in constructors]),
doc_name=snake(type).replace("_", "-")
)
)
for c in combinators:
sorted_args = sort_args(c.args)
arguments = (
(", *, " if c.args else "") +
(", ".join(
[f"{i[0]}: {get_type_hint(i[1])}"
for i in sorted_args]
) if sorted_args else "")
)
fields = "\n ".join(
[f"self.{i[0]} = {i[0]} # {i[1]}"
for i in sorted_args]
) if sorted_args else "pass"
docstring = ""
docstring_args = []
for i, arg in enumerate(sorted_args):
arg_name, arg_type = arg
is_optional = FLAGS_RE.match(arg_type)
flag_number = is_optional.group(1) if is_optional else -1
arg_type = arg_type.split("?")[-1]
docstring_args.append(
"{}{}: {}".format(
arg_name,
" (optional)".format(flag_number) if is_optional else "",
get_docstring_arg_type(arg_type, is_zerogram_type=c.namespace == "zerogram")
)
)
if c.section == "types":
docstring += f"This object is a constructor of the base type :obj:`~zerogram.raw.base.{c.qualtype}`.\n\n"
else:
docstring += f"Telegram API method.\n\n"
docstring += f" Details:\n - Layer: ``{layer}``\n - ID: ``{c.id}``\n\n"
if docstring_args:
docstring += " Parameters:\n " + "\n ".join(docstring_args)
else:
docstring += " **No parameters required.**"
if c.section == "functions":
docstring += "\n\n Returns:\n " + get_docstring_arg_type(c.qualtype)
else:
references, count = get_references(c.qualname, "constructors")
if references:
docstring += f"\n\n See Also:\n This object can be returned by " \
f"{count} method{'s' if count > 1 else ''}:" \
f"\n\n .. hlist::\n :columns: 2\n\n " + references
write_types = read_types = "" if c.has_flags else "# No flags\n "
for arg_name, arg_type in c.args:
flag = FLAGS_RE_2.match(arg_type)
if re.match(r"flags\d?", arg_name) and arg_type == "#":
write_flags = []
for i in c.args:
flag = FLAGS_RE_2.match(i[1])
if flag:
if arg_name != f"flags{flag.group(1)}":
continue
if flag.group(3) == "true" or flag.group(3).startswith("Vector"):
write_flags.append(f"{arg_name} |= (1 << {flag.group(2)}) if self.{i[0]} else 0")
else:
write_flags.append(
f"{arg_name} |= (1 << {flag.group(2)}) if self.{i[0]} is not None else 0")
write_flags = "\n ".join([
f"{arg_name} = 0",
"\n ".join(write_flags),
f"b.write(Int({arg_name}))\n "
])
write_types += write_flags
read_types += f"\n {arg_name} = Int.read(b)\n "
continue
if flag:
number, index, flag_type = flag.groups()
if flag_type == "true":
read_types += "\n "
read_types += f"{arg_name} = True if flags{number} & (1 << {index}) else False"
elif flag_type in CORE_TYPES:
write_types += "\n "
write_types += f"if self.{arg_name} is not None:\n "
write_types += f"b.write({flag_type.title()}(self.{arg_name}))\n "
read_types += "\n "
read_types += f"{arg_name} = {flag_type.title()}.read(b) if flags{number} & (1 << {index}) else None"
elif "vector" in flag_type.lower():
sub_type = arg_type.split("<")[1][:-1]
write_types += "\n "
write_types += f"if self.{arg_name}:\n "
write_types += "b.write(Vector(self.{}{}))\n ".format(
arg_name, f", {sub_type.title()}" if sub_type in CORE_TYPES else ""
)
read_types += "\n "
read_types += "{} = TLObject.read(b{}) if flags{} & (1 << {}) else []\n ".format(
arg_name, f", {sub_type.title()}" if sub_type in CORE_TYPES else "", number, index
)
else:
write_types += "\n "
write_types += f"if self.{arg_name} is not None:\n "
write_types += f"b.write(self.{arg_name}.write())\n "
read_types += "\n "
read_types += f"{arg_name} = TLObject.read(b) if flags{number} & (1 << {index}) else None\n "
else:
if arg_type in CORE_TYPES:
write_types += "\n "
write_types += f"b.write({arg_type.title()}(self.{arg_name}))\n "
read_types += "\n "
read_types += f"{arg_name} = {arg_type.title()}.read(b)\n "
elif "vector" in arg_type.lower():
sub_type = arg_type.split("<")[1][:-1]
write_types += "\n "
write_types += "b.write(Vector(self.{}{}))\n ".format(
arg_name, f", {sub_type.title()}" if sub_type in CORE_TYPES else ""
)
read_types += "\n "
read_types += "{} = TLObject.read(b{})\n ".format(
arg_name, f", {sub_type.title()}" if sub_type in CORE_TYPES else ""
)
else:
write_types += "\n "
write_types += f"b.write(self.{arg_name}.write())\n "
read_types += "\n "
read_types += f"{arg_name} = TLObject.read(b)\n "
slots = ", ".join([f'"{i[0]}"' for i in sorted_args])
return_arguments = ", ".join([f"{i[0]}={i[0]}" for i in sorted_args])
compiled_combinator = combinator_tmpl.format(
notice=notice,
warning=WARNING,
name=c.name,
docstring=docstring,
slots=slots,
id=c.id,
qualname=f"{c.section}.{c.qualname}",
arguments=arguments,
fields=fields,
read_types=read_types,
write_types=write_types,
return_arguments=return_arguments
)
directory = "types" if c.section == "types" else c.section
dir_path = DESTINATION_PATH / directory / c.namespace
os.makedirs(dir_path, exist_ok=True)
module = c.name
if module == "Updates":
module = "UpdatesT"
with open(dir_path / f"{snake(module)}.py", "w") as f:
f.write(compiled_combinator)
d = namespaces_to_constructors if c.section == "types" else namespaces_to_functions
if c.namespace not in d:
d[c.namespace] = []
d[c.namespace].append(c.name)
for namespace, types in namespaces_to_types.items():
with open(DESTINATION_PATH / "base" / namespace / "__init__.py", "w") as f:
f.write(f"{notice}\n\n")
f.write(f"{WARNING}\n\n")
for t in types:
module = t
if module == "Updates":
module = "UpdatesT"
f.write(f"from .{snake(module)} import {t}\n")
if not namespace:
f.write(f"from . import {', '.join(filter(bool, namespaces_to_types))}")
for namespace, types in namespaces_to_constructors.items():
with open(DESTINATION_PATH / "types" / namespace / "__init__.py", "w") as f:
f.write(f"{notice}\n\n")
f.write(f"{WARNING}\n\n")
for t in types:
module = t
if module == "Updates":
module = "UpdatesT"
f.write(f"from .{snake(module)} import {t}\n")
if not namespace:
f.write(f"from . import {', '.join(filter(bool, namespaces_to_constructors))}\n")
for namespace, types in namespaces_to_functions.items():
with open(DESTINATION_PATH / "functions" / namespace / "__init__.py", "w") as f:
f.write(f"{notice}\n\n")
f.write(f"{WARNING}\n\n")
for t in types:
module = t
if module == "Updates":
module = "UpdatesT"
f.write(f"from .{snake(module)} import {t}\n")
if not namespace:
f.write(f"from . import {', '.join(filter(bool, namespaces_to_functions))}")
with open(DESTINATION_PATH / "all.py", "w", encoding="utf-8") as f:
f.write(notice + "\n\n")
f.write(WARNING + "\n\n")
f.write(f"layer = {layer}\n\n")
f.write("objects = {")
for c in combinators:
f.write(f'\n {c.id}: "zerogram.raw.{c.section}.{c.qualname}",')
f.write('\n 0xbc799737: "zerogram.raw.core.BoolFalse",')
f.write('\n 0x997275b5: "zerogram.raw.core.BoolTrue",')
f.write('\n 0x1cb5c415: "zerogram.raw.core.Vector",')
f.write('\n 0x73f1f8dc: "zerogram.raw.core.MsgContainer",')
f.write('\n 0xae500895: "zerogram.raw.core.FutureSalts",')
f.write('\n 0x0949d9dc: "zerogram.raw.core.FutureSalt",')
f.write('\n 0x3072cfa1: "zerogram.raw.core.GzipPacked",')
f.write('\n 0x5bb8e511: "zerogram.raw.core.Message",')
f.write("\n}\n")
if "__main__" == __name__:
HOME_PATH = Path(".")
DESTINATION_PATH = Path("../../zerogram/raw")
NOTICE_PATH = Path("../../NOTICE")
    start(format=False)
import ast
import os
import re
import shutil
HOME = "compiler/docs"
DESTINATION = "docs/source/telegram"
zerogram_API_DEST = "docs/source/api"
FUNCTIONS_PATH = "zerogram/raw/functions"
TYPES_PATH = "zerogram/raw/types"
BASE_PATH = "zerogram/raw/base"
FUNCTIONS_BASE = "functions"
TYPES_BASE = "types"
BASE_BASE = "base"
def snek(s: str):
s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", s)
return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()
def generate(source_path, base):
all_entities = {}
def build(path, level=0):
last = path.split("/")[-1]
for i in os.listdir(path):
try:
if not i.startswith("__"):
build("/".join([path, i]), level=level + 1)
except NotADirectoryError:
with open(path + "/" + i, encoding="utf-8") as f:
p = ast.parse(f.read())
for node in ast.walk(p):
if isinstance(node, ast.ClassDef):
name = node.name
break
else:
continue
full_path = os.path.basename(path) + "/" + snek(name).replace("_", "-") + ".rst"
if level:
full_path = base + "/" + full_path
os.makedirs(os.path.dirname(DESTINATION + "/" + full_path), exist_ok=True)
with open(DESTINATION + "/" + full_path, "w", encoding="utf-8") as f:
f.write(
page_template.format(
title=name,
title_markup="=" * len(name),
full_class_path="zerogram.raw.{}".format(
".".join(full_path.split("/")[:-1]) + "." + name
)
)
)
if last not in all_entities:
all_entities[last] = []
all_entities[last].append(name)
build(source_path)
for k, v in sorted(all_entities.items()):
v = sorted(v)
entities = []
for i in v:
entities.append(snek(i).replace("_", "-"))
if k != base:
inner_path = base + "/" + k + "/index" + ".rst"
module = "zerogram.raw.{}.{}".format(base, k)
else:
for i in sorted(list(all_entities), reverse=True):
if i != base:
entities.insert(0, "{0}/index".format(i))
inner_path = base + "/index" + ".rst"
module = "zerogram.raw.{}".format(base)
with open(DESTINATION + "/" + inner_path, "w", encoding="utf-8") as f:
if k == base:
f.write(":tocdepth: 1\n\n")
k = "Raw " + k
f.write(
toctree.format(
title=k.title(),
title_markup="=" * len(k),
module=module,
entities="\n ".join(entities)
)
)
f.write("\n")
def zerogram_api():
def get_title_list(s: str) -> list:
return [i.strip() for i in [j.strip() for j in s.split("\n") if j] if i]
# Methods
categories = dict(
utilities="""
Utilities
start
stop
run
restart
add_handler
remove_handler
stop_transmission
export_session_string
set_parse_mode
""",
messages="""
Messages
send_message
forward_messages
copy_message
copy_media_group
send_photo
send_audio
send_document
send_sticker
send_video
send_animation
send_voice
send_video_note
send_media_group
send_location
send_venue
send_contact
send_cached_media
send_reaction
edit_message_text
edit_message_caption
edit_message_media
edit_message_reply_markup
edit_inline_text
edit_inline_caption
edit_inline_media
edit_inline_reply_markup
send_chat_action
delete_messages
get_messages
get_media_group
get_chat_history
get_chat_history_count
read_chat_history
send_poll
vote_poll
stop_poll
retract_vote
send_dice
search_messages
search_messages_count
search_global
search_global_count
download_media
stream_media
get_discussion_message
get_discussion_replies
get_discussion_replies_count
""",
chats="""
Chats
join_chat
leave_chat
ban_chat_member
unban_chat_member
restrict_chat_member
promote_chat_member
set_administrator_title
set_chat_photo
delete_chat_photo
set_chat_title
set_chat_description
set_chat_permissions
pin_chat_message
unpin_chat_message
unpin_all_chat_messages
get_chat
get_chat_member
get_chat_members
get_chat_members_count
get_dialogs
get_dialogs_count
set_chat_username
get_nearby_chats
archive_chats
unarchive_chats
add_chat_members
create_channel
create_group
create_supergroup
delete_channel
delete_supergroup
delete_user_history
set_slow_mode
mark_chat_unread
get_chat_event_log
get_chat_online_count
get_send_as_chats
set_send_as_chat
set_chat_protected_content
""",
users="""
Users
get_me
get_users
get_chat_photos
get_chat_photos_count
set_profile_photo
delete_profile_photos
set_username
update_profile
block_user
unblock_user
get_common_chats
""",
invite_links="""
Invite Links
get_chat_invite_link
export_chat_invite_link
create_chat_invite_link
edit_chat_invite_link
revoke_chat_invite_link
delete_chat_invite_link
get_chat_invite_link_joiners
get_chat_invite_link_joiners_count
get_chat_admin_invite_links
get_chat_admin_invite_links_count
get_chat_admins_with_invite_links
get_chat_join_requests
delete_chat_admin_invite_links
approve_chat_join_request
approve_all_chat_join_requests
decline_chat_join_request
decline_all_chat_join_requests
""",
contacts="""
Contacts
add_contact
delete_contacts
import_contacts
get_contacts
get_contacts_count
""",
password="""
Password
enable_cloud_password
change_cloud_password
remove_cloud_password
""",
bots="""
Bots
get_inline_bot_results
send_inline_bot_result
answer_callback_query
answer_inline_query
request_callback_answer
send_game
set_game_score
get_game_high_scores
set_bot_commands
get_bot_commands
delete_bot_commands
set_bot_default_privileges
get_bot_default_privileges
set_chat_menu_button
get_chat_menu_button
answer_web_app_query
""",
authorization="""
Authorization
connect
disconnect
initialize
terminate
send_code
resend_code
sign_in
sign_in_bot
sign_up
get_password_hint
check_password
send_recovery_code
recover_password
accept_terms_of_service
log_out
""",
advanced="""
Advanced
invoke
resolve_peer
save_file
"""
)
root = zerogram_API_DEST + "/methods"
shutil.rmtree(root, ignore_errors=True)
os.mkdir(root)
with open(HOME + "/template/methods.rst") as f:
template = f.read()
with open(root + "/index.rst", "w") as f:
fmt_keys = {}
for k, v in categories.items():
name, *methods = get_title_list(v)
fmt_keys.update({k: "\n ".join("{0} <{0}>".format(m) for m in methods)})
for method in methods:
with open(root + "/{}.rst".format(method), "w") as f2:
title = "{}()".format(method)
f2.write(title + "\n" + "=" * len(title) + "\n\n")
f2.write(".. automethod:: zerogram.Client.{}()".format(method))
functions = ["idle", "compose"]
for func in functions:
with open(root + "/{}.rst".format(func), "w") as f2:
title = "{}()".format(func)
f2.write(title + "\n" + "=" * len(title) + "\n\n")
f2.write(".. autofunction:: zerogram.{}()".format(func))
f.write(template.format(**fmt_keys))
# Types
categories = dict(
users_chats="""
Users & Chats
User
Chat
ChatPreview
ChatPhoto
ChatMember
ChatPermissions
ChatPrivileges
ChatInviteLink
ChatAdminWithInviteLinks
ChatEvent
ChatEventFilter
ChatMemberUpdated
ChatJoinRequest
ChatJoiner
Dialog
Restriction
""",
messages_media="""
Messages & Media
Message
MessageEntity
Photo
Thumbnail
Audio
Document
Animation
Video
Voice
VideoNote
Contact
Location
Venue
Sticker
Game
WebPage
Poll
PollOption
Dice
Reaction
VideoChatScheduled
VideoChatStarted
VideoChatEnded
VideoChatMembersInvited
WebAppData
""",
bot_keyboards="""
Bot keyboards
ReplyKeyboardMarkup
KeyboardButton
ReplyKeyboardRemove
InlineKeyboardMarkup
InlineKeyboardButton
LoginUrl
ForceReply
CallbackQuery
GameHighScore
CallbackGame
WebAppInfo
MenuButton
MenuButtonCommands
MenuButtonWebApp
MenuButtonDefault
SentWebAppMessage
""",
bot_commands="""
Bot commands
BotCommand
BotCommandScope
BotCommandScopeDefault
BotCommandScopeAllPrivateChats
BotCommandScopeAllGroupChats
BotCommandScopeAllChatAdministrators
BotCommandScopeChat
BotCommandScopeChatAdministrators
BotCommandScopeChatMember
""",
input_media="""
Input Media
InputMedia
InputMediaPhoto
InputMediaVideo
InputMediaAudio
InputMediaAnimation
InputMediaDocument
InputPhoneContact
""",
inline_mode="""
Inline Mode
InlineQuery
InlineQueryResult
InlineQueryResultCachedAudio
InlineQueryResultCachedDocument
InlineQueryResultCachedAnimation
InlineQueryResultCachedPhoto
InlineQueryResultCachedSticker
InlineQueryResultCachedVideo
InlineQueryResultCachedVoice
InlineQueryResultArticle
InlineQueryResultAudio
InlineQueryResultContact
InlineQueryResultDocument
InlineQueryResultAnimation
InlineQueryResultLocation
InlineQueryResultPhoto
InlineQueryResultVenue
InlineQueryResultVideo
InlineQueryResultVoice
ChosenInlineResult
""",
input_message_content="""
InputMessageContent
InputMessageContent
InputTextMessageContent
""",
authorization="""
Authorization
SentCode
TermsOfService
"""
)
root = zerogram_API_DEST + "/types"
shutil.rmtree(root, ignore_errors=True)
os.mkdir(root)
with open(HOME + "/template/types.rst") as f:
template = f.read()
with open(root + "/index.rst", "w") as f:
fmt_keys = {}
for k, v in categories.items():
name, *types = get_title_list(v)
fmt_keys.update({k: "\n ".join(types)})
# noinspection PyShadowingBuiltins
for type in types:
with open(root + "/{}.rst".format(type), "w") as f2:
title = "{}".format(type)
f2.write(title + "\n" + "=" * len(title) + "\n\n")
f2.write(".. autoclass:: zerogram.types.{}()\n".format(type))
f.write(template.format(**fmt_keys))
# Bound Methods
categories = dict(
message="""
Message
Message.click
Message.delete
Message.download
Message.forward
Message.copy
Message.pin
Message.unpin
Message.edit
Message.edit_text
Message.edit_caption
Message.edit_media
Message.edit_reply_markup
Message.reply
Message.reply_text
Message.reply_animation
Message.reply_audio
Message.reply_cached_media
Message.reply_chat_action
Message.reply_contact
Message.reply_document
Message.reply_game
Message.reply_inline_bot_result
Message.reply_location
Message.reply_media_group
Message.reply_photo
Message.reply_poll
Message.reply_sticker
Message.reply_venue
Message.reply_video
Message.reply_video_note
Message.reply_voice
Message.get_media_group
Message.react
""",
chat="""
Chat
Chat.archive
Chat.unarchive
Chat.set_title
Chat.set_description
Chat.set_photo
Chat.ban_member
Chat.unban_member
Chat.restrict_member
Chat.promote_member
Chat.get_member
Chat.get_members
Chat.add_members
Chat.join
Chat.leave
Chat.mark_unread
Chat.set_protected_content
Chat.unpin_all_messages
""",
user="""
User
User.archive
User.unarchive
User.block
User.unblock
""",
callback_query="""
Callback Query
CallbackQuery.answer
CallbackQuery.edit_message_text
CallbackQuery.edit_message_caption
CallbackQuery.edit_message_media
CallbackQuery.edit_message_reply_markup
""",
inline_query="""
InlineQuery
InlineQuery.answer
""",
chat_join_request="""
ChatJoinRequest
ChatJoinRequest.approve
ChatJoinRequest.decline
"""
)
root = zerogram_API_DEST + "/bound-methods"
shutil.rmtree(root, ignore_errors=True)
os.mkdir(root)
with open(HOME + "/template/bound-methods.rst") as f:
template = f.read()
with open(root + "/index.rst", "w") as f:
fmt_keys = {}
for k, v in categories.items():
name, *bound_methods = get_title_list(v)
fmt_keys.update({"{}_hlist".format(k): "\n ".join("- :meth:`~{}`".format(bm) for bm in bound_methods)})
fmt_keys.update(
{"{}_toctree".format(k): "\n ".join("{} <{}>".format(bm.split(".")[1], bm) for bm in bound_methods)})
# noinspection PyShadowingBuiltins
for bm in bound_methods:
with open(root + "/{}.rst".format(bm), "w") as f2:
title = "{}()".format(bm)
f2.write(title + "\n" + "=" * len(title) + "\n\n")
f2.write(".. automethod:: zerogram.types.{}()".format(bm))
f.write(template.format(**fmt_keys))
def start():
global page_template
global toctree
shutil.rmtree(DESTINATION, ignore_errors=True)
with open(HOME + "/template/page.txt", encoding="utf-8") as f:
page_template = f.read()
with open(HOME + "/template/toctree.txt", encoding="utf-8") as f:
toctree = f.read()
generate(TYPES_PATH, TYPES_BASE)
generate(FUNCTIONS_PATH, FUNCTIONS_BASE)
generate(BASE_PATH, BASE_BASE)
zerogram_api()
if "__main__" == __name__:
FUNCTIONS_PATH = "../../zerogram/raw/functions"
TYPES_PATH = "../../zerogram/raw/types"
BASE_PATH = "../../zerogram/raw/base"
HOME = "."
DESTINATION = "../../docs/source/telegram"
zerogram_API_DEST = "../../docs/source/api"
    start()
import csv
import os
import re
import shutil
HOME = "compiler/errors"
DEST = "zerogram/errors/exceptions"
NOTICE_PATH = "NOTICE"
def snek(s):
# https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case
s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", s)
return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()
def caml(s):
s = snek(s).split("_")
return "".join([str(i.title()) for i in s])
def start():
shutil.rmtree(DEST, ignore_errors=True)
os.makedirs(DEST)
files = [i for i in os.listdir("{}/source".format(HOME))]
with open(NOTICE_PATH, encoding="utf-8") as f:
notice = []
for line in f.readlines():
notice.append("# {}".format(line).strip())
notice = "\n".join(notice)
with open("{}/all.py".format(DEST), "w", encoding="utf-8") as f_all:
f_all.write(notice + "\n\n")
f_all.write("count = {count}\n\n")
f_all.write("exceptions = {\n")
count = 0
for i in files:
code, name = re.search(r"(\d+)_([A-Z_]+)", i).groups()
f_all.write(" {}: {{\n".format(code))
init = "{}/__init__.py".format(DEST)
if not os.path.exists(init):
with open(init, "w", encoding="utf-8") as f_init:
f_init.write(notice + "\n\n")
with open(init, "a", encoding="utf-8") as f_init:
f_init.write("from .{}_{} import *\n".format(name.lower(), code))
with open("{}/source/{}".format(HOME, i), encoding="utf-8") as f_csv, \
open("{}/{}_{}.py".format(DEST, name.lower(), code), "w", encoding="utf-8") as f_class:
reader = csv.reader(f_csv, delimiter="\t")
super_class = caml(name)
name = " ".join([str(i.capitalize()) for i in re.sub(r"_", " ", name).lower().split(" ")])
sub_classes = []
f_all.write(" \"_\": \"{}\",\n".format(super_class))
for j, row in enumerate(reader):
if j == 0:
continue
count += 1
if not row: # Row is empty (blank line)
continue
error_id, error_message = row
sub_class = caml(re.sub(r"_X", "_", error_id))
sub_class = re.sub(r"^2", "Two", sub_class)
sub_class = re.sub(r" ", "", sub_class)
f_all.write(" \"{}\": \"{}\",\n".format(error_id, sub_class))
sub_classes.append((sub_class, error_id, error_message))
with open("{}/template/class.txt".format(HOME), "r", encoding="utf-8") as f_class_template:
class_template = f_class_template.read()
with open("{}/template/sub_class.txt".format(HOME), "r", encoding="utf-8") as f_sub_class_template:
sub_class_template = f_sub_class_template.read()
class_template = class_template.format(
notice=notice,
super_class=super_class,
code=code,
docstring='"""{}"""'.format(name),
sub_classes="".join([sub_class_template.format(
sub_class=k[0],
super_class=super_class,
id="\"{}\"".format(k[1]),
docstring='"""{}"""'.format(k[2])
) for k in sub_classes])
)
f_class.write(class_template)
f_all.write(" },\n")
f_all.write("}\n")
with open("{}/all.py".format(DEST), encoding="utf-8") as f:
content = f.read()
with open("{}/all.py".format(DEST), "w", encoding="utf-8") as f:
f.write(re.sub("{count}", str(count), content))
if "__main__" == __name__:
HOME = "."
DEST = "../../zerogram/errors/exceptions"
NOTICE_PATH = "../../NOTICE"
start() | zerogram | /zerogram-1.0.0.tar.gz/zerogram-1.0.0/compiler/errors/compiler.py | compiler.py |
.. -*- rst -*-
ZeroGroup
=========
Package Description
-------------------
ZeroGroup is a simple wrapper class for managing multiple ZeroMQ sockets and
streams.
Installation
------------
The package may be installed as follows: ::
pip install zerogroup
Usage Examples
--------------
See the `examples` directory for demos of how to use the class.
Development
-----------
The latest release of the package may be obtained from
`Github <https://github.com/lebedov/zerogroup>`_.
Author
------
See the included AUTHORS.rst file for more information.
License
-------
This software is licensed under the
`BSD License <http://www.opensource.org/licenses/bsd-license.php>`_.
See the included LICENSE.rst file for more information.
| zerogroup | /zerogroup-0.1.0.tar.gz/zerogroup-0.1.0/README.rst | README.rst |
import os
import shutil
import sys
import tempfile
import tarfile
import optparse
import subprocess
import platform
import textwrap
from distutils import log
try:
from site import USER_SITE
except ImportError:
USER_SITE = None
DEFAULT_VERSION = "2.2"
DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/"
def _python_cmd(*args):
"""
Return True if the command succeeded.
"""
args = (sys.executable,) + args
return subprocess.call(args) == 0
def _install(tarball, install_args=()):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn('Extracting in %s', tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()
# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn('Now working in %s', subdir)
# installing
log.warn('Installing Setuptools')
if not _python_cmd('setup.py', 'install', *install_args):
log.warn('Something went wrong during the installation.')
log.warn('See the error message above.')
# exitcode will be 2
return 2
finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)
def _build_egg(egg, tarball, to_dir):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn('Extracting in %s', tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()
# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn('Now working in %s', subdir)
# building an egg
log.warn('Building a Setuptools egg in %s', to_dir)
_python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir)
finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)
# returning the result
log.warn(egg)
if not os.path.exists(egg):
raise IOError('Could not build the egg.')
def _do_download(version, download_base, to_dir, download_delay):
egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg'
% (version, sys.version_info[0], sys.version_info[1]))
if not os.path.exists(egg):
tarball = download_setuptools(version, download_base,
to_dir, download_delay)
_build_egg(egg, tarball, to_dir)
sys.path.insert(0, egg)
# Remove previously-imported pkg_resources if present (see
# https://bitbucket.org/pypa/setuptools/pull-request/7/ for details).
if 'pkg_resources' in sys.modules:
del sys.modules['pkg_resources']
import setuptools
setuptools.bootstrap_install_from = egg
def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
to_dir=os.curdir, download_delay=15):
to_dir = os.path.abspath(to_dir)
rep_modules = 'pkg_resources', 'setuptools'
imported = set(sys.modules).intersection(rep_modules)
try:
import pkg_resources
except ImportError:
return _do_download(version, download_base, to_dir, download_delay)
try:
pkg_resources.require("setuptools>=" + version)
return
except pkg_resources.DistributionNotFound:
return _do_download(version, download_base, to_dir, download_delay)
except pkg_resources.VersionConflict as VC_err:
if imported:
msg = textwrap.dedent("""
The required version of setuptools (>={version}) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U setuptools'.
(Currently using {VC_err.args[0]!r})
""").format(VC_err=VC_err, version=version)
sys.stderr.write(msg)
sys.exit(2)
# otherwise, reload ok
del pkg_resources, sys.modules['pkg_resources']
return _do_download(version, download_base, to_dir, download_delay)
def _clean_check(cmd, target):
"""
Run the command to download target. If the command fails, clean up before
re-raising the error.
"""
try:
subprocess.check_call(cmd)
except subprocess.CalledProcessError:
if os.access(target, os.F_OK):
os.unlink(target)
raise
def download_file_powershell(url, target):
"""
Download the file at url to target using Powershell (which will validate
trust). Raise an exception if the command cannot complete.
"""
target = os.path.abspath(target)
cmd = [
'powershell',
'-Command',
"(new-object System.Net.WebClient).DownloadFile(%(url)r, %(target)r)" % vars(),
]
_clean_check(cmd, target)
def has_powershell():
if platform.system() != 'Windows':
return False
cmd = ['powershell', '-Command', 'echo test']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True
download_file_powershell.viable = has_powershell
def download_file_curl(url, target):
cmd = ['curl', url, '--silent', '--output', target]
_clean_check(cmd, target)
def has_curl():
cmd = ['curl', '--version']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True
download_file_curl.viable = has_curl
def download_file_wget(url, target):
cmd = ['wget', url, '--quiet', '--output-document', target]
_clean_check(cmd, target)
def has_wget():
cmd = ['wget', '--version']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True
download_file_wget.viable = has_wget
def download_file_insecure(url, target):
"""
Use Python to download the file, even though it cannot authenticate the
connection.
"""
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
src = dst = None
try:
src = urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = src.read()
dst = open(target, "wb")
dst.write(data)
finally:
if src:
src.close()
if dst:
dst.close()
download_file_insecure.viable = lambda: True
def get_best_downloader():
downloaders = [
download_file_powershell,
download_file_curl,
download_file_wget,
download_file_insecure,
]
for dl in downloaders:
if dl.viable():
return dl
def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
to_dir=os.curdir, delay=15,
downloader_factory=get_best_downloader):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download
attempt.
``downloader_factory`` should be a function taking no arguments and
returning a function for downloading a URL to a target.
"""
# making sure we use the absolute path
to_dir = os.path.abspath(to_dir)
tgz_name = "setuptools-%s.tar.gz" % version
url = download_base + tgz_name
saveto = os.path.join(to_dir, tgz_name)
if not os.path.exists(saveto): # Avoid repeated downloads
log.warn("Downloading %s", url)
downloader = downloader_factory()
downloader(url, saveto)
return os.path.realpath(saveto)
def _extractall(self, path=".", members=None):
"""Extract all members from the archive to the current working
directory and set owner, modification time and permissions on
directories afterwards. `path' specifies a different directory
to extract to. `members' is optional and must be a subset of the
list returned by getmembers().
"""
import copy
import operator
from tarfile import ExtractError
directories = []
if members is None:
members = self
for tarinfo in members:
if tarinfo.isdir():
# Extract directories with a safe mode.
directories.append(tarinfo)
tarinfo = copy.copy(tarinfo)
tarinfo.mode = 448 # decimal for oct 0700
self.extract(tarinfo, path)
# Reverse sort directories.
directories.sort(key=operator.attrgetter('name'), reverse=True)
# Set correct owner, mtime and filemode on directories.
for tarinfo in directories:
dirpath = os.path.join(path, tarinfo.name)
try:
self.chown(tarinfo, dirpath)
self.utime(tarinfo, dirpath)
self.chmod(tarinfo, dirpath)
except ExtractError as e:
if self.errorlevel > 1:
raise
else:
self._dbg(1, "tarfile: %s" % e)
def _build_install_args(options):
"""
Build the arguments to 'python setup.py install' on the setuptools package
"""
return ['--user'] if options.user_install else []
def _parse_args():
"""
Parse the command line for options
"""
parser = optparse.OptionParser()
parser.add_option(
'--user', dest='user_install', action='store_true', default=False,
help='install in user site package (requires Python 2.6 or later)')
parser.add_option(
'--download-base', dest='download_base', metavar="URL",
default=DEFAULT_URL,
help='alternative URL from where to download the setuptools package')
parser.add_option(
'--insecure', dest='downloader_factory', action='store_const',
const=lambda: download_file_insecure, default=get_best_downloader,
help='Use internal, non-validating downloader'
)
options, args = parser.parse_args()
# positional arguments are ignored
return options
def main(version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
options = _parse_args()
tarball = download_setuptools(download_base=options.download_base,
downloader_factory=options.downloader_factory)
return _install(tarball, _build_install_args(options))
if __name__ == '__main__':
sys.exit(main()) | zerogroup | /zerogroup-0.1.0.tar.gz/zerogroup-0.1.0/ez_setup.py | ez_setup.py |
.. -*- rst -*-
License
=======
Copyright (c) 2014, Lev Givon.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Lev Givon nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| zerogroup | /zerogroup-0.1.0.tar.gz/zerogroup-0.1.0/LICENSE.rst | LICENSE.rst |
import itertools, logging, time
import threading as th
import multiprocessing as mp
import zmq
import zerogroup
PORT_DATA = 6000
PORT_CTRL = 6001
def a_func():
logger = logging.getLogger('a')
g = zerogroup.ZeroGroup()
g.add(zmq.ROUTER, 'tcp://*:%i' % PORT_DATA, True, {}, 'data')
g.add(zmq.DEALER, 'tcp://localhost:%i' % PORT_CTRL, False,
{zmq.IDENTITY: 'a'}, 'ctrl', True)
g.create()
def handler(msg):
if msg[0] == 'quit':
g.flush('ctrl')
g.stop_loop()
handler.running = False
logger.info('recv ctrl quit')
handler.running = True
g.on_recv('ctrl', handler)
g.start_loop(True)
c = itertools.count()
while True:
data = str(c.next())
g.send_multipart('data', ['b', data])
g.send('data', data)
logger.info('sent data %s' % data)
_, data = g.recv_multipart('data')
logger.info('recv data %s' % data)
# Send null data when exiting to prevent destination node from
# hanging on recv:
if not handler.running:
for i in xrange(5):
g.send_multipart('data', ['b', 'NULL'])
break
def b_func():
logger = logging.getLogger('b')
g = zerogroup.ZeroGroup()
g.add(zmq.DEALER, 'tcp://localhost:%i' % PORT_DATA, False,
{zmq.IDENTITY: 'b'}, 'data')
g.add(zmq.DEALER, 'tcp://localhost:%i' % PORT_CTRL, False,
{zmq.IDENTITY: 'b'}, 'ctrl', True)
g.create()
def handler(msg):
if msg[0] == 'quit':
g.flush('ctrl')
g.stop_loop()
handler.running = False
logger.info('recv ctrl quit')
handler.running = True
g.on_recv('ctrl', handler)
g.start_loop(True)
c = itertools.count()
while True:
data = str(c.next())
g.send('data', data)
logger.info('sent data %s' % data)
time.sleep(0.01)
data = g.recv('data')
logger.info('recv data %s' % data)
# Send null data when exiting to prevent destination node from
# hanging on recv:
if not handler.running:
for i in xrange(5):
g.send('data', 'NULL')
break
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(name)s %(levelname)s %(message)s')
ctx = zmq.Context()
sock = ctx.socket(zmq.ROUTER)
sock.bind('tcp://*:%i' % PORT_CTRL)
a = mp.Process(target=a_func)
a.start()
b = mp.Process(target=b_func)
b.start()
time.sleep(2)
for i in ['a', 'b']:
sock.send_multipart([i, 'quit']) | zerogroup | /zerogroup-0.1.0.tar.gz/zerogroup-0.1.0/examples/sync_demo.py | sync_demo.py |
import textwrap
import zerohash
class ZerohashError(Exception):
def __init__(
self,
messages=None,
errors=None,
body=None,
status_code=None,
headers=None,
):
# self._messages = messages
# self.errors = errors
if headers is None:
headers = {}
if body is None:
body = {}
self._body = body
self.status_code = status_code
self.headers = headers or {}
self.request_id = self.headers.get("X-Request-Id", None)
self.messages = self.construct_messages()
self.errors = self.construct_errors()
def construct_messages(self):
# if self._body is None or "messages" not in self._body:
# return self._messages
# messages = getattr(self._body, "messages", self._messages)
if isinstance(self._body, str):
return [self._body]
messages = self._body.get("messages", [])
if isinstance(messages, str):
return [messages]
return messages
def construct_errors(self):
errors = []
# Append an error object for each message
for msg in self.messages:
errors.append(
zerohash.resources.error_object.ErrorObject.construct_from(
dict(message=msg), zerohash.credentials
)
)
# Now append an error object for each item in "errors" or the string in "error" in the body
for msg in self._body.get("errors", []):
errors.append(
zerohash.resources.error_object.ErrorObject.construct_from(
dict(message=msg), zerohash.credentials
)
)
# if hasattr(self._body, "error"):
if self._body.get("error"):
errors.append(
zerohash.resources.error_object.ErrorObject.construct_from(
dict(message=self._body["error"]), zerohash.credentials
)
)
return errors
    def __str__(self):
        # Work on a copy so repeated str() calls do not mutate self.messages
        msg = list(self.messages)
        for e in self.errors:
            if isinstance(e, dict):
                msg.append(e.get("message", e))
        if self.request_id:
            return f"Request {self.request_id}: {msg}"
        elif msg:
            return str(msg)
        else:
            return "Unknown Zero Hash Error"
class APIError(ZerohashError):
"""Used for 5XX errors received from the Zero Hash API"""
pass
class ClientError(ZerohashError):
"""Used for 4XX errors received from the Zero Hash API"""
pass
class UnknownError(ZerohashError):
pass
class AuthenticationError(ZerohashError):
pass
class MalformedAuthorizationError(ZerohashError):
pass
class APIConnectionError(ZerohashError):
def __init__(
self,
message_body,
http_body=None,
http_status=None,
json_body=None,
headers=None,
code=None,
should_retry=False,
):
        message_body = textwrap.fill(message_body)
        # Parent signature is (messages, errors, body, status_code, headers); map explicitly.
        super().__init__(messages=message_body, body=json_body or http_body or message_body,
                         status_code=http_status, headers=headers)
        self.should_retry = should_retry | zerohash-python | /zerohash-python-0.0.9.tar.gz/zerohash-python-0.0.9/zerohash/error.py | error.py
import hashlib
import hmac
import json
from base64 import b64decode, b64encode
from datetime import datetime
from logging import getLogger
from typing import Any, Dict, Optional
from urllib.parse import urljoin
import requests
from dotenv import find_dotenv, load_dotenv
logger = getLogger(__name__)
import os
# NB: THESE CREDENTIALS SHOULD NOT BE STORED IN PLAINTEXT
# Keys here are kept in plaintext for the purposes of demonstration
# We encourage you to encrypt your keys and decrypt them only when being used
load_dotenv(find_dotenv())
URL_BASE = "api.cert.zerohash.com"
HTTP_BASE = "https://" + URL_BASE
API_PUBLIC_KEY = os.environ["PUBLIC_KEY"] # "usjHuLksaeBXWSsa8uU7ES"
API_PRIVATE_KEY = os.environ[
"PRIVATE_KEY"
] # 2mC4ZvVd4goRkuJm+rjr9byUiaUW1b6tVN4xy9QXNSE=
PASSPHRASE = os.environ["PASSPHRASE"] # testingisgreat
def sign(
api_key: str, method: str, route: str, json_body: str, timestamp: str
) -> bytes:
"""Given a key and data, create and sign a payload.
:param api_key: Key to sign the message with
:param method: HTTP method
:param route: Relative route. EX. /fills
:param json_body: JSON as a string. Usually created via json.dumps(dict)
:param timestamp: Unix Epoch time as a string
:return: Base64 encoded digest
"""
msg = bytes(timestamp + method + route + json_body, encoding="utf-8")
hm = hmac.new(key=b64decode(api_key), msg=msg, digestmod=hashlib.sha256)
return b64encode(hm.digest())
def headers() -> Dict[str, Any]:
"""Create a header template for use in HTTP requests."""
return {
"X-SCX-API-KEY": API_PUBLIC_KEY,
"X-SCX-SIGNED": "", # Put here to make sure we alway send something
# The datetime.timestamp function is available only in Python 3.3+
"X-SCX-TIMESTAMP": str(int(datetime.now().timestamp())), # Unix Epoch
"X-SCX-PASSPHRASE": PASSPHRASE,
}
def make_seed_request(
method: str, url: str, body: Optional[Dict[str, str]] = None
) -> requests.Response:
"""Create and send an HTTP request with a signature to the Zero Hash API.
:param method: HTTP method
:param url: Relative route. EX. /fills
:param body: Dictionary for serializing into the JSON body of the request. For GET requests,
this can be omitted or set to an empty dict. Nothing will be sent, but it is
required for the signature.
:return: requests.Response object
"""
if body is None:
body = {}
h = headers()
json_body = json.dumps(body, separators=(",", ":"))
h["X-SCX-SIGNED"] = sign(
API_PRIVATE_KEY, method, url, json_body, h["X-SCX-TIMESTAMP"]
)
args = {"method": method, "url": urljoin(HTTP_BASE, url)}
logger.info("Making {} request to {}".format(method, urljoin(URL_BASE, url)))
if body:
args["data"] = json_body
h["Content-Type"] = "application/json"
logger.debug(json_body)
args["headers"] = h
print(args)
# Since we don't know if it's a GET or POST, use the generic request function and create an
# args dict so that we can conditionally pass data/JSON
return requests.request(**args)
def make_seed_request_hardcoded():
body = {}
h = {
"X-SCX-API-KEY": "dhMsj1QcGP3TsepKPRBRcW",
"X-SCX-SIGNED": "", # Put here to make sure we alway send something
# The datetime.timestamp function is available only in Python 3.3+
"X-SCX-TIMESTAMP": "1633116917",
"X-SCX-PASSPHRASE": "thisiscool",
} | zerohash-python | /zerohash-python-0.0.9.tar.gz/zerohash-python-0.0.9/zerohash/dummy.py | dummy.py |
import logging
import textwrap
import threading
import time
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from zerohash import error
logging.basicConfig(level=logging.DEBUG)
try:
import requests
except ImportError:
requests = None
def new_default_http_client(*args, **kwargs):
if requests:
return RequestsClient(*args, **kwargs)
else:
raise Exception(
"Requests must be installed as it is currently the only option for an HTTP client."
)
class HttpClient:
MAX_DELAY = 2 # in seconds
INITIAL_DELAY = 0.5 # in seconds
def __init__(self):
self._thread_local = threading.local()
def request_with_retries(self, method, url, headers, post_data=None, proxies=None):
num_retries = 0
while True:
try:
response = self.request(method, url, headers, post_data)
connection_error = None
except error.APIConnectionError as e:
response = None
connection_error = e
if self._should_retry(response, connection_error, num_retries):
num_retries += 1
sleep_time = self._sleep_time_seconds(num_retries, response)
time.sleep(sleep_time)
else:
if response is not None:
return response
else:
raise connection_error
def request(self, method, url, headers, post_data=None):
raise NotImplementedError(
f"HttpClient sublcass must implement `request` method."
)
@property
def _max_network_retries(self):
from zerohash import max_network_retries
return max_network_retries
def _should_retry(self, response, api_connection_error, num_retries):
if num_retries > self._max_network_retries:
return False
if response is None:
# TODO: we eventually want the subclasses to handle this. for now, default to not retry on connection
# issues/timeouts.
return False
content, status_code, rheaders = response
if status_code >= 500:
print("should retry...")
return True
return False
def _sleep_time_seconds(self, num_retries, response=None):
sleep_seconds = min(
self.MAX_DELAY, self.INITIAL_DELAY * (2 ** (num_retries - 1))
) # Double delay with each retry until we reach the max delay
return sleep_seconds
class RequestsClient(HttpClient):
name = "requests"
def __init__(self, timeout=30):
self._timeout = timeout
self._session = None
super().__init__()
def request(self, method, url, headers, post_data=None):
if getattr(self._thread_local, "session", None) is None:
self._thread_local.session = self._session or requests.Session()
self._thread_local.session.keep_alive = (
False # TODO: remove this for performance improvement
)
retry = Retry(
total=self._max_network_retries, connect=5, backoff_factor=0.1
)
adapter = HTTPAdapter(max_retries=retry)
self._thread_local.session.mount("http://", adapter)
self._thread_local.session.mount("https://", adapter)
try:
res = self._thread_local.session.request(
method,
url,
headers=headers,
data=post_data,
timeout=self._timeout,
verify=True, # TODO: if all else fails, set this to False
)
except Exception as e:
self._handle_request_error(e)
return res.content, res.status_code, res.headers
def _handle_request_error(self, e):
""""""
if isinstance(e, requests.exceptions.ConnectionError):
msg = (
"Request [Connection Error] detected communicating with [Zero Hash API]"
)
err = f"{type(e).__name__}: {str(e)}"
should_retry = True
        elif isinstance(e, requests.Timeout):
msg = "Request [Timeout] detected communicating with [Zero Hash API]"
err = f"{type(e).__name__}: {str(e)}"
should_retry = True
        elif isinstance(e, requests.RequestException):
msg = "Request [Exception] detected communicating with [Zero Hash API]"
err = f"{type(e).__name__}: {str(e)}"
should_retry = True
else:
msg = (
"Unexpected connection error communicating with Zero Hash. "
"There is probably a configuration issue locally."
)
err = "A %s was raised" % (type(e).__name__,)
if str(e):
err += " with error message %s" % (str(e),)
else:
err += " with no error message."
should_retry = False
msg = textwrap.fill(msg) + "\n\n(Network error: %s)" % (err,)
raise error.APIConnectionError(msg, should_retry=should_retry) | zerohash-python | /zerohash-python-0.0.9.tar.gz/zerohash-python-0.0.9/zerohash/http_client.py | http_client.py |
import datetime
import hashlib
import hmac
import json
import urllib
import uuid
from base64 import b64decode, b64encode
from urllib.parse import urljoin
import zerohash
from zerohash import error, error_classes, http_client
from zerohash.zerohash_response import ZerohashResponse
class APIRequestor:
def __init__(
self,
credentials=None,
api_base=None,
client=None,
):
self.api_base = api_base or zerohash.api_base
self.credentials = credentials or zerohash.credentials
if client:
self._client = client
elif zerohash.default_http_client:
self._client = zerohash.default_http_client
else:
"""If no default http client is set, set one to avoid creating one for every request"""
zerohash.default_http_client = http_client.new_default_http_client()
self._client = zerohash.default_http_client
def handle_error_response(self, rbody, rcode, resp, rheaders):
try:
err = self.specific_api_error(rbody, rcode, resp, rheaders)
except (KeyError, TypeError):
raise error.APIError(
f"Invalid response from Zero Hash API: {rbody}. HTTP response code {rcode}",
rbody,
rcode,
resp,
)
raise err
def specific_api_error(self, rbody, rcode, resp, rheaders):
# api_error_code = resp["code"]
if 400 <= rcode < 500:
return zerohash.error.ClientError(
body=resp, # resp is serialized, rbody is a bytes string of the body b""
headers=rheaders,
status_code=rcode,
)
        elif rcode >= 500:
return zerohash.error.APIError(headers=rheaders, status_code=rcode)
else:
return zerohash.error.UnknownError(body=resp, status_code=rcode)
# try:
# return error_classes.ERROR_CLASSES[api_error_code](
# code=resp["code"],
# body=resp,
# status_code=rcode,
# headers=rheaders,
# )
# except KeyError:
# return error.UnknownError(body=resp, status_code=rcode)
def interpret_response(self, rbody, rcode, rheaders):
try:
resp = ZerohashResponse(rbody, rcode, rheaders)
except Exception:
raise Exception(
f"Invalid response from API: {rbody} (HTTP response code: {rcode})",
rbody,
rcode,
rheaders,
)
if not 200 <= rcode < 300:
self.handle_error_response(rbody, rcode, resp.data, rheaders)
return resp
def _sign(
self, private_key: str, method: str, route: str, json_body: str, timestamp: str
) -> bytes:
"""Given a key and data, create and sign a payload.
:param api_key: Key to sign the message with
:param method: HTTP method
:param route: Relative route. EX. /fills
:param json_body: JSON as a string. Usually created via json.dumps(dict)
:param timestamp: Unix Epoch time as a string
:return: Base64 encoded digest
"""
msg = bytes(timestamp + method.upper() + route + json_body, encoding="utf-8")
hm = hmac.new(key=b64decode(private_key), msg=msg, digestmod=hashlib.sha256)
return b64encode(hm.digest())
def request_headers(self, credentials, method, route, json_body):
referrer = "https://api.getlinus.io"
user_agent = "Zerohash Python Library"
timestamp = str(int(datetime.datetime.now().timestamp()))
headers = {
"X-SCX-API-KEY": credentials.public_key,
"X-SCX-SIGNED": self._sign(
private_key=credentials.private_key,
method=method,
route=route,
json_body=json_body,
timestamp=timestamp,
),
"X-SCX-TIMESTAMP": timestamp, # Unix Epoch
"X-SCX-PASSPHRASE": credentials.passphrase,
"Content-Type": "application/json",
}
return headers
def request(self, method, url, params={}, headers=None):
rbody, rcode, rheaders = self.request_raw(method, url, params)
resp = self.interpret_response(rbody, rcode, rheaders)
return resp
def request_raw(self, method, url, params={}, supplied_headers=None):
if self.credentials:
credentials = self.credentials
else:
from zerohash import credentials
credentials = credentials
if credentials is None:
raise error.MalformedAuthorizationError("Missing credentials")
abs_url = "%s%s" % (self.api_base, url)
if method.lower() in (
"post",
"put",
"patch",
):
post_data = json.dumps(params, separators=(",", ":"))
elif method.lower() in (
"get",
"delete",
):
post_data = json.dumps({}, separators=(",", ":"))
if params:
new_path = "?" + urllib.parse.urlencode(params)
abs_url = abs_url + new_path
url = url + new_path
# abs_url = abs_url + "?" + urllib.parse.urlencode(params)
headers = self.request_headers(
credentials, method, route=url, json_body=post_data
)
rbody, rcode, rheaders = self._client.request_with_retries(
method, abs_url, headers, post_data
)
return rbody, rcode, rheaders | zerohash-python | /zerohash-python-0.0.9.tar.gz/zerohash-python-0.0.9/zerohash/api_requestor.py | api_requestor.py |
# zeroinger
A small toolkit for boosting coding efficiency and effectively extending a programmer's lifespan
## Table of Contents
* Installation
    * Dependencies
    * Installing via pip3
* Usage
    * Time utilities
    * Excel/CSV reading and writing
    * Configuration file reading
    * Text file reading and writing
* Changelog
## Installation
### Dependencies
* python>=3.6.0
* logzero==1.5.0
### Installing via pip3
```
pip3 install --upgrade zeroinger
```
## Usage
### Time utilities
#### StopWatch
```
from zeroinger.time.stopwatch import StopWatch
import time
# Create an instance
timer = StopWatch.create_instance()
time.sleep(1)
# Get the time elapsed since the start
print('elapsed', timer.duration())
# Add a timing snapshot
cost = timer.add_snapshot()
print('snapshot 1 at', cost)
time.sleep(1)
cost = timer.add_snapshot()
print('snapshot 2 at', cost)
snapshot_list = timer.list_snapshot()
print('all snapshots at', snapshot_list)
# Reset the timer
timer.reset()
#--------------------------------
elapsed 1004
snapshot 1 at 1005
snapshot 2 at 2006
all snapshots at [1005, 2006]
```
### Excel/CSV
#### XLSX
##### Reading an Excel file
```
import os
from zeroinger.excel.xlsx import XLSX
test_read_file_path = os.path.join(os.path.dirname(__file__), 'read_test_file.xlsx')
data = XLSX.read_dict_sheet(test_read_file_path, 0)
print(data)
#--------------
[{'列1': 1, '列2': 4, '列3': 7}, {'列1': 2, '列2': 5, '列3': 8}, {'列1': 3, '列2': 6, '列3': 9}]
```
##### Writing an Excel file
```
import os
from zeroinger.excel.xlsx import XLSX
golden = [{'列1': 1, '列2': 4, '列3': 7}, {'列1': 2, '列2': 5, '列3': 8}, {'列1': 3, '列2': 6, '列3': 9}]
test_write_file_path = os.path.join(os.path.dirname(__file__), 'write_test_file.xlsx')
XLSX.write_dict_sheet(test_write_file_path, golden)
```
### Compressed file reading and writing
## Changelog
- 2020/01/06 Added a method for reading compressed files | zeroinger | /zeroinger-1.2.8.tar.gz/zeroinger-1.2.8/README.md | README.md
==========================
zerokspot.recipe.distutils
==========================
This recipe offers a simple way to install dependencies that are only
available as distutils-archives::
[buildout]
parts = part
[part]
recipe = zerokspot.recipe.distutils
urls =
http://domain.com/file.tar.gz
This will install the package into ``${buildout:parts-directory}/part/`` and
make its library components available via ``${part:extra-path}``.
Options
-------
urls
A list of packages (one per line) that should be installed into
``${buildout:parts-directory}/<partname>``.
Additionally provided variables
-------------------------------
location
Points to the prefix of the installed package
extra-path
Points to the site-package-directory within the prefix
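These variables can be referenced from other parts of a buildout. For
instance, a hypothetical ``zc.recipe.egg`` based part could put the installed
libraries onto its Python path roughly like this (part and egg names are
purely illustrative)::
    [scripts]
    recipe = zc.recipe.egg
    eggs = somepackage
    extra-paths = ${part:extra-path}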
Disclaimer
----------
Function-wise this recipe is inspired by Kevin Teague's
`collective.recipe.distutils`_, but solves some aspects a little bit differently.
For instance, this recipe uses setup.py's ``--prefix``-argument in order to
also support the installation of packages that have a script-component. It
also distinguishes between ``${part:location}`` and ``${part:extra-path}``
with the first representing the prefix-directory while the latter pointing
to the respective "site-packages"-directory.
.. _`collective.recipe.distutils`: http://pypi.python.org/pypi/collective.recipe.distutils/0.1
| zerokspot.recipe.distutils | /zerokspot.recipe.distutils-0.1.2.tar.gz/zerokspot.recipe.distutils-0.1.2/README.rst | README.rst |
import os, sys, site, subprocess, shutil, tempfile, urllib2, logging, string
import os.path
import zc.buildout
import setuptools.archive_util
import distutils.core
class Recipe(object):
def __init__(self, buildout, name, options):
self.buildout, self.name, self.options = buildout, name, options
self.logger = logging.getLogger(self.name)
options['location'] = os.path.join(
buildout['buildout']['parts-directory'], name)
self.location = options['location']
buildout['buildout'].setdefault('downloads-cache',
os.path.join(buildout['buildout']['directory'], 'downloads'))
self.downloads = buildout['buildout']['downloads-cache']
options['extra-path'] = os.path.join(self.location, 'lib',
'python%d.%d' % sys.version_info[:2],
'site-packages')
self.offline = buildout['buildout']['offline'].lower() == 'true'
if not os.path.exists(self.downloads):
os.mkdir(self.downloads)
self.urls = options['urls'].splitlines()
self.urls = map(string.strip, self.urls)
self.urls = filter(len, self.urls)
def install(self):
if not os.path.exists(self.options['extra-path']):
self.logger.debug("Creating %s" % (self.options['extra-path'],))
os.makedirs(self.options['extra-path'])
for url in self.urls:
self.logger.info("Processing %s" % (url,))
path = self._get_archive(url)
tmp = tempfile.mkdtemp(prefix='buildout-')
try:
args = ['install', '--prefix=%s' % (self.location,)]
self.logger.debug("Extracting into %s" % (tmp,))
setuptools.archive_util.unpack_archive(path, tmp)
# Let's find our setup.py
search_paths = [os.path.join(tmp, 'setup.py'),]
for d in os.listdir(tmp):
search_paths.append(os.path.join(tmp, d, 'setup.py'))
setup_path = None
for p in search_paths:
self.logger.debug("Checking %s" % (p,))
if os.path.exists(p):
setup_path = p
if setup_path is None:
raise zc.buildout.UserError, \
"Could not find a setup.py in this package"
self.logger.info("Installing into %s" % (self.location,))
self._install_pkg(setup_path)
finally:
shutil.rmtree(tmp)
return self.location
def update(self):
pass
def _install_pkg(self, setup_path):
old_dir = os.getcwd()
os.chdir(os.path.dirname(setup_path))
env = os.environ.copy()
env['PYTHONPATH'] = env.get('PYTHONPATH', '') + ':' + \
self.options['extra-path']
try:
cmd = [sys.executable, 'setup.py', 'install',
'--prefix="%s"' % (self.location,)]
subprocess.call(' '.join(cmd), env=env, shell=True)
finally:
os.chdir(old_dir)
def _get_archive(self, url):
fname = self._get_filename(url)
path = os.path.join(self.downloads, fname)
if os.path.exists(path):
self.logger.debug(" -> already cached")
else:
if self.offline:
raise zc.buildout.UserError, \
"Can not download archive because of offline-mode"
self.logger.debug(" -> downloading")
out = open(path, 'wb+')
try:
fp = urllib2.urlopen(url)
for line in fp:
out.write(line)
finally:
out.close()
return path
def _get_filename(self, url):
return os.path.basename(url) | zerokspot.recipe.distutils | /zerokspot.recipe.distutils-0.1.2.tar.gz/zerokspot.recipe.distutils-0.1.2/zerokspot/recipe/distutils/__init__.py | __init__.py |
.. important::
   This package is no longer actively maintained and therefore won't see any new
   features added to it. For more information please check out `the wiki <https://github.com/zerok/zerokspot.gitrecipe/wiki/EOL>`_
This simple recipe for zc.buildout fetches data from a given repository
and stores it into its part's directory. A simple task using this
could look like this::
[myapp]
recipe=zerokspot.recipe.git
repository=git://github.com/zerok/zerokspot.gitrecipe.git
rev=7c73978b55fcadbe2cd6f2abbefbedb5a85c2c8c
This would store the repository under ${buildout:directory}/parts/myapp
and keep it at exactly this revision, no matter what happens on the
server.
The recipe has the following options:
repository
The absolute URL of the repository to be fetched
rev
A revision/commit within this repository the environment
should use.
branch
If you want to stay up to date with a certain branch other than
"master", use this.
paths
    List of relative paths to packages to develop. Must be used together
    with as_egg=true (see the example below).
newest
This overrides the newest-option of the global setting for this
part
as_egg
Set to True if you want the checkout to be registered as a
development egg in your buildout.
cache-name
Name of the repository in the download-cache directory.
recursive
Follow submodules (Note that submodules are not cloned from the download
cache).
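For example, the ``as_egg`` and ``paths`` options can be combined to register
sub-packages of the checkout as development eggs (the package paths below are
purely illustrative)::
    [myapp]
    recipe = zerokspot.recipe.git
    repository = git://github.com/zerok/zerokspot.gitrecipe.git
    as_egg = true
    paths = lib/foo lib/bar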
Offline installation
--------------------
If you want to install a part from the download-cache, this is now possible, too::
    [buildout]
    parts = mylib
    download-cache = /var/cache/buildout
    install-from-cache = true
    [mylib]
    recipe = zerokspot.recipe.git
    repository = http://domain.com/repo.git
With this configuration, the recipe will look for /var/cache/buildout/repo and
clone it into the local parts/ folder.
The recipe also supports an additional "cache-name" setting that lets you
configure the folder name of the repository in the download cache.
| zerokspot.recipe.git | /zerokspot.recipe.git-0.6.1.tar.gz/zerokspot.recipe.git-0.6.1/README.rst | README.rst |
import subprocess
import os.path
import zc.buildout
def git(operation, args, message, ignore_errnos=None, verbose=False):
"""
Execute a git operation with the given arguments. If it fails, raise an
exception with the given message. If ignore_errnos is a list of status
codes, they will be not handled as errors if returned by git.
"""
if verbose:
real_args = list(args)
else:
real_args = ['-q'] + list(args)
command = r'git %s ' + ' '.join(('"%s"', ) * len(real_args))
command = command % ((operation, ) + tuple(real_args))
status = subprocess.call(command, shell=True)
if ignore_errnos is None:
ignore_errnos = []
if status != 0 and status not in ignore_errnos:
raise zc.buildout.UserError(message)
def get_reponame(url, branch = None, rev = None):
"""
Given the URL of a repository, this function returns the name of it after
a clone process.
"""
base = filter(lambda x: len(x), url.split('/'))[-1]
if base.endswith('.git'):
base = base[:-4]
if rev != None or branch != None:
base = base + '@' + (rev or branch)
return base
class Recipe(object):
"""
This recipe supports following options:
repository
Path to the repository that should be cloned
branch
Which branch should be cloned. If none is given, "master" is used by
default.
rev
Revision that should be used. This is useful if you want to freeze
the source at a given revision. If this is used, an update won't do
all that much when executed.
paths
List of relative paths to packages to develop. Must be used together
with as_egg=true.
as_egg
Set to True if you want the checkout to be registered as a
development egg in your buildout.
recursive
Set to True if you want the clone to be recursive, and the updates
to include submodule updates.
Do note that submodules are not cloned from the download cache!
"""
def __init__(self, buildout, name, options):
self.buildout, self.name, self.options = buildout, name, options
self.repository = options['repository']
self.branch = options.get('branch', 'master')
self.rev = options.get('rev', None)
self.newest = options.get('newest',
buildout['buildout'].get('newest', "false")).lower() == 'true'
self.offline = options.get('offline', 'false').lower() == 'true'
self.cache_install = self.offline or options.get('install-from-cache',
buildout['buildout'].get('install-from-cache', 'false')) \
.lower() == 'true'
self.cache_name = options.get('cache-name',
get_reponame(self.repository))
self.download_cache = self.buildout['buildout'] \
.get('download-cache', None)
if self.download_cache:
self.cache_path = os.path.join(
buildout['buildout']['download-cache'],
self.cache_name)
else:
self.cache_path = None
options['location'] = os.path.join(
buildout['buildout']['parts-directory'], name)
self.as_egg = options.get('as_egg', 'false').lower() == 'true'
self.recursive = options.get('recursive', 'false').lower() == 'true'
self.root_dir = self.buildout['buildout']['directory']
self.cache_created = False
self.cache_updated = False
self.part_updated = False
self.cache_cloned = False
self.installed_from_cache = False
self.paths = options.get('paths', None)
self.verbose = int(buildout['buildout'].get('verbosity', 0)) > 0
def install(self):
"""
Method called when installing a part (or when the part's config
was changed. It clones the the given git repository and checks
out the requested branch or commit.
Returns the path to the part's directory.
"""
if self.cache_install:
if not self.download_cache:
raise zc.buildout.UserError("Offline mode requested and no "
"download-cache specified")
if os.path.exists(self.cache_path):
self._clone_cache()
self.installed_from_cache = True
else:
raise zc.buildout.UserError("No repository in the download "
"cache directory.")
else:
if self.download_cache:
if not os.path.exists(self.cache_path):
self._clone_upstream()
if self.newest:
# Update the cache first
self._update_cache()
else:
self.installed_from_cache = True
self._clone_cache()
else:
self._clone(self.repository, self.options['location'])
if self.as_egg:
self._install_as_egg()
return self.options['location']
def update(self):
"""
Called when the buildout is called again without the local
configuration having been altered. If no revision was
requested and the newest-option enabled it tries to update the
requested branch.
"""
if self.rev is None and self.newest:
# Do an update of the current branch
if self.verbose:
print "Pulling updates from origin"
if not self.cache_install and self.download_cache:
self._update_cache()
self._update_part()
if self.recursive:
self._update_part_submodules()
os.chdir(self.options['location'])
if self.as_egg:
self._install_as_egg()
else:
# "newest" is also automatically disabled if "offline"
# is set.
if self.verbose:
print "Pulling disable for this part"
def _clone(self, from_, to):
"""
Clone a repository located at ``from_`` to ``to``.
"""
try:
args = ('--recursive', from_, to,) if self.recursive \
else (from_, to,)
git('clone', args, "Couldn't clone %s into %s" % (
from_, to, ), verbose=True)
os.chdir(to)
if not '[branch "%s"]' % self.branch in open(os.path.join('.git', 'config')).read():
git('branch', ('--track', self.branch, 'origin/%s' % self.branch),
"Failed to set up to track remote branch", verbose=True)
if not "ref: refs/heads/%s" % self.branch in open(os.path.join('.git', 'HEAD')).read():
git('checkout', (self.branch,), "Failed to switch to branch '%s'" % self.branch,
ignore_errnos=[128])
if self.rev is not None:
git('checkout', (self.rev, ), "Failed to checkout revision")
finally:
os.chdir(self.root_dir)
def _clone_cache(self):
"""
Clone the cache into the parts directory.
"""
if not os.path.exists(self.cache_path):
self._clone_upstream()
self._clone(self.cache_path, self.options['location'])
self.cache_cloned = True
def _clone_upstream(self):
"""
Clone the upstream repository into the cache
"""
self._clone(self.repository, self.cache_path)
self.cache_created = True
def _update_cache(self):
"""
Updates the cached repository.
"""
self._update_repository(self.cache_path)
self.cache_updated = True
def _update_part(self):
"""
Updates the repository in the buildout's parts directory.
"""
self._update_repository(self.options['location'])
self.part_updated = True
def _update_repository(self, path):
"""
Update the repository from the given path
"""
try:
os.chdir(path)
git('pull', ('origin', self.branch, ),
"Failed to update repository", verbose=True)
finally:
os.chdir(self.root_dir)
def _update_part_submodules(self):
"""
Updates the repository submodules in the buildout's parts directory.
"""
self._update_submodules(self.options['location'])
def _update_submodules(self, path):
"""
Update the submodules from the given path
"""
try:
os.chdir(path)
git('submodule', ('update', '--init', '--recursive',),
"Failed to update submodules")
finally:
os.chdir(self.root_dir)
def _install_as_egg(self):
"""
Install clone as development egg.
"""
def _install(path, target):
zc.buildout.easy_install.develop(path, target)
target = self.buildout['buildout']['develop-eggs-directory']
if self.paths:
for path in self.paths.split():
path = os.path.join(self.options['location'], path.strip())
_install(path, target)
else:
_install(self.options['location'], target) | zerokspot.recipe.git | /zerokspot.recipe.git-0.6.1.tar.gz/zerokspot.recipe.git-0.6.1/zerokspot/recipe/git/__init__.py | __init__.py |
Zeroless Tools
==============
.. _badges_start:
|Build Status| |Coverage Status| |Codacy| |PyPi| |Docs| |License|
.. _badges_end:
Most people used to network programming are aware that NetCat is a very useful tool
for establishing and testing TCP/UDP connections on the fly. The ZeroMQ community, however,
does not provide an equivalent application, so in order to test your ZMQ sockets you
would have to code your own solution. The Zeroless Command Line Interface (CLI) was
created to tackle that issue.
It lets you test your 0MQ connections in a language-agnostic fashion, regardless of the
messaging pattern in use.
Installation
------------
.. _install_content_start:
.. code-block:: bash
$ pip install zeroless-tools
.. _install_content_end:
Usage
-----
.. _usage_content_start:
.. code-block:: bash
$ zeroserver -h
usage: Zeroless Server Cli [-h] [-n amount of parts]
[a port between 1024 and 65535]
{rep,push,sub,pair,req,pub,pull} ...
The Zeroless Server Cli shall create an endpoint for accepting connections
and bind it to the chosen ØMQ messaging pattern
positional arguments:
[a port between 1024 and 65535]
the open port to bind/connect to
optional arguments:
-h, --help show this help message and exit
-n amount of parts, --numParts amount of parts
the amount of parts (i.e. frames) per message
(default=1)
messaging pattern:
The ØMQ API implements several messaging patterns, each one defining a
particular network topology
{rep,push,sub,pair,req,pub,pull}
Choose among Publish/Subscribe (Pub/Sub),
Request/Reply (Req/Rep), Pipeline (Push/Pull) and
Exclusive Pair (Pair)
This program is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option)
any later version
.. code-block:: bash
$ zeroclient -h
usage: Zeroless Client Cli [-h] [-i IP] [-n amount of parts]
[a port between 1024 and 65535]
{sub,push,pair,pull,req,rep,pub} ...
The Zeroless Client Cli shall connect to the specified endpoint using the
chosen ØMQ messaging pattern
positional arguments:
[a port between 1024 and 65535]
the open port to bind/connect to
optional arguments:
-h, --help show this help message and exit
-i IP, --ip IP the IP of the endpoint to connect to
(default=127.0.0.1)
-n amount of parts, --numParts amount of parts
the amount of parts (i.e. frames) per message
(default=1)
messaging pattern:
The ØMQ API implements several messaging patterns, each one defining a
particular network topology
{rep,push,sub,pair,req,pub,pull}
Choose among Publish/Subscribe (Pub/Sub),
Request/Reply (Req/Rep), Pipeline (Push/Pull) and
Exclusive Pair (Pair)
This program is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option)
any later version
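For instance, a publisher endpoint and a matching subscriber can be tested on
the fly as follows (the port and topic are arbitrary):
.. code-block:: bash
    $ zeroserver 12345 pub --topic sh --embedTopic
    $ zeroclient 12345 sub --topics sh
Each line typed into the publisher should then be printed by the subscriber.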
.. _usage_content_end:
Testing
-------
.. _testing_content_start:
To run individual tests:
.. code-block:: bash
$ py.test tests/test_desired_module.py
To run all the tests:
.. code-block:: bash
$ python setup.py test
Alternatively, you can use tox:
.. code-block:: bash
$ tox
.. _testing_content_end:
License
-------
.. _license_content_start:
Copyright 2014 Lucas Lira Gomes [email protected]
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.. _license_content_end:
.. _available_badges_start:
.. |Build Status| image:: https://img.shields.io/travis/zmqless/zeroless-tools.svg?style=flat
:target: https://travis-ci.org/zmqless/zeroless-tools
.. |Coverage Status| image:: https://coveralls.io/repos/zmqless/zeroless-tools/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/zmqless/zeroless-tools?branch=master
.. |Docs| image:: https://readthedocs.org/projects/zeroless-tools/badge/?version=latest
:target: https://readthedocs.org/projects/zeroless-tools/?badge=latest
.. |License| image:: https://img.shields.io/pypi/l/zeroless-tools.svg?style=flat
:target: https://www.gnu.org/licenses/gpl.html
.. |Codacy| image:: https://www.codacy.com/project/badge/7c9d91aa311747aaabeff3197fdbe1f8
:target: https://www.codacy.com/app/x8lucas8x/zeroless-tools
.. |PyPi| image:: https://img.shields.io/pypi/v/zeroless-tools.svg?style=flat
:target: https://pypi.python.org/pypi/zeroless-tools
.. _available_badges_end: | zeroless-tools | /zeroless-tools-0.2.2.tar.gz/zeroless-tools-0.2.2/README.rst | README.rst |
import sys
import warnings
def read_and_print(receiver, num_parts=1):
for i in range(num_parts):
data = next(receiver)
if isinstance(data, bytes):
print(data.decode("utf-8"))
else:
for part in data:
print(part.decode('utf-8'))
def wait_and_write(sender, num_parts):
for i in range(num_parts):
data = input().encode('utf-8')
sender(data)
class PubExecutor:
def __init__(self, socket, num_parts, topic, embed_topic):
self._sender = socket.pub(topic=topic.encode('utf-8'), embed_topic=embed_topic)
self._num_parts = num_parts
def execute(self):
try:
wait_and_write(self._sender, self._num_parts)
except ValueError:
print("ERROR: The message was not sent, since the topic was not in"
" the beginning of the first frame of the message. If you want"
" the topic to be automatically sent, set the --embedTopic flag."
, file=sys.stderr)
class SubExecutor:
def __init__(self, socket, num_parts, topics):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
self._receiver = socket.sub(topics=[topic.encode('utf-8') for topic in topics])
self._num_parts = num_parts
def execute(self):
read_and_print(self._receiver, self._num_parts)
class PushExecutor:
def __init__(self, socket, num_parts):
self._sender = socket.push()
self._num_parts = num_parts
def execute(self):
wait_and_write(self._sender, self._num_parts)
class PullExecutor:
def __init__(self, socket, num_parts):
self._receiver = socket.pull()
self._num_parts = num_parts
def execute(self):
read_and_print(self._receiver, self._num_parts)
class ReqExecutor:
def __init__(self, socket, num_parts):
self._sender, self._receiver = socket.request()
self._num_parts = num_parts
def execute(self):
wait_and_write(self._sender, self._num_parts)
read_and_print(self._receiver, self._num_parts)
class RepExecutor:
def __init__(self, socket, num_parts):
self._sender, self._receiver = socket.reply()
self._num_parts = num_parts
def execute(self):
read_and_print(self._receiver, self._num_parts)
wait_and_write(self._sender, self._num_parts)
class ReadOnlyPairExecutor:
def __init__(self, socket, num_parts):
_, self._receiver = socket.pair()
self._num_parts = num_parts
def execute(self):
read_and_print(self._receiver, self._num_parts)
class WriteOnlyPairExecutor:
def __init__(self, socket, num_parts):
self._sender, _ = socket.pair()
self._num_parts = num_parts
def execute(self):
wait_and_write(self._sender, self._num_parts) | zeroless-tools | /zeroless-tools-0.2.2.tar.gz/zeroless-tools-0.2.2/zeroless_tools/SocketExecutor.py | SocketExecutor.py |
from .SocketExecutor import *
def pub(socket, args):
return PubExecutor(socket, args.numParts, args.topic, args.embedTopic)
def sub(socket, args):
return SubExecutor(socket, args.numParts, args.topics)
def push(socket, args):
return PushExecutor(socket, args.numParts)
def pull(socket, args):
return PullExecutor(socket, args.numParts)
def req(socket, args):
return ReqExecutor(socket, args.numParts)
def rep(socket, args):
return RepExecutor(socket, args.numParts)
def read_only_pair(socket, args):
return ReadOnlyPairExecutor(socket, args.numParts)
def write_only_pair(socket, args):
return WriteOnlyPairExecutor(socket, args.numParts)
def add_sender_command(subparser, name, description, callback):
subparser = subparser.add_parser(name, description=description)
subparser.set_defaults(socket_executor=callback)
return subparser
def add_receiver_command(subparser, name, description, callback):
subparser = subparser.add_parser(name, description=description)
subparser.set_defaults(socket_executor=callback)
return subparser
def add_sub_commands(parser):
parser.epilog = """This program is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation, either
version 3 of the License, or (at your option) any later version"""
parser.add_argument('port', type=int,
choices=range(1024,65535), metavar="[a port between 1024 and 65535]",
help='the open port to bind/connect to')
parser.add_argument('-n', '--numParts', metavar="amount of parts", type=int, default=1,
help='the amount of parts (i.e. frames) per message (default=1)')
subparsers = parser.add_subparsers(title='messaging pattern',
description='''The ØMQ API implements several messaging patterns, each
one defining a particular network topology''',
help='''Choose among Publish/Subscribe (Pub/Sub), Request/Reply (Req/Rep),
Pipeline (Push/Pull) and Exclusive Pair (Pair)''')
parser_pub = add_sender_command(subparsers, 'pub', 'This is a data distribution pattern', pub)
parser_pub.add_argument('-t', '--topic', type=str, default='',
help='the topic that messages are published to (default=all)')
parser_pub.add_argument('-e', '--embedTopic', action='store_true', default=False,
help='''set for the topic to be sent automatically as the
first part (i.e. frame) of every published message
(default=False)''')
parser_sub = add_receiver_command(subparsers, 'sub', 'This is a data distribution pattern', sub)
parser_sub.add_argument('-t', '--topics', type=str, nargs='+', default=[''],
help='the list of topics, separated by whitespaces, to subscribe to (default=all)')
parser_push = add_sender_command(subparsers, 'push',
'This is a parallel task distribution and collection pattern', push)
parser_pull = add_receiver_command(subparsers, 'pull',
'This is a parallel task distribution and collection pattern', pull)
parser_req = add_sender_command(subparsers, 'req',
'This is a remote procedure call and task distribution pattern', req)
parser_rep = add_receiver_command(subparsers, 'rep',
'This is a remote procedure call and task distribution pattern', rep)
parser_pair = subparsers.add_parser('pair',
description='This is an advanced low-level pattern for specific use cases')
pair_subparsers = parser_pair.add_subparsers(title='mode',
description='the mode of operation of the Exclusive Pair pattern',
help='Due to a current limitation, you cannot read and write '
'at the same Exclusive Pair connection')
parser_write_only_pair = add_sender_command(pair_subparsers,
'write-only', '',
write_only_pair)
parser_read_only_pair = add_receiver_command(pair_subparsers,
'read-only', '',
read_only_pair)
def run(socket_executor):
try:
socket_executor.execute()
except EOFError:
sys.exit()
except KeyboardInterrupt:
print()
print('You pressed Ctrl+C!')
sys.exit() | zeroless-tools | /zeroless-tools-0.2.2.tar.gz/zeroless-tools-0.2.2/zeroless_tools/helpers.py | helpers.py |
Zeroless
========
.. _badges_start:
|Build Status| |Coverage Status| |Codacy| |PyPi| |Docs| |License|
.. _badges_end:
Yet another ØMQ_ wrapper for Python. However, differing from PyZMQ_, which
tries to stay very close to the C++ implementation, this project aims to
make distributed systems employing ØMQ_ as pythonic as possible.
Being simpler to use, Zeroless doesn't support all of the fine aspects
and features of ØMQ_. However, you can expect to find all the message
passing patterns you were accustomed to (i.e. pair, request/reply,
publisher/subscriber, push/pull). Despite that, the only transport
available is TCP, as threads are not as efficient in Python due to the
GIL and IPC is unix-only.
Installation
------------
.. _install_content_start:
.. code-block:: bash
$ pip install zeroless
.. _install_content_end:
Python API
----------
.. _python_api_content_start:
In the ``zeroless`` module, two classes can be used to define how distributed
entities are related (i.e. ``Server`` and ``Client``). To put it bluntly, with
the exception of the pair pattern, a client may be connected to multiple
servers, while a server may accept incoming connections from multiple clients.
Both servers and clients are able to create a *callable* and/or *iterable*,
depending on the message passing pattern. So that you can iterate over incoming
messages and/or call to transmit a message.
.. _python_api_content_end:
All examples assume:
.. code:: python
from zeroless import (Server, Client)
Push-Pull
~~~~~~~~~
.. _push_pull_content_start:
Useful for distributing the workload among a set of workers. A common
pattern in the Stream Processing field, being the cornerstone of
applications like Apache Storm, for instance. Also, it can be seen as a
generalisation of the Map-Reduce pattern.
.. _push_pull_content_end:
.. code:: python
# Binds the pull server to port 12345
# And assigns an iterable to wait for incoming messages
listen_for_push = Server(port=12345).pull()
for msg in listen_for_push:
print(msg)
.. code:: python
# Connects the client to as many servers as desired
client = Client()
client.connect_local(port=12345)
# Initiate a push client
# And assigns a callable to push messages
push = client.push()
for msg in [b"Msg1", b"Msg2", b"Msg3"]:
push(msg)
Publisher-Subscriber
~~~~~~~~~~~~~~~~~~~~
.. _pub_sub_content_start:
Useful for broadcasting messages to a set of peers. A common pattern for
allowing real-time notifications at the client side, without having to
resort to inefficient approaches like polling. Online services like
PubNub or IoT protocols like MQTT are examples of this pattern in use.
.. _pub_sub_content_end:
.. code:: python
# Binds the publisher server to port 12345
# And assigns a callable to publish messages with the topic 'sh'
pub = Server(port=12345).pub(topic=b'sh', embed_topic=True)
# Gives publisher some time to get initial subscriptions
sleep(1)
for msg in [b"Msg1", b"Msg2", b"Msg3"]:
pub(msg)
.. code:: python
# Connects the client to as many servers as desired
client = Client()
client.connect_local(port=12345)
# Initiate a subscriber client
# Assigns an iterable to wait for incoming messages with the topic 'sh'
listen_for_pub = client.sub(topics=[b'sh'])
for topic, msg in listen_for_pub:
print(topic, ' - ', msg)
.. _pub_sub_appendix_start:
Note: ZMQ's topic filtering capabilities are publisher side since ZMQ 3.0.
Last but not least, SUB sockets that bind will not get any message before they
first ask for one via the provided generator, so prefer to bind PUB sockets if
missing some messages is not an option.
.. _pub_sub_appendix_end:
Request-Reply
~~~~~~~~~~~~~
.. _req_rep_content_start:
Useful for RPC-style calls. A common pattern for clients to request data
and receive a response associated with the request. The HTTP protocol is
well known for adopting this pattern, which is essential for RESTful
services.
.. _req_rep_content_end:
.. code:: python
# Binds the reply server to port 12345
# And assigns a callable and an iterable
# To both transmit and wait for incoming messages
reply, listen_for_request = Server(port=12345).reply()
for msg in listen_for_request:
print(msg)
reply(msg)
.. code:: python
# Connects the client to as many servers as desired
client = Client()
client.connect_local(port=12345)
# Initiate a request client
# And assigns a callable and an iterable
# To both transmit and wait for incoming messages
request, listen_for_reply = client.request()
for msg in [b"Msg1", b"Msg2", b"Msg3"]:
request(msg)
response = next(listen_for_reply)
print(response)
Pair
~~~~
.. _pair_content_start:
More often than not, this pattern will be unnecessary, as the above ones,
or a mix of them, suffice for most use cases in distributed computing.
Regarding its capabilities, this pattern is the most similar alternative
to plain POSIX sockets among the aforementioned patterns. Therefore,
expect one-to-one and bidirectional communication.
.. _pair_content_end:
.. code:: python
# Binds the pair server to port 12345
# And assigns a callable and an iterable
# To both transmit and wait for incoming messages
pair, listen_for_pair = Server(port=12345).pair()
for msg in listen_for_pair:
print(msg)
pair(msg)
.. code:: python
# Connects the client to a single server
client = Client()
client.connect_local(port=12345)
# Initiate a pair client
# And assigns a callable and an iterable
# To both transmit and wait for incoming messages
pair, listen_for_pair = client.pair()
for msg in [b"Msg1", b"Msg2", b"Msg3"]:
pair(msg)
response = next(listen_for_pair)
print(response)
Logging
-------
.. _logging_content_start:
The ``zeroless`` module allows logging via a global `Logger object <https://docs.python.org/3/library/logging.html#logger-objects>`__.
.. code:: python
from zeroless import log
To enable it, just add a `Handler object <https://docs.python.org/3/library/logging.html#handler-objects>`__ and set an appropriate `logging level <https://docs.python.org/3/library/logging.html#logging-levels>`__.
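For example, a minimal setup that prints every message at ``DEBUG`` level or
above to stderr might look like this:

.. code:: python

    import logging
    from zeroless import log

    log.setLevel(logging.DEBUG)
    log.addHandler(logging.StreamHandler())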
.. _logging_content_end:
Testing
-------
.. _testing_content_start:
To run individual tests:
.. code-block:: bash
$ py.test tests/test_desired_module.py
To run all the tests:
.. code-block:: bash
$ python setup.py test
Alternatively, you can use tox:
.. code-block:: bash
$ tox
.. _testing_content_end:
Need help?
----------
For more information, please see our documentation_.
License
-------
.. _license_content_start:
Copyright 2014 Lucas Lira Gomes [email protected]
This library is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or (at
your option) any later version.
This library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser
General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this library. If not, see http://www.gnu.org/licenses/.
.. _license_content_end:
.. _available_badges_start:
.. |Build Status| image:: https://img.shields.io/travis/zmqless/python-zeroless.svg?style=flat
:target: https://travis-ci.org/zmqless/python-zeroless
.. |Coverage Status| image:: https://coveralls.io/repos/zmqless/python-zeroless/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/zmqless/python-zeroless?branch=master
.. |Docs| image:: https://readthedocs.org/projects/python-zeroless/badge/?version=latest
:target: https://readthedocs.org/projects/python-zeroless/?badge=latest
.. |License| image:: https://img.shields.io/pypi/l/zeroless.svg?style=flat
:target: https://www.gnu.org/licenses/lgpl-2.1.html
.. |Codacy| image:: https://www.codacy.com/project/badge/8499be83359e4eccaa363b14cda4cbe0
:target: https://www.codacy.com/app/x8lucas8x/python-zeroless
.. |PyPi| image:: https://img.shields.io/pypi/v/zeroless.svg?style=flat
:target: https://pypi.python.org/pypi/zeroless
.. _available_badges_end:
.. _ØMQ: http://www.zeromq.org
.. _PyZMQ: https://www.github.com/zeromq/pyzmq
.. _documentation: http://python-zeroless.readthedocs.org/en/latest/
| zeroless | /zeroless-1.0.0.tar.gz/zeroless-1.0.0/README.rst | README.rst |
from googleapiclient.discovery import build
from google.oauth2 import service_account
from googleapiclient.http import MediaIoBaseDownload, MediaFileUpload
scopes = ["https://www.googleapis.com/auth/drive"]
def get_cred(service_file):
return service_account.Credentials.from_service_account_file(
service_file, scopes=scopes
)
class GoogleDrive:
def __init__(self, service_file):
self.service = build("drive", "v3", credentials=get_cred(service_file))
def create_folder(self, folder_name, parent_id):
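        """Create a folder named folder_name under parent_id and return its Drive id."""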
metadata = {
"name": folder_name,
"mimeType": "application/vnd.google-apps.folder",
"parents": [parent_id],
}
f = self.service.files().create(body=metadata, fields="id").execute()
return f.get("id")
def download(self, id, output_dir, output_filename=None):
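        """Download the file with the given id into output_dir and return the local path."""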
if not output_filename:
output_filename = id
destination = f"{output_dir}/{output_filename}"
request = self.service.files().get_media(fileId=id)
        # Stream the download in chunks; the with-block makes sure the file is flushed and closed.
        with open(destination, "wb") as file:
            media_request = MediaIoBaseDownload(file, request)
            done = False
            while not done:
                status, done = media_request.next_chunk()
        return destination
def upload(self, src, filename, parent, file_id=None):
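        """Upload src as filename: update the existing file when file_id is given,
        otherwise create it under parent. Return the Drive file id."""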
# upload
file_metadata = {"name": filename}
if file_id:
f = (
self.service.files()
.update(
fileId=file_id,
body=file_metadata,
media_body=MediaFileUpload(
src, mimetype="application/octet-stream", resumable=True
),
fields="id",
)
.execute()
)
else:
file_metadata["parents"] = [parent]
f = (
self.service.files()
.create(
body=file_metadata,
media_body=MediaFileUpload(
src, mimetype="application/octet-stream", resumable=True
),
fields="id",
)
.execute()
)
return f.get("id")
def list_files(self, parent_id):
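        """Return the id and name of every file directly under parent_id."""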
result = (
self.service.files()
.list(
q=f"'{parent_id}' in parents", spaces="drive", fields="files(id, name)"
)
.execute()
)
return result["files"] | zeroloader.py | /zeroloader.py-1.0.0-py3-none-any.whl/zeroloader/drive.py | drive.py |
MIT License
Copyright (c) 2020 0archive Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| zeroloader.py | /zeroloader.py-1.0.0-py3-none-any.whl/zeroloader.py-1.0.0.dist-info/LICENSE.md | LICENSE.md |
from typing import List, Union
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
from timm.models.registry import _model_default_cfgs
def build_post_transform(model: str, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
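    """Return normalization plus tensor conversion, using the timm model's
    default mean/std when a model name is given."""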
if model:
cfg = _model_default_cfgs[model.split("timm/")[-1] if "timm/" in model else model]
mean = cfg["mean"]
std = cfg["std"]
print("Using data config", cfg)
return [A.Normalize(mean=mean, std=std), ToTensorV2()]
def build_pre_transform(size):
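    """Resize the longest side to size and pad the image to a size x size square."""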
pre_transform = [A.LongestMaxSize(size), A.PadIfNeeded(size, size, border_mode=0)]
return pre_transform
def timm_inference_transform(model_name, image_size=224):
post_transform = build_post_transform(model_name)
pre_transform = [
A.LongestMaxSize(image_size),
A.PadIfNeeded(image_size, image_size, border_mode=0),
]
return A.Compose([*pre_transform, *post_transform])
def get_augment(augment_level: str) -> List[A.BasicTransform]:
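    """Look up a named augmentation list from AUGMENTATIONS; raise ValueError for unknown names."""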
if augment_level not in AUGMENTATIONS:
raise ValueError(f"Augmentation strategy has to be one of {AUGMENTATIONS.keys()}")
return AUGMENTATIONS[augment_level]
def build_visualize_transform(size, augment_level: str):
pre_transform = build_pre_transform(size)
augment_transform = get_augment(augment_level)
return A.Compose([*pre_transform, *augment_transform])
def build_training_transform(size, model, augment: Union[str, A.Compose]) -> A.Compose:
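    """Compose resize/pad, the chosen augmentations, and model-specific normalization for training."""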
pre_transform = build_pre_transform(size)
post_transform = build_post_transform(model)
if isinstance(augment, str):
augment_transform = get_augment(augment)
else:
augment_transform = augment
return A.Compose([*pre_transform, *augment_transform, *post_transform])
def build_inference_transform(model: str, size=224) -> A.Compose:
pre_transform = build_pre_transform(size)
post_transform = build_post_transform(model)
return A.Compose([*pre_transform, *post_transform])
def build_eval_transform(model, size) -> A.Compose:
pre_transform = build_pre_transform(size)
post_transform = build_post_transform(model)
return A.Compose([*pre_transform, *post_transform])
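# Named augmentation presets, selected by name via get_augment().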
AUGMENTATIONS = {
"hard_1": [
A.RandomRotate90(),
A.Flip(),
A.Transpose(),
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.2,
),
A.OneOf(
[
A.MotionBlur(p=0.2),
A.MedianBlur(blur_limit=3, p=0.1),
A.Blur(blur_limit=3, p=0.1),
],
p=0.2,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=45, p=0.2),
A.OneOf(
[
A.OpticalDistortion(p=0.3),
A.GridDistortion(p=0.1),
A.IAAPiecewiseAffine(p=0.3),
],
p=0.2,
),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
],
"medium": [
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.2,
),
A.OneOf(
[
A.MotionBlur(p=0.2),
A.MedianBlur(blur_limit=3, p=0.1),
A.Blur(blur_limit=3, p=0.1),
],
p=0.2,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=45, p=0.2),
A.OneOf(
[
A.OpticalDistortion(p=0.3),
A.GridDistortion(p=0.1),
A.IAAPiecewiseAffine(p=0.3),
],
p=0.2,
),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
A.CoarseDropout(
max_holes=1,
max_height=100,
max_width=50,
p=0.66,
min_holes=1,
min_height=50,
min_width=20,
),
A.JpegCompression(),
],
"medium3": [
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.5,
),
A.OneOf(
[
A.MotionBlur(blur_limit=15, p=1),
A.MedianBlur(blur_limit=15, p=1),
A.Blur(blur_limit=15, p=1),
],
p=1,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.2),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
A.CoarseDropout(
max_holes=1,
max_height=100,
max_width=50,
p=0.66,
min_holes=1,
min_height=50,
min_width=20,
),
A.JpegCompression(),
],
"medium2": [
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.5,
),
A.OneOf(
[
A.MotionBlur(p=0.2),
A.MedianBlur(blur_limit=3, p=0.1),
A.Blur(blur_limit=3, p=0.1),
],
p=0.5,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=15, p=0.2),
A.OneOf(
[
A.OpticalDistortion(p=0.3),
A.GridDistortion(p=0.1),
A.IAAPiecewiseAffine(p=0.3),
],
p=0.5,
),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
A.CoarseDropout(
max_holes=1,
max_height=100,
max_width=50,
p=0.66,
min_holes=1,
min_height=50,
min_width=20,
),
A.JpegCompression(),
A.HorizontalFlip(p=0.33),
],
"medium2_strong_blur": [
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.5,
),
A.OneOf(
[
# A.MotionBlur(p=0.2),
# A.MedianBlur(blur_limit=3, p=0.1),
A.Blur(blur_limit=(9, 11), p=1),
],
p=1,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=15, p=0.2),
A.OneOf(
[
A.OpticalDistortion(p=0.3),
A.GridDistortion(p=0.1),
A.IAAPiecewiseAffine(p=0.3),
],
p=0.5,
),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
A.CoarseDropout(
max_holes=1,
max_height=100,
max_width=50,
p=0.66,
min_holes=1,
min_height=50,
min_width=20,
),
A.JpegCompression(),
A.HorizontalFlip(p=0.33),
],
"medium4": [
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.5,
),
A.OneOf(
[
A.MotionBlur(p=0.2),
A.MedianBlur(blur_limit=7, p=0.1),
A.Blur(blur_limit=7, p=0.1),
],
p=0.5,
),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=15, p=0.2, border_mode=0),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
A.ColorJitter(),
A.ToGray(),
],
p=0.3,
),
A.HueSaturationValue(p=0.3),
A.CoarseDropout(
max_holes=2,
max_height=100,
max_width=50,
p=0.66,
min_holes=1,
min_height=50,
min_width=20,
),
A.JpegCompression(),
A.HorizontalFlip(p=0.33),
],
"easy_2": [A.RandomBrightnessContrast()],
"noaug": [],
"new": [
A.IAAAdditiveGaussianNoise(p=0.5),
A.IAASuperpixels(p=1),
A.ImageCompression(p=1),
A.IAAPerspective(p=1),
A.RGBShift(r_shift_limit=50, b_shift_limit=50, g_shift_limit=50, p=1),
A.Posterize(p=1, num_bits=3),
A.IAAAdditiveGaussianNoise(p=1, loc=0, scale=(30, 50)),
],
"kaggle": [
# A.HorizontalFlip(p=0.5),
A.OneOf(
[
A.CLAHE(clip_limit=2),
A.IAASharpen(),
A.IAAEmboss(),
A.RandomBrightnessContrast(),
],
p=0.3,
),
A.OneOf(
[
A.IAAAdditiveGaussianNoise(),
A.GaussNoise(),
],
p=0.5,
),
A.OneOf(
[
A.MotionBlur(p=0.2),
A.MedianBlur(blur_limit=7, p=0.1),
A.Blur(blur_limit=7, p=0.1),
],
p=0.5,
),
A.HueSaturationValue(p=0.3),
A.RGBShift(r_shift_limit=10, b_shift_limit=10, g_shift_limit=10, p=0.1),
A.ImageCompression(quality_lower=50, quality_upper=100),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=10, border_mode=0, p=0.5),
],
} | zeroml | /dataloading/image/transforms.py | transforms.py |
Pyre
====
This is a Python port of [Zyre](http://zyre.org) 1.0, implementing the same [ZRE protocol](http://rfc.zeromq.org/spec:36).
# Pyre - an open-source framework for proximity-based peer-to-peer applications
## Description
Pyre does local area discovery and clustering. A Pyre node broadcasts
UDP beacons, and connects to peers that it finds. This class wraps a
Pyre node with a message-based API.
All incoming events are messages delivered via the recv call of a Pyre
instance. The first frame defines the type of the message, and following
frames provide further values:
ENTER fromnode headers
a new peer has entered the network
EXIT fromnode
a peer has left the network
JOIN fromnode groupname
a peer has joined a specific group
LEAVE fromnode groupname
        a peer has left a specific group
WHISPER fromnode message
a peer has sent this node a message
SHOUT fromnode groupname message
a peer has sent one of our groups a message
In SHOUT and WHISPER the message is a single frame in this version
of Pyre. In ENTER, the headers frame contains a packed dictionary,
that can be unpacked using json.loads(msg) (see chat client).
To join or leave a group, use the join() and leave() methods.
To set a header value, use the set_header() method. To send a message
to a single peer, use whisper(). To send a message to a group, use
shout().
## Installation
For now use Pip:
pip install https://github.com/zeromq/pyre/archive/master.zip
## API
import pyre
# Constructor, creates a new Zyre node. Note that until you start the
# node it is silent and invisible to other nodes on the network.
node = pyre.Pyre()
# Set node header; these are provided to other nodes during discovery
# and come in each ENTER message.
node.set_header(name, value)
# (TODO: Currently a Pyre node starts immediately) Start node, after setting header values. When you start a node it
# begins discovery and connection.
node.start()
# Stop node, this signals to other peers that this node will go away.
# This is polite; however you can also just destroy the node without
# stopping it.
node.stop()
# Join a named group; after joining a group you can send messages to
# the group and all Zyre nodes in that group will receive them.
node.join(group)
# Leave a group
node.leave(group)
# Receive next message from network; the message may be a control
# message (ENTER, EXIT, JOIN, LEAVE) or data (WHISPER, SHOUT).
# Returns a list of message frames
msgs = node.recv();
# Send message to single peer, specified as a UUID object (import uuid)
# Destroys message after sending
node.whisper(peer, msg)
# Send message to a named group
# Destroys message after sending
node.shout(group, msg);
# Send string to single peer specified as a UUID string.
# String is formatted using printf specifiers.
node.whispers(peer, msg_string)
# Send message to a named group
# Destroys message after sending
node.shouts(group, msg_string);
# Return handle to the Zyre node, for polling
node.get_socket()
# use node.get_socket().getsockopt(zmq.FD) to acquire
# the filedescriptor
# Don't use this for getting Pyre events you can use the
# node.inbox to get those events
## Example Chat Client
```python
try:
from zyre_pyzmq import Zyre as Pyre
except Exception as e:
print("using Python native module", e)
from pyre import Pyre
from pyre import zhelper
import zmq
import uuid
import logging
import sys
import json
def chat_task(ctx, pipe):
n = Pyre("CHAT")
n.set_header("CHAT_Header1","example header1")
n.set_header("CHAT_Header2","example header2")
n.join("CHAT")
n.start()
poller = zmq.Poller()
poller.register(pipe, zmq.POLLIN)
print(n.socket())
poller.register(n.socket(), zmq.POLLIN)
print(n.socket())
while(True):
items = dict(poller.poll())
print(n.socket(), items)
if pipe in items and items[pipe] == zmq.POLLIN:
message = pipe.recv()
# message to quit
if message.decode('utf-8') == "$$STOP":
break
print("CHAT_TASK: %s" % message)
n.shouts("CHAT", message.decode('utf-8'))
else:
#if n.socket() in items and items[n.socket()] == zmq.POLLIN:
cmds = n.recv()
msg_type = cmds.pop(0)
print("NODE_MSG TYPE: %s" % msg_type)
print("NODE_MSG PEER: %s" % uuid.UUID(bytes=cmds.pop(0)))
print("NODE_MSG NAME: %s" % cmds.pop(0))
if msg_type.decode('utf-8') == "SHOUT":
print("NODE_MSG GROUP: %s" % cmds.pop(0))
elif msg_type.decode('utf-8') == "ENTER":
headers = json.loads(cmds.pop(0).decode('utf-8'))
print("NODE_MSG HEADERS: %s" % headers)
for key in headers:
print("key = {0}, value = {1}".format(key, headers[key]))
print("NODE_MSG CONT: %s" % cmds)
n.stop()
if __name__ == '__main__':
# Create a StreamHandler for debugging
logger = logging.getLogger("pyre")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
logger.propagate = False
ctx = zmq.Context()
chat_pipe = zhelper.zthread_fork(ctx, chat_task)
# input in python 2 is different
if sys.version_info.major < 3:
input = raw_input
while True:
try:
msg = input()
chat_pipe.send(msg.encode('utf_8'))
except (KeyboardInterrupt, SystemExit):
break
chat_pipe.send("$$STOP".encode('utf_8'))
print("FINISHED")
```
Look at the [ZOCP](https://github.com/z25/pyZOCP) project for examples of how Pyre can be
integrated into different environments and frameworks, i.e.:
- [Urwid](https://github.com/z25/pyZOCP/blob/master/examples/urwZOCP.py)
- [Blender](https://github.com/z25/pyZOCP/blob/master/examples/BpyZOCP.py)
- [Glib](https://github.com/z25/pyZOCP/blob/master/examples/glib_node.py)
- [QT](https://github.com/z25/pyZOCP/blob/master/examples/qt_ui_node.py)
Pyre uses the [Python Logging](https://docs.python.org/3.4/library/logging.html) module.
To change the debug level:
```
# Create a StreamHandler for debugging
logger = logging.getLogger("pyre")
logger.setLevel(logging.INFO)
# i.e. logging.DEBUG, logging.WARNING
logger.addHandler(logging.StreamHandler())
logger.propagate = False
```
## Requirements
Python only needs PyZMQ. On some older versions of Python
it also needs the [ipaddress](https://docs.python.org/3.4/library/ipaddress.html?highlight=ipaddress#module-ipaddress) module.
The recommended Python version is 3.3+
## Project Organization
Pyre is owned by all its authors and contributors. This is an open source
project licensed under the LGPLv3. To contribute to Zyre please read the
[C4.1 process](http://rfc.zeromq.org/spec:22) that we use.
To report an issue, use the [PYRE issue tracker](https://github.com/zeromq/pyre/issues) at github.com.
For more information on this project's maintenance, see [`MAINTENANCE.md`](MAINTENANCE.md).
| zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/README.md | README.md |
import zmq
import time
import struct
import socket
import uuid
import logging
import sys
# local modules
from . import __version_info__
from . import zbeacon
from . import zhelper
from .zactor import ZActor
from .zsocket import ZSocket
from .pyre_node import PyreNode
from .pyre_event import PyreEvent
logger = logging.getLogger(__name__)
try:
raw_input # Python 2
except NameError:
raw_input = input # Python 3
class Pyre(object):
def __init__(self, name=None, ctx=None, *args, **kwargs):
"""Constructor, creates a new Zyre node. Note that until you start the
node it is silent and invisible to other nodes on the network.
The node name is provided to other nodes during discovery. If you
specify NULL, Zyre generates a randomized node name from the UUID.
Args:
name (str): The name of the node
Kwargs:
ctx: PyZMQ Context, if not specified a new context will be created
"""
super(Pyre, self).__init__(*args, **kwargs)
ctx = kwargs.get('ctx')
if ctx == None:
ctx = zmq.Context()
self._ctx = ctx
self._uuid = None
self._name = name
self.verbose = False
self.inbox, self._outbox = zhelper.zcreate_pipe(self._ctx)
# Start node engine and wait for it to be ready
self.actor = ZActor(self._ctx, PyreNode, self._outbox)
# Send name, if any, to node backend
if (self._name):
self.actor.send_unicode("SET NAME", zmq.SNDMORE)
self.actor.send_unicode(self._name)
#def __del__(self):
# We need to explicitly destroy the actor
# to make sure our node thread is stopped
#self.actor.destroy()
def __bool__(self):
"Determine whether the object is valid by converting to boolean" # Python 3
return True #TODO
def __nonzero__(self):
"Determine whether the object is valid by converting to boolean" # Python 2
return True #TODO
def uuid(self):
"""Return our node UUID string, after successful initialization"""
if not self._uuid:
self.actor.send_unicode("UUID")
self._uuid = uuid.UUID(bytes=self.actor.recv())
return self._uuid
# Return our node name, after successful initialization
def name(self):
"""Return our node name, after successful initialization"""
if not self._name:
self.actor.send_unicode("NAME")
self._name = self.actor.recv().decode('utf-8')
return self._name
# Not in Zyre api
def set_name(self, name):
logger.warning("DEPRECATED: set name in constructor, this method will be removed!")
self.actor.send_unicode("SET NAME", zmq.SNDMORE)
self.actor.send_unicode(name)
def set_header(self, key, value):
"""Set node header; these are provided to other nodes during discovery
and come in each ENTER message."""
self.actor.send_unicode("SET HEADER", flags=zmq.SNDMORE)
self.actor.send_unicode(key, flags=zmq.SNDMORE)
self.actor.send_unicode(value)
def set_verbose(self):
"""Set verbose mode; this tells the node to log all traffic as well as
all major events."""
self.actor.send_unicode("SET VERBOSE")
def set_port(self, port_nbr):
"""Set UDP beacon discovery port; defaults to 5670, this call overrides
that so you can create independent clusters on the same network, for
e.g. development vs. production. Has no effect after zyre_start()."""
self.actor.send_unicode("SET PORT", zmq.SNDMORE)
        self.actor.send_unicode(str(port_nbr))
def set_interval(self, interval):
"""Set UDP beacon discovery interval, in milliseconds. Default is instant
beacon exploration followed by pinging every 1,000 msecs."""
self.actor.send_unicode("SET INTERVAL", zmq.SNDMORE)
        self.actor.send_unicode(str(interval))
def set_interface(self, value):
"""Set network interface for UDP beacons. If you do not set this, CZMQ will
choose an interface for you. On boxes with several interfaces you should
specify which one you want to use, or strange things can happen."""
logging.debug("set_interface not implemented") #TODO
# TODO: check args from zyre
def set_endpoint(self, format, *args):
"""By default, Zyre binds to an ephemeral TCP port and broadcasts the local
host name using UDP beaconing. When you call this method, Zyre will use
gossip discovery instead of UDP beaconing. You MUST set-up the gossip
service separately using zyre_gossip_bind() and _connect(). Note that the
endpoint MUST be valid for both bind and connect operations. You can use
inproc://, ipc://, or tcp:// transports (for tcp://, use an IP address
that is meaningful to remote as well as local nodes). Returns 0 if
the bind was successful, else -1."""
self.actor.send_unicode("SET ENDPOINT", zmq.SNDMORE)
self.actor.send_unicode(format)
# TODO: We haven't implemented gossiping yet
#def gossip_bind(self, format, *args):
#def gossip_connect(self, format, *args):
def start(self):
"""Start node, after setting header values. When you start a node it
begins discovery and connection. Returns 0 if OK, -1 if it wasn't
possible to start the node."""
self.actor.send_unicode("START")
# the backend will signal back
self.actor.resolve().wait()
def stop(self):
"""Stop node; this signals to other peers that this node will go away.
This is polite; however you can also just destroy the node without
stopping it."""
self.actor.send_unicode("STOP", flags=zmq.DONTWAIT)
# the backend will signal back
self.actor.resolve().wait()
self.actor.destroy()
# Receive next message from node
def recv(self):
"""Receive next message from network; the message may be a control
message (ENTER, EXIT, JOIN, LEAVE) or data (WHISPER, SHOUT).
"""
return self.inbox.recv_multipart()
def join(self, group):
"""Join a named group; after joining a group you can send messages to
the group and all Zyre nodes in that group will receive them."""
self.actor.send_unicode("JOIN", flags=zmq.SNDMORE)
self.actor.send_unicode(group)
def leave(self, group):
"""Leave a group"""
self.actor.send_unicode("LEAVE", flags=zmq.SNDMORE)
self.actor.send_unicode(group)
# Send message to single peer; peer ID is first frame in message
def whisper(self, peer, msg_p):
"""Send message to single peer, specified as a UUID string
Destroys message after sending"""
self.actor.send_unicode("WHISPER", flags=zmq.SNDMORE)
self.actor.send(peer.bytes, flags=zmq.SNDMORE)
if isinstance(msg_p, list):
self.actor.send_multipart(msg_p)
else:
self.actor.send(msg_p)
def shout(self, group, msg_p):
"""Send message to a named group
Destroys message after sending"""
self.actor.send_unicode("SHOUT", flags=zmq.SNDMORE)
self.actor.send_unicode(group, flags=zmq.SNDMORE)
if isinstance(msg_p, list):
self.actor.send_multipart(msg_p)
else:
self.actor.send(msg_p)
# TODO: checks args from zyre
def whispers(self, peer, format, *args):
"""Send formatted string to a single peer specified as UUID string"""
self.actor.send_unicode("WHISPER", flags=zmq.SNDMORE)
self.actor.send(peer.bytes, flags=zmq.SNDMORE)
self.actor.send_unicode(format)
def shouts(self, group, format, *args):
"""Send formatted string to a named group"""
self.actor.send_unicode("SHOUT", flags=zmq.SNDMORE)
self.actor.send_unicode(group, flags=zmq.SNDMORE)
self.actor.send_unicode(format)
def peers(self):
"""Return list of current peer ids."""
self.actor.send_unicode("PEERS")
peers = self.actor.recv_pyobj()
return peers
def peers_by_group(self, group):
"""Return list of current peer ids."""
self.actor.send_unicode("PEERS BY GROUP", flags=zmq.SNDMORE)
self.actor.send_unicode(group)
peers_by_group = self.actor.recv_pyobj()
return peers_by_group
def endpoint(self):
"""Return own endpoint"""
self.actor.send_unicode("ENDPOINT")
endpoint = self.actor.recv_unicode()
return endpoint
def recent_events(self):
"""Iterator that yields recent `PyreEvent`s"""
while self.socket().get(zmq.EVENTS) & zmq.POLLIN:
yield PyreEvent(self)
def events(self):
"""Iterator that yields `PyreEvent`s indefinitely"""
while True:
yield PyreEvent(self)
# --------------------------------------------------------------------------
# Return the name of a connected peer. Caller owns the
# string.
# DEPRECATED: This is dropped in Zyre api. You receive names through events
def get_peer_name(self, peer):
logger.warning("get_peer_name() is deprecated, will be removed")
self.actor.send_unicode("PEER NAME", zmq.SNDMORE)
self.actor.send(peer.bytes)
name = self.actor.recv_unicode()
return name
def peer_address(self, peer):
"""Return the endpoint of a connected peer."""
self.actor.send_unicode("PEER ENDPOINT", zmq.SNDMORE)
self.actor.send(peer.bytes)
adr = self.actor.recv_unicode()
return adr
def peer_header_value(self, peer, name):
"""Return the value of a header of a conected peer.
Returns null if peer or key doesn't exist."""
self.actor.send_unicode("PEER HEADER", zmq.SNDMORE)
self.actor.send(peer.bytes, zmq.SNDMORE)
self.actor.send_unicode(name)
value = self.actor.recv_unicode()
return value
def peer_headers(self, peer):
"""Return the value of a header of a conected peer.
Returns null if peer or key doesn't exist."""
self.actor.send_unicode("PEER HEADERS", zmq.SNDMORE)
self.actor.send(peer.bytes)
headers = self.actor.recv_pyobj()
return headers
def own_groups(self):
"""Return list of currently joined groups."""
self.actor.send_unicode("OWN GROUPS");
groups = self.actor.recv_pyobj()
return groups
def peer_groups(self):
"""Return list of groups known through connected peers."""
self.actor.send_unicode("PEER GROUPS")
groups = self.actor.recv_pyobj()
return groups
# Return node socket, for direct polling of socket
def socket(self):
"""Return socket for talking to the Zyre node, for polling"""
return self.inbox
@staticmethod
def version():
return __version_info__
def chat_task(ctx, pipe):
n = Pyre(ctx=ctx)
n.join("CHAT")
n.start()
poller = zmq.Poller()
poller.register(pipe, zmq.POLLIN)
poller.register(n.socket(), zmq.POLLIN)
while(True):
items = dict(poller.poll())
if pipe in items:
message = pipe.recv()
            if message == b'$TERM':
break
logger.debug("CHAT_TASK: {0}".format(message))
n.shout("CHAT", message)
if n.socket() in items:
event = PyreEvent(n)
logger.debug("NODE_MSG TYPE: {0}".format(event.type))
logger.debug("NODE_MSG PEER: {0}".format(event.peer_uuid))
if event.type == "SHOUT":
logger.debug("NODE_MSG GROUP: {0}".format(event.group))
logger.debug("NODE_MSG CONT: {0}".format(event.msg))
n.stop()
if __name__ == '__main__':
logging.basicConfig()
logging.getLogger('__main__').setLevel(logging.DEBUG)
ctx = zmq.Context()
chat_pipe = zhelper.zthread_fork(ctx, chat_task)
while True:
try:
msg = raw_input()
chat_pipe.send_string(msg)
except (KeyboardInterrupt, SystemExit):
chat_pipe.send_string('$TERM')
break
logger.debug("Exiting") | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/pyre.py | pyre.py |
import zmq
import uuid
import logging
import struct
import socket
import time
import sys
from .zactor import ZActor
from .zbeacon import ZBeacon
from .zre_msg import ZreMsg
from .pyre_peer import PyrePeer
from .pyre_group import PyreGroup
BEACON_VERSION = 1
ZRE_DISCOVERY_PORT = 5670
REAP_INTERVAL = 1.0 # Once per second
logger = logging.getLogger(__name__)
class PyreNode(object):
def __init__(self, ctx, pipe, outbox, *args, **kwargs):
self._ctx = ctx #... until we use zbeacon actor
self._pipe = pipe # We send command replies and signals to the pipe
# Pipe back to application
self.outbox = outbox # Outbox back to application
self._terminated = False # API shut us down
self._verbose = False # Log all traffic (logging module?)
self.beacon_port = ZRE_DISCOVERY_PORT # Beacon port number
self.interval = 0 # Beacon interval 0=default
self.beacon = None # Beacon actor
self.beacon_socket = None # Beacon socket for polling
self.poller = zmq.Poller() # Socket poller
self.identity = uuid.uuid4() # Our UUID as object
self.bound = False
self.inbox = ctx.socket(zmq.ROUTER) # Our inbox socket (ROUTER)
try:
self.inbox.setsockopt(zmq.ROUTER_HANDOVER, 1)
except AttributeError as e:
logging.warning("can't set ROUTER_HANDOVER, needs zmq version >=4.1 but installed is {0}".format(zmq.zmq_version()))
self.poller.register(self._pipe, zmq.POLLIN)
self.name = str(self.identity)[:6] # Our public name (default=first 6 uuid chars)
self.endpoint = "" # Our public endpoint
self.port = 0 # Our inbox port, if any
self.status = 0 # Our own change counter
self.peers = {} # Hash of known peers, fast lookup
self.peer_groups = {} # Groups that our peers are in
self.own_groups = {} # Groups that we are in
self.headers = {} # Our header values
# TODO: gossip stuff
#self.start()
self.run()
# def __del__(self):
# destroy beacon
def start(self):
# TODO: If application didn't bind explicitly, we grab an ephemeral port
# on all available network interfaces. This is orthogonal to
# beaconing, since we can connect to other peers and they will
# gossip our endpoint to others.
if self.beacon_port:
# Start beacon discovery
self.beacon = ZActor(self._ctx, ZBeacon)
if self._verbose:
self.beacon.send_unicode("VERBOSE")
# Our hostname is provided by zbeacon
self.beacon.send_unicode("CONFIGURE", zmq.SNDMORE)
self.beacon.send(struct.pack("I", self.beacon_port))
hostname = self.beacon.recv_unicode()
#if self.interval:
# self.beacon.set_interval(self.interval)
# Our hostname is provided by zbeacon
self.port = self.inbox.bind_to_random_port("tcp://*")
if self.port < 0:
# Die on bad interface or port exhaustion
logging.critical("Random port assignment for incoming messages failed. Exiting.")
sys.exit(-1)
else:
self.bound = True
self.endpoint = "tcp://%s:%d" %(hostname, self.port)
# Set broadcast/listen beacon
transmit = struct.pack('cccb16sH', b'Z', b'R', b'E',
BEACON_VERSION, self.identity.bytes,
socket.htons(self.port))
self.beacon.send_unicode("PUBLISH", zmq.SNDMORE)
self.beacon.send(transmit)
# construct the header filter (to discard none zre messages)
filter = struct.pack("ccc", b'Z', b'R', b'E')
self.beacon.send_unicode("SUBSCRIBE",zmq.SNDMORE)
self.beacon.send(filter)
self.beacon_socket = self.beacon.resolve()
self.poller.register(self.beacon_socket, zmq.POLLIN)
#else:
# TODO: gossip stuff
# Start polling on inbox
self.poller.register(self.inbox, zmq.POLLIN)
#logger.debug("Node identity: {0}".format(self.identity))
def stop(self):
logger.debug("Pyre node: stopping beacon")
if self.beacon:
if self.beacon.is_running:
stop_transmit = struct.pack('cccb16sH', b'Z',b'R',b'E',
BEACON_VERSION, self.identity.bytes,
socket.htons(0))
self.beacon.send_unicode("PUBLISH", zmq.SNDMORE)
self.beacon.send(stop_transmit)
# Give time for beacon to go out
time.sleep(0.001)
self.poller.unregister(self.beacon_socket)
self.beacon.destroy()
self.beacon = None
self.beacon_socket = None
self.beacon_port = 0
if self.bound:
# Stop polling on inbox
self.poller.unregister(self.inbox)
self.outbox.send_unicode("STOP", zmq.SNDMORE)
self.outbox.send(self.identity.bytes, zmq.SNDMORE)
self.outbox.send_unicode(self.name)
def bind(self, endpoint):
logger.warning("Not implemented")
    # Send a message to a single peer
def send_peer(self, peer, msg):
peer.send(msg)
# TODO: log_item, dump
# Here we handle the different control messages from the front-end
def recv_api(self):
request = self._pipe.recv_multipart()
command = request.pop(0).decode('UTF-8')
if command == "UUID":
self._pipe.send(self.identity.bytes)
elif command == "NAME":
self._pipe.send_unicode(self.name)
elif command == "SET NAME":
self.name = request.pop(0).decode('UTF-8')
elif command == "SET HEADER":
header_name = request.pop(0).decode('UTF-8')
header_value = request.pop(0).decode('UTF-8')
self.headers.update({header_name: header_value})
elif command == "SET VERBOSE":
            self._verbose = True
elif command == "SET PORT":
self.beacon_port = int(request.pop(0))
elif command == "SET INTERVAL":
self.interval = int(request.pop(0))
#elif command == "SET ENDPOINT":
# TODO: gossip start and endpoint setting
# TODO: GOSSIP BIND, GOSSIP CONNECT
#elif command == "BIND":
# # TODO: Needs a wait-signal
# endpoint = request.pop(0).decode('UTF-8')
# self.bind(endpoint)
#elif command == "CONNECT":
# # TODO: Needs a wait-signal
# endpoint = request.pop(0).decode('UTF-8')
# self.connect(endpoint)
elif command == "START":
# zsock_signal (self->pipe, zyre_node_start (self));
self.start()
self._pipe.signal()
elif command == "STOP":
# zsock_signal (self->pipe, zyre_node_stop (self));
self.stop()
self._pipe.signal()
elif command == "WHISPER":
# Get peer to send message to
peer_id = uuid.UUID(bytes=request.pop(0))
# Send frame on out to peer's mailbox, drop message
# if peer doesn't exist (may have been destroyed)
if self.peers.get(peer_id):
msg = ZreMsg(ZreMsg.WHISPER)
msg.set_address(peer_id)
msg.content = request
self.peers[peer_id].send(msg)
elif command == "SHOUT":
# Get group to send message to
grpname = request.pop(0).decode('UTF-8')
msg = ZreMsg(ZreMsg.SHOUT)
msg.set_group(grpname)
msg.content = request # request may contain multipart message
if self.peer_groups.get(grpname):
self.peer_groups[grpname].send(msg)
else:
logger.warning("Group {0} not found.".format(grpname))
elif command == "JOIN":
grpname = request.pop(0).decode('UTF-8')
grp = self.own_groups.get(grpname)
if not grp:
# Only send if we're not already in group
grp = PyreGroup(grpname)
self.own_groups[grpname] = grp
msg = ZreMsg(ZreMsg.JOIN)
msg.set_group(grpname)
self.status += 1
msg.set_status(self.status)
for peer in self.peers.values():
peer.send(msg)
logger.debug("Node is joining group {0}".format(grpname))
elif command == "LEAVE":
grpname = request.pop(0).decode('UTF-8')
grp = self.own_groups.get(grpname)
if grp:
# Only send if we're actually in group
msg = ZreMsg(ZreMsg.LEAVE)
msg.set_group(grpname)
self.status += 1
msg.set_status(self.status)
for peer in self.peers.values():
peer.send(msg)
self.own_groups.pop(grpname)
logger.debug("Node is leaving group {0}".format(grpname))
elif command == "PEERS":
self._pipe.send_pyobj(list(self.peers.keys()))
elif command == "PEERS BY GROUP":
grpname = request.pop(0).decode('UTF-8')
grp = self.require_peer_group(grpname)
self._pipe.send_pyobj(list(grp.peers.keys()))
elif command == "ENDPOINT":
self._pipe.send_unicode(self.endpoint)
elif command == "PEER NAME":
id = uuid.UUID(bytes=request.pop(0))
peer = self.peers.get(id)
if peer:
self._pipe.send_unicode("%s" %peer.get_name())
else:
self._pipe.send_unicode("")
elif command == "PEER ENDPOINT":
id = uuid.UUID(bytes=request.pop(0))
peer = self.peers.get(id)
if peer:
self._pipe.send_unicode("%s" %peer.get_endpoint())
else:
self._pipe.send_unicode("")
elif command == "PEER HEADER":
id = uuid.UUID(bytes=request.pop(0))
key = request.pop(0).decode('UTF-8')
peer = self.peers.get(id)
if not peer:
self._pipe.send_unicode("")
else:
self._pipe.send_unicode(peer.get_header(key))
elif command == "PEER HEADERS":
id = uuid.UUID(bytes=request.pop(0))
peer = self.peers.get(id)
if not peer:
self._pipe.send_unicode("")
else:
self._pipe.send_pyobj(peer.get_headers())
elif command == "PEER GROUPS":
self._pipe.send_pyobj(list(self.peer_groups.keys()))
elif command == "OWN GROUPS":
self._pipe.send_pyobj(list(self.own_groups.keys()))
elif command == "DUMP":
# TODO: zyre_node_dump (self);
pass
elif command == "$TERM":
# this is often not printed if program terminates
logger.debug("Pyre node: shutting down")
self._terminated = True
else:
logger.warning("Unkown Node API command: {0}".format(command))
def purge_peer(self, peer, endpoint):
if (peer.get_endpoint() == endpoint):
self.remove_peer(peer)
peer.disconnect()
logger.debug("Purge peer: {0}{1}".format(peer,endpoint))
# Find or create peer via its UUID string
def require_peer(self, identity, endpoint):
p = self.peers.get(identity)
if not p:
# Purge any previous peer on same endpoint
for peer_id, peer in self.peers.copy().items():
self.purge_peer(peer, endpoint)
p = PyrePeer(self._ctx, identity)
self.peers[identity] = p
p.set_origin(self.name);
# TODO: this could be handy, to set verbosity on a specific peer
#zyre_peer_set_verbose (peer, self->verbose);
p.connect(self.identity, endpoint)
# Handshake discovery by sending HELLO as first message
m = ZreMsg(ZreMsg.HELLO)
m.set_endpoint(self.endpoint)
m.set_groups(self.own_groups.keys())
m.set_status(self.status)
m.set_name(self.name)
m.set_headers(self.headers)
p.send(m)
return p
# Remove peer from group, if it's a member
def delete_peer(self, peer, group):
group.leave(peer)
# Remove a peer from our data structures
def remove_peer(self, peer):
# Tell the calling application the peer has gone
self.outbox.send_unicode("EXIT", zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name())
logger.debug("({0}) EXIT name={1}".format(peer, peer.get_endpoint()))
# Remove peer from any groups we've got it in
for grp in self.peer_groups.values():
self.delete_peer(peer, grp)
# To destroy peer, we remove from peers hash table (dict)
self.peers.pop(peer.get_identity())
# Find or create group via its name
def require_peer_group(self, groupname):
grp = self.peer_groups.get(groupname)
if not grp:
# somehow a dict containing peers is passed if
# I don't force the peers arg to an empty dict
grp = PyreGroup(groupname, peers={})
self.peer_groups[groupname] = grp
return grp
def join_peer_group(self, peer, groupname):
grp = self.require_peer_group(groupname)
grp.join(peer)
# Now tell the caller about the peer joined group
self.outbox.send_unicode("JOIN", flags=zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, flags=zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name(), flags=zmq.SNDMORE)
self.outbox.send_unicode(groupname)
logger.debug("({0}) JOIN name={1} group={2}".format(self.name, peer.get_name(), groupname))
return grp
def leave_peer_group(self, peer, groupname):
# Tell the caller about the peer joined group
self.outbox.send_unicode("LEAVE", flags=zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, flags=zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name(), flags=zmq.SNDMORE)
self.outbox.send_unicode(groupname)
# Now remove the peer from the group
grp = self.require_peer_group(groupname)
grp.leave(peer)
logger.debug("({0}) LEAVE name={1} group={2}".format(self.name, peer.get_name(), groupname))
# Here we handle messages coming from other peers
def recv_peer(self):
zmsg = ZreMsg()
zmsg.recv(self.inbox)
#msgs = self.inbox.recv_multipart()
# Router socket tells us the identity of this peer
# First frame is sender identity
id = zmsg.get_address()
# On HELLO we may create the peer if it's unknown
# On other commands the peer must already exist
peer = self.peers.get(id)
if zmsg.id == ZreMsg.HELLO:
if (peer):
# remove fake peers
if peer.get_ready():
self.remove_peer(peer)
elif peer.endpoint == self.endpoint:
# We ignore HELLO, if peer has same endpoint as current node
return
peer = self.require_peer(id, zmsg.get_endpoint())
peer.set_ready(True)
# Ignore command if peer isn't ready
if not peer or not peer.get_ready():
logger.warning("Peer {0} isn't ready".format(peer))
return
if peer.messages_lost(zmsg):
logger.warning("{0} messages lost from {1}".format(self.identity, peer.identity))
self.remove_peer(peer)
return
# Now process each command
if zmsg.id == ZreMsg.HELLO:
# Store properties from HELLO command into peer
peer.set_name(zmsg.get_name())
peer.set_headers(zmsg.get_headers())
# Now tell the caller about the peer
self.outbox.send_unicode("ENTER", flags=zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, flags=zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name(), flags=zmq.SNDMORE)
self.outbox.send_json(peer.get_headers(),flags=zmq.SNDMORE)
self.outbox.send_unicode(peer.get_endpoint())
logger.debug("({0}) ENTER name={1} endpoint={2}".format(self.name, peer.get_name(), peer.get_endpoint()))
# Join peer to listed groups
for grp in zmsg.get_groups():
self.join_peer_group(peer, grp)
# Now take peer's status from HELLO, after joining groups
peer.set_status(zmsg.get_status())
elif zmsg.id == ZreMsg.WHISPER:
# Pass up to caller API as WHISPER event
self.outbox.send_unicode("WHISPER", zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name(), zmq.SNDMORE)
self.outbox.send_multipart(zmsg.content)
elif zmsg.id == ZreMsg.SHOUT:
# Pass up to caller API as WHISPER event
self.outbox.send_unicode("SHOUT", zmq.SNDMORE)
self.outbox.send(peer.get_identity().bytes, zmq.SNDMORE)
self.outbox.send_unicode(peer.get_name(), zmq.SNDMORE)
self.outbox.send_unicode(zmsg.get_group(), zmq.SNDMORE)
self.outbox.send_multipart(zmsg.content)
elif zmsg.id == ZreMsg.PING:
peer.send(ZreMsg(id=ZreMsg.PING_OK))
elif zmsg.id == ZreMsg.JOIN:
self.join_peer_group(peer, zmsg.get_group())
assert(zmsg.get_status() == peer.get_status())
elif zmsg.id == ZreMsg.LEAVE:
#self.leave_peer_group(zmsg.get_group())
self.leave_peer_group(peer, zmsg.get_group())
assert(zmsg.get_status() == peer.get_status())
# Activity from peer resets peer timers
peer.refresh()
def recv_beacon(self):
# Get IP address and beacon of peer
try:
ipaddress, frame = self.beacon_socket.recv_multipart()
except ValueError:
return
beacon = struct.unpack('cccb16sH', frame)
# Ignore anything that isn't a valid beacon
if beacon[3] != BEACON_VERSION:
logger.warning("Invalid ZRE Beacon version: {0}".format(beacon[3]))
return
peer_id = uuid.UUID(bytes=beacon[4])
#print("peerId: %s", peer_id)
port = socket.ntohs(beacon[5])
# if we receive a beacon with port 0 this means the peer exited
if port:
endpoint = "tcp://%s:%d" %(ipaddress.decode('UTF-8'), port)
peer = self.require_peer(peer_id, endpoint)
peer.refresh()
else:
# Zero port means peer is going away; remove it if
# we had any knowledge of it already
peer = self.peers.get(peer_id)
# remove the peer (delete)
if peer:
logger.debug("Received 0 port beacon, removing peer {0}".format(peer))
self.remove_peer(peer)
else:
logger.warning(self.peers)
logger.warning("We don't know peer id {0}".format(peer_id))
        # TODO: Handle gossip data
# We do this once a second:
# - if peer has gone quiet, send TCP ping
# - if peer has disappeared, expire it
def ping_peer(self, peer_id):
peer = self.peers.get(peer_id)
if time.time() > peer.expired_at:
logger.debug("({0}) peer expired name={1} endpoint={2}".format(self.name, peer.get_name(), peer.get_endpoint()))
self.remove_peer(peer)
elif time.time() > peer.evasive_at:
# If peer is being evasive, force a TCP ping.
# TODO: do this only once for a peer in this state;
# it would be nicer to use a proper state machine
# for peer management.
logger.debug("({0}) peer seems dead/slow name={1} endpoint={2}".format(self.name, peer.get_name(), peer.get_endpoint()))
msg = ZreMsg(ZreMsg.PING)
peer.send(msg)
# --------------------------------------------------------------------------
# This is the actor that runs a single node; it uses one thread, creates
# a zyre_node object at start and destroys that when finishing.
def run(self):
# Signal actor successfully initialized
self._pipe.signal()
reap_at = time.time() + REAP_INTERVAL
while not self._terminated:
timeout = reap_at - time.time()
if timeout < 0:
timeout = 0
items = dict(self.poller.poll(timeout * 1000))
if self._pipe in items and items[self._pipe] == zmq.POLLIN:
self.recv_api()
if self.inbox in items and items[self.inbox] == zmq.POLLIN:
self.recv_peer()
if self.beacon_socket in items and items[self.beacon_socket] == zmq.POLLIN:
self.recv_beacon()
if time.time() >= reap_at:
reap_at = time.time() + REAP_INTERVAL
# Ping all peers and reap any expired ones
for peer_id in self.peers.copy().keys():
self.ping_peer(peer_id) | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/pyre_node.py | pyre_node.py |
import binascii
import itertools
import os
import random
import sys
import threading
import zmq
from . import zsocket
try:
u = unicode
except NameError:
u = str
# --------------------------------------------------------------------------
# Create a pipe, which consists of two PAIR sockets connected over inproc.
# The pipe is configured to use a default 1000 hwm setting. Returns the
# frontend and backend sockets.
def zcreate_pipe(ctx, hwm=1000):
backend = zsocket.ZSocket(ctx, zmq.PAIR)
frontend = zsocket.ZSocket(ctx, zmq.PAIR)
backend.set_hwm(hwm)
frontend.set_hwm(hwm)
# close immediately on shutdown
backend.setsockopt(zmq.LINGER, 0)
frontend.setsockopt(zmq.LINGER, 0)
endpoint = "inproc://zactor-%04x-%04x\n"\
%(random.randint(0, 0x10000), random.randint(0, 0x10000))
while True:
try:
frontend.bind(endpoint)
        except zmq.ZMQError:
endpoint = "inproc://zactor-%04x-%04x\n"\
%(random.randint(0, 0x10000), random.randint(0, 0x10000))
else:
break
backend.connect(endpoint)
return (frontend, backend)
def zthread_fork(ctx, func, *args, **kwargs):
"""
Create an attached thread. An attached thread gets a ctx and a PAIR
pipe back to its parent. It must monitor its pipe, and exit if the
pipe becomes unreadable. Returns pipe, or NULL if there was an error.
"""
a = ctx.socket(zmq.PAIR)
a.setsockopt(zmq.LINGER, 0)
a.setsockopt(zmq.RCVHWM, 100)
a.setsockopt(zmq.SNDHWM, 100)
a.setsockopt(zmq.SNDTIMEO, 5000)
a.setsockopt(zmq.RCVTIMEO, 5000)
b = ctx.socket(zmq.PAIR)
b.setsockopt(zmq.LINGER, 0)
b.setsockopt(zmq.RCVHWM, 100)
b.setsockopt(zmq.SNDHWM, 100)
b.setsockopt(zmq.SNDTIMEO, 5000)
b.setsockopt(zmq.RCVTIMEO, 5000)
iface = "inproc://%s" % binascii.hexlify(os.urandom(8))
a.bind(iface)
b.connect(iface)
thread = threading.Thread(target=func, args=((ctx, b) + args), kwargs=kwargs)
thread.daemon = False
thread.start()
return a
from ctypes import c_char, c_char_p
from ctypes import c_uint, c_uint8, c_uint16, c_uint32
from ctypes import c_short, c_ushort
from ctypes import c_void_p, pointer
from ctypes import CDLL, Structure, Union
from sys import platform
if platform.startswith("win") and sys.version.startswith("2"):
import win_inet_pton
from socket import AF_INET, AF_INET6, inet_ntop
try:
from socket import AF_PACKET
except ImportError:
AF_PACKET = -1
if platform.startswith("darwin") or platform.startswith("freebsd"):
AF_LINK = 18
IFT_ETHER = 0x6
else:
AF_LINK = -1
IFT_ETHER = -1
def get_ifaddrs():
"""
A method for retrieving info of the network interfaces.
Returns a nested dictionary containing everything it found.
{
ifname:
{
familynr:
{
addr:
netmask:
etc...
Found this at http://pastebin.com/wxjai3Mw with some modification to
make it work on OSX.
"""
if platform.startswith("win"):
return get_win_ifaddrs()
# getifaddr structs
class ifa_ifu_u(Union):
_fields_ = [
("ifu_broadaddr", c_void_p),
("ifu_dstaddr", c_void_p)
]
class ifaddrs(Structure):
_fields_ = [
("ifa_next", c_void_p),
("ifa_name", c_char_p),
("ifa_flags", c_uint),
("ifa_addr", c_void_p),
("ifa_netmask", c_void_p),
("ifa_ifu", ifa_ifu_u),
("ifa_data", c_void_p)
]
# AF_UNKNOWN / generic
if platform.startswith("darwin") or platform.startswith("freebsd"):
class sockaddr(Structure):
_fields_ = [
("sa_len", c_uint8),
("sa_family", c_uint8),
("sa_data", (c_uint8 * 14))
]
else:
class sockaddr(Structure):
_fields_ = [
("sa_family", c_uint16),
("sa_data", (c_uint8 * 14))
]
# AF_INET / IPv4
class in_addr(Union):
_fields_ = [
("s_addr", c_uint32),
]
if platform.startswith("darwin") or platform.startswith("freebsd"):
class sockaddr_in(Structure):
_fields_ = [
("sin_len", c_uint8),
("sin_family", c_uint8),
("sin_port", c_ushort),
("sin_addr", in_addr),
("sin_zero", (c_char * 8)) # padding
]
else:
class sockaddr_in(Structure):
_fields_ = [
("sin_family", c_short),
("sin_port", c_ushort),
("sin_addr", in_addr),
("sin_zero", (c_char * 8)) # padding
]
# AF_INET6 / IPv6
class in6_u(Union):
_fields_ = [
("u6_addr8", (c_uint8 * 16)),
("u6_addr16", (c_uint16 * 8)),
("u6_addr32", (c_uint32 * 4))
]
class in6_addr(Union):
_fields_ = [
("in6_u", in6_u),
]
if platform.startswith("darwin") or platform.startswith("freebsd"):
class sockaddr_in6(Structure):
_fields_ = [
("sin6_len", c_uint8),
("sin6_family", c_uint8),
("sin6_port", c_ushort),
("sin6_flowinfo", c_uint32),
("sin6_addr", in6_addr),
("sin6_scope_id", c_uint32),
]
else:
class sockaddr_in6(Structure):
_fields_ = [
("sin6_family", c_short),
("sin6_port", c_ushort),
("sin6_flowinfo", c_uint32),
("sin6_addr", in6_addr),
("sin6_scope_id", c_uint32),
]
# AF_PACKET / Linux
class sockaddr_ll(Structure):
_fields_ = [
("sll_family", c_uint16),
("sll_protocol", c_uint16),
("sll_ifindex", c_uint32),
("sll_hatype", c_uint16),
("sll_pktype", c_uint8),
("sll_halen", c_uint8),
("sll_addr", (c_uint8 * 8))
]
# AF_LINK / BSD|OSX
class sockaddr_dl(Structure):
_fields_ = [
("sdl_len", c_uint8),
("sdl_family", c_uint8),
("sdl_index", c_uint16),
("sdl_type", c_uint8),
("sdl_nlen", c_uint8),
("sdl_alen", c_uint8),
("sdl_slen", c_uint8),
("sdl_data", (c_uint8 * 46))
]
if platform.startswith("darwin"):
libc = CDLL("libSystem.dylib")
elif platform.startswith("freebsd"):
libc = CDLL("libc.so")
else:
libc = CDLL("libc.so.6")
ptr = c_void_p(None)
result = libc.getifaddrs(pointer(ptr))
if result:
return None
ifa = ifaddrs.from_address(ptr.value)
result = []
while ifa:
# Python 2 gives us a string, Python 3 an array of bytes
if type(ifa.ifa_name) is str:
name = ifa.ifa_name
else:
name = ifa.ifa_name.decode()
if ifa.ifa_addr:
sa = sockaddr.from_address(ifa.ifa_addr)
data = {}
if sa.sa_family == AF_INET:
if ifa.ifa_addr is not None:
si = sockaddr_in.from_address(ifa.ifa_addr)
data['addr'] = inet_ntop(AF_INET, si.sin_addr)
if ifa.ifa_netmask is not None:
si = sockaddr_in.from_address(ifa.ifa_netmask)
data['netmask'] = inet_ntop(AF_INET, si.sin_addr)
# check if a valid broadcast address is set and retrieve it
# 0x2 == IFF_BROADCAST
if ifa.ifa_flags & 0x2:
si = sockaddr_in.from_address(ifa.ifa_ifu.ifu_broadaddr)
data['broadcast'] = inet_ntop(AF_INET, si.sin_addr)
if sa.sa_family == AF_INET6:
if ifa.ifa_addr is not None:
si = sockaddr_in6.from_address(ifa.ifa_addr)
data['addr'] = inet_ntop(AF_INET6, si.sin6_addr)
if data['addr'].startswith('fe80:'):
data['scope'] = si.sin6_scope_id
if ifa.ifa_netmask is not None:
si = sockaddr_in6.from_address(ifa.ifa_netmask)
data['netmask'] = inet_ntop(AF_INET6, si.sin6_addr)
if sa.sa_family == AF_PACKET:
if ifa.ifa_addr is not None:
si = sockaddr_ll.from_address(ifa.ifa_addr)
addr = ""
total = 0
for i in range(si.sll_halen):
total += si.sll_addr[i]
addr += "%02x:" % si.sll_addr[i]
addr = addr[:-1]
if total > 0:
data['addr'] = addr
if sa.sa_family == AF_LINK:
dl = sockaddr_dl.from_address(ifa.ifa_addr)
if dl.sdl_type == IFT_ETHER:
addr = ""
for i in range(dl.sdl_alen):
addr += "%02x:" % dl.sdl_data[dl.sdl_nlen + i]
addr = addr[:-1]
data['addr'] = addr
if len(data) > 0:
iface = {}
for interface in result:
if name in interface.keys():
iface = interface
break
if iface:
iface[name][sa.sa_family] = data
else:
iface[name] = { sa.sa_family : data }
result.append(iface)
if ifa.ifa_next:
ifa = ifaddrs.from_address(ifa.ifa_next)
else:
break
libc.freeifaddrs(ptr)
return result
def get_win_ifaddrs():
"""
A method for retrieving info of the network
interfaces. Returns a nested dictionary of
interfaces in Windows.
"""
# based on code from jaraco and many other attempts
# on internet.
# Fixed by <@gpotter2> from scapy's implementation to
# add IPv6 support + fix structures
import ctypes
import struct
import ipaddress
import ctypes.wintypes
from ctypes.wintypes import DWORD, WCHAR, BYTE, BOOL
from socket import AF_INET
# from iptypes.h
MAX_ADAPTER_ADDRESS_LENGTH = 8
MAX_DHCPV6_DUID_LENGTH = 130
GAA_FLAG_INCLUDE_PREFIX = ctypes.c_ulong(0x0010)
class in_addr(Structure):
_fields_ = [("byte", ctypes.c_ubyte * 4)]
class in6_addr(ctypes.Structure):
_fields_ = [("byte", ctypes.c_ubyte * 16)]
class sockaddr_in(ctypes.Structure):
_fields_ = [("sin_family", ctypes.c_short),
("sin_port", ctypes.c_ushort),
("sin_addr", in_addr),
("sin_zero", 8 * ctypes.c_char)]
class sockaddr_in6(ctypes.Structure):
_fields_ = [("sin6_family", ctypes.c_short),
("sin6_port", ctypes.c_ushort),
("sin6_flowinfo", ctypes.c_ulong),
("sin6_addr", in6_addr),
("sin6_scope_id", ctypes.c_ulong)]
class SOCKADDR_INET(ctypes.Union):
_fields_ = [("Ipv4", sockaddr_in),
("Ipv6", sockaddr_in6),
("si_family", ctypes.c_short)]
LPSOCKADDR_INET = ctypes.POINTER(SOCKADDR_INET)
class SOCKET_ADDRESS(ctypes.Structure):
_fields_ = [
('address', LPSOCKADDR_INET),
('length', ctypes.c_int),
]
class _IP_ADAPTER_ADDRESSES_METRIC(ctypes.Structure):
_fields_ = [
('length', ctypes.c_ulong),
('interface_index', DWORD),
]
class _IP_ADAPTER_ADDRESSES_U1(ctypes.Union):
_fields_ = [
('alignment', ctypes.c_ulonglong),
('metric', _IP_ADAPTER_ADDRESSES_METRIC),
]
class IP_ADAPTER_UNICAST_ADDRESS(ctypes.Structure):
pass
PIP_ADAPTER_UNICAST_ADDRESS = ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)
IP_ADAPTER_UNICAST_ADDRESS._fields_ = [
("length", ctypes.c_ulong),
("flags", DWORD),
("next", PIP_ADAPTER_UNICAST_ADDRESS),
("address", SOCKET_ADDRESS),
("prefix_origin", ctypes.c_int),
("suffix_origin", ctypes.c_int),
("dad_state", ctypes.c_int),
("valid_lifetime", ctypes.c_ulong),
("preferred_lifetime", ctypes.c_ulong),
("lease_lifetime", ctypes.c_ulong),
("on_link_prefix_length", ctypes.c_ubyte)
]
class IP_ADAPTER_PREFIX(ctypes.Structure):
pass
PIP_ADAPTER_PREFIX = ctypes.POINTER(IP_ADAPTER_PREFIX)
IP_ADAPTER_PREFIX._fields_ = [
("alignment", ctypes.c_ulonglong),
("next", PIP_ADAPTER_PREFIX),
("address", SOCKET_ADDRESS),
("prefix_length", ctypes.c_ulong)
]
class IP_ADAPTER_ADDRESSES(ctypes.Structure):
pass
LP_IP_ADAPTER_ADDRESSES = ctypes.POINTER(IP_ADAPTER_ADDRESSES)
# for now, just use void * for pointers to unused structures
PIP_ADAPTER_ANYCAST_ADDRESS = ctypes.c_void_p
PIP_ADAPTER_MULTICAST_ADDRESS = ctypes.c_void_p
PIP_ADAPTER_DNS_SERVER_ADDRESS = ctypes.c_void_p
#PIP_ADAPTER_PREFIX = ctypes.c_void_p
PIP_ADAPTER_WINS_SERVER_ADDRESS_LH = ctypes.c_void_p
PIP_ADAPTER_GATEWAY_ADDRESS_LH = ctypes.c_void_p
PIP_ADAPTER_DNS_SUFFIX = ctypes.c_void_p
IF_OPER_STATUS = ctypes.c_uint # this is an enum, consider http://code.activestate.com/recipes/576415/
IF_LUID = ctypes.c_uint64
NET_IF_COMPARTMENT_ID = ctypes.c_uint32
GUID = ctypes.c_byte*16
NET_IF_NETWORK_GUID = GUID
NET_IF_CONNECTION_TYPE = ctypes.c_uint # enum
TUNNEL_TYPE = ctypes.c_uint # enum
IP_ADAPTER_ADDRESSES._fields_ = [
('length', ctypes.c_ulong),
('interface_index', DWORD),
('next', LP_IP_ADAPTER_ADDRESSES),
('adapter_name', ctypes.c_char_p),
('first_unicast_address', PIP_ADAPTER_UNICAST_ADDRESS),
('first_anycast_address', PIP_ADAPTER_ANYCAST_ADDRESS),
('first_multicast_address', PIP_ADAPTER_MULTICAST_ADDRESS),
('first_dns_server_address', PIP_ADAPTER_DNS_SERVER_ADDRESS),
('dns_suffix', ctypes.c_wchar_p),
('description', ctypes.c_wchar_p),
('friendly_name', ctypes.c_wchar_p),
('byte', BYTE * MAX_ADAPTER_ADDRESS_LENGTH),
('physical_address_length', DWORD),
('flags', DWORD),
('mtu', DWORD),
('interface_type', DWORD),
('oper_status', IF_OPER_STATUS),
('ipv6_interface_index', DWORD),
('zone_indices', DWORD * 16),
('first_prefix', PIP_ADAPTER_PREFIX),
('transmit_link_speed', ctypes.c_uint64),
('receive_link_speed', ctypes.c_uint64),
('first_wins_server_address', PIP_ADAPTER_WINS_SERVER_ADDRESS_LH),
('first_gateway_address', PIP_ADAPTER_GATEWAY_ADDRESS_LH),
('ipv4_metric', ctypes.c_ulong),
('ipv6_metric', ctypes.c_ulong),
('luid', IF_LUID),
('dhcpv4_server', SOCKET_ADDRESS),
('compartment_id', NET_IF_COMPARTMENT_ID),
('network_guid', NET_IF_NETWORK_GUID),
('connection_type', NET_IF_CONNECTION_TYPE),
('tunnel_type', TUNNEL_TYPE),
('dhcpv6_server', SOCKET_ADDRESS),
('dhcpv6_client_duid', ctypes.c_byte * MAX_DHCPV6_DUID_LENGTH),
('dhcpv6_client_duid_length', ctypes.c_ulong),
('dhcpv6_iaid', ctypes.c_ulong),
('first_dns_suffix', PIP_ADAPTER_DNS_SUFFIX),
]
def GetAdaptersAddresses(af=0):
"""
Returns an iterable of adapters.
param:
- af: the address family to read on
"""
size = ctypes.c_ulong()
AF_UNSPEC = 0
flags = GAA_FLAG_INCLUDE_PREFIX
GetAdaptersAddresses = ctypes.windll.iphlpapi.GetAdaptersAddresses
GetAdaptersAddresses.argtypes = [
ctypes.c_ulong,
ctypes.c_ulong,
ctypes.c_void_p,
ctypes.POINTER(IP_ADAPTER_ADDRESSES),
ctypes.POINTER(ctypes.c_ulong),
]
GetAdaptersAddresses.restype = ctypes.c_ulong
res = GetAdaptersAddresses(af, flags, None, None, size)
if res != 0x6f: # BUFFER OVERFLOW -> populate size
raise RuntimeError("Error getting structure length (%d)" % res)
pointer_type = ctypes.POINTER(IP_ADAPTER_ADDRESSES)
buffer = ctypes.create_string_buffer(size.value)
struct_p = ctypes.cast(buffer, pointer_type)
res = GetAdaptersAddresses(af, flags, None, struct_p, size)
if res != 0x0: # NO_ERROR
raise RuntimeError("Error retrieving table (%d)" % res)
while struct_p:
yield struct_p.contents
struct_p = struct_p.contents.next
result = []
# In theory, we could use AF_UNSPEC = 0, but it doesn't work in practice
for i in itertools.chain(GetAdaptersAddresses(AF_INET), GetAdaptersAddresses(AF_INET6)):
#print("--------------------------------------")
#print("IF: {0}".format(i.description))
#print("\tdns_suffix: {0}".format(i.dns_suffix))
#print("\tinterface type: {0}".format(i.interface_type))
fu = i.first_unicast_address.contents
ad = fu.address.address.contents
#print("\tfamily: {0}".format(ad.family))
if ad.si_family == AF_INET:
ip_bytes = bytes(bytearray(ad.Ipv4.sin_addr))
ip = ipaddress.IPv4Address(ip_bytes)
ip_if = ipaddress.IPv4Interface(u("{0}/{1}".format(ip, fu.on_link_prefix_length)))
elif ad.si_family == AF_INET6:
ip_bytes = bytes(bytearray(ad.Ipv6.sin6_addr))
ip = ipaddress.IPv6Address(ip_bytes)
ip_if = ipaddress.IPv6Interface(u("{0}/{1}".format(ip, fu.on_link_prefix_length)))
#print("\tipaddress: {0}".format(ip))
#print("\tnetmask: {0}".format(ip_if.netmask))
#print("\tnetwork: {0}".format(ip_if.network.network_address))
#print("\tbroadcast: {0}".format(ip_if.network.broadcast_address))
#print("\tmask length: {0}".format(fu.on_link_prefix_length))
data = {}
data['addr'] = "{0}".format(ip)
data['netmask'] = "{0}".format(ip_if.netmask)
data['broadcast'] = "{0}".format(ip_if.network.broadcast_address)
data['network'] = "{0}".format(ip_if.network.network_address)
name = i.description
#result[i.description] = { ad.family : d}
iface = {}
for interface in result:
if name in interface.keys():
iface = interface
break
if iface:
iface[name][ad.si_family] = data
else:
iface[name] = { ad.si_family : data }
result.append(iface)
return result | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/zhelper.py | zhelper.py |
import time
import zmq
import logging
logger = logging.getLogger(__name__)
class PyrePeer(object):
PEER_EXPIRED = 30 # expire after 30s
PEER_EVASIVE = 10 # mark evasive after 10s
def __init__(self, ctx, identity):
# TODO: what to do with container?
self._ctx = ctx # ZMQ context
self.mailbox = None # Socket through to peer
self.identity = identity # Identity UUID
self.endpoint = None # Endpoint connected to
self.name = "notset" # Peer's public name
self.origin = "unknown" # Origin node's public name
self.evasive_at = 0 # Peer is being evasive
self.expired_at = 0 # Peer has expired by now
self.connected = False # Peer will send messages
self.ready = False # Peer has said Hello to us
self.status = 0 # Our status counter
self.sent_sequence = 0 # Outgoing message sequence
self.want_sequence = 0 # Incoming message sequence
self.headers = {} # Peer headers
def __del__(self):
self.disconnect()
# Connect peer mailbox
def connect(self, reply_to, endpoint):
if self.connected:
return
# Create new outgoing socket (drop any messages in transit)
self.mailbox = zmq.Socket(self._ctx, zmq.DEALER)
# Set our own identity on the socket so that receiving node
# knows who each message came from. Note that we cannot use
# the UUID directly as the identity since it may contain a
# zero byte at the start, which libzmq does not like for
# historical and arguably bogus reasons that it nonetheless
# enforces.
# we set linger to 0 by default (In zyre this is done by czmq's zsys)
self.mailbox.setsockopt(zmq.LINGER, 0)
self.mailbox.setsockopt(zmq.IDENTITY, b'\x01' + reply_to.bytes)
# Set a high-water mark that allows for reasonable activity
self.mailbox.setsockopt(zmq.SNDHWM, PyrePeer.PEER_EXPIRED * 100)
# Send messages immediately or return EAGAIN
self.mailbox.setsockopt(zmq.SNDTIMEO, 0)
# Connect through to peer node
logger.debug("Connecting to peer {0} on endpoint {1}".format(self.identity, endpoint))
self.mailbox.connect(endpoint)
self.endpoint = endpoint
self.connected = True
self.ready = False
# Disconnect peer mailbox
# No more messages will be sent to peer until connected again
def disconnect(self):
# If connected, destroy socket and drop all pending messages
if (self.connected):
logger.debug("{0} Disconnecting peer {1}".format(self.origin, self.name))
self.mailbox.close()
self.mailbox = None
self.endpoint = ""
self.connected = False
self.ready = False
# end disconnect
# Send message to peer
def send(self, msg):
if self.connected:
self.sent_sequence += 1
self.sent_sequence = self.sent_sequence % 65535
msg.set_sequence(self.sent_sequence)
try:
msg.send(self.mailbox)
except zmq.Again as e:
self.disconnect()
logger.debug("{0} Error while sending {1} to peer={2} sequence={3}".format(self.origin,
msg.get_command(),
self.name,
msg.get_sequence()))
return -1
logger.debug("{0} send {1} to peer={2} sequence={3}".format(self.origin,
msg.get_command(),
self.name,
msg.get_sequence()))
else:
logger.debug("Peer {0} is not connected".format(self.identity))
# end send
# Return peer connected status
def is_connected(self):
return self.connected
# end is_connected
# Return peer identity string
def get_identity(self):
return self.identity
# end get_identity
# Return peer connection endpoint
def get_endpoint(self):
if self.connected:
return self.endpoint
else:
return ""
# end get_endpoint
# Register activity at peer
def refresh(self):
self.evasive_at = time.time() + self.PEER_EVASIVE
self.expired_at = time.time() + self.PEER_EXPIRED
# end refresh
# Return future evasive time
def evasive_at(self):
return self.evasive_at
# end evasive_at
# Return future expired time
def expired_at(self):
return self.expired_at
# end expired_at
# Return peer name
def get_name(self):
return self.name
# Set peer name
def set_name(self, name):
self.name = name
# Set current node name, for logging
def set_origin(self, origin):
self.origin = origin
# Return peer status
def get_status(self):
return self.status
# end get_status
# Set peer status
def set_status(self, status):
self.status = status & 0xFF
# end set_status
# Return peer ready state
def get_ready(self):
return self.ready
# end get_ready
# Set peer ready state
def set_ready(self, ready):
self.ready = ready
# end set_ready
# Get peer header value
def get_header(self, key):
return self.headers.get(key, "")
# end get_header
# Get peer headers
def get_headers(self):
return self.headers
# Set peer headers
def set_headers(self, headers):
self.headers = headers
# end set_headers
# Check if messages were lost from peer, returns true if they were
def messages_lost(self, msg):
# The sequence number set by the peer, and our own calculated
# sequence number should be the same.
logger.debug("(%s) recv %s from peer=%s sequence=%d"%(
self.origin,
msg.get_command(),
self.name,
msg.get_sequence()) )
if msg.get_command() == "HELLO":
self.want_sequence = 1
else:
self.want_sequence += 1
self.want_sequence = self.want_sequence % 65535
if self.want_sequence != msg.get_sequence():
logger.debug("(%s) seq error from peer=%s expect=%d, got=%d",
self.origin,
self.name,
self.want_sequence,
msg.get_sequence())
return True;
return False
# end check_message | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/pyre_peer.py | pyre_peer.py |
import zmq
import threading
import logging
from . import zsocket
from . import zhelper
logger = logging.getLogger(__name__)
class ZActor(object):
ZACTOR_TAG = 0x0005cafe
def __init__(self, ctx, actor, *args, **kwargs):
self.tag = self.ZACTOR_TAG
self.ctx = ctx
# Create front-to-back pipe pair
self.pipe, self.shim_pipe = zhelper.zcreate_pipe(ctx)
self.shim_handler = actor
self.shim_args = (self.ctx, self.shim_pipe)+args
self.shim_kwargs = kwargs
self.is_running = False
self.thread = threading.Thread(target=self.run)
# we manage threads exiting ourselves!
self.thread.daemon = False
self.thread.start()
# Mandatory handshake for new actor so that constructor returns only
# when actor has also initialized. This eliminates timing issues at
# application start up.
self.pipe.wait()
def run(self):
self.is_running = True
self.shim_handler(*self.shim_args, **self.shim_kwargs)
self.shim_pipe.set(zmq.SNDTIMEO, 0)
self.shim_pipe.signal()
self.shim_pipe.close()
self.is_running = False
def destroy(self):
# Signal the actor to end and wait for the thread exit code
# If the pipe isn't connected any longer, assume child thread
# has already quit due to other reasons and don't collect the
# exit signal.
if self.tag == 0xDeadBeef:
logger.warning("Zactor: already destroyed")
return
try:
self.pipe.set(zmq.SNDTIMEO, 0)
self.pipe.send_unicode("$TERM")
# maybe self.pipe.wait()?
self.pipe.wait()
except zmq.error.Again:
pass
self.pipe.close()
self.tag = 0xDeadBeef;
def send(self, *args, **kwargs):
return self.pipe.send(*args, **kwargs)
def send_unicode(self, *args, **kwargs):
return self.pipe.send_unicode(*args, **kwargs)
def send_multipart(self, *args, **kwargs):
return self.pipe.send_multipart(*args, **kwargs)
def send_pyobj(self, *args, **kwargs):
return self.pipe.send_pyobj(*args, **kwargs)
def recv(self, *args, **kwargs):
return self.pipe.recv(*args, **kwargs)
def recv_unicode(self, *args, **kwargs):
return self.pipe.recv_unicode(*args, **kwargs)
def recv_multipart(self, *args, **kwargs):
return self.pipe.recv_multipart(*args, **kwargs)
def recv_pyobj(self, *args, **kwargs):
return self.pipe.recv_pyobj(*args, **kwargs)
# --------------------------------------------------------------------------
# Probe the supplied object, and report if it looks like a zactor_t.
def is_zactor(self):
return isinstance(self, ZActor)
# --------------------------------------------------------------------------
# Probe the supplied reference. If it looks like a zactor_t instance,
# return the underlying libzmq actor handle; else if it looks like
# a libzmq actor handle, return the supplied value.
# In Python we just return the pipe socket
def resolve(self):
return self.pipe
def echo_actor(ctx, pipe, *args):
# Do some initialization
pipe.signal()
terminated = False;
while not terminated:
msg = pipe.recv_multipart();
command = msg.pop(0)
if command == b"$TERM":
terminated = True
elif command == b"ECHO":
pipe.send(msg.pop(0))
else:
print("E: invalid message to actor")
pipe.signal()
def zactor_test(verbose=False):
print(" * zactor: ")
actor = ZActor(zmq.Context(), echo_actor, "Hello, World")
actor.send_unicode("ECHO", zmq.SNDMORE)
actor.send_unicode("This is a string")
msg = actor.recv()
print("RECEIVED: %s" %msg)
actor.destroy()
print("OK");
if __name__ == '__main__':
zactor_test() | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/zactor.py | zactor.py |
import logging
import ipaddress
import socket
import zmq
import struct
import time
from sys import platform
from .zactor import ZActor
from . import zhelper
from .zhelper import u
logger = logging.getLogger(__name__)
INTERVAL_DFLT = 1.0
BEACON_MAX = 255 # Max size of beacon data
MULTICAST_GRP = '225.25.25.25'
ENETDOWN = 50 #socket error, network is down
ENETUNREACH = 51 #socket error, network unreachable
class ZBeacon(object):
def __init__(self, ctx, pipe, *args, **kwargs):
self.ctx = ctx # ZMQ context
self.pipe = pipe # Actor command pipe
self.udpsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# UDP socket for send/recv
self.port_nbr = 0 # UDP port number we work on
self.interval = INTERVAL_DFLT # Beacon broadcast interval
self.ping_at = 0 # Next broadcast time
self.transmit = None # Beacon transmit data
self.filter = b"" # Beacon filter data
self.terminated = False # Did caller ask us to quit?
self.verbose = False # Verbose logging enabled?
self.hostname = "" # Saved host name
self.address = None
self.network_address = None
self.broadcast_address = None
self.interface_name = None
self.run()
def __del__(self):
if self.udpsock:
self.udpsock.close()
def prepare_udp(self):
try:
self._prepare_socket()
except ValueError:
logger.exception("Error preparing socket:")
return
try:
self.udpsock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
self.udpsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# On some platforms we have to ask to reuse the port
try:
self.udpsock.setsockopt(socket.SOL_SOCKET,
socket.SO_REUSEPORT, 1)
except AttributeError:
pass
if self.broadcast_address.is_multicast:
# TTL
self.udpsock.setsockopt(socket.IPPROTO_IP,
socket.IP_MULTICAST_TTL, 2)
# TODO: This should only be used if we do not have inproc method!
self.udpsock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
# Usually, the system administrator specifies the
# default interface multicast datagrams should be
# sent from. The programmer can override this and
# choose a concrete outgoing interface for a given
# socket with this option.
#
# this results in the loopback address?
# host = socket.gethostbyname(socket.gethostname())
# self.udpsock.setsockopt(socket.SOL_IP, socket.IP_MULTICAST_IF, socket.inet_aton(host))
# You need to tell the kernel which multicast groups
# you are interested in. If no process is interested
# in a group, packets destined to it that arrive to
# the host are discarded.
# You can always fill this last member with the
# wildcard address (INADDR_ANY) and then the kernel
# will deal with the task of choosing the interface.
#
# Maximum memberships: /proc/sys/net/ipv4/igmp_max_memberships
# self.udpsock.setsockopt(socket.SOL_IP, socket.IP_ADD_MEMBERSHIP,
# socket.inet_aton("225.25.25.25") + socket.inet_aton(host))
self.udpsock.bind(("", self.port_nbr))
group = socket.inet_aton("{0}".format(self.broadcast_address))
mreq = struct.pack('4sl', group, socket.INADDR_ANY)
self.udpsock.setsockopt(socket.SOL_IP,
socket.IP_ADD_MEMBERSHIP, mreq)
else:
# Platform specifics
if platform.startswith("linux"):
# on linux we bind to the broadcast address and send to
# the broadcast address
self.udpsock.bind((str(self.broadcast_address),
self.port_nbr))
else:
self.udpsock.bind(("", self.port_nbr))
logger.debug("Set up a broadcast beacon to {0}:{1}".format(self.broadcast_address, self.port_nbr))
except socket.error:
logger.exception("Initializing of {0} raised an exception".format(self.__class__.__name__))
def _prepare_socket(self):
netinf = zhelper.get_ifaddrs()
logger.debug("Available interfaces: {0}".format(netinf))
for iface in netinf:
# Loop over the interfaces and their settings to try to find the broadcast address.
# ipv4 only currently and needs a valid broadcast address
for name, data in iface.items():
logger.debug("Checking out interface {0}.".format(name))
# For some reason the data we need lives in the "2" section of the interface.
data_2 = data.get(2)
if not data_2:
logger.debug("No data_2 found for interface {0}.".format(name))
continue
address_str = data_2.get("addr")
netmask_str = data_2.get("netmask")
if not address_str or not netmask_str:
logger.debug("Address or netmask not found for interface {0}.".format(name))
continue
if isinstance(address_str, bytes):
address_str = address_str.decode("utf8")
if isinstance(netmask_str, bytes):
netmask_str = netmask_str.decode("utf8")
interface_string = "{0}/{1}".format(address_str, netmask_str)
interface = ipaddress.ip_interface(u(interface_string))
if interface.is_loopback:
logger.debug("Interface {0} is a loopback device.".format(name))
continue
if interface.is_link_local:
logger.debug("Interface {0} is a link-local device.".format(name))
continue
self.address = interface.ip
self.network_address = interface.network.network_address
self.broadcast_address = interface.network.broadcast_address
self.interface_name = name
if self.address:
break
logger.debug("Finished scanning interfaces.")
if not self.address:
self.network_address = ipaddress.IPv4Address(u('127.0.0.1'))
self.broadcast_address = ipaddress.IPv4Address(u(MULTICAST_GRP))
self.interface_name = 'loopback'
self.address = u('127.0.0.1')
logger.debug("Address: {0}".format(self.address))
logger.debug("Network: {0}".format(self.network_address))
logger.debug("Broadcast: {0}".format(self.broadcast_address))
logger.debug("Interface name: {0}".format(self.interface_name))
def configure(self, port_nbr):
self.port_nbr = port_nbr
self.prepare_udp()
self.pipe.send_unicode(str(self.address))
def handle_pipe(self):
# Get just the commands off the pipe
request = self.pipe.recv_multipart()
command = request.pop(0).decode('UTF-8')
if not command:
return -1 # Interrupted
if self.verbose:
logger.debug("zbeacon: API command={0}".format(command))
if command == "VERBOSE":
self.verbose = True
elif command == "CONFIGURE":
port = struct.unpack('I', request.pop(0))[0]
self.configure(port)
elif command == "PUBLISH":
self.transmit = request.pop(0)
if self.interval == 0:
self.interval = INTERVAL_DFLT
# Start broadcasting immediately
self.ping_at = time.time()
elif command == "SILENCE":
self.transmit = None
elif command == "SUBSCRIBE":
self.filter = request.pop(0)
elif command == "UNSUBSCRIBE":
self.filter = None
elif command == "$TERM":
self.terminated = True
else:
logger.error("zbeacon: - invalid command: {0}".format(command))
def handle_udp(self):
try:
frame, addr = self.udpsock.recvfrom(BEACON_MAX)
except Exception as e:
logger.exception("Exception while receiving: {0}".format(e))
return
peername = addr[0]
# If filter is set, check that beacon matches it
is_valid = False
if self.filter is not None:
if len(self.filter) <= len(frame):
match_data = frame[:len(self.filter)]
if (match_data == self.filter):
is_valid = True
# If valid, discard our own broadcasts, which UDP echoes to us
if is_valid and self.transmit:
if frame == self.transmit:
is_valid = False
# If still a valid beacon, send on to the API
if is_valid:
self.pipe.send_unicode(peername, zmq.SNDMORE)
self.pipe.send(frame)
def send_beacon(self):
try:
self.udpsock.sendto(self.transmit, (str(self.broadcast_address),
self.port_nbr))
except OSError as e:
# network down, just wait, it could come back up again.
# socket call errors 50 and 51 relate to the network being
# down or unreachable, the recommended action to take is to
# try again so we don't terminate in these cases.
if e.errno in [ENETDOWN, ENETUNREACH]: pass
# all other cases, we'll terminate
else:
logger.debug("Network seems gone, exiting zbeacon")
self.terminated = True
except socket.error:
logger.debug("Network seems gone, exiting zbeacon")
self.terminated = True
def run(self):
# Signal actor successfully initialized
self.pipe.signal()
self.poller = zmq.Poller()
self.poller.register(self.pipe, zmq.POLLIN)
self.poller.register(self.udpsock, zmq.POLLIN)
while not self.terminated:
timeout = 1
if self.transmit:
timeout = self.ping_at - time.time()
if timeout < 0:
timeout = 0
# Poll on API pipe and on UDP socket
items = dict(self.poller.poll(timeout * 1000))
if self.pipe in items and items[self.pipe] == zmq.POLLIN:
self.handle_pipe()
if self.udpsock.fileno() in items and items[self.udpsock.fileno()] == zmq.POLLIN:
self.handle_udp()
if self.transmit and time.time() >= self.ping_at:
self.send_beacon()
self.ping_at = time.time() + self.interval
if __name__ == '__main__':
import zmq
import struct
import time
speaker = ZActor(zmq.Context(), ZBeacon)
speaker.send_unicode("VERBOSE")
speaker.send_unicode("CONFIGURE", zmq.SNDMORE)
speaker.send(struct.pack("I", 9999))
speaker.send_unicode("PUBLISH", zmq.SNDMORE)
import uuid
transmit = struct.pack('cccb16sH', b'Z', b'R', b'E',
1, uuid.uuid4().bytes,
socket.htons(1300))
speaker.send(transmit)
speaker.destroy() | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/zbeacon.py | zbeacon.py |
import zmq
import struct
class ZSocket(zmq.Socket):
def __init__(self, *args, **kwargs):
super(ZSocket, self).__init__(*args, **kwargs)
# --------------------------------------------------------------------------
# Send a signal over a socket. A signal is a zero-byte message.
# Signals are used primarily between threads, over pipe sockets.
# Returns -1 if there was an error sending the signal.
#def signal(self):
# self.send_unicode("")
# --------------------------------------------------------------------------
# Wait on a signal. Use this to coordinate between threads, over
# pipe pairs. Blocks until the signal is received. Returns -1 on error,
# 0 on success.
#def wait(self):
# while True:
# msg = self.recv()
# print("WAIT MSG", msg)
# --------------------------------------------------------------------------
# Send a signal over a socket. A signal is a short message carrying a
# success/failure code (by convention, 0 means OK). Signals are encoded
# to be distinguishable from "normal" messages and are used primarily
# between threads, over pipe sockets.
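# Illustrative example (based on the code below): signal(1) sends the
# 8 bytes of struct.pack("Q", 0x7766554433221101); wait() on the peer
# socket strips the 0x77665544332211.. magic prefix and returns 1.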
def signal(self, status=0):
signal_value = 0x7766554433221100 + status
self.send(struct.pack("Q", signal_value))
# --------------------------------------------------------------------------
# A signal is a message containing one frame with our 8-byte magic
# value. If we get anything else, we discard it and continue to look
# for the signal message
def wait(self):
while(True):
msg = self.recv()
if len(msg) == 8:
signal_value = struct.unpack('Q', msg)[0]
if (signal_value & 0xFFFFFFFFFFFFFF00) == 0x7766554433221100:
# return True or False based on the signal value send
return signal_value & 255
else:
return -1 | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/zsocket.py | zsocket.py |
import struct
import uuid
import zmq
import logging
STRING_MAX = 255
logger = logging.getLogger(__name__)
class ZreMsg(object):
VERSION = 2
HELLO = 1
WHISPER = 2
SHOUT = 3
JOIN = 4
LEAVE = 5
PING = 6
PING_OK = 7
def __init__(self, id=None, *args, **kwargs):
self.address = ""
self.id = id
self.sequence = 0
self.endpoint = ""
self.groups = ()
self.group = None
self.status = 0
self.name = ""
self.headers = {}
self.content = b""
self.struct_data = kwargs.get("data", b'')
self._needle = 0
self._ceil = len(self.struct_data)
#def __del__(self):
def recv(self, input_socket):
# If we're reading from a ROUTER socket, get address
frames = input_socket.recv_multipart()
if input_socket.type == zmq.ROUTER:
self.address = frames.pop(0)
# we drop the first byte: TODO ref!
try:
self.address = uuid.UUID(bytes=self.address[1:])
except ValueError:
logger.debug("Peer identity frame empty or malformed")
return None
# Read and parse command in frame
self.struct_data = frames.pop(0)
if not self.struct_data:
return None
# Get and check protocol signature
if self._needle != 0:
logger.debug("Message already decoded for protocol signature")
self._ceil = len(self.struct_data)
signature = self._get_number2()
if signature != (0xAAA0 | 1):
logger.debug("Invalid signature {0}".format(signature))
return None
# Get message id and parse per message type
self.id = self._get_number1()
version = self._get_number1()
if version != 2:
logger.debug("Invalid version {0}".format(version))
return None
if self.id == ZreMsg.HELLO:
self.unpack_hello()
elif self.id == ZreMsg.WHISPER:
self.sequence = self._get_number2()
if len(frames):
self.content = frames
elif self.id == ZreMsg.SHOUT:
self.sequence = self._get_number2()
self.group = self._get_string()
if len(frames):
self.content = frames
elif self.id == ZreMsg.JOIN:
self.sequence = self._get_number2()
self.group = self._get_string()
self.status = self._get_number1()
elif self.id == ZreMsg.LEAVE:
self.sequence = self._get_number2()
self.group = self._get_string()
self.status = self._get_number1()
elif self.id == ZreMsg.PING:
self.sequence = self._get_number2()
elif self.id == ZreMsg.PING_OK:
self.sequence = self._get_number2()
else:
logger.debug("Message type {0} unknown".format(self.id))
# Send the zre_msg to the output, and destroy it
def send(self, output_socket):
# clear data
self.struct_data = b''
self._needle = 0
# add signature
self._put_number2(0xAAA0 | 1)
# add id
self._put_number1(self.id)
#print(self.struct_data)
# add version
self._put_number1(2)
if self.id == ZreMsg.HELLO:
self.pack_hello()
elif self.id == ZreMsg.WHISPER:
self._put_number2(self.sequence)
# add content in a new frame
elif self.id == ZreMsg.SHOUT:
self._put_number2(self.sequence)
self._put_string(self.group)
# add content in a new frame
elif self.id == ZreMsg.JOIN:
self._put_number2(self.sequence)
self._put_string(self.group)
self._put_number1(self.status)
elif self.id == ZreMsg.LEAVE:
self._put_number2(self.sequence)
self._put_string(self.group)
self._put_number1(self.status)
elif self.id == ZreMsg.PING:
self._put_number2(self.sequence)
elif self.id == ZreMsg.PING_OK:
self._put_number2(self.sequence)
else:
logger.debug("Message type {0} unknown".format(self.id))
# If we're sending to a ROUTER, we send the address first
if output_socket.type == zmq.ROUTER:
output_socket.send(self.address.bytes, zmq.SNDMORE)
# Now send the data frame
if (self.content):
output_socket.send(self.struct_data, zmq.SNDMORE)
if isinstance(self.content, list):
output_socket.send_multipart(self.content)
else:
output_socket.send(self.content)
else:
output_socket.send(self.struct_data)
# Send the HELLO to the output in one step
def send_hello(self, output, sequence, ipaddress, mailbox, groups, status, headers):
print("E: NOT IMPLEMENTED")
pass
# Send the WHISPER to the output in one step
def send_whisper(self, output, sequence, content):
print("E: NOT IMPLEMENTED")
pass
# Send the SHOUT to the output in one step
def send_shout(self, output, sequence, group, content):
print("E: NOT IMPLEMENTED")
pass
# Send the JOIN to the output in one step
def send_join(self, output, sequence, group, status):
print("E: NOT IMPLEMENTED")
pass
# Send the LEAVE to the output in one step
def send_leave(self, sequence, group, status):
print("E: NOT IMPLEMENTED")
pass
# Send the PING to the output in one step
def send_ping(self, output, sequence):
print("E: NOT IMPLEMENTED")
pass
# Send the PING_OK to the output in one step
def send_ping_ok(self, output, sequence):
print("E: NOT IMPLEMENTED")
pass
# Duplicate the zre_msg message
def dup(self):
print("E: NOT IMPLEMENTED")
pass
# Print contents of message to stdout
def dump(self):
print("E: NOT IMPLEMENTED")
pass
# Get/set the message address
def get_address(self):
return self.address
def set_address(self, address):
self.address = address
# Get the zre_msg id and printable command
def get_id(self):
return self.id
def set_id(self, id):
logger.warning("E: set_id NOT IMPLEMENTED")
def get_command(self):
if self.id == ZreMsg.HELLO:
return "HELLO"
if self.id == ZreMsg.WHISPER:
return "WHISPER"
if self.id == ZreMsg.SHOUT:
return "SHOUT"
if self.id == ZreMsg.JOIN:
return "JOIN"
if self.id == ZreMsg.LEAVE:
return "LEAVE"
if self.id == ZreMsg.PING:
return "PING"
if self.id == ZreMsg.PING_OK:
return "PING_OK"
def get_name(self):
return self.name
def set_name(self, name):
self.name = name
# Get/set the sequence field
def get_sequence(self):
return self.sequence
def set_sequence(self, sequence):
self.sequence = sequence
# Get/set the endpoint field
def get_endpoint(self):
return self.endpoint
def set_endpoint(self, endpoint):
self.endpoint = endpoint
# Get/set the ipaddress field
def get_ipaddress(self):
return self.ipaddress
def set_ipaddress(self, ipaddr):
self.ipaddress = ipaddr
# Get/set the mailbox field
def get_mailbox(self):
return self.mailbox
def set_mailbox(self, port):
self.mailbox = port
# Get/set the groups field
def get_groups(self):
return self.groups
def set_groups(self, groups):
self.groups = groups
# Iterate through the groups field, and append a groups value
# TODO: do we need this in python? l186 zre_msg.h
# Get/set the status field
def get_status(self):
return self.status
def set_status(self, status):
self.status = status
# Get/set the headers field
def get_headers(self):
return self.headers
def set_headers(self, headers):
self.headers = headers
# Get/set a value in the headers dictionary
# TODO: l208 zre_msg.h
# Get/set the group field
def get_group(self):
return self.group
def set_group(self, group):
self.group = group
def _get_string(self):
s_len = self._get_number1()
s = struct.unpack_from(str(s_len) + 's', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('s' * s_len)
return s[0].decode('UTF-8')
def _get_number1(self):
num = struct.unpack_from('>B', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('>B')
return num[0]
def _get_number2(self):
num = struct.unpack_from('>H', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('>H')
return num[0]
def _get_number4(self):
num = struct.unpack_from('>I', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('>I')
return num[0]
def _get_number8(self):
num = struct.unpack_from('>Q', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('>Q')
return num[0]
def _get_long_string(self):
s_len = self._get_number4()
s = struct.unpack_from(str(s_len) + 's', self.struct_data, offset=self._needle)
self._needle += struct.calcsize('s' * s_len)
return s[0].decode('UTF-8')
def _put_string(self, s):
self._put_number1(len(s))
d = struct.pack('%is' % len(s), s.encode('UTF-8'))
self.struct_data += d
def _put_number1(self, nr):
d = struct.pack('>B', nr)
self.struct_data += d
def _put_number2(self, nr):
d = struct.pack('>H', nr)
self.struct_data += d
def _put_number4(self, nr):
d = struct.pack('>I', nr)
self.struct_data += d
def _put_number8(self, nr):
d = struct.pack('>Q', nr)
self.struct_data += d
def _put_long_string(self, s):
self._put_number4(len(s))
d = struct.pack('%is' % len(s), s.encode('UTF-8'))
self.struct_data += d
def unpack_hello(self):
"""unpack a zre hello packet
sequence number 2
endpoint string
groups strings
status number 1
name string
headers dictionary
"""
#self._needle = 0
self.sequence = self._get_number2()
#print(self.sequence)
#print("needle is at: %i"% self._needle )
self.endpoint = self._get_string()
#print(self.ipaddress)
#print("needle is at: %i"% self._needle )
group_len = self._get_number4()
#print("needle is at: %i"% self._needle )
#print("grouplen: ", group_len)
self.groups = []
for x in range(group_len):
self.groups.append(self._get_long_string())
#print(self.groups)
#print("post_group: needle is at: %i"% self._needle )
self.status = self._get_number1()
self.name = self._get_string()
headers_len = self._get_number4()
self.headers = {}
for x in range(headers_len):
key = self._get_string()
val = self._get_long_string()
self.headers.update({key: val})
#import ast
#for hdr in hdrlist:
# # TODO: safer to use ast.literal_eval
# headers.update(ast.literal_eval(hdr))
#print(self.headers)
def pack_hello(self):
"""Pack a zre hello packet
sequence number 2
endpoint string
groups strings
status number 1
name string
headers dictionary
"""
# clear data
#self.struct_data = b''
#print(len(self.struct_data))
#self._put_number2(0xAAA0)
#self._put_number1(self.id)
self._put_number2(self.sequence)
self._put_string(self.endpoint)
self._put_number4(len(self.groups))
for g in self.groups:
self._put_long_string(g)
self._put_number1(self.status)
self._put_string(self.name)
self._put_number4(len(self.headers))
for key, val in self.headers.items():
self._put_string(key)
self._put_long_string(val)
if __name__ == '__main__':
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
# self._put_long_string("%s=%s" % (key, val)) # undefined: self, key, val
testdata = struct.pack('>Hb9sII2sI2sI2sbb4sIb1sI1sb1sI1s',
11, # sequence
9, # str length
b"192:20123", # endpoint
20123, # mailbox
3, # groups len
2, b"g1", # length + groupname
2, b"g2", # length + groupname
2, b"g3", # length + groupname
4, # status
4, b"NAME", # name
2, # header len
1, b"a", # length + dict
1, b"z", # length + dict
1, b"b", # length + dict
1, b"b" # length + dict
)
logger.debug("New ZRE HELLO message")
m = ZreMsg(ZreMsg.HELLO, data=testdata)
logger.debug("Unpack a HELLO message")
m.unpack_hello()
logger.debug("Pack a HELLO message")
m.pack_hello()
logger.debug("Unpack the packed HELLO message")
m.unpack_hello() | zeromq-pyre | /zeromq-pyre-0.3.4.tar.gz/zeromq-pyre-0.3.4/pyre/zre_msg.py | zre_msg.py |
# Zeroncy
A simple Python tool to make your project more decoupled.
Just put your project variables in a config file instead of storing them in environment variables. You can use a .env or JSON file...
[](https://badge.fury.io/py/zeroncy)

[](https://github.com/Lnvictor/zeroncy/actions/workflows/python-app.yml)
[](https://github.com/psf/black)
[](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)
## Installing
```console
pip install zeroncy
```
## How to Use
1. Using .env file
- create a .env file in project root and set variables...
```yml
DATABASE_URL=postgres://user:pwd@localhost:5432/psql
ALLOWED_HOSTS=localhost, 127.0.0.0.1
PORT=5000
```
- Then you can use your variables in your settings module...
```python
>>> import zeroncy
>>> zeroncy.config()
>>> zeroncy.get("DATABASE_URL")
'postgres://user:pwd@localhost:5432/psql'
# If you want a different type you can pass the cast parameter
>>> zeroncy.get("PORT", cast=int)
5000
# If your var has more than one value, you must set the many parameter to true...
>>> zeroncy.get("ALLOWED_HOSTS", many=True)
['localhost', '127.0.0.0.1']
```
2. Using .env.json file
- Create a .env.json file on project root:
```json
{
"DATABASE_URL": "postgres://user:pwd@localhost:5432/psql",
"ALLOWED_HOSTS": "localhost, 127.0.0.0.1",
"PORT": 5000
}
```
- Then you can use it in a similar way to the previous one
```python
>>> import zeroncy
>>> zeroncy.config(dict) # passes dict as parameter
>>> zeroncy.get("DATABASE_URL")
'postgres://user:pwd@localhost:5432/psql'
>>> zeroncy.get("PORT")
5000
>>> zeroncy.get("ALLOWED_HOSTS", many=True)
['localhost', '127.0.0.0.1']
# Note that with the JSON config you don't need to pass the cast parameter for other types (Integer in this example)
```
# References
- This project was inspired by the [python-decouple](https://github.com/henriquebastos/python-decouple) lib; it's a simpler adaptation
- [Python Docs]()
---
# LICENSE
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
| zeroncy | /zeroncy-1.0.2.tar.gz/zeroncy-1.0.2/README.md | README.md |
# ZeroNet [](https://github.com/ZeroNetX/ZeroNet/actions/workflows/tests.yml) [](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/faq/) [](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/help_zeronet/donate/) [](https://hub.docker.com/r/canewsin/zeronet)
<!--TODO: Update Onion Site -->
Decentralized websites using Bitcoin crypto and the BitTorrent network - https://zeronet.dev / [ZeroNet Site](http://127.0.0.1:43110/1ZeroNetyV5mKY9JF1gsm82TuBXHpfdLX/), Unlike Bitcoin, ZeroNet Doesn't need a blockchain to run, But uses cryptography used by BTC, to ensure data integrity and validation.
## Why?
* We believe in open, free, and uncensored network and communication.
* No single point of failure: Site remains online so long as at least 1 peer is
serving it.
* No hosting costs: Sites are served by visitors.
* Impossible to shut down: It's nowhere because it's everywhere.
* Fast and works offline: You can access the site even if Internet is
unavailable.
## Features
* Real-time updated sites
* Namecoin .bit domains support
* Easy to setup: unpack & run
* Clone websites in one click
* Password-less [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki)
based authorization: Your account is protected by the same cryptography as your Bitcoin wallet
* Built-in SQL server with P2P data synchronization: Allows easier site development and faster page load times
* Anonymity: Full Tor network support with .onion hidden services instead of IPv4 addresses
* TLS encrypted connections
* Automatic uPnP port opening
* Plugin for multiuser (openproxy) support
* Works with any browser/OS
## How does it work?
* After starting `zeronet.py` you will be able to visit zeronet sites using
`http://127.0.0.1:43110/{zeronet_address}` (eg.
`http://127.0.0.1:43110/1HELLoE3sFD9569CLCbHEAVqvqV7U2Ri9d`).
* When you visit a new zeronet site, it tries to find peers using the BitTorrent
network so it can download the site files (html, css, js...) from them.
* Each visited site is also served by you.
* Every site contains a `content.json` file which holds all other files in a sha512 hash
and a signature generated using the site's private key.
* If the site owner (who has the private key for the site address) modifies the
site, they sign the new `content.json` and publish it to the peers.
Afterwards, the peers verify the `content.json` integrity (using the
signature), download the modified files and publish the new content to
other peers (a minimal sketch of this check follows below).
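A rough, illustrative sketch of that file-integrity check (this is *not*
ZeroNet's actual implementation; the simplified manifest layout and the
signature step are assumptions made for the example):

```python
import json
import hashlib

def files_match_manifest(site_dir, manifest_name="content.json"):
    """Return True if every file listed in a content.json-style manifest
    still matches the sha512 hash recorded for it (simplified layout)."""
    with open(f"{site_dir}/{manifest_name}") as f:
        manifest = json.load(f)
    for rel_path, info in manifest.get("files", {}).items():
        with open(f"{site_dir}/{rel_path}", "rb") as f:
            if hashlib.sha512(f.read()).hexdigest() != info["sha512"]:
                return False
    # A real client would also verify the manifest's signature against the
    # site address here, using a Bitcoin-style message verification routine.
    return True
```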
#### [Slideshow about ZeroNet cryptography, site updates, multi-user sites »](https://docs.google.com/presentation/d/1_2qK1IuOKJ51pgBvllZ9Yu7Au2l551t3XBgyTSvilew/pub?start=false&loop=false&delayms=3000)
#### [Frequently asked questions »](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/faq/)
#### [ZeroNet Developer Documentation »](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/site_development/getting_started/)
## Screenshots


#### [More screenshots in ZeroNet docs »](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/using_zeronet/sample_sites/)
## How to join
### Windows
- Download [ZeroNet-win.zip](https://github.com/ZeroNetX/ZeroNet/releases/latest/download/ZeroNet-win.zip) (26MB)
- Unpack anywhere
- Run `ZeroNet.exe`
### macOS
- Download [ZeroNet-mac.zip](https://github.com/ZeroNetX/ZeroNet/releases/latest/download/ZeroNet-mac.zip) (14MB)
- Unpack anywhere
- Run `ZeroNet.app`
### Linux (x86-64bit)
- `wget https://github.com/ZeroNetX/ZeroNet/releases/latest/download/ZeroNet-linux.zip`
- `unzip ZeroNet-linux.zip`
- `cd ZeroNet-linux`
- Start with: `./ZeroNet.sh`
- Open the ZeroHello landing page in your browser by navigating to: http://127.0.0.1:43110/
__Tip:__ Start with `./ZeroNet.sh --ui_ip '*' --ui_restrict your.ip.address` to allow remote connections on the web interface.
### Android (arm, arm64, x86)
- minimum Android version supported 21 (Android 5.0 Lollipop)
- [<img src="https://play.google.com/intl/en_us/badges/images/generic/en_badge_web_generic.png"
alt="Download from Google Play"
height="80">](https://play.google.com/store/apps/details?id=in.canews.zeronetmobile)
- APK download: https://github.com/canewsin/zeronet_mobile/releases
### Android (arm, arm64, x86) Thin Client for Preview Only (Size 1MB)
- minimum Android version supported 16 (JellyBean)
- [<img src="https://play.google.com/intl/en_us/badges/images/generic/en_badge_web_generic.png"
alt="Download from Google Play"
height="80">](https://play.google.com/store/apps/details?id=dev.zeronetx.app.lite)
#### Docker
There is an official image, built from source at: https://hub.docker.com/r/canewsin/zeronet/
### Install from source
- `wget https://github.com/ZeroNetX/ZeroNet/releases/latest/download/ZeroNet-src.zip`
- `unzip ZeroNet-src.zip`
- `cd ZeroNet`
- `sudo apt-get update`
- `sudo apt-get install python3-pip`
- `sudo python3 -m pip install -r requirements.txt`
- Start with: `python3 zeronet.py`
- Open the ZeroHello landing page in your browser by navigating to: http://127.0.0.1:43110/
## Current limitations
* File transactions are not compressed
* No private sites
## How can I create a ZeroNet site?
* Click on **⋮** > **"Create new, empty site"** menu item on the site [ZeroHello](http://127.0.0.1:43110/1HELLoE3sFD9569CLCbHEAVqvqV7U2Ri9d).
* You will be **redirected** to a completely new site that is only modifiable by you!
* You can find and modify your site's content in **data/[yoursiteaddress]** directory
* After the modifications open your site, drag the topright "0" button to left, then press **sign** and **publish** buttons on the bottom
Next steps: [ZeroNet Developer Documentation](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/site_development/getting_started/)
## Help keep this project alive
- Bitcoin: 1ZeroNetyV5mKY9JF1gsm82TuBXHpfdLX (Preferred)
- LiberaPay: https://liberapay.com/PramUkesh
- Paypal: https://paypal.me/PramUkesh
- Others: [Donate](https://docs.zeronet.dev/1DeveLopDZL1cHfKi8UXHh2UBEhzH6HhMp/help_zeronet/donate/#help-to-keep-zeronet-development-alive)
#### Thank you!
* More info, help, changelog, zeronet sites: https://www.reddit.com/r/zeronetx/
* Come, chat with us: [#zeronet @ FreeNode](https://kiwiirc.com/client/irc.freenode.net/zeronet) or on [gitter](https://gitter.im/canewsin/ZeroNet)
* Email: [email protected]
| zeronetx | /zeronetx-0.0.1.tar.gz/zeronetx-0.0.1/README.md | README.md |
School research project
------------------------
I created this package for my school research project.
I want to create an AI that can predict whether a human will get heart failure.
For this I created a very long Python file, but I wanted to make it easier to understand.
So, I created this package to get a shorter file and make it easier to understand.
How to use
--------------
This package uses the scikit-learn package and therefore some classes like the MLPRegressor are the same.
1. Import the package: "import zeroone_ai"
2. Choose which class you are going to use: "from zeroone_ai import MLPRegressor" (a short usage sketch follows below)
3. (More coming in the future)
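A minimal usage sketch (the CSV file name and the column names here are made
up for illustration; replace them with your own dataset and columns):

    from zeroone_ai import MLPRegressor

    model = MLPRegressor("heart.csv",
                         ["age", "cholesterol", "max_heart_rate"],
                         "heart_failure")
    model.train(hidden_layer_sizes=(10, 10))   # trains and prints the accuracy
    model.plot(50)                             # compares predictions to true labels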
License
--------------
- scikit-learn (new BSD) https://pypi.org/project/scikit-learn/
- pandas (BSD 3-Clause License) https://github.com/pandas-dev/pandas/blob/master/LICENSE
- matplotlib (Python Software Foundation License (PSF)) https://pypi.org/project/matplotlib/3.4.3/
- numpy (BSD License (BSD)) https://github.com/numpy/numpy/blob/main/LICENSE.txt | zeroone-ai | /zeroone-ai-0.0.11.tar.gz/zeroone-ai-0.0.11/README.txt | README.txt |
import numpy as np
from numpy.core.numeric import tensordot
import pandas as pd
import pickle
import matplotlib.pyplot as matplot
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor as MLPR
from sklearn.linear_model import LogisticRegression as LR
from sklearn.neural_network import MLPClassifier as MLPC
from sklearn.metrics import plot_confusion_matrix
def accuracy(pred_labels, true_labels):
    # Fraction of predictions that exactly match the true labels
    # (classification accuracy, not a mean absolute error).
    length = len(true_labels)
    correct = 0
    for pred_label, true_label in zip(pred_labels, true_labels):
        if pred_label == true_label:
            correct += 1
    return correct / length
def normalize(df, mapping):
result = df.copy()
for feature_name in df.columns:
max_value = mapping[feature_name]['max']
min_value = mapping[feature_name]['min']
result[feature_name] = (df[feature_name] - min_value) / (max_value - min_value)
return result
def plt(true_labels,pred_labels,label,length,yrange,ax):
if ax == None:
fig, ax = matplot.subplots()
index = np.arange(length)
bar_width = 0.25
opacity = 0.8
rects1 = ax.bar(index, true_labels[:length], bar_width, alpha=opacity,color='r',label='True_labels')
rects2 = ax.bar(index + bar_width, pred_labels[:length], bar_width,alpha=opacity,color='b',label='Pred_labels')
rects2 = ax.bar(index + 2*bar_width, abs(true_labels[:length]-pred_labels[:length]), bar_width,alpha=opacity,color='g',label='Absolute Error')
ax.ticklabel_format(style='plain')
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_title(label)
if yrange != None:
ax.set_ylim(yrange)
ax.legend()
if ax == None:
matplot.show()
matplot.show()
class MLPRegressor:
def __init__(self,dataset,input_data,label,test_size=0.2,random_state=42,mapping=None):
self.label = label
self.mapping = mapping
dataset = pd.read_csv(dataset)
input_data = dataset[input_data]
if mapping == None:
input_data = (input_data - input_data.mean()) / (input_data.max() - input_data.min())
self.mean = input_data.mean()
self.max = input_data.max()
self.min = input_data.min()
else:
input_data = normalize(input_data, mapping)
label = dataset[self.label]
self.data_train, self.data_test, self.labels_train, self.labels_test = train_test_split(input_data,label,test_size=test_size,random_state=random_state)
def plot(self,length,yrange=None,ax=None):
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
plt(true_labels,pred_labels,self.label,length,yrange,ax)
def prediction(self, prediction_data):
if self.mapping == None:
prediction_data = (prediction_data - self.mean) / (self.max - self.min)
else:
prediction_data = normalize(prediction_data, self.mapping)
pred_label = self.model.predict(prediction_data)
return pred_label
def train(self,hidden_layer_sizes):
self.model = MLPR(hidden_layer_sizes)
self.model.fit(self.data_train,self.labels_train)
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
self.accuracy = accuracy(pred_labels,true_labels)
print('The model has an accuracy of {}.'.format(self.accuracy))
def epochtrain(self,hidden_layer_sizes,epochs,num_data):
self.model = MLPR(hidden_layer_sizes,max_iter=1,warm_start=True)
train_accs = []
test_accs = []
for epoch in range(epochs):
self.model.fit(self.data_train[:num_data], self.labels_train[:num_data])
pred_labels = self.model.predict(self.data_train[:num_data])
true_labels = self.labels_train[:num_data]
train_acc = accuracy(pred_labels,true_labels)
train_accs.append(train_acc)
pred_labels = self.model.predict(self.data_test[:1000])
true_labels = self.labels_test[:1000]
test_acc = accuracy(pred_labels,true_labels)
test_accs.append(test_acc)
matplot.plot(train_accs,label='Train acc')
matplot.plot(test_accs,label='Test acc')
matplot.xlabel('Epoch')
matplot.ylabel('Accuracy')
matplot.ylim(0.1)
matplot.legend()
matplot.plot()
matplot.show()
class MLPClassifier:
def __init__(self,dataset,input_data,label,test_size=0.2,random_state=42,mapping=None):
self.label = label
self.mapping = mapping
dataset = pd.read_csv(dataset)
input_data = dataset[input_data]
if mapping == None:
input_data = (input_data - input_data.mean()) / (input_data.max() - input_data.min())
self.mean = input_data.mean()
self.max = input_data.max()
self.min = input_data.min()
else:
input_data = normalize(input_data, mapping)
label = dataset[self.label]
self.data_train, self.data_test, self.labels_train, self.labels_test = train_test_split(input_data,label,test_size=test_size,random_state=random_state)
def plot(self,length,yrange=None,ax=None):
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
plt(true_labels,pred_labels,self.label,length,yrange,ax)
def prediction(self, prediction_data):
if self.mapping == None:
prediction_data = (prediction_data - self.mean) / (self.max - self.min)
else:
prediction_data = normalize(prediction_data, self.mapping)
pred_label = self.model.predict(prediction_data)
return pred_label
def train(self,hidden_layer_sizes):
self.model = MLPC(hidden_layer_sizes)
self.model.fit(self.data_train,self.labels_train)
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
self.accuracy = accuracy(pred_labels,true_labels)
print('The model has an accuracy of {}.'.format(self.accuracy))
def epochtrain(self,hidden_layer_sizes,epochs,num_data):
self.model = MLPC(hidden_layer_sizes,max_iter=1,warm_start=True)
train_accs = []
test_accs = []
for epoch in range(epochs):
self.model.fit(self.data_train[:num_data], self.labels_train[:num_data])
pred_labels = self.model.predict(self.data_train[:num_data])
true_labels = self.labels_train[:num_data]
train_acc = accuracy(pred_labels,true_labels)
train_accs.append(train_acc)
pred_labels = self.model.predict(self.data_test[:1000])
true_labels = self.labels_test[:1000]
test_acc = accuracy(pred_labels,true_labels)
test_accs.append(test_acc)
matplot.plot(train_accs,label='Train acc')
matplot.plot(test_accs,label='Test acc')
matplot.xlabel('Epoch')
matplot.ylabel('Accuracy')
matplot.ylim(0.1)
matplot.legend()
matplot.plot()
matplot.show()
class LogisticRegressor:
def __init__(self,dataset,input_data,label,test_size=0.2,random_state=42,mapping=None):
self.label = label
self.mapping = mapping
dataset = pd.read_csv(dataset)
input_data = dataset[input_data]
if mapping == None:
input_data = (input_data - input_data.mean()) / (input_data.max() - input_data.min())
self.mean = input_data.mean()
self.max = input_data.max()
self.min = input_data.min()
else:
input_data = normalize(input_data, mapping)
label = dataset[self.label]
self.data_train, self.data_test, self.labels_train, self.labels_test = train_test_split(input_data,label,test_size=test_size,random_state=random_state)
def plot(self,length,yrange=None,ax=None):
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
plt(true_labels,pred_labels,self.label,length,yrange,ax)
def prediction(self, prediction_data):
if self.mapping == None:
prediction_data = (prediction_data - self.mean) / (self.max - self.min)
else:
prediction_data = normalize(prediction_data, self.mapping)
pred_label = self.model.predict(prediction_data)
return pred_label
def train(self):
self.model = LR()
self.model.fit(self.data_train,self.labels_train)
pred_labels = self.model.predict(self.data_test)
true_labels = self.labels_test
pred_labels = np.round(np.clip(pred_labels,0,1))
self.accuracy = accuracy(pred_labels,true_labels)
print('The model has an accuracy of {}.'.format(self.accuracy))
from time import sleep
from threading import Event, Thread
# Hardware-related hooks
import zerophone_hw
from pyric import pyw
# Local imports
from rpc_api import RPCApi
from sources import ThrottledSource, ConstantSource
# API version, kind-of-semver
api_version = (0,1,0)
api = RPCApi({"rpc_port":9376, "rpc_host":"127.0.0.1"})
version = zerophone_hw.hw_version
dcdc = None
led = None
charger = None
modem = None
def register_with(api, aliases=[], name=None):
def decorator(f):
api.register_function(f, function_name=name, aliases=aliases)
return f
return decorator
@register_with(api, name="api_version")
def get_api_version():
return api_version
@register_with(api, name="recreate_objects")
def setup_objects():
global dcdc, led, charger, modem
dcdc = zerophone_hw.USB_DCDC()
led = zerophone_hw.RGB_LED()
charger = zerophone_hw.Charger()
modem = zerophone_hw.GSM_Modem()
setup_objects()
# USB DC-DC functions
@register_with(api)
def dcdc_state():
return dcdc.gpio_state
@register_with(api)
def turn_dcdc_on():
dcdc.on()
return True
@register_with(api)
def turn_dcdc_off():
dcdc.off()
return True
""" Not sure if we need this kind of calling convention
@register_with(api)
def dcdc(state=None):
if state is None:
return dcdc_state()
else:
if state:
return turn_dcdc_on()
else:
return turn_dcdc_off()
"""
# Hardware info functions
@register_with(api)
def get_board_version():
return version.string()
@register_with(api)
def get_hwlib_version():
return version.library()
serial_s = ConstantSource(version.get_serial)
@register_with(api)
def cpu_serial():
return serial_s.get()
# WiFi functions
def get_wif_from_wifs(wifs):
return wifs[0] # *very* simple heuristic =D
def pyw_link_info():
wifs = pyw.winterfaces()
if not wifs:
return None
wif = get_wif_from_wifs(wifs)
c = pyw.getcard(wif)
info = pyw.link(c)
info["card"] = wif
return info
wifi_info_source = ThrottledSource(pyw_link_info, 3)
@register_with(api)
def wifi_connected():
info = wifi_info_source.get()
if not info:
return None
return info.get("stat", None) == 'associated'
@register_with(api)
def wifi_info():
info = wifi_info_source.get()
if not info:
return None
return info
@register_with(api)
def wifi_strength():
info = wifi_info_source.get()
if not info:
return None
rss = info.get("rss", None)
if rss is None:
return None
# Will change soon, for now, just monitoring the possible values.
return rss
@register_with(api)
def wifi_ssid():
info = wifi_info_source.get()
if not info:
return None
return info.get("ssid", None)
@register_with(api)
def wifi_bssid():
info = wifi_info_source.get()
if not info:
return None
return info.get("bssid", None)
""" Not implemented in software yet
@register_with(api)
def wifi_powered():
return True
@register_with(api)
def turn_wifi_on():
return True
@register_with(api)
def turn_wifi_off():
return False
"""
# LED functions
@register_with(api)
def set_led_color(color_str):
return led.set_color(color_str)
@register_with(api)
def set_led_rgb(r, g, b):
return led.set_rgb(r, g, b)
# Charger functions
@register_with(api)
def charger_connected():
return charger.connected()
""" Not implemented in software yet
# Battery functions
@register_with(api)
def battery_level():
return 100
# GSM functions
@register_with(api)
def gsm_powered():
return False
@register_with(api)
def start_gsm():
modem.reset()
return False
@register_with(api)
def stop_gsm():
return False
@register_with(api)
def restart_gsm():
result = stop_gsm()
if not result:
return False
result = start_gsm()
if not result:
return False
return True
@register_with(api)
def gsm_strength():
return 20
"""
# The source polling API - is more efficient for sources that
# require resources for polling (i.e. requesting battery level
# from the GSM modem using AT commands). If there's anywhere
# that we can save ourselves a couple of context switches and
# CPU time, it's by using this polling API (together with
# ThrottledSource and whatever else comes in the future).
# "source_name":(callback, throttle_level)
sources = {"dcdc_state":(dcdc_state, 1),
#"gsm_running":gsm_running,
#"gsm_strength":gsm_strength,
"wifi_strength":(wifi_strength, 10),
"wifi_connected":(wifi_connected, 10),
"wifi_info":(wifi_info, 10),
"wifi_ssid":(wifi_ssid, 10),
"wifi_bssid":(wifi_bssid, 10),
"charger_connected":(charger_connected, 10),
#"battery_level":battery_level,
}
source_values = {s:None for s in sources.keys()}
source_refcount = {s:0 for s in sources.keys()}
source_throttle = {s:0 for s in sources.keys()}
source_timeouts = {s:0 for s in sources.keys()}
requests = []
source_timeout = 200
@register_with(api)
def request_source_poll(keys):
requests.append(keys)
for k in keys:
if k in source_refcount:
source_refcount[k] += 1
source_timeouts[k] = 0
else: # Unknown source, but we won't just error out on it, that wouldn't be nice
pass
@register_with(api, aliases=["get_polled_sources"])
def get_sources(keys):
data = {}
for k in keys:
if k in source_timeouts.keys():
source_timeouts[k] = 0
if k not in sources.keys():
v = "unavailable"
elif source_refcount[k] == 0:
v = "not polled"
else:
v = source_values.get(k, "unavailable")
data[k] = v
return data
@register_with(api)
def check_polled_sources(keys):
polled_sources = list_polled_sources()
return all([key in polled_sources for key in keys])
@register_with(api, aliases=["get_available_sources"])
def list_sources():
return list(sources.keys())
@register_with(api)
def list_polled_sources():
return [k for k,v in source_refcount.items() if v>0]
def polling_process():
sources_to_poll = list_polled_sources()
for source in sources_to_poll:
#print(source)
if source in sources.keys():
#print("polling source {} - throttle {}".format(source, source_throttle[source]))
if source_throttle[source] == sources[source][1]:
print("polling source {}".format(source))
source_values[source] = sources[source][0]()
source_throttle[source] = 0
else:
source_throttle[source] += 1
else:
source_values[source] = "unrecognized source"
do_run_polling = Event()
sleep_time = 0.1
def polling_loop():
do_run_polling.set()
while do_run_polling.isSet():
polling_process()
for source, value in source_timeouts.items():
if source_refcount[source] > 0:
#print("{} - {}".format(source, value))
if value >= source_timeout:
source_refcount[source] = 0
source_timeouts[source] = value + 1
sleep(sleep_time)
t = None
def run_polling():
global t
t = Thread(target=polling_loop)
t.daemon = True
t.start()
def main():
api.start_thread()
polling_loop()
if __name__ == "__main__":
api.start_thread()
#polling_loop()
run_polling() | zerophone-api-daemon | /zerophone_api_daemon-0.1.0.tar.gz/zerophone_api_daemon-0.1.0/zerophone_api_daemon/zerophone_api_daemon.py | zerophone_api_daemon.py |
import argparse
__all__ = ['get_hw_version_str', 'Charger', 'RGB_LED', 'USB_DCDC', "GSM_Modem"]
__version__ = '0.4.2'
import os
import sys
from copy import copy
from time import sleep
import gpio
sys.excepthook = sys.__excepthook__
# GPIO library workaround - it sets excepthook
# to PDB debug, that's good by itself, but it's going to
# propagate through apps' code, and that's not good
gpio.log.setLevel(gpio.logging.INFO)
# Otherwise, a bunch of stuff is printed on the screen
class Version(object):
"""
This helps us understand the hardware version of the phone.
For now, it only supports a database stored on the SD card,
tied to the serial number of the phone.
"""
version = None
version_db = "/etc/zphw.db"
cpuinfo_file = "/proc/cpuinfo"
default_version = "gamma"
serial_marker = "Serial"
autodetect_failed = True
def __init__(self):
pass
def read_database(self):
try:
with open(self.version_db, 'r') as f:
output = f.read()
except:
return {}
lines = output.split(os.linesep)
lines = [line.strip() for line in lines]
lines = list(filter(None, lines))
entries = dict([line.split(" ", 1) for line in lines])
return entries
def detect_version(self):
entries = self.read_database()
serial = self.get_serial()
if serial in entries:
self.autodetect_failed = False
return entries[serial]
else:
return self.default_version
def library(self):
return __version__
def set_version(self, version_str):
entries = self.read_database()
serial = self.get_serial()
lines = [" ".join(i) for i in entries.items()]
lines.append("{} {}".format(serial, version_str))
try:
with open(self.version_db, 'w') as f:
f.write(os.linesep.join(lines)+os.linesep)
except:
return False
else:
return True
def get_serial(self):
"""Get the CPU serial number"""
with open(self.cpuinfo_file, 'r') as f:
output = f.read()
lines = output.split(os.linesep)
lines = [line.strip() for line in lines]
lines = list(filter(None, lines))
for line in lines:
if line.startswith(self.serial_marker):
x = line[len(self.serial_marker):]
serial = x.split(':')[-1].strip()
return serial
return None
def string(self):
"""Get the version string"""
if not self.version or self.autodetect_failed:
self.version = self.detect_version()
return self.version
def version_unknown(self):
"""Tells whether the version could be autodetermined"""
if not self.version:
self.version = self.detect_version()
return self.autodetect_failed
hw_version = Version()
def get_hw_version_str():
return hw_version.string()
class Charger(object):
def __new__(cls, *args, **kwargs):
if get_hw_version_str() == "gamma":
return Charger_Gamma(*args, **kwargs)
elif get_hw_version_str() in ["delta", "delta-b"]:
return Charger_Delta(*args, **kwargs)
class Charger_Gamma(object):
chg_sense_gpio = 503
def __init__(self):
self.chg_sense_gpio_setup = False
def connected(self):
if not self.chg_sense_gpio_setup:
gpio.setup(self.chg_sense_gpio, gpio.IN)
self.chg_sense_gpio_setup = True
return bool(gpio.input(self.chg_sense_gpio))
class Charger_Delta(Charger_Gamma):
chg_sense_gpio = 508
class USB_DCDC(object):
def __new__(cls, *args, **kwargs):
if get_hw_version_str() in ["gamma", "delta", "delta-b"]:
return USB_DCDC_Gamma_Delta(*args, **kwargs)
class USB_DCDC_Gamma_Delta(object):
"""USB DCDC control for gamma/delta boards"""
gpio_exported = False
gpio_state = None
gpio_num = 510
def _set_state(self, state):
if not self.gpio_exported:
gpio.setup(self.gpio_num, gpio.OUT)
self.gpio_exported = True
self.gpio_state = state
gpio.set(self.gpio_num, not state)
def on(self):
"""Turns the DCDC on"""
self._set_state(True)
def off(self):
"""Turns the DCDC off"""
self._set_state(False)
def toggle(self):
"""Toggles DCDC state"""
self._set_state(not self.gpio_state)
class GSM_Modem(object):
def __new__(cls, *args, **kwargs):
if get_hw_version_str() == "gamma":
return GSM_Modem_Gamma(*args, **kwargs)
elif get_hw_version_str() in ["delta", "delta-b"]:
return GSM_Modem_Delta(*args, **kwargs)
class GSM_Modem_Gamma(object):
"""SIM800L modem control for the gamma board"""
gpio_dict = {"exported": False, "state": None, "num": None}
gpio_nums = {"ring": 501, "dtr": 500, "reset": 502}
def __init__(self):
self.gpios = {}
self._set_gpio_nums()
def _set_gpio_nums(self):
self.gpios = {name: copy(self.gpio_dict) for name in self.gpio_nums}
for name, num in self.gpio_nums.items():
self.gpios[name]["num"] = num
def _set_state(self, name, state):
g = self.gpios[name]
gpio_num = g["num"]
if not g["exported"]:
gpio.setup(gpio_num, gpio.OUT)
g["exported"] = True
g["state"] = state
gpio.set(gpio_num, state)
def reset(self):
self._set_state("reset", False)
sleep(1)
self._set_state("reset", True)
class GSM_Modem_Delta(GSM_Modem_Gamma):
"""SIM800L modem control for the delta board"""
gpio_nums = {"ring": 501, "dtr": 502, "reset": 496, "en": 500}
def enable_uart(self):
self._set_state("en", False)
def disable_uart(self):
self._set_state("en", True)
class RGB_LED(object):
def __new__(cls, *args, **kwargs):
if get_hw_version_str() == "gamma":
return RGB_LED_Gamma(*args, **kwargs)
elif get_hw_version_str() in ["delta", "delta-b"]:
return RGB_LED_Delta(*args, **kwargs)
class RGB_LED_Base(object):
color_mapping = {
"white": (255, 255, 255),
"red": (255, 0, 0),
"green": (0, 255, 0),
"blue": (0, 0, 255),
"none": (0, 0, 0)}
def __init__(self):
self._setup()
def off(self):
"""Turns the led off"""
self.set_rgb(0, 0, 0)
def __getattr__(self, name):
if name in self.color_mapping:
return lambda x=name: self.set_color(x)
def set_color(self, color_str):
"""Sets the color of the led from a string"""
try:
self.set_rgb(*self.color_mapping[color_str])
except KeyError:
raise ValueError("Color {} not found in color mapping!".format(color_str))
class RGB_LED_Gamma(RGB_LED_Base):
"""Controls the RGB led"""
def _setup(self):
for gpio_num in self._get_rgb_gpios():
gpio.setup(gpio_num, gpio.HIGH)
def _get_rgb_gpios(self):
# returns GPIOs for red, green, blue
return 498, 496, 497
def set_rgb(self, *colors):
"""Sets the color of the led from RGB values [0-255] range"""
colors = [int(c) for c in colors]
if len(colors) != 3 or any([type(color) != int for color in colors]):
raise TypeError("set_rgb expects three integer arguments - red, green and blue values!")
if any([color < 0 or color > 255 for color in colors]):
raise ValueError("set_rgb expects integers in range from 0 to 255!")
gpios = self._get_rgb_gpios()
for i, gpio_num in enumerate(gpios):
gpio_state = colors[i] < 255 # Only 0 and 255 are respected
gpio.set(gpio_num, gpio_state)
class RGB_LED_Delta(RGB_LED_Gamma):
def _get_rgb_gpios(self):
# returns GPIOs for red, green, blue
return 497, 498, 499
def add_object_subparser(obj, name, sub_parsers):
callable_functions = [func for func in dir(obj) if callable(getattr(obj, func)) and not func.startswith('_')]
object_help = str(obj.__doc__)
functions_help = '\n'.join(['\t{}\t{}'.format(func, getattr(obj, func).__doc__) for func in callable_functions if
getattr(obj, func).__doc__ is not None])
custom_subparser = sub_parsers.add_parser(
name,
description="{}\n{}".format(object_help, functions_help),
formatter_class=argparse.RawDescriptionHelpFormatter
)
custom_subparser.add_argument('command', type=str, choices=callable_functions)
custom_subparser.add_argument('params', type=str, nargs='*')
custom_subparser.set_defaults(__obj=obj)
def main():
parser = argparse.ArgumentParser(prog='zerophone_hw', description='Zerophone Hardware Command Line Interface')
parser.add_argument("-e",
help="Silence the 'not for end-users' warning",
action="store_true",
dest="nonenduser",
default=False)
subparsers = parser.add_subparsers()
add_object_subparser(hw_version, 'version', subparsers)
add_object_subparser(Charger(), 'charger', subparsers)
add_object_subparser(RGB_LED(), 'led', subparsers)
add_object_subparser(USB_DCDC(), 'dcdc', subparsers)
add_object_subparser(GSM_Modem(), 'modem', subparsers)
args = parser.parse_args()
if hasattr(args.__obj, '_setup'):
getattr(args.__obj, '_setup')()
if not args.nonenduser:
print("------ NOT TO BE USED BY END-USERS, USE THE 'zp' API INSTEAD ------")
result = getattr(args.__obj, args.command)(*args.params)
if result is not None:
print(result)
if isinstance(result, (bool, int)):
sys.exit(result)
else:
sys.exit(0)
if __name__ == "__main__":
main() | zerophone-hw | /zerophone_hw-0.4.2.tar.gz/zerophone_hw-0.4.2/zerophone_hw.py | zerophone_hw.py |
Zero POS
========
A zeroconf based POS printing daemon
What is Zero POS
----------------
Zero POS was designed to solve a very specific problem we faced at
Openlabs, but we thought could be generic enough for others to use. Here
is the use case:
The Tryton iPad based POS should be able to print to the commonly found
POS printer - a thermal printer like the Epson TM-T81. The printer only
talks over USB and does not have wireless printing capabilities!
With zeropos, you could bundle the printer with a low cost computer like
the Raspberry Pi and connect the printer to it and run zeropos daemon on
the raspberry pi. The printing service can be discovered over zero conf
from the iPad application and your application could send a POST request
to the service to print ZPL to the printer.
Installation
-------------
The quickest way to install this software is using pip
::
pip install zeropos
Administration
--------------
The daemon can be adminisered by opening the service URL from a browser.
TODO
----
1. Implement secutiry for the admin interface.
2. Write API documentation for the admin HTTP API.
| zeropos | /zeropos-0.1.2.tar.gz/zeropos-0.1.2/README.rst | README.rst |
# zeropy
[](https://badge.fury.io/py/zeroPy)
[](https://www.python.org/downloads/release/python-360/)
[](https://www.gnu.org/licenses/gpl-3.0)
Blockchain's api bind, python3.5+
# Requirements
- racrypt
- crcPy(now part, but it will be changed)
# Tests
Default tests:
```sh
sh tests.sh
```
Testing Api:
```sh
cd tests
vim node.ini
change ip, port
python3 -m unittest test_client.py
```
# Start
```python
import zeropy
client = zeropy.apiClient()
client.connect('127.0.0.1', 5000)
tmp_key = b'any 64 symbols'
#Blockchain should know with whom he works
client.send_info(tmp_key)
counters = client.get_counters()
print('blocks\n', counters.blocks, 'transactions\n', counters.transactions,
'binary data \n',counters)
```
| zeropy | /zeropy-0.4.0.tar.gz/zeropy-0.4.0/README.md | README.md |
Zerorm
======
Zerorm is a simple wrapper for three amazing packages. This repository is the
place where `TinyDB <https://github.com/msiemens/tinydb>`_, `Schematics <https://github.com/schematics/schematics>`_
and `Lifter <https://github.com/EliotBerriot/lifter>`_ together look like Django ORM.
It's still work in progress and not everything looks like Django ORM, but it will.
Installation
------------
.. code-block:: shell
pip install zerorm
Usage
-----
First create a file with models and database instance attached to every model:
.. code-block:: python
from zerorm import db, models
database = db('db.json')
class Message(models.Model):
author = models.StringType(required=True)
author_email = models.EmailType()
text = models.StringType()
views = models.IntType(min_value=0)
class Meta:
database = database
Now create some objects:
.. code-block:: pycon
>>> from models import Message
>>>
>>> bob_message = Message(author='Bob',
... author_email='[email protected]',
... text='Hello, everyone!')
>>> bob_message
<Message: Message object>
>>> bob_message.save() # Save object
1
>>>
>>> bob_message.views = 3
>>> bob_message.save() # Update object
>>>
>>> alice_message = Message.objects.create(author='Alice',
... text='Hi, Bob!',
... views=0)
>>> alice_message
<Message: Message object>
And try to retrieve them via *objects*
.. code-block:: pycon
>>> Message.objects.all()
<QuerySet, len() = 2>
>>> list(Message.objects.all())
[<Message: Message object>, <Message: Message object>]
>>>
>>> second_message = Message.objects.get(eid=2)
>>> second_message.author
'Alice'
>>>
>>> Message.objects.filter(views__gte=3) # Only Bob's message has 3 views
<QuerySet, len() = 1>
>>> list(Message.objects.filter(views__gte=3))
[<Message: Message object>]
You can also redefine model's *__str__* method for better repr just like in Django.
.. code-block:: python
class Message(models.Model):
...
def __str__(self):
return 'by {}'.format(self.author)
.. code-block:: pycon
>>> list(Message.objects.all())
[<Message: by Bob>, <Message: by Alice>]
License
-------
MIT. See LICENSE for details. | zerorm | /zerorm-0.2.0.tar.gz/zerorm-0.2.0/README.rst | README.rst |
<p align="center">
<img src="imgs/logo.png?raw=true)" width="600"/>
</p>
# Zero-dependency ROS-like middleware for Python
This library is intended to be used for small projects that require a simple middleware
for communication between processes. It is not intended to be a replacement for ROS.
<p align="center">
<a href="https://pypi.org/project/zeroros/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/zeroros">
</a>
<a href="https://github.com/miquelmassot/zeroros/actions/workflows/python-publish.yml">
<img alt="Wheels" src="https://github.com/miquelmassot/zeroros/actions/workflows/python-publish.yml/badge.svg">
</a>
<a href="https://github.com/miquelmassot/zeroros">
<img src="https://img.shields.io/badge/platform-Linux%20%7C%20Windows%20%7C%20macOS-blue.svg" alt="platforms" />
</a>
<a href="https://github.com/miquelmassot/zeroros">
<img src="https://static.pepy.tech/badge/zeroros" alt="Downloads" />
</a>
<a href="https://github.com/miquelmassot/zeroros/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/badge/License-BSD_3--Clause-blue.svg">
</a>
<br/>
</p>
## Installation
Use pip to install the library:
```bash
pip install zeroros
```
## Usage
The library is composed of three main classes: `Publisher`, `Subscriber` and
`MessageBroker`.
### MessageBroker
The `MessageBroker` class is used to create a message broker that can be used by
publishers and subscribers to communicate with each other.
```python
from zeroros import MessageBroker
broker = MessageBroker()
```
### Publisher
The `Publisher` class is used to publish messages to a topic. The constructor takes two
arguments: the topic name and the message type. The topic name is a string, while the
message type is a Python class. The message type is used to serialize and deserialize
messages.
```python
from zeroros import Publisher
pub = Publisher("topic_name", String)
pub.publish("Hello world!")
```
### Subscriber
The `Subscriber` class is used to subscribe to a topic and receive messages. The constructor
takes two arguments: the topic name and the message type. The topic name is a string, while
the message type is a Python class. The message type is used to serialize and deserialize
messages.
```python
import time
from zeroros import Subscriber
def callback(msg):
print(msg)
sub = Subscriber("topic_name", String, callback)
while True:
# Do something else
time.sleep(1)
# Stop the subscriber
sub.stop()
```
### Messages
The library comes with a few built-in messages that can be used out of the box. The
following messages are available:
* `std_msgs.String`
* `std_msgs.Int`
* `std_msgs.Float`
* `std_msgs.Bool`
* `std_msgs.Header`
* `geometry_msgs.Vector3`
* `geometry_msgs.Vector3Stamped`
* `geometry_msgs.Twist`
* `geometry_msgs.Quaternion`
* `geometry_msgs.Pose`
* `geometry_msgs.PoseStamped`
* `geometry_msgs.PoseWithCovariance`
* `geometry_msgs.TwistWithCovariance`
* `nav_msgs.Odometry`
* `nav_msgs.Path`
* `sensors_msgs.LaserScan`
* More to come...
| zeroros | /zeroros-1.0.2.tar.gz/zeroros-1.0.2/README.md | README.md |
from __future__ import absolute_import
from future.utils import tobytes
import uuid
import random
from . import gevent_zmq as zmq
class Context(zmq.Context):
_instance = None
def __init__(self):
super(zmq.Context, self).__init__()
self._middlewares = []
self._hooks = {
'resolve_endpoint': [],
'load_task_context': [],
'get_task_context': [],
'server_before_exec': [],
'server_after_exec': [],
'server_inspect_exception': [],
'client_handle_remote_error': [],
'client_before_request': [],
'client_after_request': [],
'client_patterns_list': [],
}
self._reset_msgid()
# NOTE: pyzmq 13.0.0 messed up with setattr (they turned it into a
# non-op) and you can't assign attributes normally anymore, hence the
# tricks with self.__dict__ here
@property
def _middlewares(self):
return self.__dict__['_middlewares']
@_middlewares.setter
def _middlewares(self, value):
self.__dict__['_middlewares'] = value
@property
def _hooks(self):
return self.__dict__['_hooks']
@_hooks.setter
def _hooks(self, value):
self.__dict__['_hooks'] = value
@property
def _msg_id_base(self):
return self.__dict__['_msg_id_base']
@_msg_id_base.setter
def _msg_id_base(self, value):
self.__dict__['_msg_id_base'] = value
@property
def _msg_id_counter(self):
return self.__dict__['_msg_id_counter']
@_msg_id_counter.setter
def _msg_id_counter(self, value):
self.__dict__['_msg_id_counter'] = value
@property
def _msg_id_counter_stop(self):
return self.__dict__['_msg_id_counter_stop']
@_msg_id_counter_stop.setter
def _msg_id_counter_stop(self, value):
self.__dict__['_msg_id_counter_stop'] = value
@staticmethod
def get_instance():
if Context._instance is None:
Context._instance = Context()
return Context._instance
def _reset_msgid(self):
self._msg_id_base = tobytes(uuid.uuid4().hex)[8:]
self._msg_id_counter = random.randrange(0, 2 ** 32)
self._msg_id_counter_stop = random.randrange(self._msg_id_counter, 2 ** 32)
def new_msgid(self):
if self._msg_id_counter >= self._msg_id_counter_stop:
self._reset_msgid()
else:
self._msg_id_counter = (self._msg_id_counter + 1)
return tobytes('{0:08x}'.format(self._msg_id_counter)) + self._msg_id_base
def register_middleware(self, middleware_instance):
registered_count = 0
self._middlewares.append(middleware_instance)
for hook in self._hooks:
functor = getattr(middleware_instance, hook, None)
if functor is None:
try:
functor = middleware_instance.get(hook, None)
except AttributeError:
pass
if functor is not None:
self._hooks[hook].append(functor)
registered_count += 1
return registered_count
#
# client/server
#
def hook_resolve_endpoint(self, endpoint):
for functor in self._hooks['resolve_endpoint']:
endpoint = functor(endpoint)
return endpoint
def hook_load_task_context(self, event_header):
for functor in self._hooks['load_task_context']:
functor(event_header)
def hook_get_task_context(self):
event_header = {}
for functor in self._hooks['get_task_context']:
event_header.update(functor())
return event_header
#
# Server-side hooks
#
def hook_server_before_exec(self, request_event):
"""Called when a method is about to be executed on the server."""
for functor in self._hooks['server_before_exec']:
functor(request_event)
def hook_server_after_exec(self, request_event, reply_event):
"""Called when a method has been executed successfully.
This hook is called right before the answer is sent back to the client.
If the method streams its answer (i.e: it uses the zerorpc.stream
decorator) then this hook will be called once the reply has been fully
streamed (and right before the stream is "closed").
The reply_event argument will be None if the Push/Pull pattern is used.
"""
for functor in self._hooks['server_after_exec']:
functor(request_event, reply_event)
def hook_server_inspect_exception(self, request_event, reply_event, exc_infos):
"""Called when a method raised an exception.
The reply_event argument will be None if the Push/Pull pattern is used.
"""
task_context = self.hook_get_task_context()
for functor in self._hooks['server_inspect_exception']:
functor(request_event, reply_event, task_context, exc_infos)
#
# Client-side hooks
#
def hook_client_handle_remote_error(self, event):
exception = None
for functor in self._hooks['client_handle_remote_error']:
ret = functor(event)
if ret:
exception = ret
return exception
def hook_client_before_request(self, event):
"""Called when the Client is about to send a request.
You can see it as the counterpart of ``hook_server_before_exec``.
"""
for functor in self._hooks['client_before_request']:
functor(event)
def hook_client_after_request(self, request_event, reply_event, exception=None):
"""Called when an answer or a timeout has been received from the server.
This hook is called right before the answer is returned to the client.
You can see it as the counterpart of the ``hook_server_after_exec``.
If the called method was returning a stream (i.e: it uses the
zerorpc.stream decorator) then this hook will be called once the reply
has been fully streamed (when the stream is "closed") or when an
exception has been raised.
The optional exception argument will be a ``RemoteError`` (or whatever
type returned by the client_handle_remote_error hook) if an exception
has been raised on the server.
If the request timed out, then the exception argument will be a
``TimeoutExpired`` object and reply_event will be None.
"""
for functor in self._hooks['client_after_request']:
functor(request_event, reply_event, exception)
def hook_client_patterns_list(self, patterns):
for functor in self._hooks['client_patterns_list']:
patterns = functor(patterns)
return patterns | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/context.py | context.py |
from __future__ import absolute_import
from builtins import str
from builtins import zip
from future.utils import iteritems
import sys
import traceback
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
from . import gevent_zmq as zmq
from .exceptions import TimeoutExpired, RemoteError, LostRemote
from .channel import ChannelMultiplexer, BufferedChannel
from .socket import SocketBase
from .heartbeat import HeartBeatOnChannel
from .context import Context
from .decorators import DecoratorBase, rep
from . import patterns
from logging import getLogger
logger = getLogger(__name__)
class ServerBase(object):
def __init__(self, channel, methods=None, name=None, context=None,
pool_size=None, heartbeat=5):
self._multiplexer = ChannelMultiplexer(channel)
if methods is None:
methods = self
self._context = context or Context.get_instance()
self._name = name or self._extract_name()
self._task_pool = gevent.pool.Pool(size=pool_size)
self._acceptor_task = None
self._methods = self._filter_methods(ServerBase, self, methods)
self._inject_builtins()
self._heartbeat_freq = heartbeat
for (k, functor) in iteritems(self._methods):
if not isinstance(functor, DecoratorBase):
self._methods[k] = rep(functor)
@staticmethod
def _filter_methods(cls, self, methods):
if isinstance(methods, dict):
return methods
server_methods = set(k for k in dir(cls) if not k.startswith('_'))
return dict((k, getattr(methods, k))
for k in dir(methods)
if callable(getattr(methods, k)) and
not k.startswith('_') and k not in server_methods
)
@staticmethod
def _extract_name(methods):
return getattr(methods, '__name__', None) \
or getattr(type(methods), '__name__', None) \
or repr(methods)
def close(self):
self.stop()
self._multiplexer.close()
def _format_args_spec(self, args_spec, r=None):
if args_spec:
r = [dict(name=name) for name in args_spec[0]]
default_values = args_spec[3]
if default_values is not None:
for arg, def_val in zip(reversed(r), reversed(default_values)):
arg['default'] = def_val
return r
def _zerorpc_inspect(self):
methods = dict((m, f) for m, f in iteritems(self._methods)
if not m.startswith('_'))
detailled_methods = dict((m,
dict(args=self._format_args_spec(f._zerorpc_args()),
doc=f._zerorpc_doc())) for (m, f) in iteritems(methods))
return {'name': self._name,
'methods': detailled_methods}
def _inject_builtins(self):
self._methods['_zerorpc_list'] = lambda: [m for m in self._methods
if not m.startswith('_')]
self._methods['_zerorpc_name'] = lambda: self._name
self._methods['_zerorpc_ping'] = lambda: ['pong', self._name]
self._methods['_zerorpc_help'] = lambda m: \
self._methods[m]._zerorpc_doc()
self._methods['_zerorpc_args'] = \
lambda m: self._methods[m]._zerorpc_args()
self._methods['_zerorpc_inspect'] = self._zerorpc_inspect
def __call__(self, method, *args):
if method not in self._methods:
raise NameError(method)
return self._methods[method](*args)
def _print_traceback(self, protocol_v1, exc_infos):
logger.exception('')
exc_type, exc_value, exc_traceback = exc_infos
if protocol_v1:
return (repr(exc_value),)
human_traceback = traceback.format_exc()
name = exc_type.__name__
human_msg = str(exc_value)
return (name, human_msg, human_traceback)
def _async_task(self, initial_event):
protocol_v1 = initial_event.header.get(u'v', 1) < 2
channel = self._multiplexer.channel(initial_event)
hbchan = HeartBeatOnChannel(channel, freq=self._heartbeat_freq,
passive=protocol_v1)
bufchan = BufferedChannel(hbchan)
exc_infos = None
event = bufchan.recv()
try:
self._context.hook_load_task_context(event.header)
functor = self._methods.get(event.name, None)
if functor is None:
raise NameError(event.name)
functor.pattern.process_call(self._context, bufchan, event, functor)
except LostRemote:
exc_infos = list(sys.exc_info())
self._print_traceback(protocol_v1, exc_infos)
except Exception:
exc_infos = list(sys.exc_info())
human_exc_infos = self._print_traceback(protocol_v1, exc_infos)
reply_event = bufchan.new_event(u'ERR', human_exc_infos,
self._context.hook_get_task_context())
self._context.hook_server_inspect_exception(event, reply_event, exc_infos)
bufchan.emit_event(reply_event)
finally:
del exc_infos
bufchan.close()
def _acceptor(self):
while True:
initial_event = self._multiplexer.recv()
self._task_pool.spawn(self._async_task, initial_event)
def run(self):
self._acceptor_task = gevent.spawn(self._acceptor)
try:
self._acceptor_task.get()
finally:
self.stop()
self._task_pool.join(raise_error=True)
def stop(self):
if self._acceptor_task is not None:
self._acceptor_task.kill()
self._acceptor_task = None
class ClientBase(object):
def __init__(self, channel, context=None, timeout=30, heartbeat=5,
passive_heartbeat=False):
self._multiplexer = ChannelMultiplexer(channel,
ignore_broadcast=True)
self._context = context or Context.get_instance()
self._timeout = timeout
self._heartbeat_freq = heartbeat
self._passive_heartbeat = passive_heartbeat
def close(self):
self._multiplexer.close()
def _handle_remote_error(self, event):
exception = self._context.hook_client_handle_remote_error(event)
if not exception:
if event.header.get(u'v', 1) >= 2:
(name, msg, traceback) = event.args
exception = RemoteError(name, msg, traceback)
else:
(msg,) = event.args
exception = RemoteError('RemoteError', msg, None)
return exception
def _select_pattern(self, event):
for pattern in self._context.hook_client_patterns_list(
patterns.patterns_list):
if pattern.accept_answer(event):
return pattern
return None
def _process_response(self, request_event, bufchan, timeout):
def raise_error(ex):
bufchan.close()
self._context.hook_client_after_request(request_event, None, ex)
raise ex
try:
reply_event = bufchan.recv(timeout=timeout)
except TimeoutExpired:
raise_error(TimeoutExpired(timeout,
'calling remote method {0}'.format(request_event.name)))
pattern = self._select_pattern(reply_event)
if pattern is None:
raise_error(RuntimeError(
'Unable to find a pattern for: {0}'.format(request_event)))
return pattern.process_answer(self._context, bufchan, request_event,
reply_event, self._handle_remote_error)
def __call__(self, method, *args, **kargs):
# here `method` is either a string of bytes or an unicode string in
# Python2 and Python3. Python2: str aka a byte string containing ASCII
# (unless the user explicitly provide an unicode string). Python3: str
# aka an unicode string (unless the user explicitly provide a byte
# string).
# zerorpc protocol requires an utf-8 encoded string at the msgpack
# level. msgpack will encode any unicode string object to UTF-8 and tag
# it `string`, while a bytes string will be tagged `bin`.
#
# So when we get a bytes string, we assume it to be an UTF-8 string
# (ASCII is contained in UTF-8) that we decode to an unicode string.
# Right after, msgpack-python will re-encode it as UTF-8. Yes this is
# terribly inefficient with Python2 because most of the time `method`
# will already be an UTF-8 encoded bytes string.
if isinstance(method, bytes):
method = method.decode('utf-8')
timeout = kargs.get('timeout', self._timeout)
channel = self._multiplexer.channel()
hbchan = HeartBeatOnChannel(channel, freq=self._heartbeat_freq,
passive=self._passive_heartbeat)
bufchan = BufferedChannel(hbchan, inqueue_size=kargs.get('slots', 100))
xheader = self._context.hook_get_task_context()
request_event = bufchan.new_event(method, args, xheader)
self._context.hook_client_before_request(request_event)
bufchan.emit_event(request_event)
if kargs.get('async', False) is False:
return self._process_response(request_event, bufchan, timeout)
async_result = gevent.event.AsyncResult()
gevent.spawn(self._process_response, request_event, bufchan,
timeout).link(async_result)
return async_result
def __getattr__(self, method):
return lambda *args, **kargs: self(method, *args, **kargs)
class Server(SocketBase, ServerBase):
def __init__(self, methods=None, name=None, context=None, pool_size=None,
heartbeat=5):
SocketBase.__init__(self, zmq.ROUTER, context)
if methods is None:
methods = self
name = name or ServerBase._extract_name(methods)
methods = ServerBase._filter_methods(Server, self, methods)
ServerBase.__init__(self, self._events, methods, name, context,
pool_size, heartbeat)
def close(self):
ServerBase.close(self)
SocketBase.close(self)
class Client(SocketBase, ClientBase):
def __init__(self, connect_to=None, context=None, timeout=30, heartbeat=5,
passive_heartbeat=False):
SocketBase.__init__(self, zmq.DEALER, context=context)
ClientBase.__init__(self, self._events, context, timeout, heartbeat,
passive_heartbeat)
if connect_to:
self.connect(connect_to)
def close(self):
ClientBase.close(self)
SocketBase.close(self)
class Pusher(SocketBase):
def __init__(self, context=None, zmq_socket=zmq.PUSH):
super(Pusher, self).__init__(zmq_socket, context=context)
def __call__(self, method, *args):
self._events.emit(method, args,
self._context.hook_get_task_context())
def __getattr__(self, method):
return lambda *args: self(method, *args)
class Puller(SocketBase):
def __init__(self, methods=None, context=None, zmq_socket=zmq.PULL):
super(Puller, self).__init__(zmq_socket, context=context)
if methods is None:
methods = self
self._methods = ServerBase._filter_methods(Puller, self, methods)
self._receiver_task = None
def close(self):
self.stop()
super(Puller, self).close()
def __call__(self, method, *args):
if method not in self._methods:
raise NameError(method)
return self._methods[method](*args)
def _receiver(self):
while True:
event = self._events.recv()
try:
if event.name not in self._methods:
raise NameError(event.name)
self._context.hook_load_task_context(event.header)
self._context.hook_server_before_exec(event)
self._methods[event.name](*event.args)
# In Push/Pull their is no reply to send, hence None for the
# reply_event argument
self._context.hook_server_after_exec(event, None)
except Exception:
exc_infos = sys.exc_info()
try:
logger.exception('')
self._context.hook_server_inspect_exception(event, None, exc_infos)
finally:
del exc_infos
def run(self):
self._receiver_task = gevent.spawn(self._receiver)
try:
self._receiver_task.get()
finally:
self._receiver_task = None
def stop(self):
if self._receiver_task is not None:
self._receiver_task.kill(block=False)
class Publisher(Pusher):
def __init__(self, context=None):
super(Publisher, self).__init__(context=context, zmq_socket=zmq.PUB)
class Subscriber(Puller):
def __init__(self, methods=None, context=None):
super(Subscriber, self).__init__(methods=methods, context=context,
zmq_socket=zmq.SUB)
self._events.setsockopt(zmq.SUBSCRIBE, b'')
def fork_task_context(functor, context=None):
'''Wrap a functor to transfer context.
Usage example:
gevent.spawn(zerorpc.fork_task_context(myfunction), args...)
The goal is to permit context "inheritance" from a task to another.
Consider the following example:
zerorpc.Server receive a new event
- task1 is created to handle this event this task will be linked
to the initial event context. zerorpc.Server does that for you.
- task1 make use of some zerorpc.Client instances, the initial
event context is transfered on every call.
- task1 spawn a new task2.
- task2 make use of some zerorpc.Client instances, it's a fresh
context. Thus there is no link to the initial context that
spawned task1.
- task1 spawn a new fork_task_context(task3).
- task3 make use of some zerorpc.Client instances, the initial
event context is transfered on every call.
A real use case is a distributed tracer. Each time a new event is
created, a trace_id is injected in it or copied from the current task
context. This permit passing the trace_id from a zerorpc.Server to
another via zerorpc.Client.
The simple rule to know if a task need to be wrapped is:
- if the new task will make any zerorpc call, it should be wrapped.
'''
context = context or Context.get_instance()
xheader = context.hook_get_task_context()
def wrapped(*args, **kargs):
context.hook_load_task_context(xheader)
return functor(*args, **kargs)
return wrapped | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/core.py | core.py |
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
import logging
from .exceptions import TimeoutExpired
from .channel_base import ChannelBase
logger = logging.getLogger(__name__)
class ChannelMultiplexer(ChannelBase):
def __init__(self, events, ignore_broadcast=False):
self._events = events
self._active_channels = {}
self._channel_dispatcher_task = None
self._broadcast_queue = None
if events.recv_is_supported and not ignore_broadcast:
self._broadcast_queue = gevent.queue.Queue(maxsize=1)
self._channel_dispatcher_task = gevent.spawn(
self._channel_dispatcher)
@property
def recv_is_supported(self):
return self._events.recv_is_supported
@property
def emit_is_supported(self):
return self._events.emit_is_supported
def close(self):
if self._channel_dispatcher_task:
self._channel_dispatcher_task.kill()
def new_event(self, name, args, xheader=None):
return self._events.new_event(name, args, xheader)
def emit_event(self, event, timeout=None):
return self._events.emit_event(event, timeout)
def recv(self, timeout=None):
if self._broadcast_queue is not None:
event = self._broadcast_queue.get(timeout=timeout)
else:
event = self._events.recv(timeout=timeout)
return event
def _channel_dispatcher(self):
while True:
try:
event = self._events.recv()
except Exception:
logger.exception('zerorpc.ChannelMultiplexer ignoring error on recv')
continue
channel_id = event.header.get(u'response_to', None)
queue = None
if channel_id is not None:
channel = self._active_channels.get(channel_id, None)
if channel is not None:
queue = channel._queue
elif self._broadcast_queue is not None:
queue = self._broadcast_queue
if queue is None:
logger.warning('zerorpc.ChannelMultiplexer,'
' unable to route event: {0}'.format(
event.__str__(ignore_args=True)))
else:
queue.put(event)
def channel(self, from_event=None):
if self._channel_dispatcher_task is None:
self._channel_dispatcher_task = gevent.spawn(
self._channel_dispatcher)
return Channel(self, from_event)
@property
def active_channels(self):
return self._active_channels
@property
def context(self):
return self._events.context
class Channel(ChannelBase):
def __init__(self, multiplexer, from_event=None):
self._multiplexer = multiplexer
self._channel_id = None
self._zmqid = None
self._queue = gevent.queue.Queue(maxsize=1)
if from_event is not None:
self._channel_id = from_event.header[u'message_id']
self._zmqid = from_event.identity
self._multiplexer._active_channels[self._channel_id] = self
logger.debug('<-- new channel %s', self._channel_id)
self._queue.put(from_event)
@property
def recv_is_supported(self):
return self._multiplexer.recv_is_supported
@property
def emit_is_supported(self):
return self._multiplexer.emit_is_supported
def close(self):
if self._channel_id is not None:
del self._multiplexer._active_channels[self._channel_id]
logger.debug('-x- closed channel %s', self._channel_id)
self._channel_id = None
def new_event(self, name, args, xheader=None):
event = self._multiplexer.new_event(name, args, xheader)
if self._channel_id is None:
self._channel_id = event.header[u'message_id']
self._multiplexer._active_channels[self._channel_id] = self
logger.debug('--> new channel %s', self._channel_id)
else:
event.header[u'response_to'] = self._channel_id
event.identity = self._zmqid
return event
def emit_event(self, event, timeout=None):
self._multiplexer.emit_event(event, timeout)
def recv(self, timeout=None):
try:
event = self._queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
return event
@property
def context(self):
return self._multiplexer.context
class BufferedChannel(ChannelBase):
def __init__(self, channel, inqueue_size=100):
self._channel = channel
self._input_queue_size = inqueue_size
self._remote_queue_open_slots = 1
self._input_queue_reserved = 1
self._remote_can_recv = gevent.event.Event()
self._input_queue = gevent.queue.Queue()
self._verbose = False
self._on_close_if = None
self._recv_task = gevent.spawn(self._recver)
@property
def recv_is_supported(self):
return self._channel.recv_is_supported
@property
def emit_is_supported(self):
return self._channel.emit_is_supported
@property
def on_close_if(self):
return self._on_close_if
@on_close_if.setter
def on_close_if(self, cb):
self._on_close_if = cb
def close(self):
if self._recv_task is not None:
self._recv_task.kill()
self._recv_task = None
if self._channel is not None:
self._channel.close()
self._channel = None
def _recver(self):
while True:
event = self._channel.recv()
if event.name == u'_zpc_more':
try:
self._remote_queue_open_slots += int(event.args[0])
except Exception:
logger.exception('gevent_zerorpc.BufferedChannel._recver')
if self._remote_queue_open_slots > 0:
self._remote_can_recv.set()
elif self._input_queue.qsize() == self._input_queue_size:
raise RuntimeError(
'BufferedChannel, queue overflow on event:', event)
else:
self._input_queue.put(event)
if self._on_close_if is not None and self._on_close_if(event):
self._recv_task = None
self.close()
return
def new_event(self, name, args, xheader=None):
return self._channel.new_event(name, args, xheader)
def emit_event(self, event, timeout=None):
if self._remote_queue_open_slots == 0:
self._remote_can_recv.clear()
self._remote_can_recv.wait(timeout=timeout)
self._remote_queue_open_slots -= 1
try:
self._channel.emit_event(event)
except:
self._remote_queue_open_slots += 1
raise
def _request_data(self):
open_slots = self._input_queue_size - self._input_queue_reserved
self._input_queue_reserved += open_slots
self._channel.emit(u'_zpc_more', (open_slots,))
def recv(self, timeout=None):
# self._channel can be set to None by an 'on_close_if' callback if it
# sees a suitable message from the remote end...
#
if self._verbose and self._channel:
if self._input_queue_reserved < self._input_queue_size // 2:
self._request_data()
else:
self._verbose = True
try:
event = self._input_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
self._input_queue_reserved -= 1
return event
@property
def channel(self):
return self._channel
@property
def context(self):
return self._channel.context | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/channel.py | channel.py |
import time
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
from .exceptions import LostRemote, TimeoutExpired
from .channel_base import ChannelBase
class HeartBeatOnChannel(ChannelBase):
def __init__(self, channel, freq=5, passive=False):
self._closed = False
self._channel = channel
self._heartbeat_freq = freq
self._input_queue = gevent.queue.Channel()
self._remote_last_hb = None
self._lost_remote = False
self._recv_task = gevent.spawn(self._recver)
self._heartbeat_task = None
self._parent_coroutine = gevent.getcurrent()
self._compat_v2 = None
if not passive:
self._start_heartbeat()
@property
def recv_is_supported(self):
return self._channel.recv_is_supported
@property
def emit_is_supported(self):
return self._channel.emit_is_supported
def close(self):
self._closed = True
if self._heartbeat_task is not None:
self._heartbeat_task.kill()
self._heartbeat_task = None
if self._recv_task is not None:
self._recv_task.kill()
self._recv_task = None
if self._channel is not None:
self._channel.close()
self._channel = None
def _heartbeat(self):
while True:
gevent.sleep(self._heartbeat_freq)
if self._remote_last_hb is None:
self._remote_last_hb = time.time()
if time.time() > self._remote_last_hb + self._heartbeat_freq * 2:
self._lost_remote = True
if not self._closed:
gevent.kill(self._parent_coroutine,
self._lost_remote_exception())
break
self._channel.emit(u'_zpc_hb', (0,)) # 0 -> compat with protocol v2
def _start_heartbeat(self):
if self._heartbeat_task is None and self._heartbeat_freq is not None and not self._closed:
self._heartbeat_task = gevent.spawn(self._heartbeat)
def _recver(self):
while True:
event = self._channel.recv()
if self._compat_v2 is None:
self._compat_v2 = event.header.get(u'v', 0) < 3
if event.name == u'_zpc_hb':
self._remote_last_hb = time.time()
self._start_heartbeat()
if self._compat_v2:
event.name = u'_zpc_more'
self._input_queue.put(event)
else:
self._input_queue.put(event)
def _lost_remote_exception(self):
return LostRemote('Lost remote after {0}s heartbeat'.format(
self._heartbeat_freq * 2))
def new_event(self, name, args, header=None):
if self._compat_v2 and name == u'_zpc_more':
name = u'_zpc_hb'
return self._channel.new_event(name, args, header)
def emit_event(self, event, timeout=None):
if self._lost_remote:
raise self._lost_remote_exception()
self._channel.emit_event(event, timeout)
def recv(self, timeout=None):
if self._lost_remote:
raise self._lost_remote_exception()
try:
return self._input_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
@property
def channel(self):
return self._channel
@property
def context(self):
return self._channel.context | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/heartbeat.py | heartbeat.py |
from __future__ import absolute_import
from builtins import str
from builtins import range
import msgpack
import msgpack_numpy as m
m.patch() # Monkeypatching msgpack to handle Numpy
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
import logging
import sys
from . import gevent_zmq as zmq
from .exceptions import TimeoutExpired
from .context import Context
from .channel_base import ChannelBase
if sys.version_info < (2, 7):
def get_pyzmq_frame_buffer(frame):
return frame.buffer[:]
else:
def get_pyzmq_frame_buffer(frame):
return frame.buffer
# gevent <= 1.1.0.rc5 is missing the Python3 __next__ method.
if sys.version_info >= (3, 0) and gevent.version_info <= (1, 1, 0, 'rc', '5'):
setattr(gevent.queue.Channel, '__next__', gevent.queue.Channel.next)
logger = logging.getLogger(__name__)
class SequentialSender(object):
def __init__(self, socket):
self._socket = socket
def _send(self, parts):
e = None
for i in range(len(parts) - 1):
try:
self._socket.send(parts[i], copy=False, flags=zmq.SNDMORE)
except (gevent.GreenletExit, gevent.Timeout) as e:
if i == 0:
raise
self._socket.send(parts[i], copy=False, flags=zmq.SNDMORE)
try:
self._socket.send(parts[-1], copy=False)
except (gevent.GreenletExit, gevent.Timeout) as e:
self._socket.send(parts[-1], copy=False)
if e:
raise e
def __call__(self, parts, timeout=None):
if timeout:
with gevent.Timeout(timeout):
self._send(parts)
else:
self._send(parts)
class SequentialReceiver(object):
def __init__(self, socket):
self._socket = socket
def _recv(self):
e = None
parts = []
while True:
try:
part = self._socket.recv(copy=False)
except (gevent.GreenletExit, gevent.Timeout) as e:
if len(parts) == 0:
raise
part = self._socket.recv(copy=False)
parts.append(part)
if not part.more:
break
if e:
raise e
return parts
def __call__(self, timeout=None):
if timeout:
with gevent.Timeout(timeout):
return self._recv()
else:
return self._recv()
class Sender(SequentialSender):
def __init__(self, socket):
self._socket = socket
self._send_queue = gevent.queue.Channel()
self._send_task = gevent.spawn(self._sender)
def close(self):
if self._send_task:
self._send_task.kill()
def _sender(self):
for parts in self._send_queue:
super(Sender, self)._send(parts)
def __call__(self, parts, timeout=None):
try:
self._send_queue.put(parts, timeout=timeout)
except gevent.queue.Full:
raise TimeoutExpired(timeout)
class Receiver(SequentialReceiver):
def __init__(self, socket):
self._socket = socket
self._recv_queue = gevent.queue.Channel()
self._recv_task = gevent.spawn(self._recver)
def close(self):
if self._recv_task:
self._recv_task.kill()
self._recv_queue = None
def _recver(self):
while True:
parts = super(Receiver, self)._recv()
self._recv_queue.put(parts)
def __call__(self, timeout=None):
try:
return self._recv_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
class Event(object):
__slots__ = ['_name', '_args', '_header', '_identity']
# protocol details:
# - `name` and `header` keys must be unicode strings.
# - `message_id` and 'response_to' values are opaque bytes string.
# - `v' value is an integer.
def __init__(self, name, args, context, header=None):
self._name = name
self._args = args
if header is None:
self._header = {u'message_id': context.new_msgid(), u'v': 3}
else:
self._header = header
self._identity = None
@property
def header(self):
return self._header
@property
def name(self):
return self._name
@name.setter
def name(self, v):
self._name = v
@property
def args(self):
return self._args
@property
def identity(self):
return self._identity
@identity.setter
def identity(self, v):
self._identity = v
def pack(self):
payload = (self._header, self._name, self._args)
r = msgpack.Packer(use_bin_type=True).pack(payload)
return r
@staticmethod
def unpack(blob):
unpacker = msgpack.Unpacker(encoding='utf-8')
unpacker.feed(blob)
unpacked_msg = unpacker.unpack()
try:
(header, name, args) = unpacked_msg
except Exception as e:
raise Exception('invalid msg format "{0}": {1}'.format(
unpacked_msg, e))
# Backward compatibility
if not isinstance(header, dict):
header = {}
return Event(name, args, None, header)
def __str__(self, ignore_args=False):
if ignore_args:
args = '[...]'
else:
args = self._args
try:
args = '<<{0}>>'.format(str(self.unpack(self._args)))
except Exception:
pass
if self._identity:
identity = ', '.join(repr(x.bytes) for x in self._identity)
return '<{0}> {1} {2} {3}'.format(identity, self._name,
self._header, args)
return '{0} {1} {2}'.format(self._name, self._header, args)
class Events(ChannelBase):
def __init__(self, zmq_socket_type, context=None):
self._debug = False
self._zmq_socket_type = zmq_socket_type
self._context = context or Context.get_instance()
self._socket = self._context.socket(zmq_socket_type)
if zmq_socket_type in (zmq.PUSH, zmq.PUB, zmq.DEALER, zmq.ROUTER):
self._send = Sender(self._socket)
elif zmq_socket_type in (zmq.REQ, zmq.REP):
self._send = SequentialSender(self._socket)
else:
self._send = None
if zmq_socket_type in (zmq.PULL, zmq.SUB, zmq.DEALER, zmq.ROUTER):
self._recv = Receiver(self._socket)
elif zmq_socket_type in (zmq.REQ, zmq.REP):
self._recv = SequentialReceiver(self._socket)
else:
self._recv = None
@property
def recv_is_supported(self):
return self._recv is not None
@property
def emit_is_supported(self):
return self._send is not None
def __del__(self):
try:
if not self._socket.closed:
self.close()
except (AttributeError, TypeError):
pass
def close(self):
try:
self._send.close()
except (AttributeError, TypeError, gevent.GreenletExit):
pass
try:
self._recv.close()
except (AttributeError, TypeError, gevent.GreenletExit):
pass
self._socket.close()
@property
def debug(self):
return self._debug
@debug.setter
def debug(self, v):
if v != self._debug:
self._debug = v
if self._debug:
logger.debug('debug enabled')
else:
logger.debug('debug disabled')
def _resolve_endpoint(self, endpoint, resolve=True):
if resolve:
endpoint = self._context.hook_resolve_endpoint(endpoint)
if isinstance(endpoint, (tuple, list)):
r = []
for sub_endpoint in endpoint:
r.extend(self._resolve_endpoint(sub_endpoint, resolve))
return r
return [endpoint]
def connect(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.connect(endpoint_))
logger.debug('connected to %s (status=%s)', endpoint_, r[-1])
return r
def bind(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.bind(endpoint_))
logger.debug('bound to %s (status=%s)', endpoint_, r[-1])
return r
def disconnect(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.disconnect(endpoint_))
logger.debug('disconnected from %s (status=%s)', endpoint_, r[-1])
return r
def new_event(self, name, args, xheader=None):
event = Event(name, args, context=self._context)
if xheader:
event.header.update(xheader)
return event
def emit_event(self, event, timeout=None):
if self._debug:
logger.debug('--> %s', event)
if event.identity:
parts = list(event.identity or list())
parts.extend([b'', event.pack()])
elif self._zmq_socket_type in (zmq.DEALER, zmq.ROUTER):
parts = (b'', event.pack())
else:
parts = (event.pack(),)
self._send(parts, timeout)
def recv(self, timeout=None):
parts = self._recv(timeout=timeout)
if len(parts) > 2:
identity = parts[0:-2]
blob = parts[-1]
elif len(parts) == 2:
identity = parts[0:-1]
blob = parts[-1]
else:
identity = None
blob = parts[0]
event = Event.unpack(get_pyzmq_frame_buffer(blob))
event.identity = identity
if self._debug:
logger.debug('<-- %s', event)
return event
def setsockopt(self, *args):
return self._socket.setsockopt(*args)
@property
def context(self):
return self._context | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/events.py | events.py |
# We want to act like zmq
from zmq import * # noqa
# Explicit import to please flake8
from zmq import ZMQError
# A way to access original zmq
import zmq as _zmq
import gevent.event
import gevent.core
import gevent.hub  # for get_hub() on gevent >= 1.0
import errno
from logging import getLogger
logger = getLogger(__name__)
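# This module is imported by the rest of zerorpc as a drop-in replacement for
# pyzmq (e.g. ``from . import gevent_zmq as zmq``): it re-exports everything
# from ``zmq`` and overrides Context/Socket so that blocking send/recv calls
# yield to the gevent hub instead of blocking the whole process.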
class Context(_zmq.Context):
    """Drop-in replacement for ``zmq.Context`` that hands out gevent-aware Sockets."""

    def socket(self, socket_type):
if self.closed:
raise _zmq.ZMQError(_zmq.ENOTSUP)
return Socket(self, socket_type)
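# Illustrative sketch (not executed): with this module used in place of pyzmq,
# a socket can be driven from greenlets without monkey-patching the process.
# The endpoint and payload below are made up for the example.
#
#   from zerorpc import gevent_zmq as zmq
#
#   ctx = zmq.Context()
#   sock = ctx.socket(zmq.PUSH)
#   sock.connect('tcp://127.0.0.1:4242')
#   sock.send(b'hello')  # only the calling greenlet blocks, not the process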
class Socket(_zmq.Socket):
    """``zmq.Socket`` subclass whose blocking send/recv cooperate with gevent.

    Readiness is tracked through the socket's ZMQ_FD descriptor and the
    ZMQ_EVENTS option, so a waiting greenlet yields to the hub instead of
    blocking the whole process.
    """

    def __init__(self, context, socket_type):
super(Socket, self).__init__(context, socket_type)
on_state_changed_fd = self.getsockopt(_zmq.FD)
# NOTE: pyzmq 13.0.0 messed up with setattr (they turned it into a
# non-op) and you can't assign attributes normally anymore, hence the
# tricks with self.__dict__ here
self.__dict__["_readable"] = gevent.event.Event()
self.__dict__["_writable"] = gevent.event.Event()
try:
# gevent>=1.0
self.__dict__["_state_event"] = gevent.hub.get_hub().loop.io(
on_state_changed_fd, gevent.core.READ)
self._state_event.start(self._on_state_changed)
except AttributeError:
# gevent<1.0
self.__dict__["_state_event"] = \
gevent.core.read_event(on_state_changed_fd,
self._on_state_changed, persist=True)
    def _on_state_changed(self, event=None, _evtype=None):
        # ZMQ_FD is edge-triggered: when the descriptor becomes readable we
        # must query ZMQ_EVENTS to learn whether the socket is actually
        # readable and/or writable, then wake the corresponding waiters.
if self.closed:
self._writable.set()
self._readable.set()
return
while True:
try:
events = self.getsockopt(_zmq.EVENTS)
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
if events & _zmq.POLLOUT:
self._writable.set()
if events & _zmq.POLLIN:
self._readable.set()
def close(self):
if not self.closed and getattr(self, '_state_event', None):
try:
# gevent>=1.0
self._state_event.stop()
except AttributeError:
# gevent<1.0
self._state_event.cancel()
super(Socket, self).close()
def connect(self, *args, **kwargs):
while True:
try:
return super(Socket, self).connect(*args, **kwargs)
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
def send(self, data, flags=0, copy=True, track=False):
if flags & _zmq.NOBLOCK:
return super(Socket, self).send(data, flags, copy, track)
flags |= _zmq.NOBLOCK
while True:
try:
msg = super(Socket, self).send(data, flags, copy, track)
                # The following call forces polling the state of the zmq socket
                # (POLLIN and/or POLLOUT). It seems that a POLLIN event is often
                # missed when the socket is used to send at the same time;
                # forcing a poll at this exact moment seems to reduce the
                # latencies when a POLLIN event is missed. The drawback is a
                # reduced throughput (roughly 8.3%) in exchange for normal
                # concurrency. On the other hand, without the following line, you
                # lose 90% of the performance as soon as there are simultaneous
                # send and recv calls on the socket.
self._on_state_changed()
return msg
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
self._writable.clear()
            # The following sleep(0) forces gevent to switch out to another
            # coroutine and seems to refresh the notion of time that gevent may
            # have. This definitively eliminates the gevent bug that can trigger
            # a timeout too soon under heavy load. In theory it will incur more
            # CPU usage, but in practice it balances out with the extra CPU used
            # when the timeout triggers too soon in the following loop. So for
            # the same CPU load, you get a better throughput (roughly 18.75%).
gevent.sleep(0)
while not self._writable.wait(timeout=1):
try:
if self.getsockopt(_zmq.EVENTS) & _zmq.POLLOUT:
logger.error("/!\\ gevent_zeromq BUG /!\\ "
"catching up after missing event (SEND) /!\\")
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
def recv(self, flags=0, copy=True, track=False):
if flags & _zmq.NOBLOCK:
return super(Socket, self).recv(flags, copy, track)
flags |= _zmq.NOBLOCK
while True:
try:
msg = super(Socket, self).recv(flags, copy, track)
                # The following call forces polling the state of the zmq socket
                # (POLLIN and/or POLLOUT). It seems that a POLLOUT event is
                # often missed when the socket is used to receive at the same
                # time; forcing a poll at this exact moment seems to reduce the
                # latencies when a POLLOUT event is missed. The drawback is a
                # reduced throughput (roughly 8.3%) in exchange for normal
                # concurrency. On the other hand, without the following line, you
                # lose 90% of the performance as soon as there are simultaneous
                # send and recv calls on the socket.
self._on_state_changed()
return msg
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
self._readable.clear()
            # The following sleep(0) forces gevent to switch out to another
            # coroutine and seems to refresh the notion of time that gevent may
            # have. This definitively eliminates the gevent bug that can trigger
            # a timeout too soon under heavy load. In theory it will incur more
            # CPU usage, but in practice it balances out with the extra CPU used
            # when the timeout triggers too soon in the following loop. So for
            # the same CPU load, you get a better throughput (roughly 18.75%).
gevent.sleep(0)
while not self._readable.wait(timeout=1):
try:
if self.getsockopt(_zmq.EVENTS) & _zmq.POLLIN:
logger.error("/!\\ gevent_zeromq BUG /!\\ "
"catching up after missing event (RECV) /!\\")
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/gevent_zmq.py | gevent_zmq.py |
class ReqRep(object):
def process_call(self, context, channel, req_event, functor):
context.hook_server_before_exec(req_event)
result = functor(*req_event.args)
rep_event = channel.new_event(u'OK', (result,),
context.hook_get_task_context())
context.hook_server_after_exec(req_event, rep_event)
channel.emit_event(rep_event)
def accept_answer(self, event):
return event.name in (u'OK', u'ERR')
def process_answer(self, context, channel, req_event, rep_event,
handle_remote_error):
try:
if rep_event.name == u'ERR':
exception = handle_remote_error(rep_event)
context.hook_client_after_request(req_event, rep_event, exception)
raise exception
context.hook_client_after_request(req_event, rep_event)
return rep_event.args[0]
finally:
channel.close()
class ReqStream(object):
def process_call(self, context, channel, req_event, functor):
context.hook_server_before_exec(req_event)
xheader = context.hook_get_task_context()
for result in iter(functor(*req_event.args)):
channel.emit(u'STREAM', result, xheader)
done_event = channel.new_event(u'STREAM_DONE', None, xheader)
# NOTE: "We" made the choice to call the hook once the stream is done,
        # the other choice was to call it at each iteration. I don't think that
        # one choice is better than the other, so I'm fine with changing this
# or adding the server_after_iteration and client_after_iteration hooks.
context.hook_server_after_exec(req_event, done_event)
channel.emit_event(done_event)
def accept_answer(self, event):
return event.name in (u'STREAM', u'STREAM_DONE')
def process_answer(self, context, channel, req_event, rep_event,
handle_remote_error):
def is_stream_done(rep_event):
return rep_event.name == u'STREAM_DONE'
channel.on_close_if = is_stream_done
def iterator(req_event, rep_event):
try:
while rep_event.name == u'STREAM':
# Like in process_call, we made the choice to call the
# after_exec hook only when the stream is done.
yield rep_event.args
rep_event = channel.recv()
if rep_event.name == u'ERR':
exception = handle_remote_error(rep_event)
context.hook_client_after_request(req_event, rep_event, exception)
raise exception
context.hook_client_after_request(req_event, rep_event)
finally:
channel.close()
return iterator(req_event, rep_event)
patterns_list = [ReqStream(), ReqRep()] | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/patterns.py | patterns.py |
from __future__ import print_function
from builtins import map
import argparse
import json
import sys
import inspect
import os
import logging
try:
    from collections.abc import Iterator
except ImportError:  # Python 2: the ABCs still live in the collections module
    from collections import Iterator
from pprint import pprint
import zerorpc
parser = argparse.ArgumentParser(
description='Make a zerorpc call to a remote service.'
)
client_or_server = parser.add_mutually_exclusive_group()
client_or_server.add_argument('--client', action='store_true', default=True,
help='remote procedure call mode (default)')
client_or_server.add_argument('--server', action='store_false', dest='client',
help='turn a given python module into a server')
parser.add_argument('--connect', action='append', metavar='address',
help='specify address to connect to. Can be specified \
multiple times and in conjunction with --bind')
parser.add_argument('--bind', action='append', metavar='address',
help='specify address to listen to. Can be specified \
multiple times and in conjunction with --connect')
parser.add_argument('--timeout', default=30, metavar='seconds', type=int,
help='abort request after X seconds. \
(default: 30s, --client only)')
parser.add_argument('--heartbeat', default=5, metavar='seconds', type=int,
help='heartbeat frequency. You should always use \
the same frequency as the server. (default: 5s)')
parser.add_argument('--pool-size', default=None, metavar='count', type=int,
help='size of worker pool. --server only.')
parser.add_argument('-j', '--json', default=False, action='store_true',
                    help='arguments are in JSON format and will be parsed \
before being sent to the remote')
parser.add_argument('-pj', '--print-json', default=False, action='store_true',
help='print result in JSON format.')
parser.add_argument('-?', '--inspect', default=False, action='store_true',
                    help='retrieve detailed information for the given \
                    remote (cf: command) method. If no method is given, display \
                    a list of remote method signatures. (only for --client).')
parser.add_argument('--active-hb', default=False, action='store_true',
help='enable active heartbeat. The default is to \
wait for the server to send the first heartbeat')
parser.add_argument('-d', '--debug', default=False, action='store_true',
help='Print zerorpc debug msgs, \
                    like outgoing and incoming messages.')
parser.add_argument('address', nargs='?', help='address to connect to. Skip \
this if you specified --connect or --bind at least once')
parser.add_argument('command', nargs='?',
help='remote procedure to call if --client (default) or \
python module/class to load if --server. If no command is \
                    specified, a list of remote methods is displayed.')
parser.add_argument('params', nargs='*',
help='parameters for the remote call if --client \
(default)')
def setup_links(args, socket):
if args.bind:
for endpoint in args.bind:
print('binding to "{0}"'.format(endpoint))
socket.bind(endpoint)
addresses = []
if args.address:
addresses.append(args.address)
if args.connect:
addresses.extend(args.connect)
for endpoint in addresses:
print('connecting to "{0}"'.format(endpoint))
socket.connect(endpoint)
def run_server(args):
server_obj_path = args.command
sys.path.insert(0, os.getcwd())
if '.' in server_obj_path:
modulepath, objname = server_obj_path.rsplit('.', 1)
module = __import__(modulepath, fromlist=[objname])
server_obj = getattr(module, objname)
else:
server_obj = __import__(server_obj_path)
if callable(server_obj):
server_obj = server_obj()
server = zerorpc.Server(server_obj, heartbeat=args.heartbeat, pool_size=args.pool_size)
if args.debug:
server.debug = True
setup_links(args, server)
print('serving "{0}"'.format(server_obj_path))
return server.run()
# This function does a really intricate job to keep backward compatibility
# with previous versions of zerorpc, while lazily retrieving results when possible.
def zerorpc_inspect_legacy(client, filter_method, long_doc, include_argspec):
if filter_method is None:
remote_methods = client._zerorpc_list()
else:
remote_methods = [filter_method]
def remote_detailled_methods():
for name in remote_methods:
if include_argspec:
argspec = client._zerorpc_args(name)
else:
argspec = None
docstring = client._zerorpc_help(name)
if docstring and not long_doc:
docstring = docstring.split('\n', 1)[0]
yield (name, argspec, docstring if docstring else '<undocumented>')
if not include_argspec:
longest_name_len = max(len(name) for name in remote_methods)
return (longest_name_len, ((name, doc) for name, argspec, doc in
remote_detailled_methods()))
r = [(name + (inspect.formatargspec(*argspec)
if argspec else '(...)'), doc)
for name, argspec, doc in remote_detailled_methods()]
longest_name_len = max(len(name) for name, doc in r) if r else 0
return (longest_name_len, r)
# Handle the 'python formatted' _zerorpc_inspect, which returns the output of
# "getargspec" from the python lib "inspect". A monstrosity from protocol v2.
def zerorpc_inspect_python_argspecs(remote_methods, filter_method, long_doc, include_argspec):
def format_method(name, argspec, doc):
if include_argspec:
name += (inspect.formatargspec(*argspec) if argspec else
'(...)')
if not doc:
doc = '<undocumented>'
elif not long_doc:
doc = doc.splitlines()[0]
return (name, doc)
r = [format_method(*methods_info) for methods_info in remote_methods if
filter_method is None or methods_info[0] == filter_method]
if not r:
return None
longest_name_len = max(len(name) for name, doc in r) if r else 0
return (longest_name_len, r)
# Handles generically formatted arguments (not tied to any specific programming language).
def zerorpc_inspect_generic(remote_methods, filter_method, long_doc, include_argspec):
def format_method(name, args, doc):
if include_argspec:
def format_arg(arg):
def_val = arg.get('default')
if def_val is None:
return arg['name']
return '{0}={1}'.format(arg['name'], def_val)
if args:
name += '({0})'.format(', '.join(map(format_arg, args)))
else:
name += '(??)'
if not doc:
doc = '<undocumented>'
elif not long_doc:
doc = doc.splitlines()[0]
return (name, doc)
methods = [format_method(name, details['args'], details['doc'])
for name, details in remote_methods.items()
if filter_method is None or name == filter_method]
longest_name_len = (max(len(name) for name, doc in methods)
if methods else 0)
return (longest_name_len, methods)
def zerorpc_inspect(client, method=None, long_doc=True, include_argspec=True):
try:
inspect_result = client._zerorpc_inspect()
remote_methods = inspect_result['methods']
legacy = False
except (zerorpc.RemoteError, NameError):
legacy = True
if legacy:
try:
service_name = client._zerorpc_name()
except (zerorpc.RemoteError):
service_name = 'N/A'
(longest_name_len, detailled_methods) = zerorpc_inspect_legacy(client,
method, long_doc, include_argspec)
else:
service_name = inspect_result.get('name', 'N/A')
if not isinstance(remote_methods, dict):
            (longest_name_len,
             detailled_methods) = zerorpc_inspect_python_argspecs(
                remote_methods, method, long_doc, include_argspec)
        else:
            # Newer servers return a dict of generically formatted methods.
            (longest_name_len, detailled_methods) = zerorpc_inspect_generic(
                remote_methods, method, long_doc, include_argspec)
return longest_name_len, detailled_methods, service_name
def run_client(args):
client = zerorpc.Client(timeout=args.timeout, heartbeat=args.heartbeat,
passive_heartbeat=not args.active_hb)
if args.debug:
client.debug = True
setup_links(args, client)
if not args.command:
(longest_name_len, detailled_methods, service) = zerorpc_inspect(client,
long_doc=False, include_argspec=args.inspect)
print('[{0}]'.format(service))
if args.inspect:
for (name, doc) in detailled_methods:
print(name)
else:
for (name, doc) in detailled_methods:
print('{0} {1}'.format(name.ljust(longest_name_len), doc))
return
if args.inspect:
(longest_name_len, detailled_methods, service) = zerorpc_inspect(client,
method=args.command)
if detailled_methods:
(name, doc) = detailled_methods[0]
print('[{0}]\n{1}\n\n{2}\n'.format(service, name, doc))
else:
print('[{0}]\nNo documentation for "{1}".'.format(service, args.command))
return
if args.json:
call_args = [json.loads(x) for x in args.params]
else:
call_args = args.params
results = client(args.command, *call_args)
    if not isinstance(results, Iterator):
if args.print_json:
json.dump(results, sys.stdout)
else:
pprint(results)
else:
# streaming responses
if args.print_json:
first = True
sys.stdout.write('[')
for result in results:
if first:
first = False
else:
sys.stdout.write(',')
json.dump(result, sys.stdout)
sys.stdout.write(']')
else:
for result in results:
pprint(result)
def main():
logging.basicConfig()
args = parser.parse_args()
if args.debug:
logging.getLogger().setLevel(logging.DEBUG)
if args.bind or args.connect:
if args.command:
args.params.insert(0, args.command)
args.command = args.address
args.address = None
if not (args.bind or args.connect or args.address):
parser.print_help()
return -1
if args.client:
return run_client(args)
if not args.command:
parser.print_help()
return -1
return run_server(args) | zerorpc-2 | /zerorpc_2-0.7.0-py3-none-any.whl/zerorpc/cli.py | cli.py |
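# Note: when installed, this module is normally wired up as the `zerorpc`
# console script by the package metadata (see the README). A minimal manual
# entry point (sketch, not part of the original file) would be:
#
#     if __name__ == '__main__':
#         import sys
#         sys.exit(main())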
zerorpc
=======
.. image:: https://travis-ci.org/0rpc/zerorpc-python.svg?branch=master
:target: https://travis-ci.org/0rpc/zerorpc-python
Mailing list: [email protected] (https://groups.google.com/d/forum/zerorpc)
zerorpc is a flexible RPC implementation based on zeromq and messagepack.
Service APIs exposed with zerorpc are called "zeroservices".
zerorpc can be used programmatically or from the command-line. It comes
with a convenient script, "zerorpc", allowing you to:
* expose Python modules without modifying a single line of code,
* call those modules remotely through the command line.
Installation
------------
On most systems, it's a matter of::
$ pip install zerorpc
Depending on the level of support for Gevent and PyZMQ on your system, you might need to install `libev` (for gevent) and `libzmq` (for pyzmq) along with their development files.
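For example, on a Debian or Ubuntu system this usually amounts to something
like the following (package names may differ on your distribution)::
    $ sudo apt-get install libev-dev libzmq3-dev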
Create a server with a one-liner
--------------------------------
Let's see zerorpc in action with a simple example. In a first terminal,
we will expose the Python "time" module::
$ zerorpc --server --bind tcp://*:1234 time
.. note::
The bind address uses the zeromq address format. You are not limited
to TCP transport: you could as well specify ipc:///tmp/time to use
   host-local sockets, for instance. "tcp://\*:1234" is shorthand for
"tcp://0.0.0.0:1234" and means "listen on TCP port 1234, accepting
connections on all IP addresses".
Call the server from the command-line
-------------------------------------
Now, in another terminal, call the exposed module::
$ zerorpc --client --connect tcp://127.0.0.1:1234 strftime %Y/%m/%d
Connecting to "tcp://127.0.0.1:1234"
"2011/03/07"
Since the client use case is the most common one, "--client" is the default
parameter, and you can remove it safely::
$ zerorpc --connect tcp://127.0.0.1:1234 strftime %Y/%m/%d
Connecting to "tcp://127.0.0.1:1234"
"2011/03/07"
Moreover, since the most common use case is to *connect* (as opposed to *bind*)
you can also omit "--connect"::
$ zerorpc tcp://127.0.0.1:1234 strftime %Y/%m/%d
Connecting to "tcp://127.0.0.1:1234"
"2011/03/07"
See remote service documentation
--------------------------------
You can introspect the remote service; it happens automatically if you don't
specify the name of the function you want to call::
$ zerorpc tcp://127.0.0.1:1234
Connecting to "tcp://127.0.0.1:1234"
tzset tzset(zone)
ctime ctime(seconds) -> string
clock clock() -> floating point number
struct_time <undocumented>
time time() -> floating point number
strptime strptime(string, format) -> struct_time
gmtime gmtime([seconds]) -> (tm_year, tm_mon, tm_mday, tm_hour, tm_min,
mktime mktime(tuple) -> floating point number
sleep sleep(seconds)
asctime asctime([tuple]) -> string
strftime strftime(format[, tuple]) -> string
localtime localtime([seconds]) -> (tm_year,tm_mon,tm_mday,tm_hour,tm_min,
Specifying non-string arguments
-------------------------------
Now, see what happens if we try to call a function expecting a non-string
argument::
$ zerorpc tcp://127.0.0.1:1234 sleep 3
Connecting to "tcp://127.0.0.1:1234"
Traceback (most recent call last):
[...]
TypeError: a float is required
That's because all command-line arguments are handled as strings. Don't worry,
we can specify any kind of argument using JSON encoding::
$ zerorpc --json tcp://127.0.0.1:1234 sleep 3
Connecting to "tcp://127.0.0.1:1234"
[wait for 3 seconds...]
null
zeroworkers: reversing bind and connect
---------------------------------------
Sometimes, you don't want your client to connect to the server; you want
your server to act as a kind of worker, and connect to a hub or queue which
will dispatch requests. You can achieve this by swapping "--bind" and
"--connect"::
$ zerorpc --bind tcp://*:1234 strftime %Y/%m/%d
We now have "something" wanting to call the "strftime" function, and waiting
for a worker to connect to it. Let's start the worker::
$ zerorpc --server tcp://127.0.0.1:1234 time
The worker will connect to the listening client and ask it "what should I
do?"; the client will send the "strftime" function call; the worker will
execute it and return the result. The first program will display the
local time and exit. The worker will remain running.
Listening on multiple addresses
-------------------------------
What if you want to run the same server on multiple addresses? Just repeat
the "--bind" option::
$ zerorpc --server --bind tcp://*:1234 --bind ipc:///tmp/time time
You can then connect to it using either "zerorpc tcp://127.0.0.1:1234" or
"zerorpc ipc:///tmp/time".
Wait, there is more! You can even mix "--bind" and "--connect". That means
that your server will wait for requests on a given address, *and* connect
as a worker on another. Likewise, you can specify "--connect" multiple times,
so your worker will connect to multiple queues. If a queue is not running,
it won't affect the worker (that's the magic of zeromq).
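For example, the following starts a single worker that listens on a local
socket *and* registers with a remote queue at the same time::
    $ zerorpc --server --bind ipc:///tmp/time --connect tcp://127.0.0.1:1234 time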
.. warning:: A client should probably not connect to multiple addresses!
Almost all other scenarios will work; but if you ask a client to connect
to multiple addresses, and at least one of them has no server at the end,
the client will ultimately block. A client can, however, bind to multiple
addresses and will dispatch requests to the available workers. If you want
to connect to multiple remote servers for high availability purposes,
you should insert something like HAProxy in the middle.
Exposing a zeroservice programmatically
---------------------------------------
Of course, the command-line is simply a convenience wrapper for the zerorpc
python API. Below are a few examples.
Here's how to expose an object of your choice as a zeroservice::
class Cooler(object):
""" Various convenience methods to make things cooler. """
def add_man(self, sentence):
""" End a sentence with ", man!" to make it sound cooler, and
return the result. """
return sentence + ", man!"
def add_42(self, n):
""" Add 42 to an integer argument to make it cooler, and return the
result. """
return n + 42
def boat(self, sentence):
""" Replace a sentence with "I'm on a boat!", and return that,
because it's cooler. """
return "I'm on a boat!"
import zerorpc
s = zerorpc.Server(Cooler())
s.bind("tcp://0.0.0.0:4242")
s.run()
Let's save this code to *cooler.py* and run it::
$ python cooler.py
Now, in another terminal, let's try connecting to our awesome zeroservice::
$ zerorpc -j tcp://localhost:4242 add_42 1
43
$ zerorpc tcp://localhost:4242 add_man 'I own a mint-condition Volkswagen Golf'
"I own a mint-condition Volkswagen Golf, man!"
$ zerorpc tcp://localhost:4242 boat 'I own a mint-condition Volkswagen Golf, man!'
"I'm on a boat!"
Congratulations! You have just made the World a little cooler with your first
zeroservice, man!
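For completeness, the same calls can be made programmatically; a minimal
client sketch (assuming the *cooler.py* server above is still running)::
    import zerorpc
    c = zerorpc.Client()
    c.connect("tcp://127.0.0.1:4242")
    print(c.add_42(1))  # 43
    print(c.add_man("I own a mint-condition Volkswagen Golf"))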
| zerorpc | /zerorpc-0.6.3.tar.gz/zerorpc-0.6.3/README.rst | README.rst |
from __future__ import absolute_import
from future.utils import tobytes
import uuid
import random
from . import gevent_zmq as zmq
class Context(zmq.Context):
_instance = None
def __init__(self):
super(zmq.Context, self).__init__()
self._middlewares = []
self._hooks = {
'resolve_endpoint': [],
'load_task_context': [],
'get_task_context': [],
'server_before_exec': [],
'server_after_exec': [],
'server_inspect_exception': [],
'client_handle_remote_error': [],
'client_before_request': [],
'client_after_request': [],
'client_patterns_list': [],
}
self._reset_msgid()
# NOTE: pyzmq 13.0.0 messed up with setattr (they turned it into a
# non-op) and you can't assign attributes normally anymore, hence the
# tricks with self.__dict__ here
@property
def _middlewares(self):
return self.__dict__['_middlewares']
@_middlewares.setter
def _middlewares(self, value):
self.__dict__['_middlewares'] = value
@property
def _hooks(self):
return self.__dict__['_hooks']
@_hooks.setter
def _hooks(self, value):
self.__dict__['_hooks'] = value
@property
def _msg_id_base(self):
return self.__dict__['_msg_id_base']
@_msg_id_base.setter
def _msg_id_base(self, value):
self.__dict__['_msg_id_base'] = value
@property
def _msg_id_counter(self):
return self.__dict__['_msg_id_counter']
@_msg_id_counter.setter
def _msg_id_counter(self, value):
self.__dict__['_msg_id_counter'] = value
@property
def _msg_id_counter_stop(self):
return self.__dict__['_msg_id_counter_stop']
@_msg_id_counter_stop.setter
def _msg_id_counter_stop(self, value):
self.__dict__['_msg_id_counter_stop'] = value
@staticmethod
def get_instance():
if Context._instance is None:
Context._instance = Context()
return Context._instance
def _reset_msgid(self):
self._msg_id_base = tobytes(uuid.uuid4().hex)[8:]
self._msg_id_counter = random.randrange(0, 2 ** 32)
self._msg_id_counter_stop = random.randrange(self._msg_id_counter, 2 ** 32)
def new_msgid(self):
if self._msg_id_counter >= self._msg_id_counter_stop:
self._reset_msgid()
else:
self._msg_id_counter = (self._msg_id_counter + 1)
return tobytes('{0:08x}'.format(self._msg_id_counter)) + self._msg_id_base
def register_middleware(self, middleware_instance):
registered_count = 0
self._middlewares.append(middleware_instance)
for hook in self._hooks:
functor = getattr(middleware_instance, hook, None)
if functor is None:
try:
functor = middleware_instance.get(hook, None)
except AttributeError:
pass
if functor is not None:
self._hooks[hook].append(functor)
registered_count += 1
return registered_count
#
# client/server
#
def hook_resolve_endpoint(self, endpoint):
for functor in self._hooks['resolve_endpoint']:
endpoint = functor(endpoint)
return endpoint
def hook_load_task_context(self, event_header):
for functor in self._hooks['load_task_context']:
functor(event_header)
def hook_get_task_context(self):
event_header = {}
for functor in self._hooks['get_task_context']:
event_header.update(functor())
return event_header
#
# Server-side hooks
#
def hook_server_before_exec(self, request_event):
"""Called when a method is about to be executed on the server."""
for functor in self._hooks['server_before_exec']:
functor(request_event)
def hook_server_after_exec(self, request_event, reply_event):
"""Called when a method has been executed successfully.
This hook is called right before the answer is sent back to the client.
If the method streams its answer (i.e: it uses the zerorpc.stream
decorator) then this hook will be called once the reply has been fully
streamed (and right before the stream is "closed").
The reply_event argument will be None if the Push/Pull pattern is used.
"""
for functor in self._hooks['server_after_exec']:
functor(request_event, reply_event)
def hook_server_inspect_exception(self, request_event, reply_event, exc_infos):
"""Called when a method raised an exception.
The reply_event argument will be None if the Push/Pull pattern is used.
"""
task_context = self.hook_get_task_context()
for functor in self._hooks['server_inspect_exception']:
functor(request_event, reply_event, task_context, exc_infos)
#
# Client-side hooks
#
def hook_client_handle_remote_error(self, event):
exception = None
for functor in self._hooks['client_handle_remote_error']:
ret = functor(event)
if ret:
exception = ret
return exception
def hook_client_before_request(self, event):
"""Called when the Client is about to send a request.
You can see it as the counterpart of ``hook_server_before_exec``.
"""
for functor in self._hooks['client_before_request']:
functor(event)
def hook_client_after_request(self, request_event, reply_event, exception=None):
"""Called when an answer or a timeout has been received from the server.
This hook is called right before the answer is returned to the client.
You can see it as the counterpart of the ``hook_server_after_exec``.
If the called method was returning a stream (i.e: it uses the
zerorpc.stream decorator) then this hook will be called once the reply
has been fully streamed (when the stream is "closed") or when an
exception has been raised.
        The optional exception argument will be a ``RemoteError`` (or whatever
        type is returned by the client_handle_remote_error hook) if an exception
has been raised on the server.
If the request timed out, then the exception argument will be a
``TimeoutExpired`` object and reply_event will be None.
"""
for functor in self._hooks['client_after_request']:
functor(request_event, reply_event, exception)
def hook_client_patterns_list(self, patterns):
for functor in self._hooks['client_patterns_list']:
patterns = functor(patterns)
return patterns | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/context.py | context.py |
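# A minimal middleware sketch (illustrative only, not part of this module):
# any object that exposes one or more of the hook names registered above can
# be passed to Context.register_middleware(). For example, a hypothetical
# middleware resolving logical endpoint names and tagging outgoing requests:
#
#     class ExampleMiddleware(object):
#         def resolve_endpoint(self, endpoint):
#             # map a logical name to a concrete zmq endpoint
#             return {'timeservice': 'tcp://127.0.0.1:1234'}.get(endpoint, endpoint)
#
#         def get_task_context(self):
#             # merged into the header of every outgoing event
#             return {'trace_id': 'example'}
#
#     Context.get_instance().register_middleware(ExampleMiddleware())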
from __future__ import absolute_import
from builtins import str
from builtins import zip
from future.utils import iteritems
import sys
import traceback
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
from . import gevent_zmq as zmq
from .exceptions import TimeoutExpired, RemoteError, LostRemote
from .channel import ChannelMultiplexer, BufferedChannel
from .socket import SocketBase
from .heartbeat import HeartBeatOnChannel
from .context import Context
from .decorators import DecoratorBase, rep
from . import patterns
from logging import getLogger
logger = getLogger(__name__)
class ServerBase(object):
def __init__(self, channel, methods=None, name=None, context=None,
pool_size=None, heartbeat=5):
self._multiplexer = ChannelMultiplexer(channel)
if methods is None:
methods = self
self._context = context or Context.get_instance()
        self._name = name or self._extract_name(methods)
self._task_pool = gevent.pool.Pool(size=pool_size)
self._acceptor_task = None
self._methods = self._filter_methods(ServerBase, self, methods)
self._inject_builtins()
self._heartbeat_freq = heartbeat
for (k, functor) in iteritems(self._methods):
if not isinstance(functor, DecoratorBase):
self._methods[k] = rep(functor)
@staticmethod
def _filter_methods(cls, self, methods):
if isinstance(methods, dict):
return methods
server_methods = set(k for k in dir(cls) if not k.startswith('_'))
return dict((k, getattr(methods, k))
for k in dir(methods)
if callable(getattr(methods, k)) and
not k.startswith('_') and k not in server_methods
)
@staticmethod
def _extract_name(methods):
return getattr(methods, '__name__', None) \
or getattr(type(methods), '__name__', None) \
or repr(methods)
def close(self):
self.stop()
self._multiplexer.close()
def _format_args_spec(self, args_spec, r=None):
if args_spec:
r = [dict(name=name) for name in args_spec[0]]
default_values = args_spec[3]
if default_values is not None:
for arg, def_val in zip(reversed(r), reversed(default_values)):
arg['default'] = def_val
return r
def _zerorpc_inspect(self):
methods = dict((m, f) for m, f in iteritems(self._methods)
if not m.startswith('_'))
detailled_methods = dict((m,
dict(args=self._format_args_spec(f._zerorpc_args()),
doc=f._zerorpc_doc())) for (m, f) in iteritems(methods))
return {'name': self._name,
'methods': detailled_methods}
def _inject_builtins(self):
self._methods['_zerorpc_list'] = lambda: [m for m in self._methods
if not m.startswith('_')]
self._methods['_zerorpc_name'] = lambda: self._name
self._methods['_zerorpc_ping'] = lambda: ['pong', self._name]
self._methods['_zerorpc_help'] = lambda m: \
self._methods[m]._zerorpc_doc()
self._methods['_zerorpc_args'] = \
lambda m: self._methods[m]._zerorpc_args()
self._methods['_zerorpc_inspect'] = self._zerorpc_inspect
def __call__(self, method, *args):
if method not in self._methods:
raise NameError(method)
return self._methods[method](*args)
def _print_traceback(self, protocol_v1, exc_infos):
logger.exception('')
exc_type, exc_value, exc_traceback = exc_infos
if protocol_v1:
return (repr(exc_value),)
human_traceback = traceback.format_exc()
name = exc_type.__name__
human_msg = str(exc_value)
return (name, human_msg, human_traceback)
def _async_task(self, initial_event):
protocol_v1 = initial_event.header.get(u'v', 1) < 2
channel = self._multiplexer.channel(initial_event)
hbchan = HeartBeatOnChannel(channel, freq=self._heartbeat_freq,
passive=protocol_v1)
bufchan = BufferedChannel(hbchan)
exc_infos = None
event = bufchan.recv()
try:
self._context.hook_load_task_context(event.header)
functor = self._methods.get(event.name, None)
if functor is None:
raise NameError(event.name)
functor.pattern.process_call(self._context, bufchan, event, functor)
except LostRemote:
exc_infos = list(sys.exc_info())
self._print_traceback(protocol_v1, exc_infos)
except Exception:
exc_infos = list(sys.exc_info())
human_exc_infos = self._print_traceback(protocol_v1, exc_infos)
reply_event = bufchan.new_event(u'ERR', human_exc_infos,
self._context.hook_get_task_context())
self._context.hook_server_inspect_exception(event, reply_event, exc_infos)
bufchan.emit_event(reply_event)
finally:
del exc_infos
bufchan.close()
def _acceptor(self):
while True:
initial_event = self._multiplexer.recv()
self._task_pool.spawn(self._async_task, initial_event)
def run(self):
self._acceptor_task = gevent.spawn(self._acceptor)
try:
self._acceptor_task.get()
finally:
self.stop()
self._task_pool.join(raise_error=True)
def stop(self):
if self._acceptor_task is not None:
self._acceptor_task.kill()
self._acceptor_task = None
class ClientBase(object):
def __init__(self, channel, context=None, timeout=30, heartbeat=5,
passive_heartbeat=False):
self._multiplexer = ChannelMultiplexer(channel,
ignore_broadcast=True)
self._context = context or Context.get_instance()
self._timeout = timeout
self._heartbeat_freq = heartbeat
self._passive_heartbeat = passive_heartbeat
def close(self):
self._multiplexer.close()
def _handle_remote_error(self, event):
exception = self._context.hook_client_handle_remote_error(event)
if not exception:
if event.header.get(u'v', 1) >= 2:
(name, msg, traceback) = event.args
exception = RemoteError(name, msg, traceback)
else:
(msg,) = event.args
exception = RemoteError('RemoteError', msg, None)
return exception
def _select_pattern(self, event):
for pattern in self._context.hook_client_patterns_list(
patterns.patterns_list):
if pattern.accept_answer(event):
return pattern
return None
def _process_response(self, request_event, bufchan, timeout):
def raise_error(ex):
bufchan.close()
self._context.hook_client_after_request(request_event, None, ex)
raise ex
try:
reply_event = bufchan.recv(timeout=timeout)
except TimeoutExpired:
raise_error(TimeoutExpired(timeout,
'calling remote method {0}'.format(request_event.name)))
pattern = self._select_pattern(reply_event)
if pattern is None:
raise_error(RuntimeError(
'Unable to find a pattern for: {0}'.format(request_event)))
return pattern.process_answer(self._context, bufchan, request_event,
reply_event, self._handle_remote_error)
def __call__(self, method, *args, **kargs):
        # Here `method` is either a byte string or a unicode string in
        # Python2 and Python3. Python2: str, i.e. a byte string containing ASCII
        # (unless the user explicitly provides a unicode string). Python3: str,
        # i.e. a unicode string (unless the user explicitly provides a byte
        # string).
        # The zerorpc protocol requires a UTF-8 encoded string at the msgpack
        # level. msgpack will encode any unicode string object to UTF-8 and tag
        # it `string`, while a byte string will be tagged `bin`.
        #
        # So when we get a byte string, we assume it to be a UTF-8 string
        # (ASCII is contained in UTF-8) that we decode to a unicode string.
        # Right after, msgpack-python will re-encode it as UTF-8. Yes, this is
        # terribly inefficient with Python2 because most of the time `method`
        # will already be a UTF-8 encoded byte string.
if isinstance(method, bytes):
method = method.decode('utf-8')
timeout = kargs.get('timeout', self._timeout)
channel = self._multiplexer.channel()
hbchan = HeartBeatOnChannel(channel, freq=self._heartbeat_freq,
passive=self._passive_heartbeat)
bufchan = BufferedChannel(hbchan, inqueue_size=kargs.get('slots', 100))
xheader = self._context.hook_get_task_context()
request_event = bufchan.new_event(method, args, xheader)
self._context.hook_client_before_request(request_event)
bufchan.emit_event(request_event)
if kargs.get('async', False) is False:
return self._process_response(request_event, bufchan, timeout)
async_result = gevent.event.AsyncResult()
gevent.spawn(self._process_response, request_event, bufchan,
timeout).link(async_result)
return async_result
def __getattr__(self, method):
return lambda *args, **kargs: self(method, *args, **kargs)
class Server(SocketBase, ServerBase):
def __init__(self, methods=None, name=None, context=None, pool_size=None,
heartbeat=5):
SocketBase.__init__(self, zmq.ROUTER, context)
if methods is None:
methods = self
name = name or ServerBase._extract_name(methods)
methods = ServerBase._filter_methods(Server, self, methods)
ServerBase.__init__(self, self._events, methods, name, context,
pool_size, heartbeat)
def close(self):
ServerBase.close(self)
SocketBase.close(self)
class Client(SocketBase, ClientBase):
def __init__(self, connect_to=None, context=None, timeout=30, heartbeat=5,
passive_heartbeat=False):
SocketBase.__init__(self, zmq.DEALER, context=context)
ClientBase.__init__(self, self._events, context, timeout, heartbeat,
passive_heartbeat)
if connect_to:
self.connect(connect_to)
def close(self):
ClientBase.close(self)
SocketBase.close(self)
class Pusher(SocketBase):
def __init__(self, context=None, zmq_socket=zmq.PUSH):
super(Pusher, self).__init__(zmq_socket, context=context)
def __call__(self, method, *args):
self._events.emit(method, args,
self._context.hook_get_task_context())
def __getattr__(self, method):
return lambda *args: self(method, *args)
class Puller(SocketBase):
def __init__(self, methods=None, context=None, zmq_socket=zmq.PULL):
super(Puller, self).__init__(zmq_socket, context=context)
if methods is None:
methods = self
self._methods = ServerBase._filter_methods(Puller, self, methods)
self._receiver_task = None
def close(self):
self.stop()
super(Puller, self).close()
def __call__(self, method, *args):
if method not in self._methods:
raise NameError(method)
return self._methods[method](*args)
def _receiver(self):
while True:
event = self._events.recv()
try:
if event.name not in self._methods:
raise NameError(event.name)
self._context.hook_load_task_context(event.header)
self._context.hook_server_before_exec(event)
self._methods[event.name](*event.args)
                # In Push/Pull there is no reply to send, hence None for the
# reply_event argument
self._context.hook_server_after_exec(event, None)
except Exception:
exc_infos = sys.exc_info()
try:
logger.exception('')
self._context.hook_server_inspect_exception(event, None, exc_infos)
finally:
del exc_infos
def run(self):
self._receiver_task = gevent.spawn(self._receiver)
try:
self._receiver_task.get()
finally:
self._receiver_task = None
def stop(self):
if self._receiver_task is not None:
self._receiver_task.kill(block=False)
class Publisher(Pusher):
def __init__(self, context=None):
super(Publisher, self).__init__(context=context, zmq_socket=zmq.PUB)
class Subscriber(Puller):
def __init__(self, methods=None, context=None):
super(Subscriber, self).__init__(methods=methods, context=context,
zmq_socket=zmq.SUB)
self._events.setsockopt(zmq.SUBSCRIBE, b'')
def fork_task_context(functor, context=None):
'''Wrap a functor to transfer context.
Usage example:
gevent.spawn(zerorpc.fork_task_context(myfunction), args...)
    The goal is to permit context "inheritance" from one task to another.
    Consider the following example:
        zerorpc.Server receives a new event
        - task1 is created to handle this event; this task will be linked
          to the initial event context. zerorpc.Server does that for you.
        - task1 makes use of some zerorpc.Client instances, so the initial
          event context is transferred on every call.
        - task1 spawns a new task2.
        - task2 makes use of some zerorpc.Client instances, but it has a fresh
          context. Thus there is no link to the initial context that
          spawned task1.
        - task1 spawns a new fork_task_context(task3).
        - task3 makes use of some zerorpc.Client instances, and the initial
          event context is transferred on every call.
    A real use case is a distributed tracer. Each time a new event is
    created, a trace_id is injected in it or copied from the current task
    context. This permits passing the trace_id from one zerorpc.Server to
    another via zerorpc.Client.
    The simple rule to know whether a task needs to be wrapped is:
        - if the new task will make any zerorpc call, it should be wrapped.
'''
context = context or Context.get_instance()
xheader = context.hook_get_task_context()
def wrapped(*args, **kargs):
context.hook_load_task_context(xheader)
return functor(*args, **kargs)
return wrapped | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/core.py | core.py |
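# A minimal Push/Pull sketch (illustrative only, not part of this module):
# the Puller executes incoming calls but never sends a reply, which is why
# the server-side hooks receive None as their reply_event in this pattern.
#
#     class Logger(object):  # hypothetical example service
#         def log(self, msg):
#             print(msg)
#
#     puller = Puller(Logger())
#     puller.bind('tcp://0.0.0.0:12345')
#     puller.run()  # blocks, processing incoming calls
#
#     # ...and from another process:
#     pusher = Pusher()
#     pusher.connect('tcp://127.0.0.1:12345')
#     pusher.log('fire and forget')  # no return value, no reply expected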
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
import logging
from .exceptions import TimeoutExpired
from .channel_base import ChannelBase
logger = logging.getLogger(__name__)
class ChannelMultiplexer(ChannelBase):
def __init__(self, events, ignore_broadcast=False):
self._events = events
self._active_channels = {}
self._channel_dispatcher_task = None
self._broadcast_queue = None
if events.recv_is_supported and not ignore_broadcast:
self._broadcast_queue = gevent.queue.Queue(maxsize=1)
self._channel_dispatcher_task = gevent.spawn(
self._channel_dispatcher)
@property
def recv_is_supported(self):
return self._events.recv_is_supported
@property
def emit_is_supported(self):
return self._events.emit_is_supported
def close(self):
if self._channel_dispatcher_task:
self._channel_dispatcher_task.kill()
def new_event(self, name, args, xheader=None):
return self._events.new_event(name, args, xheader)
def emit_event(self, event, timeout=None):
return self._events.emit_event(event, timeout)
def recv(self, timeout=None):
if self._broadcast_queue is not None:
event = self._broadcast_queue.get(timeout=timeout)
else:
event = self._events.recv(timeout=timeout)
return event
def _channel_dispatcher(self):
while True:
try:
event = self._events.recv()
except Exception:
logger.exception('zerorpc.ChannelMultiplexer ignoring error on recv')
continue
channel_id = event.header.get(u'response_to', None)
queue = None
if channel_id is not None:
channel = self._active_channels.get(channel_id, None)
if channel is not None:
queue = channel._queue
elif self._broadcast_queue is not None:
queue = self._broadcast_queue
if queue is None:
logger.warning('zerorpc.ChannelMultiplexer,'
' unable to route event: {0}'.format(
event.__str__(ignore_args=True)))
else:
queue.put(event)
def channel(self, from_event=None):
if self._channel_dispatcher_task is None:
self._channel_dispatcher_task = gevent.spawn(
self._channel_dispatcher)
return Channel(self, from_event)
@property
def active_channels(self):
return self._active_channels
@property
def context(self):
return self._events.context
class Channel(ChannelBase):
def __init__(self, multiplexer, from_event=None):
self._multiplexer = multiplexer
self._channel_id = None
self._zmqid = None
self._queue = gevent.queue.Queue(maxsize=1)
if from_event is not None:
self._channel_id = from_event.header[u'message_id']
self._zmqid = from_event.identity
self._multiplexer._active_channels[self._channel_id] = self
logger.debug('<-- new channel %s', self._channel_id)
self._queue.put(from_event)
@property
def recv_is_supported(self):
return self._multiplexer.recv_is_supported
@property
def emit_is_supported(self):
return self._multiplexer.emit_is_supported
def close(self):
if self._channel_id is not None:
del self._multiplexer._active_channels[self._channel_id]
logger.debug('-x- closed channel %s', self._channel_id)
self._channel_id = None
def new_event(self, name, args, xheader=None):
event = self._multiplexer.new_event(name, args, xheader)
if self._channel_id is None:
self._channel_id = event.header[u'message_id']
self._multiplexer._active_channels[self._channel_id] = self
logger.debug('--> new channel %s', self._channel_id)
else:
event.header[u'response_to'] = self._channel_id
event.identity = self._zmqid
return event
def emit_event(self, event, timeout=None):
self._multiplexer.emit_event(event, timeout)
def recv(self, timeout=None):
try:
event = self._queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
return event
@property
def context(self):
return self._multiplexer.context
class BufferedChannel(ChannelBase):
def __init__(self, channel, inqueue_size=100):
self._channel = channel
self._input_queue_size = inqueue_size
self._remote_queue_open_slots = 1
self._input_queue_reserved = 1
self._remote_can_recv = gevent.event.Event()
self._input_queue = gevent.queue.Queue()
self._verbose = False
self._on_close_if = None
self._recv_task = gevent.spawn(self._recver)
@property
def recv_is_supported(self):
return self._channel.recv_is_supported
@property
def emit_is_supported(self):
return self._channel.emit_is_supported
@property
def on_close_if(self):
return self._on_close_if
@on_close_if.setter
def on_close_if(self, cb):
self._on_close_if = cb
def close(self):
if self._recv_task is not None:
self._recv_task.kill()
self._recv_task = None
if self._channel is not None:
self._channel.close()
self._channel = None
def _recver(self):
while True:
event = self._channel.recv()
if event.name == u'_zpc_more':
try:
self._remote_queue_open_slots += int(event.args[0])
except Exception:
logger.exception('gevent_zerorpc.BufferedChannel._recver')
if self._remote_queue_open_slots > 0:
self._remote_can_recv.set()
elif self._input_queue.qsize() == self._input_queue_size:
raise RuntimeError(
'BufferedChannel, queue overflow on event:', event)
else:
self._input_queue.put(event)
if self._on_close_if is not None and self._on_close_if(event):
self._recv_task = None
self.close()
return
def new_event(self, name, args, xheader=None):
return self._channel.new_event(name, args, xheader)
def emit_event(self, event, timeout=None):
if self._remote_queue_open_slots == 0:
self._remote_can_recv.clear()
self._remote_can_recv.wait(timeout=timeout)
self._remote_queue_open_slots -= 1
try:
self._channel.emit_event(event)
except:
self._remote_queue_open_slots += 1
raise
def _request_data(self):
open_slots = self._input_queue_size - self._input_queue_reserved
self._input_queue_reserved += open_slots
self._channel.emit(u'_zpc_more', (open_slots,))
def recv(self, timeout=None):
# self._channel can be set to None by an 'on_close_if' callback if it
# sees a suitable message from the remote end...
#
if self._verbose and self._channel:
if self._input_queue_reserved < self._input_queue_size // 2:
self._request_data()
else:
self._verbose = True
try:
event = self._input_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
self._input_queue_reserved -= 1
return event
@property
def channel(self):
return self._channel
@property
def context(self):
return self._channel.context | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/channel.py | channel.py |
import time
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
from .exceptions import LostRemote, TimeoutExpired
from .channel_base import ChannelBase
class HeartBeatOnChannel(ChannelBase):
def __init__(self, channel, freq=5, passive=False):
self._closed = False
self._channel = channel
self._heartbeat_freq = freq
self._input_queue = gevent.queue.Channel()
self._remote_last_hb = None
self._lost_remote = False
self._recv_task = gevent.spawn(self._recver)
self._heartbeat_task = None
self._parent_coroutine = gevent.getcurrent()
self._compat_v2 = None
if not passive:
self._start_heartbeat()
@property
def recv_is_supported(self):
return self._channel.recv_is_supported
@property
def emit_is_supported(self):
return self._channel.emit_is_supported
def close(self):
self._closed = True
if self._heartbeat_task is not None:
self._heartbeat_task.kill()
self._heartbeat_task = None
if self._recv_task is not None:
self._recv_task.kill()
self._recv_task = None
if self._channel is not None:
self._channel.close()
self._channel = None
def _heartbeat(self):
while True:
gevent.sleep(self._heartbeat_freq)
if self._remote_last_hb is None:
self._remote_last_hb = time.time()
if time.time() > self._remote_last_hb + self._heartbeat_freq * 2:
self._lost_remote = True
if not self._closed:
gevent.kill(self._parent_coroutine,
self._lost_remote_exception())
break
self._channel.emit(u'_zpc_hb', (0,)) # 0 -> compat with protocol v2
def _start_heartbeat(self):
if self._heartbeat_task is None and self._heartbeat_freq is not None and not self._closed:
self._heartbeat_task = gevent.spawn(self._heartbeat)
def _recver(self):
while True:
event = self._channel.recv()
if self._compat_v2 is None:
self._compat_v2 = event.header.get(u'v', 0) < 3
if event.name == u'_zpc_hb':
self._remote_last_hb = time.time()
self._start_heartbeat()
if self._compat_v2:
event.name = u'_zpc_more'
self._input_queue.put(event)
else:
self._input_queue.put(event)
def _lost_remote_exception(self):
return LostRemote('Lost remote after {0}s heartbeat'.format(
self._heartbeat_freq * 2))
def new_event(self, name, args, header=None):
if self._compat_v2 and name == u'_zpc_more':
name = u'_zpc_hb'
return self._channel.new_event(name, args, header)
def emit_event(self, event, timeout=None):
if self._lost_remote:
raise self._lost_remote_exception()
self._channel.emit_event(event, timeout)
def recv(self, timeout=None):
if self._lost_remote:
raise self._lost_remote_exception()
try:
return self._input_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
@property
def channel(self):
return self._channel
@property
def context(self):
return self._channel.context | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/heartbeat.py | heartbeat.py |
from __future__ import absolute_import
from builtins import str
from builtins import range
import msgpack
import msgpack_numpy as m
m.patch() # Monkeypatching msgpack to handle Numpy
import gevent.pool
import gevent.queue
import gevent.event
import gevent.local
import gevent.lock
import logging
import sys
from . import gevent_zmq as zmq
from .exceptions import TimeoutExpired
from .context import Context
from .channel_base import ChannelBase
if sys.version_info < (2, 7):
def get_pyzmq_frame_buffer(frame):
return frame.buffer[:]
else:
def get_pyzmq_frame_buffer(frame):
return frame.buffer
# gevent <= 1.1.0.rc5 is missing the Python3 __next__ method.
if sys.version_info >= (3, 0) and gevent.version_info <= (1, 1, 0, 'rc', '5'):
setattr(gevent.queue.Channel, '__next__', gevent.queue.Channel.next)
logger = logging.getLogger(__name__)
class SequentialSender(object):
def __init__(self, socket):
self._socket = socket
def _send(self, parts):
e = None
for i in range(len(parts) - 1):
try:
self._socket.send(parts[i], copy=False, flags=zmq.SNDMORE)
except (gevent.GreenletExit, gevent.Timeout) as e:
if i == 0:
raise
self._socket.send(parts[i], copy=False, flags=zmq.SNDMORE)
try:
self._socket.send(parts[-1], copy=False)
except (gevent.GreenletExit, gevent.Timeout) as e:
self._socket.send(parts[-1], copy=False)
if e:
raise e
def __call__(self, parts, timeout=None):
if timeout:
with gevent.Timeout(timeout):
self._send(parts)
else:
self._send(parts)
class SequentialReceiver(object):
def __init__(self, socket):
self._socket = socket
def _recv(self):
e = None
parts = []
while True:
try:
part = self._socket.recv(copy=False)
except (gevent.GreenletExit, gevent.Timeout) as e:
if len(parts) == 0:
raise
part = self._socket.recv(copy=False)
parts.append(part)
if not part.more:
break
if e:
raise e
return parts
def __call__(self, timeout=None):
if timeout:
with gevent.Timeout(timeout):
return self._recv()
else:
return self._recv()
class Sender(SequentialSender):
def __init__(self, socket):
self._socket = socket
self._send_queue = gevent.queue.Channel()
self._send_task = gevent.spawn(self._sender)
def close(self):
if self._send_task:
self._send_task.kill()
def _sender(self):
for parts in self._send_queue:
super(Sender, self)._send(parts)
def __call__(self, parts, timeout=None):
try:
self._send_queue.put(parts, timeout=timeout)
except gevent.queue.Full:
raise TimeoutExpired(timeout)
class Receiver(SequentialReceiver):
def __init__(self, socket):
self._socket = socket
self._recv_queue = gevent.queue.Channel()
self._recv_task = gevent.spawn(self._recver)
def close(self):
if self._recv_task:
self._recv_task.kill()
self._recv_queue = None
def _recver(self):
while True:
parts = super(Receiver, self)._recv()
self._recv_queue.put(parts)
def __call__(self, timeout=None):
try:
return self._recv_queue.get(timeout=timeout)
except gevent.queue.Empty:
raise TimeoutExpired(timeout)
class Event(object):
__slots__ = ['_name', '_args', '_header', '_identity']
# protocol details:
# - `name` and `header` keys must be unicode strings.
    # - `message_id` and `response_to` values are opaque byte strings.
    # - `v` value is an integer.
def __init__(self, name, args, context, header=None):
self._name = name
self._args = args
if header is None:
self._header = {u'message_id': context.new_msgid(), u'v': 3}
else:
self._header = header
self._identity = None
@property
def header(self):
return self._header
@property
def name(self):
return self._name
@name.setter
def name(self, v):
self._name = v
@property
def args(self):
return self._args
@property
def identity(self):
return self._identity
@identity.setter
def identity(self, v):
self._identity = v
def pack(self):
payload = (self._header, self._name, self._args)
r = msgpack.Packer(use_bin_type=True).pack(payload)
return r
@staticmethod
def unpack(blob):
unpacker = msgpack.Unpacker(encoding='utf-8')
unpacker.feed(blob)
unpacked_msg = unpacker.unpack()
try:
(header, name, args) = unpacked_msg
except Exception as e:
raise Exception('invalid msg format "{0}": {1}'.format(
unpacked_msg, e))
# Backward compatibility
if not isinstance(header, dict):
header = {}
return Event(name, args, None, header)
def __str__(self, ignore_args=False):
if ignore_args:
args = '[...]'
else:
args = self._args
try:
args = '<<{0}>>'.format(str(self.unpack(self._args)))
except Exception:
pass
if self._identity:
identity = ', '.join(repr(x.bytes) for x in self._identity)
return '<{0}> {1} {2} {3}'.format(identity, self._name,
self._header, args)
return '{0} {1} {2}'.format(self._name, self._header, args)
class Events(ChannelBase):
def __init__(self, zmq_socket_type, context=None):
self._debug = False
self._zmq_socket_type = zmq_socket_type
self._context = context or Context.get_instance()
self._socket = self._context.socket(zmq_socket_type)
if zmq_socket_type in (zmq.PUSH, zmq.PUB, zmq.DEALER, zmq.ROUTER):
self._send = Sender(self._socket)
elif zmq_socket_type in (zmq.REQ, zmq.REP):
self._send = SequentialSender(self._socket)
else:
self._send = None
if zmq_socket_type in (zmq.PULL, zmq.SUB, zmq.DEALER, zmq.ROUTER):
self._recv = Receiver(self._socket)
elif zmq_socket_type in (zmq.REQ, zmq.REP):
self._recv = SequentialReceiver(self._socket)
else:
self._recv = None
@property
def recv_is_supported(self):
return self._recv is not None
@property
def emit_is_supported(self):
return self._send is not None
def __del__(self):
try:
if not self._socket.closed:
self.close()
except (AttributeError, TypeError):
pass
def close(self):
try:
self._send.close()
except (AttributeError, TypeError, gevent.GreenletExit):
pass
try:
self._recv.close()
except (AttributeError, TypeError, gevent.GreenletExit):
pass
self._socket.close()
@property
def debug(self):
return self._debug
@debug.setter
def debug(self, v):
if v != self._debug:
self._debug = v
if self._debug:
logger.debug('debug enabled')
else:
logger.debug('debug disabled')
def _resolve_endpoint(self, endpoint, resolve=True):
if resolve:
endpoint = self._context.hook_resolve_endpoint(endpoint)
if isinstance(endpoint, (tuple, list)):
r = []
for sub_endpoint in endpoint:
r.extend(self._resolve_endpoint(sub_endpoint, resolve))
return r
return [endpoint]
def connect(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.connect(endpoint_))
logger.debug('connected to %s (status=%s)', endpoint_, r[-1])
return r
def bind(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.bind(endpoint_))
logger.debug('bound to %s (status=%s)', endpoint_, r[-1])
return r
def disconnect(self, endpoint, resolve=True):
r = []
for endpoint_ in self._resolve_endpoint(endpoint, resolve):
r.append(self._socket.disconnect(endpoint_))
logger.debug('disconnected from %s (status=%s)', endpoint_, r[-1])
return r
def new_event(self, name, args, xheader=None):
event = Event(name, args, context=self._context)
if xheader:
event.header.update(xheader)
return event
def emit_event(self, event, timeout=None):
if self._debug:
logger.debug('--> %s', event)
if event.identity:
parts = list(event.identity or list())
parts.extend([b'', event.pack()])
elif self._zmq_socket_type in (zmq.DEALER, zmq.ROUTER):
parts = (b'', event.pack())
else:
parts = (event.pack(),)
self._send(parts, timeout)
def recv(self, timeout=None):
parts = self._recv(timeout=timeout)
if len(parts) > 2:
identity = parts[0:-2]
blob = parts[-1]
elif len(parts) == 2:
identity = parts[0:-1]
blob = parts[-1]
else:
identity = None
blob = parts[0]
event = Event.unpack(get_pyzmq_frame_buffer(blob))
event.identity = identity
if self._debug:
logger.debug('<-- %s', event)
return event
def setsockopt(self, *args):
return self._socket.setsockopt(*args)
@property
def context(self):
return self._context | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/events.py | events.py |
# We want to act like zmq
from zmq import * # noqa
# Explicit import to please flake8
from zmq import ZMQError
# A way to access original zmq
import zmq as _zmq
import gevent.event
import gevent.core
import errno
from logging import getLogger
logger = getLogger(__name__)
class Context(_zmq.Context):
def socket(self, socket_type):
if self.closed:
raise _zmq.ZMQError(_zmq.ENOTSUP)
return Socket(self, socket_type)
class Socket(_zmq.Socket):
def __init__(self, context, socket_type):
super(Socket, self).__init__(context, socket_type)
on_state_changed_fd = self.getsockopt(_zmq.FD)
# NOTE: pyzmq 13.0.0 messed up setattr (they turned it into a no-op),
# so attributes can no longer be assigned normally; hence the tricks
# with self.__dict__ here.
self.__dict__["_readable"] = gevent.event.Event()
self.__dict__["_writable"] = gevent.event.Event()
try:
# gevent>=1.0
self.__dict__["_state_event"] = gevent.hub.get_hub().loop.io(
on_state_changed_fd, gevent.core.READ)
self._state_event.start(self._on_state_changed)
except AttributeError:
# gevent<1.0
self.__dict__["_state_event"] = \
gevent.core.read_event(on_state_changed_fd,
self._on_state_changed, persist=True)
def _on_state_changed(self, event=None, _evtype=None):
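# Called whenever the zmq FD signals activity: poll zmq.EVENTS and wake
# up the coroutines waiting on the readable/writable events.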
if self.closed:
self._writable.set()
self._readable.set()
return
while True:
try:
events = self.getsockopt(_zmq.EVENTS)
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
if events & _zmq.POLLOUT:
self._writable.set()
if events & _zmq.POLLIN:
self._readable.set()
def close(self):
if not self.closed and getattr(self, '_state_event', None):
try:
# gevent>=1.0
self._state_event.stop()
except AttributeError:
# gevent<1.0
self._state_event.cancel()
super(Socket, self).close()
def connect(self, *args, **kwargs):
while True:
try:
return super(Socket, self).connect(*args, **kwargs)
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
def send(self, data, flags=0, copy=True, track=False):
if flags & _zmq.NOBLOCK:
return super(Socket, self).send(data, flags, copy, track)
flags |= _zmq.NOBLOCK
while True:
try:
msg = super(Socket, self).send(data, flags, copy, track)
# The following call forces polling of the zmq socket state
# (POLLIN and/or POLLOUT). A POLLIN event is often missed when
# the socket is also used to send at the same time; polling at
# this exact moment seems to reduce latency when a POLLIN event
# is missed. The drawback is a reduced throughput (roughly 8.3%)
# in exchange for normal concurrency. On the other hand, without
# the following line you lose 90% of the performance as soon as
# there are simultaneous send and recv on the socket.
self._on_state_changed()
return msg
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
self._writable.clear()
# The following sleep(0) forces gevent to switch to another
# coroutine and seems to refresh the notion of time that gevent may
# have. This definitively eliminates the gevent bug that can trigger
# a timeout too soon under heavy load. In theory it will incur more
# CPU usage, but in practice it balances out with the extra CPU used
# when the timeout triggers too soon in the following loop. So for
# the same CPU load, you get better throughput (roughly 18.75%).
gevent.sleep(0)
while not self._writable.wait(timeout=1):
try:
if self.getsockopt(_zmq.EVENTS) & _zmq.POLLOUT:
logger.error("/!\\ gevent_zeromq BUG /!\\ "
"catching up after missing event (SEND) /!\\")
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
def recv(self, flags=0, copy=True, track=False):
if flags & _zmq.NOBLOCK:
return super(Socket, self).recv(flags, copy, track)
flags |= _zmq.NOBLOCK
while True:
try:
msg = super(Socket, self).recv(flags, copy, track)
# The following call forces polling of the zmq socket state
# (POLLIN and/or POLLOUT). A POLLOUT event is often missed when
# the socket is also used to receive at the same time; polling
# at this exact moment seems to reduce latency when a POLLOUT
# event is missed. The drawback is a reduced throughput (roughly
# 8.3%) in exchange for normal concurrency. On the other hand,
# without the following line you lose 90% of the performance as
# soon as there are simultaneous send and recv on the socket.
self._on_state_changed()
return msg
except _zmq.ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise
self._readable.clear()
# The following sleep(0) forces gevent to switch to another
# coroutine and seems to refresh the notion of time that gevent may
# have. This definitively eliminates the gevent bug that can trigger
# a timeout too soon under heavy load. In theory it will incur more
# CPU usage, but in practice it balances out with the extra CPU used
# when the timeout triggers too soon in the following loop. So for
# the same CPU load, you get better throughput (roughly 18.75%).
gevent.sleep(0)
while not self._readable.wait(timeout=1):
try:
if self.getsockopt(_zmq.EVENTS) & _zmq.POLLIN:
logger.error("/!\\ gevent_zeromq BUG /!\\ "
"catching up after missing event (RECV) /!\\")
break
except ZMQError as e:
if e.errno not in (_zmq.EAGAIN, errno.EINTR):
raise | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/gevent_zmq.py | gevent_zmq.py |
class ReqRep(object):
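# Request/reply pattern: the server runs the functor once and answers with
# a single 'OK' event; the client returns the result (or raises the remote
# error) and closes the channel.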
def process_call(self, context, channel, req_event, functor):
context.hook_server_before_exec(req_event)
result = functor(*req_event.args)
rep_event = channel.new_event(u'OK', (result,),
context.hook_get_task_context())
context.hook_server_after_exec(req_event, rep_event)
channel.emit_event(rep_event)
def accept_answer(self, event):
return event.name in (u'OK', u'ERR')
def process_answer(self, context, channel, req_event, rep_event,
handle_remote_error):
try:
if rep_event.name == u'ERR':
exception = handle_remote_error(rep_event)
context.hook_client_after_request(req_event, rep_event, exception)
raise exception
context.hook_client_after_request(req_event, rep_event)
return rep_event.args[0]
finally:
channel.close()
class ReqStream(object):
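# Streaming pattern: the server emits one 'STREAM' event per yielded item
# followed by a final 'STREAM_DONE'; the client exposes the answers as an
# iterator and closes the channel once the stream ends.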
def process_call(self, context, channel, req_event, functor):
context.hook_server_before_exec(req_event)
xheader = context.hook_get_task_context()
for result in iter(functor(*req_event.args)):
channel.emit(u'STREAM', result, xheader)
done_event = channel.new_event(u'STREAM_DONE', None, xheader)
# NOTE: "We" made the choice to call the hook once the stream is done,
# the other choice was to call it at each iteration. I don't think that
# one choice is better than the other, so I'm fine with changing this
# or adding the server_after_iteration and client_after_iteration hooks.
context.hook_server_after_exec(req_event, done_event)
channel.emit_event(done_event)
def accept_answer(self, event):
return event.name in (u'STREAM', u'STREAM_DONE')
def process_answer(self, context, channel, req_event, rep_event,
handle_remote_error):
def is_stream_done(rep_event):
return rep_event.name == u'STREAM_DONE'
channel.on_close_if = is_stream_done
def iterator(req_event, rep_event):
try:
while rep_event.name == u'STREAM':
# Like in process_call, we made the choice to call the
# after_exec hook only when the stream is done.
yield rep_event.args
rep_event = channel.recv()
if rep_event.name == u'ERR':
exception = handle_remote_error(rep_event)
context.hook_client_after_request(req_event, rep_event, exception)
raise exception
context.hook_client_after_request(req_event, rep_event)
finally:
channel.close()
return iterator(req_event, rep_event)
patterns_list = [ReqStream(), ReqRep()] | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/patterns.py | patterns.py |
from __future__ import print_function
from builtins import map
import argparse
import json
import sys
import inspect
import os
import logging
import collections
from pprint import pprint
import zerorpc
parser = argparse.ArgumentParser(
description='Make a zerorpc call to a remote service.'
)
client_or_server = parser.add_mutually_exclusive_group()
client_or_server.add_argument('--client', action='store_true', default=True,
help='remote procedure call mode (default)')
client_or_server.add_argument('--server', action='store_false', dest='client',
help='turn a given python module into a server')
parser.add_argument('--connect', action='append', metavar='address',
help='specify address to connect to. Can be specified \
multiple times and in conjunction with --bind')
parser.add_argument('--bind', action='append', metavar='address',
help='specify address to listen to. Can be specified \
multiple times and in conjunction with --connect')
parser.add_argument('--timeout', default=30, metavar='seconds', type=int,
help='abort request after X seconds. \
(default: 30s, --client only)')
parser.add_argument('--heartbeat', default=5, metavar='seconds', type=int,
help='heartbeat frequency. You should always use \
the same frequency as the server. (default: 5s)')
parser.add_argument('--pool-size', default=None, metavar='count', type=int,
help='size of worker pool. --server only.')
parser.add_argument('-j', '--json', default=False, action='store_true',
                    help='arguments are in JSON format and will be parsed \
before being sent to the remote')
parser.add_argument('-pj', '--print-json', default=False, action='store_true',
help='print result in JSON format.')
parser.add_argument('-?', '--inspect', default=False, action='store_true',
                    help='retrieve detailed information for the given \
                    remote method (cf: command). If no method is given, display \
                    a list of remote method signatures. (only for --client).')
parser.add_argument('--active-hb', default=False, action='store_true',
help='enable active heartbeat. The default is to \
wait for the server to send the first heartbeat')
parser.add_argument('-d', '--debug', default=False, action='store_true',
help='Print zerorpc debug msgs, \
                    like outgoing and incoming messages.')
parser.add_argument('address', nargs='?', help='address to connect to. Skip \
this if you specified --connect or --bind at least once')
parser.add_argument('command', nargs='?',
help='remote procedure to call if --client (default) or \
python module/class to load if --server. If no command is \
specified, a list of remote methods are displayed.')
parser.add_argument('params', nargs='*',
help='parameters for the remote call if --client \
(default)')
def setup_links(args, socket):
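    # Bind and/or connect the zerorpc socket to every endpoint given through
    # --bind, --connect and the positional address argument.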
if args.bind:
for endpoint in args.bind:
print('binding to "{0}"'.format(endpoint))
socket.bind(endpoint)
addresses = []
if args.address:
addresses.append(args.address)
if args.connect:
addresses.extend(args.connect)
for endpoint in addresses:
print('connecting to "{0}"'.format(endpoint))
socket.connect(endpoint)
def run_server(args):
server_obj_path = args.command
sys.path.insert(0, os.getcwd())
if '.' in server_obj_path:
modulepath, objname = server_obj_path.rsplit('.', 1)
module = __import__(modulepath, fromlist=[objname])
server_obj = getattr(module, objname)
else:
server_obj = __import__(server_obj_path)
if callable(server_obj):
server_obj = server_obj()
server = zerorpc.Server(server_obj, heartbeat=args.heartbeat, pool_size=args.pool_size)
if args.debug:
server.debug = True
setup_links(args, server)
print('serving "{0}"'.format(server_obj_path))
return server.run()
# This function does a really intricate job to keep backward compatibility
# with previous versions of zerorpc, while lazily retrieving results when possible.
def zerorpc_inspect_legacy(client, filter_method, long_doc, include_argspec):
if filter_method is None:
remote_methods = client._zerorpc_list()
else:
remote_methods = [filter_method]
def remote_detailled_methods():
for name in remote_methods:
if include_argspec:
argspec = client._zerorpc_args(name)
else:
argspec = None
docstring = client._zerorpc_help(name)
if docstring and not long_doc:
docstring = docstring.split('\n', 1)[0]
yield (name, argspec, docstring if docstring else '<undocumented>')
if not include_argspec:
longest_name_len = max(len(name) for name in remote_methods)
return (longest_name_len, ((name, doc) for name, argspec, doc in
remote_detailled_methods()))
r = [(name + (inspect.formatargspec(*argspec)
if argspec else '(...)'), doc)
for name, argspec, doc in remote_detailled_methods()]
longest_name_len = max(len(name) for name, doc in r) if r else 0
return (longest_name_len, r)
# Handle the 'python formatted' _zerorpc_inspect, which returns the output of
# "getargspec" from the python lib "inspect". A monstrosity from protocol v2.
def zerorpc_inspect_python_argspecs(remote_methods, filter_method, long_doc, include_argspec):
def format_method(name, argspec, doc):
if include_argspec:
name += (inspect.formatargspec(*argspec) if argspec else
'(...)')
if not doc:
doc = '<undocumented>'
elif not long_doc:
doc = doc.splitlines()[0]
return (name, doc)
r = [format_method(*methods_info) for methods_info in remote_methods if
filter_method is None or methods_info[0] == filter_method]
if not r:
return None
longest_name_len = max(len(name) for name, doc in r) if r else 0
return (longest_name_len, r)
# Handles generically formatted arguments (not tied to any specific programming language).
def zerorpc_inspect_generic(remote_methods, filter_method, long_doc, include_argspec):
def format_method(name, args, doc):
if include_argspec:
def format_arg(arg):
def_val = arg.get('default')
if def_val is None:
return arg['name']
return '{0}={1}'.format(arg['name'], def_val)
if args:
name += '({0})'.format(', '.join(map(format_arg, args)))
else:
name += '(??)'
if not doc:
doc = '<undocumented>'
elif not long_doc:
doc = doc.splitlines()[0]
return (name, doc)
methods = [format_method(name, details['args'], details['doc'])
for name, details in remote_methods.items()
if filter_method is None or name == filter_method]
longest_name_len = (max(len(name) for name, doc in methods)
if methods else 0)
return (longest_name_len, methods)
def zerorpc_inspect(client, method=None, long_doc=True, include_argspec=True):
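    # Try the modern _zerorpc_inspect RPC first and fall back to the legacy
    # per-method introspection calls when the remote does not support it.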
try:
inspect_result = client._zerorpc_inspect()
remote_methods = inspect_result['methods']
legacy = False
except (zerorpc.RemoteError, NameError):
legacy = True
if legacy:
try:
service_name = client._zerorpc_name()
except (zerorpc.RemoteError):
service_name = 'N/A'
(longest_name_len, detailled_methods) = zerorpc_inspect_legacy(client,
method, long_doc, include_argspec)
else:
service_name = inspect_result.get('name', 'N/A')
if not isinstance(remote_methods, dict):
(longest_name_len,
detailled_methods) = zerorpc_inspect_python_argspecs(
remote_methods, method, long_doc, include_argspec)
        else:
            (longest_name_len, detailled_methods) = zerorpc_inspect_generic(
                remote_methods, method, long_doc, include_argspec)
return longest_name_len, detailled_methods, service_name
def run_client(args):
client = zerorpc.Client(timeout=args.timeout, heartbeat=args.heartbeat,
passive_heartbeat=not args.active_hb)
if args.debug:
client.debug = True
setup_links(args, client)
if not args.command:
(longest_name_len, detailled_methods, service) = zerorpc_inspect(client,
long_doc=False, include_argspec=args.inspect)
print('[{0}]'.format(service))
if args.inspect:
for (name, doc) in detailled_methods:
print(name)
else:
for (name, doc) in detailled_methods:
print('{0} {1}'.format(name.ljust(longest_name_len), doc))
return
if args.inspect:
(longest_name_len, detailled_methods, service) = zerorpc_inspect(client,
method=args.command)
if detailled_methods:
(name, doc) = detailled_methods[0]
print('[{0}]\n{1}\n\n{2}\n'.format(service, name, doc))
else:
print('[{0}]\nNo documentation for "{1}".'.format(service, args.command))
return
if args.json:
call_args = [json.loads(x) for x in args.params]
else:
call_args = args.params
results = client(args.command, *call_args)
if not isinstance(results, collections.Iterator):
if args.print_json:
json.dump(results, sys.stdout)
else:
pprint(results)
else:
# streaming responses
if args.print_json:
first = True
sys.stdout.write('[')
for result in results:
if first:
first = False
else:
sys.stdout.write(',')
json.dump(result, sys.stdout)
sys.stdout.write(']')
else:
for result in results:
pprint(result)
def main():
logging.basicConfig()
args = parser.parse_args()
if args.debug:
logging.getLogger().setLevel(logging.DEBUG)
if args.bind or args.connect:
if args.command:
args.params.insert(0, args.command)
args.command = args.address
args.address = None
if not (args.bind or args.connect or args.address):
parser.print_help()
return -1
if args.client:
return run_client(args)
if not args.command:
parser.print_help()
return -1
return run_server(args) | zerorpc2 | /zerorpc2-0.7.0-py3-none-any.whl/zerorpc/cli.py | cli.py |
# zeroscale
[](https://travis-ci.org/Rycieos/zeroscale)
[](https://coveralls.io/github/Rycieos/zeroscale?branch=master)
[](https://requires.io/github/Rycieos/zeroscale/requirements/?branch=master)
[
](https://hub.docker.com/repository/docker/rycieos/zeroscale)
[](https://hub.docker.com/repository/docker/rycieos/zeroscale/builds)
[](https://hub.docker.com/repository/docker/rycieos/zeroscale/tags)
[

](https://pypi.python.org/pypi/zeroscale)
Scale-to-zero any server
Some servers don't idle well. Either they constantly suck CPU doing nothing
(like Minecraft keeping spawn chunks always loaded), or they do things you
don't want them to while no clients are connected. If you have control over
the program, you could design it to do nothing while no clients are connected,
but if you don't, how can you prevent this waste?
`zeroscale` sits in front of a server and only spins it up when someone tries
to connect to it, proxying the connection. It can pause a server when no
clients are connected, and unpause it on connection, completely transparently
proxying the connection. It also supports shutting down the server when no
clients are connected, and while starting it up, send a message to the client
telling the user to wait.
## Usage
```
usage: zeroscale [-h] [--listen_port LISTEN_PORT] [--server_host SERVER_HOST]
[--server_port SERVER_PORT] [--plugin PLUGIN] [--method_stop]
[--idle_shutdown IDLE_SHUTDOWN]
[--shutdown_timeout SHUTDOWN_TIMEOUT]
[--plugin_argument PLUGIN_ARGUMENT] [--ignore_bad_clients]
[--info] [--debug] [--working_directory WORKING_DIRECTORY]
[--pause_signal PAUSE_SIGNAL]
[--unpause_signal UNPAUSE_SIGNAL] [--stop_signal STOP_SIGNAL]
Scale a server to zero.
optional arguments:
-h, --help show this help message and exit
--listen_port LISTEN_PORT, -p LISTEN_PORT
Port for the proxy server, where clients will connect.
Defaults to 8080
--server_host SERVER_HOST, -H SERVER_HOST
Hostname that the real server will be listening on.
Defaults to localhost.
--server_port SERVER_PORT, -P SERVER_PORT
Port that the real server will be listening on.
Defaults to the value of listen_port
--plugin PLUGIN Package name of the server plugin. Must be in plugins
dir. Defaults to the generic provider.
--method_stop, -m Instead of pausing the process, stop it completely.
This isn't recommended since extra startup time will
be needed.
--idle_shutdown IDLE_SHUTDOWN, -t IDLE_SHUTDOWN
                        Time in seconds after last client disconnects to
shutdown the server. Default 15.
--shutdown_timeout SHUTDOWN_TIMEOUT, -s SHUTDOWN_TIMEOUT
Time in seconds after proxy server gets SIGINT to kill
the server. Default 15.
--plugin_argument PLUGIN_ARGUMENT, -a PLUGIN_ARGUMENT
Arguments to pass to the Server() constructor in the
plugin. Can be called multiple times.
--ignore_bad_clients, -b
Disable checking for a bad client connection. This
would prevent port scanners from starting servers, but
if your real clients are failing the check, you can
disable it. This is implemented by each server plugin.
The default plugin has no check.
--info, -i Enable info logging.
--debug, -d Enable debug logging. Default is WARNING
--working_directory WORKING_DIRECTORY, -w WORKING_DIRECTORY
Directory to start the server process.
--pause_signal PAUSE_SIGNAL
Signal to send to the server process to pause it. In
int form. Default 20 (SIGTSTP)
--unpause_signal UNPAUSE_SIGNAL
Signal to send to the server process to unpause it. In
int form. Default 18 (SIGCONT)
--stop_signal STOP_SIGNAL
Signal to send to the server process to stop it. In
int form. Default 2 (SIGINT). Note that some plugins
will use stdin to stop their process, in which case
this flag will be ignored.
```
## Example
```
$ zeroscale --plugin minecraft -p 25565 -P 25575 --debug
INFO:zeroscale.plugins.minecraft:Starting Minecraft server
INFO:zeroscale.plugins.minecraft:Minecraft server online
DEBUG:zeroscale.zeroscale:Scheduling Minecraft server stop
DEBUG:zeroscale.zeroscale:Listening on ('::', 25565, 0, 0)
DEBUG:zeroscale.zeroscale:Listening on ('0.0.0.0', 25565)
DEBUG:zeroscale.zeroscale:No clients online for 15 seconds
INFO:zeroscale.plugins.minecraft:Pausing Minecraft server
...
DEBUG:zeroscale.zeroscale:New connection, server is paused
DEBUG:zeroscale.zeroscale:Invalid client attempted connection # Detects invalid client
...
DEBUG:zeroscale.zeroscale:New connection, server is paused
INFO:zeroscale.plugins.minecraft:Unpausing Minecraft server
DEBUG:zeroscale.zeroscale:New connection, total clients: 1 # Proxies connection transparently
...
DEBUG:zeroscale.zeroscale:Lost connection, total clients: 0
DEBUG:zeroscale.zeroscale:Scheduling Server server stop
...
DEBUG:zeroscale.zeroscale:No clients online for 15 seconds
INFO:zeroscale.plugins.minecraft:Pausing Minecraft server
```
And an example of the stopping method:
```
$ zeroscale --plugin minecraft -p 25565 -P 25575 --method_stop --debug
DEBUG:zeroscale.zeroscale:Listening on ('::', 25565, 0, 0)
DEBUG:zeroscale.zeroscale:Listening on ('0.0.0.0', 25565)
...
DEBUG:zeroscale.zeroscale:New connection, server is stopped
DEBUG:zeroscale.zeroscale:Invalid client attempted connection # Detects invalid client
...
DEBUG:zeroscale.zeroscale:New connection, server is stopped
DEBUG:zeroscale.zeroscale:Sending fake response # Actually shows valid server message in client!
INFO:zeroscale.plugins.minecraft:Starting Minecraft server
...
INFO:zeroscale.plugins.minecraft:Minecraft server online
DEBUG:zeroscale.zeroscale:Scheduling Server server stop
...
DEBUG:zeroscale.zeroscale:New connection, server is running
DEBUG:zeroscale.zeroscale:New connection, total clients: 1
DEBUG:zeroscale.zeroscale:Canceling Server server stop
...
DEBUG:zeroscale.zeroscale:Lost connection, total clients: 0
DEBUG:zeroscale.zeroscale:Scheduling Server server stop
...
DEBUG:zeroscale.zeroscale:No clients online for 15 seconds
INFO:zeroscale.plugins.minecraft:Stopping Minecraft server
INFO:zeroscale.plugins.minecraft:Minecraft server offline
```
## Docker
There is also a Docker version that can control docker containers. Instead of
starting and stopping the process, it starts, stops, and pauses the container.
### Usage
```
usage: docker-zeroscale [-h] [--listen_port LISTEN_PORT]
[--server_host SERVER_HOST]
[--server_port SERVER_PORT] [--plugin PLUGIN]
[--method_stop] [--idle_shutdown IDLE_SHUTDOWN]
[--shutdown_timeout SHUTDOWN_TIMEOUT]
[--plugin_argument PLUGIN_ARGUMENT]
[--ignore_bad_clients] [--info] [--debug]
[--disable_exit_stop]
container_id
Scale a container to zero.
positional arguments:
container_id ID or name of the Docker container to control. Must
already exist. Will also try to connect to this
container as the server to proxy unless server_host is
set.
optional arguments:
-h, --help show this help message and exit
--listen_port LISTEN_PORT, -p LISTEN_PORT
Port for the proxy server, where clients will connect.
Defaults to 8080
--server_host SERVER_HOST, -H SERVER_HOST
Hostname that the real server will be listening on.
Defaults to localhost.
--server_port SERVER_PORT, -P SERVER_PORT
Port that the real server will be listening on.
Defaults to the value of listen_port
--plugin PLUGIN Package name of the server plugin. Must be in plugins
dir. Defaults to the generic provider.
--method_stop, -m Instead of pausing the process, stop it completely.
This isn't recommended since extra startup time will
be needed.
--idle_shutdown IDLE_SHUTDOWN, -t IDLE_SHUTDOWN
                        Time in seconds after last client disconnects to
shutdown the server. Default 15.
--shutdown_timeout SHUTDOWN_TIMEOUT, -s SHUTDOWN_TIMEOUT
Time in seconds after proxy server gets SIGINT to kill
the server. Default 15.
--plugin_argument PLUGIN_ARGUMENT, -a PLUGIN_ARGUMENT
Arguments to pass to the Server() constructor in the
plugin. Can be called multiple times.
--ignore_bad_clients, -b
Disable checking for a bad client connection. This
would prevent port scanners from starting servers, but
if your real clients are failing the check, you can
disable it. This is implemented by each server plugin.
The default plugin has no check.
--info, -i Enable info logging.
--debug, -d Enable debug logging. Default is WARNING
--disable_exit_stop Disable stopping the controlled container on exit.
```
### Docker usage
If you want to run `docker-zeroscale` in its own container, there is an image
for that, but you will need to make a few changes.
* The `docker.sock` must be mounted in the container, so that it can control
the proxied container.
* The port that the proxy server will listen on needs to be specified twice:
once as an argument to Docker to tell it to open the port, and once to the
proxy server to tell it to listen on that port.
* Since you don't want the non-proxied port exposed externally, make the
proxied server listen on a non published port (don't use `-p` when starting
it), and connect the zeroscale proxy server to the same Docker network.
All together, the run command would look like this:
```
docker run \
--network my_network \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-p 25565:25565 \
rycieos/zeroscale \
proxied_container_id \
--listen_port=25565
```
`docker-zeroscale` assumes that the container it is controlling is listening
on the hostname of the container and the same port as the proxy server is
listening on by default.
### Docker compose
Since two containers need to work closely together, it's probably best to use
docker-compose to spin them up.
```yml
version: '3'
services:
my_server:
image: my_server
container_name: my_server
restart: always
networks:
- network
zeroscale:
image: rycieos/zeroscale
restart: always
ports:
- 25565:25565
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
command:
- my_server
- --listen_port=25565
depends_on:
- my_server
networks:
- network
networks:
network:
driver: bridge
```
## Plugins
### Minecraft
The original problem server that spawned this project. Should just work,
but if using the pausing method (which is the default), you will need to set
the `max-tick-time` option in `server.properties` to `-1` to prevent the
server from crashing on unpause. While it is possible for a process to detect
that it was paused, Minecraft does not, so it sees a tick as having taken way
too long and force-restarts the server.
If the server needs to be started up (using method stop), it will correctly
show the server as online, but with a message that it is unavailable.
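For reference, that boils down to a single line in `server.properties` (the
rest of the file is left untouched):
```properties
# Disable the watchdog so a paused server is not force-restarted for "hanging"
max-tick-time=-1
```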
### Terraria
Terraria server. Just works. Shows an error message if the server isn't online.
### Custom plugins
Any server can run behind the proxy; simply override any methods of the
`GenericServer` in your own module in the "plugins/" directory. The only
methods you need to override are `is_valid_connection()` and `fake_status()`.
If you don't override those, you are probably better off just using the
`generic` plugin.
```python
import logging
from .generic import Server as GenericServer
# NOTE: `Status` (and the exact GenericServer constructor signature) are
# assumptions in this sketch; import Status from wherever your zeroscale
# version defines it.
logger = logging.getLogger(__name__)
class Server(GenericServer):
def __init__(self,
# Any parameters, will come from --plugin_argument params
):
super().__init__(True)
self.name = "Plugin name"
async def start(self):
if self.status is not Status.stopped:
return
logger.info('Starting server')
self.status = Status.starting
# Whatever to run the server, probably an await asyncio.create_subprocess_exec()
logger.info('Server online')
self.status = Status.running
async def stop(self):
if self.status is not Status.running:
return
logger.info('Stopping server')
self.status = Status.stopping
# Whatever to stop the server
logger.info('Server offline')
self.status = Status.stopped
async def is_valid_connection(self, client_reader):
return # If the connection is from a valid client (to stop port scanners)
def fake_status(self) -> bytes:
return # Some bytes for when a client tries to connect and the server is not online
```
## Systemd
Example systemd configs are located in systemd/ to accompany the plugins.
## Known issues
* Plugins that use subprocess pipes to read stdin, stdout, or stderr don't work
on Cygwin, as the OS is seen as posix and thus doesn't ship with the
ProactorEventLoop, but since the backend OS is Windows, the default event
loop won't work. This is a bug in the Cygwin Python package.
| zeroscale | /zeroscale-0.5.2.tar.gz/zeroscale-0.5.2/README.md | README.md |
import torch
import torch.nn as nn
import functools
from torch.autograd import Variable
import numpy as np
from torch.nn.utils import spectral_norm
# from util.util import SwitchNorm2d
import torch.nn.functional as F
###############################################################################
# Functions
###############################################################################
def weights_init(m):
classname = m.__class__.__name__
if classname.find("Conv") != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find("BatchNorm2d") != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def get_norm_layer(norm_type="instance"):
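    # Return a constructor for the requested normalization layer. Note that the
    # "SwitchNorm" branch relies on SwitchNorm2d, whose import is commented out
    # above, and the "spectral" branch calls spectral_norm() without a module,
    # so only "batch" and "instance" are directly usable here.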
if norm_type == "batch":
norm_layer = functools.partial(nn.BatchNorm2d, affine=True)
elif norm_type == "instance":
norm_layer = functools.partial(nn.InstanceNorm2d, affine=False)
elif norm_type == "spectral":
norm_layer = spectral_norm()
elif norm_type == "SwitchNorm":
norm_layer = SwitchNorm2d
else:
raise NotImplementedError("normalization layer [%s] is not found" % norm_type)
return norm_layer
def print_network(net):
if isinstance(net, list):
net = net[0]
num_params = 0
for param in net.parameters():
num_params += param.numel()
print(net)
print("Total number of parameters: %d" % num_params)
def define_G(input_nc, output_nc, ngf, netG, k_size=3, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1,
n_blocks_local=3, norm='instance', gpu_ids=[], opt=None):
norm_layer = get_norm_layer(norm_type=norm)
if netG == 'global':
# if opt.self_gen:
if opt.use_v2:
netG = GlobalGenerator_DCDCv2(input_nc, output_nc, ngf, k_size, n_downsample_global, norm_layer, opt=opt)
else:
netG = GlobalGenerator_v2(input_nc, output_nc, ngf, k_size, n_downsample_global, n_blocks_global,
norm_layer, opt=opt)
else:
        raise NotImplementedError('generator not implemented!')
print(netG)
if len(gpu_ids) > 0:
assert (torch.cuda.is_available())
netG.cuda(gpu_ids[0])
netG.apply(weights_init)
return netG
def define_D(input_nc, ndf, n_layers_D, opt, norm='instance', use_sigmoid=False, num_D=1, getIntermFeat=False,
gpu_ids=[]):
norm_layer = get_norm_layer(norm_type=norm)
netD = MultiscaleDiscriminator(input_nc, opt, ndf, n_layers_D, norm_layer, use_sigmoid, num_D, getIntermFeat)
print(netD)
if len(gpu_ids) > 0:
assert (torch.cuda.is_available())
netD.cuda(gpu_ids[0])
netD.apply(weights_init)
return netD
class GlobalGenerator_DCDCv2(nn.Module):
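    """Encoder/decoder generator (the 'global' netG).
    The encoder downsamples n_downsampling times with strided convolutions
    (channel width capped at opt.mc) interleaved with ResNet blocks, optionally
    projecting to a compact opt.feat_dim feature map; the decoder mirrors it
    with transposed convolutions. forward() runs the encoder, the decoder, or
    both, depending on `flow`.
    """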
def __init__(
self,
input_nc,
output_nc,
ngf=64,
k_size=3,
n_downsampling=8,
norm_layer=nn.BatchNorm2d,
padding_type="reflect",
opt=None,
):
super(GlobalGenerator_DCDCv2, self).__init__()
activation = nn.ReLU(True)
model = [
nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, min(ngf, opt.mc), kernel_size=7, padding=0),
norm_layer(ngf),
activation,
]
### downsample
for i in range(opt.start_r):
mult = 2 ** i
model += [
nn.Conv2d(
min(ngf * mult, opt.mc),
min(ngf * mult * 2, opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
),
norm_layer(min(ngf * mult * 2, opt.mc)),
activation,
]
for i in range(opt.start_r, n_downsampling - 1):
mult = 2 ** i
model += [
nn.Conv2d(
min(ngf * mult, opt.mc),
min(ngf * mult * 2, opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
),
norm_layer(min(ngf * mult * 2, opt.mc)),
activation,
]
model += [
ResnetBlock(
min(ngf * mult * 2, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
model += [
ResnetBlock(
min(ngf * mult * 2, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
mult = 2 ** (n_downsampling - 1)
if opt.spatio_size == 32:
model += [
nn.Conv2d(
min(ngf * mult, opt.mc),
min(ngf * mult * 2, opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
),
norm_layer(min(ngf * mult * 2, opt.mc)),
activation,
]
if opt.spatio_size == 64:
model += [
ResnetBlock(
min(ngf * mult * 2, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
model += [
ResnetBlock(
min(ngf * mult * 2, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
# model += [nn.Conv2d(min(ngf * mult * 2, opt.mc), min(ngf, opt.mc), 1, 1)]
if opt.feat_dim > 0:
model += [nn.Conv2d(min(ngf * mult * 2, opt.mc), opt.feat_dim, 1, 1)]
self.encoder = nn.Sequential(*model)
# decode
model = []
if opt.feat_dim > 0:
model += [nn.Conv2d(opt.feat_dim, min(ngf * mult * 2, opt.mc), 1, 1)]
# model += [nn.Conv2d(min(ngf, opt.mc), min(ngf * mult * 2, opt.mc), 1, 1)]
o_pad = 0 if k_size == 4 else 1
mult = 2 ** n_downsampling
model += [
ResnetBlock(
min(ngf * mult, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
if opt.spatio_size == 32:
model += [
nn.ConvTranspose2d(
min(ngf * mult, opt.mc),
min(int(ngf * mult / 2), opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
output_padding=o_pad,
),
norm_layer(min(int(ngf * mult / 2), opt.mc)),
activation,
]
if opt.spatio_size == 64:
model += [
ResnetBlock(
min(ngf * mult, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
for i in range(1, n_downsampling - opt.start_r):
mult = 2 ** (n_downsampling - i)
model += [
ResnetBlock(
min(ngf * mult, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
model += [
ResnetBlock(
min(ngf * mult, opt.mc),
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
)
]
model += [
nn.ConvTranspose2d(
min(ngf * mult, opt.mc),
min(int(ngf * mult / 2), opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
output_padding=o_pad,
),
norm_layer(min(int(ngf * mult / 2), opt.mc)),
activation,
]
for i in range(n_downsampling - opt.start_r, n_downsampling):
mult = 2 ** (n_downsampling - i)
model += [
nn.ConvTranspose2d(
min(ngf * mult, opt.mc),
min(int(ngf * mult / 2), opt.mc),
kernel_size=k_size,
stride=2,
padding=1,
output_padding=o_pad,
),
norm_layer(min(int(ngf * mult / 2), opt.mc)),
activation,
]
if opt.use_segmentation_model:
model += [nn.ReflectionPad2d(3), nn.Conv2d(min(ngf, opt.mc), output_nc, kernel_size=7, padding=0)]
else:
model += [
nn.ReflectionPad2d(3),
nn.Conv2d(min(ngf, opt.mc), output_nc, kernel_size=7, padding=0),
nn.Tanh(),
]
self.decoder = nn.Sequential(*model)
def forward(self, input, flow="enc_dec"):
if flow == "enc":
return self.encoder(input)
elif flow == "dec":
return self.decoder(input)
elif flow == "enc_dec":
x = self.encoder(input)
x = self.decoder(x)
return x
# Define a resnet block
class ResnetBlock(nn.Module):
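    """Residual block with configurable padding type and dilation; forward()
    returns x + conv_block(x)."""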
def __init__(
self, dim, padding_type, norm_layer, opt, activation=nn.ReLU(True), use_dropout=False, dilation=1
):
super(ResnetBlock, self).__init__()
self.opt = opt
self.dilation = dilation
self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout)
def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout):
conv_block = []
p = 0
if padding_type == "reflect":
conv_block += [nn.ReflectionPad2d(self.dilation)]
elif padding_type == "replicate":
conv_block += [nn.ReplicationPad2d(self.dilation)]
elif padding_type == "zero":
p = self.dilation
else:
raise NotImplementedError("padding [%s] is not implemented" % padding_type)
conv_block += [
nn.Conv2d(dim, dim, kernel_size=3, padding=p, dilation=self.dilation),
norm_layer(dim),
activation,
]
if use_dropout:
conv_block += [nn.Dropout(0.5)]
p = 0
if padding_type == "reflect":
conv_block += [nn.ReflectionPad2d(1)]
elif padding_type == "replicate":
conv_block += [nn.ReplicationPad2d(1)]
elif padding_type == "zero":
p = 1
else:
raise NotImplementedError("padding [%s] is not implemented" % padding_type)
conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, dilation=1), norm_layer(dim)]
return nn.Sequential(*conv_block)
def forward(self, x):
out = x + self.conv_block(x)
return out
class Encoder(nn.Module):
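    """Conv encoder/decoder whose output features are instance-wise average
    pooled: every feature inside an instance mask is replaced by the mean over
    that instance."""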
def __init__(self, input_nc, output_nc, ngf=32, n_downsampling=4, norm_layer=nn.BatchNorm2d):
super(Encoder, self).__init__()
self.output_nc = output_nc
model = [
nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0),
norm_layer(ngf),
nn.ReLU(True),
]
### downsample
for i in range(n_downsampling):
mult = 2 ** i
model += [
nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
norm_layer(ngf * mult * 2),
nn.ReLU(True),
]
### upsample
for i in range(n_downsampling):
mult = 2 ** (n_downsampling - i)
model += [
nn.ConvTranspose2d(
ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, output_padding=1
),
norm_layer(int(ngf * mult / 2)),
nn.ReLU(True),
]
model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Tanh()]
self.model = nn.Sequential(*model)
def forward(self, input, inst):
outputs = self.model(input)
# instance-wise average pooling
outputs_mean = outputs.clone()
inst_list = np.unique(inst.cpu().numpy().astype(int))
for i in inst_list:
for b in range(input.size()[0]):
indices = (inst[b: b + 1] == int(i)).nonzero() # n x 4
for j in range(self.output_nc):
output_ins = outputs[indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3]]
mean_feat = torch.mean(output_ins).expand_as(output_ins)
outputs_mean[
indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3]
] = mean_feat
return outputs_mean
def SN(module, mode=True):
if mode:
return torch.nn.utils.spectral_norm(module)
return module
class NonLocalBlock2D_with_mask_Res(nn.Module):
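    """Non-local (self-attention) block restricted by a mask.
    Attention weights toward masked positions are zeroed out, so every pixel
    can only borrow features from non-masked positions. With mode="combine",
    the attention output is used inside the masked region while non-masked
    regions keep their original features.
    """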
def __init__(
self,
in_channels,
inter_channels,
mode="add",
re_norm=False,
temperature=1.0,
use_self=False,
cosin=False,
):
super(NonLocalBlock2D_with_mask_Res, self).__init__()
self.cosin = cosin
self.renorm = re_norm
self.in_channels = in_channels
self.inter_channels = inter_channels
self.g = nn.Conv2d(
in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0
)
self.W = nn.Conv2d(
in_channels=self.inter_channels, out_channels=self.in_channels, kernel_size=1, stride=1, padding=0
)
# for pytorch 0.3.1
# nn.init.constant(self.W.weight, 0)
# nn.init.constant(self.W.bias, 0)
# for pytorch 0.4.0
nn.init.constant_(self.W.weight, 0)
nn.init.constant_(self.W.bias, 0)
self.theta = nn.Conv2d(
in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0
)
self.phi = nn.Conv2d(
in_channels=self.in_channels, out_channels=self.inter_channels, kernel_size=1, stride=1, padding=0
)
self.mode = mode
self.temperature = temperature
self.use_self = use_self
norm_layer = get_norm_layer(norm_type="instance")
activation = nn.ReLU(True)
model = []
for i in range(3):
model += [
ResnetBlock(
inter_channels,
padding_type="reflect",
activation=activation,
norm_layer=norm_layer,
opt=None,
)
]
self.res_block = nn.Sequential(*model)
def forward(self, x, mask): ## The shape of mask is Batch*1*H*W
batch_size = x.size(0)
g_x = self.g(x).view(batch_size, self.inter_channels, -1)
g_x = g_x.permute(0, 2, 1)
theta_x = self.theta(x).view(batch_size, self.inter_channels, -1)
theta_x = theta_x.permute(0, 2, 1)
phi_x = self.phi(x).view(batch_size, self.inter_channels, -1)
if self.cosin:
theta_x = F.normalize(theta_x, dim=2)
phi_x = F.normalize(phi_x, dim=1)
f = torch.matmul(theta_x, phi_x)
f /= self.temperature
f_div_C = F.softmax(f, dim=2)
tmp = 1 - mask
mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear")
mask[mask > 0] = 1.0
mask = 1 - mask
tmp = F.interpolate(tmp, (x.size(2), x.size(3)))
mask *= tmp
mask_expand = mask.view(batch_size, 1, -1)
mask_expand = mask_expand.repeat(1, x.size(2) * x.size(3), 1)
# mask = 1 - mask
# mask=F.interpolate(mask,(x.size(2),x.size(3)))
# mask_expand=mask.view(batch_size,1,-1)
# mask_expand=mask_expand.repeat(1,x.size(2)*x.size(3),1)
if self.use_self:
mask_expand[:, range(x.size(2) * x.size(3)), range(x.size(2) * x.size(3))] = 1.0
# print(mask_expand.shape)
# print(f_div_C.shape)
f_div_C = mask_expand * f_div_C
if self.renorm:
f_div_C = F.normalize(f_div_C, p=1, dim=2)
###########################
y = torch.matmul(f_div_C, g_x)
y = y.permute(0, 2, 1).contiguous()
y = y.view(batch_size, self.inter_channels, *x.size()[2:])
W_y = self.W(y)
W_y = self.res_block(W_y)
if self.mode == "combine":
full_mask = mask.repeat(1, self.inter_channels, 1, 1)
z = full_mask * x + (1 - full_mask) * W_y
return z
class MultiscaleDiscriminator(nn.Module):
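    """Runs num_D PatchGAN discriminators on progressively downsampled input;
    with getIntermFeat=True the per-layer activations are returned as well for
    the feature-matching loss."""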
def __init__(self, input_nc, opt, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d,
use_sigmoid=False, num_D=3, getIntermFeat=False):
super(MultiscaleDiscriminator, self).__init__()
self.num_D = num_D
self.n_layers = n_layers
self.getIntermFeat = getIntermFeat
for i in range(num_D):
netD = NLayerDiscriminator(input_nc, opt, ndf, n_layers, norm_layer, use_sigmoid, getIntermFeat)
if getIntermFeat:
for j in range(n_layers + 2):
setattr(self, 'scale' + str(i) + '_layer' + str(j), getattr(netD, 'model' + str(j)))
else:
setattr(self, 'layer' + str(i), netD.model)
self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False)
def singleD_forward(self, model, input):
if self.getIntermFeat:
result = [input]
for i in range(len(model)):
result.append(model[i](result[-1]))
return result[1:]
else:
return [model(input)]
def forward(self, input):
num_D = self.num_D
result = []
input_downsampled = input
for i in range(num_D):
if self.getIntermFeat:
model = [getattr(self, 'scale' + str(num_D - 1 - i) + '_layer' + str(j)) for j in
range(self.n_layers + 2)]
else:
model = getattr(self, 'layer' + str(num_D - 1 - i))
result.append(self.singleD_forward(model, input_downsampled))
if i != (num_D - 1):
input_downsampled = self.downsample(input_downsampled)
return result
# Defines the PatchGAN discriminator with the specified arguments.
class NLayerDiscriminator(nn.Module):
def __init__(self, input_nc, opt, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_sigmoid=False,
getIntermFeat=False):
super(NLayerDiscriminator, self).__init__()
self.getIntermFeat = getIntermFeat
self.n_layers = n_layers
kw = 4
padw = int(np.ceil((kw - 1.0) / 2))
sequence = [
[SN(nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), opt.use_SN), nn.LeakyReLU(0.2, True)]]
nf = ndf
for n in range(1, n_layers):
nf_prev = nf
nf = min(nf * 2, 512)
sequence += [[
SN(nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw), opt.use_SN),
norm_layer(nf), nn.LeakyReLU(0.2, True)
]]
nf_prev = nf
nf = min(nf * 2, 512)
sequence += [[
SN(nn.Conv2d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw), opt.use_SN),
norm_layer(nf),
nn.LeakyReLU(0.2, True)
]]
sequence += [[SN(nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw), opt.use_SN)]]
if use_sigmoid:
sequence += [[nn.Sigmoid()]]
if getIntermFeat:
for n in range(len(sequence)):
setattr(self, 'model' + str(n), nn.Sequential(*sequence[n]))
else:
sequence_stream = []
for n in range(len(sequence)):
sequence_stream += sequence[n]
self.model = nn.Sequential(*sequence_stream)
def forward(self, input):
if self.getIntermFeat:
res = [input]
for n in range(self.n_layers + 2):
model = getattr(self, 'model' + str(n))
res.append(model(res[-1]))
return res[1:]
else:
return self.model(input)
class Patch_Attention_4(nn.Module):  # When combining the feature maps, use a conv and the mask
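    """Hard patch attention over non-overlapping patch_size x patch_size patches.
    The feature map is unfolded into patches, cosine similarity is computed
    against the patches that are not (mostly) masked, and each patch is replaced
    by its best-matching non-masked patch. The composed map is then fused with
    the input features and the mask through a 3x3 conv (F_Combine).
    inference_forward computes the same result but only recomposes the masked
    patches, to save memory.
    """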
def __init__(self, in_channels, inter_channels, patch_size):
super(Patch_Attention_4, self).__init__()
self.patch_size = patch_size
self.F_Combine = nn.Conv2d(in_channels=1025, out_channels=512, kernel_size=3, stride=1, padding=1, bias=True)
norm_layer = get_norm_layer(norm_type="instance")
activation = nn.ReLU(True)
model = []
for i in range(1):
model += [
ResnetBlock(
inter_channels,
padding_type="reflect",
activation=activation,
norm_layer=norm_layer,
opt=None,
)
]
self.res_block = nn.Sequential(*model)
def Hard_Compose(self, input, dim, index):
# batch index select
# input: [B,C,HW]
# dim: scalar > 0
# index: [B, HW]
views = [input.size(0)] + [1 if i != dim else -1 for i in range(1, len(input.size()))]
expanse = list(input.size())
expanse[0] = -1
expanse[dim] = -1
index = index.view(views).expand(expanse)
return torch.gather(input, dim, index)
def forward(self, z, mask): ## The shape of mask is Batch*1*H*W
x = self.res_block(z)
b, c, h, w = x.shape
## mask resize + dilation
# tmp = 1 - mask
mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear")
mask[mask > 0] = 1.0
# mask = 1 - mask
# tmp = F.interpolate(tmp, (x.size(2), x.size(3)))
# mask *= tmp
# mask=1-mask
## 1: mask position 0: non-mask
mask_unfold = F.unfold(mask, kernel_size=(self.patch_size, self.patch_size), padding=0, stride=self.patch_size)
non_mask_region = (torch.mean(mask_unfold, dim=1, keepdim=True) > 0.6).float()
all_patch_num = h * w / self.patch_size / self.patch_size
non_mask_region = non_mask_region.repeat(1, int(all_patch_num), 1)
x_unfold = F.unfold(x, kernel_size=(self.patch_size, self.patch_size), padding=0, stride=self.patch_size)
y_unfold = x_unfold.permute(0, 2, 1)
x_unfold_normalized = F.normalize(x_unfold, dim=1)
y_unfold_normalized = F.normalize(y_unfold, dim=2)
correlation_matrix = torch.bmm(y_unfold_normalized, x_unfold_normalized)
correlation_matrix = correlation_matrix.masked_fill(non_mask_region == 1., -1e9)
correlation_matrix = F.softmax(correlation_matrix, dim=2)
# print(correlation_matrix)
R, max_arg = torch.max(correlation_matrix, dim=2)
composed_unfold = self.Hard_Compose(x_unfold, 2, max_arg)
composed_fold = F.fold(composed_unfold, output_size=(h, w), kernel_size=(self.patch_size, self.patch_size),
padding=0, stride=self.patch_size)
concat_1 = torch.cat((z, composed_fold, mask), dim=1)
concat_1 = self.F_Combine(concat_1)
return concat_1
def inference_forward(self, z, mask): ## Reduce the extra memory cost
x = self.res_block(z)
b, c, h, w = x.shape
## mask resize + dilation
# tmp = 1 - mask
mask = F.interpolate(mask, (x.size(2), x.size(3)), mode="bilinear")
mask[mask > 0] = 1.0
# mask = 1 - mask
# tmp = F.interpolate(tmp, (x.size(2), x.size(3)))
# mask *= tmp
# mask=1-mask
## 1: mask position 0: non-mask
mask_unfold = F.unfold(mask, kernel_size=(self.patch_size, self.patch_size), padding=0, stride=self.patch_size)
non_mask_region = (torch.mean(mask_unfold, dim=1, keepdim=True) > 0.6).float()[0, 0, :] # 1*1*all_patch_num
all_patch_num = h * w / self.patch_size / self.patch_size
mask_index = torch.nonzero(non_mask_region, as_tuple=True)[0]
if len(mask_index) == 0: ## No mask patch is selected, no attention is needed
composed_fold = x
else:
unmask_index = torch.nonzero(non_mask_region != 1, as_tuple=True)[0]
x_unfold = F.unfold(x, kernel_size=(self.patch_size, self.patch_size), padding=0, stride=self.patch_size)
Query_Patch = torch.index_select(x_unfold, 2, mask_index)
Key_Patch = torch.index_select(x_unfold, 2, unmask_index)
Query_Patch = Query_Patch.permute(0, 2, 1)
Query_Patch_normalized = F.normalize(Query_Patch, dim=2)
Key_Patch_normalized = F.normalize(Key_Patch, dim=1)
correlation_matrix = torch.bmm(Query_Patch_normalized, Key_Patch_normalized)
correlation_matrix = F.softmax(correlation_matrix, dim=2)
R, max_arg = torch.max(correlation_matrix, dim=2)
composed_unfold = self.Hard_Compose(Key_Patch, 2, max_arg)
x_unfold[:, :, mask_index] = composed_unfold
composed_fold = F.fold(x_unfold, output_size=(h, w), kernel_size=(self.patch_size, self.patch_size),
padding=0, stride=self.patch_size)
concat_1 = torch.cat((z, composed_fold, mask), dim=1)
concat_1 = self.F_Combine(concat_1)
return concat_1
##############################################################################
# Losses
##############################################################################
class GANLoss(nn.Module):
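    """LSGAN (MSE) or vanilla (BCE) GAN loss with cached real/fake label
    tensors; __call__ accepts either a single discriminator output or the
    nested lists produced by MultiscaleDiscriminator and sums over scales."""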
def __init__(self, use_lsgan=True, target_real_label=1.0, target_fake_label=0.0,
tensor=torch.FloatTensor):
super(GANLoss, self).__init__()
self.real_label = target_real_label
self.fake_label = target_fake_label
self.real_label_var = None
self.fake_label_var = None
self.Tensor = tensor
if use_lsgan:
self.loss = nn.MSELoss()
else:
self.loss = nn.BCELoss()
def get_target_tensor(self, input, target_is_real):
target_tensor = None
if target_is_real:
create_label = ((self.real_label_var is None) or
(self.real_label_var.numel() != input.numel()))
if create_label:
real_tensor = self.Tensor(input.size()).fill_(self.real_label)
self.real_label_var = Variable(real_tensor, requires_grad=False)
target_tensor = self.real_label_var
else:
create_label = ((self.fake_label_var is None) or
(self.fake_label_var.numel() != input.numel()))
if create_label:
fake_tensor = self.Tensor(input.size()).fill_(self.fake_label)
self.fake_label_var = Variable(fake_tensor, requires_grad=False)
target_tensor = self.fake_label_var
return target_tensor
def __call__(self, input, target_is_real):
if isinstance(input[0], list):
loss = 0
for input_i in input:
pred = input_i[-1]
target_tensor = self.get_target_tensor(pred, target_is_real)
loss += self.loss(pred, target_tensor)
return loss
else:
target_tensor = self.get_target_tensor(input[-1], target_is_real)
return self.loss(input[-1], target_tensor)
# VGG Loss
from torchvision import models
class VGG19_torch(torch.nn.Module):
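    """Pretrained VGG-19 sliced into five feature stages for perceptual loss."""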
def __init__(self, requires_grad=False):
super(VGG19_torch, self).__init__()
vgg_pretrained_features = models.vgg19(pretrained=True).features
self.slice1 = torch.nn.Sequential()
self.slice2 = torch.nn.Sequential()
self.slice3 = torch.nn.Sequential()
self.slice4 = torch.nn.Sequential()
self.slice5 = torch.nn.Sequential()
for x in range(2):
self.slice1.add_module(str(x), vgg_pretrained_features[x])
for x in range(2, 7):
self.slice2.add_module(str(x), vgg_pretrained_features[x])
for x in range(7, 12):
self.slice3.add_module(str(x), vgg_pretrained_features[x])
for x in range(12, 21):
self.slice4.add_module(str(x), vgg_pretrained_features[x])
for x in range(21, 30):
self.slice5.add_module(str(x), vgg_pretrained_features[x])
if not requires_grad:
for param in self.parameters():
param.requires_grad = False
def forward(self, X):
h_relu1 = self.slice1(X)
h_relu2 = self.slice2(h_relu1)
h_relu3 = self.slice3(h_relu2)
h_relu4 = self.slice4(h_relu3)
h_relu5 = self.slice5(h_relu4)
out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
return out
class VGGLoss_torch(nn.Module):
def __init__(self, gpu_ids):
super(VGGLoss_torch, self).__init__()
self.vgg = VGG19_torch().cuda()
self.criterion = nn.L1Loss()
self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
def forward(self, x, y):
x_vgg, y_vgg = self.vgg(x), self.vgg(y)
loss = 0
for i in range(len(x_vgg)):
loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())
return loss | zeroscratches | /erasescratches/models/networks.py | networks.py |
import os
import torch
import sys
class BaseModel(torch.nn.Module):
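    """Common plumbing shared by the concrete models: option bookkeeping plus
    helpers to save and load networks and optimizers from the checkpoint dir."""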
def name(self):
return "BaseModel"
def initialize(self, opt):
self.opt = opt
self.gpu_ids = opt.gpu_ids
self.isTrain = opt.isTrain
self.Tensor = torch.cuda.FloatTensor if self.gpu_ids else torch.Tensor
self.save_dir = os.path.join(opt.checkpoints_dir, opt.name)
def set_input(self, input):
self.input = input
def forward(self):
pass
# used in test time, no backprop
def test(self):
pass
def get_image_paths(self):
pass
def optimize_parameters(self):
pass
def get_current_visuals(self):
return self.input
def get_current_errors(self):
return {}
def save(self, label):
pass
# helper saving function that can be used by subclasses
def save_network(self, network, network_label, epoch_label, gpu_ids):
save_filename = "%s_net_%s.pth" % (epoch_label, network_label)
save_path = os.path.join(self.save_dir, save_filename)
torch.save(network.cpu().state_dict(), save_path)
if len(gpu_ids) and torch.cuda.is_available():
network.cuda()
def save_optimizer(self, optimizer, optimizer_label, epoch_label):
save_filename = "%s_optimizer_%s.pth" % (epoch_label, optimizer_label)
save_path = os.path.join(self.save_dir, save_filename)
torch.save(optimizer.state_dict(), save_path)
def load_optimizer(self, optimizer, optimizer_label, epoch_label, save_dir=""):
save_filename = "%s_optimizer_%s.pth" % (epoch_label, optimizer_label)
if not save_dir:
save_dir = self.save_dir
save_path = os.path.join(save_dir, save_filename)
if not os.path.isfile(save_path):
            print("%s does not exist yet!" % save_path)
else:
optimizer.load_state_dict(torch.load(save_path))
# helper loading function that can be used by subclasses
def load_network(self, network, network_label, epoch_label, save_dir=""):
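        # Load weights from <save_dir>/<epoch>_net_<label>.pth. If the strict
        # load fails, fall back to copying only the layers whose names and
        # shapes match, and report what could not be initialized.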
save_filename = "%s_net_%s.pth" % (epoch_label, network_label)
if not save_dir:
save_dir = self.save_dir
# print(save_dir)
# print(self.save_dir)
save_path = os.path.join(save_dir, save_filename)
if not os.path.isfile(save_path):
            print("%s does not exist yet!" % save_path)
# if network_label == 'G':
# raise('Generator must exist!')
else:
# network.load_state_dict(torch.load(save_path))
try:
# print(save_path)
network.load_state_dict(torch.load(save_path))
            except Exception:
pretrained_dict = torch.load(save_path)
model_dict = network.state_dict()
try:
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
network.load_state_dict(pretrained_dict)
# if self.opt.verbose:
print(
"Pretrained network %s has excessive layers; Only loading layers that are used"
% network_label
)
                except Exception:
print(
"Pretrained network %s has fewer layers; The following are not initialized:"
% network_label
)
for k, v in pretrained_dict.items():
if v.size() == model_dict[k].size():
model_dict[k] = v
if sys.version_info >= (3, 0):
not_initialized = set()
else:
from sets import Set
not_initialized = Set()
for k, v in model_dict.items():
if k not in pretrained_dict or v.size() != pretrained_dict[k].size():
not_initialized.add(k.split(".")[0])
print(sorted(not_initialized))
network.load_state_dict(model_dict)
    def update_learning_rate(self):
pass | zeroscratches | /erasescratches/models/base_model.py | base_model.py |
import torch
from .NonLocal_feature_mapping_model import *
from .base_model import BaseModel
from zeroscratches.util.image_pool import ImagePool
class Mapping_Model(nn.Module):
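    """Feature-to-feature translation network (no global restoration branch):
    a few 3x3 convs widen the channels (capped at mc), a stack of ResNet blocks
    processes them, and further convs bring the width back down, optionally
    projecting to opt.feat_dim."""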
def __init__(self, nc, mc=64, n_blocks=3, norm="instance", padding_type="reflect", opt=None):
super(Mapping_Model, self).__init__()
norm_layer = networks.get_norm_layer(norm_type=norm)
activation = nn.ReLU(True)
model = []
tmp_nc = 64
n_up = 4
print("Mapping: You are using the mapping model without global restoration.")
for i in range(n_up):
ic = min(tmp_nc * (2 ** i), mc)
oc = min(tmp_nc * (2 ** (i + 1)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
for i in range(n_blocks):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
for i in range(n_up - 1):
ic = min(64 * (2 ** (4 - i)), mc)
oc = min(64 * (2 ** (3 - i)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
model += [nn.Conv2d(tmp_nc * 2, tmp_nc, 3, 1, 1)]
if opt.feat_dim > 0 and opt.feat_dim < 64:
model += [norm_layer(tmp_nc), activation, nn.Conv2d(tmp_nc, opt.feat_dim, 1, 1)]
# model += [nn.Conv2d(64, 1, 1, 1, 0)]
self.model = nn.Sequential(*model)
def forward(self, input):
return self.model(input)
class Pix2PixHDModel_Mapping(BaseModel):
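    """Training/inference wrapper for the mapping stage: two pretrained
    GlobalGenerator_DCDCv2 networks (netG_A, netG_B) are kept frozen, and the
    mapping network between them is trained against a multiscale discriminator
    with GAN, feature-matching, VGG and (smooth) L1 losses as configured by the
    options."""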
def name(self):
return "Pix2PixHDModel_Mapping"
def init_loss_filter(self, use_gan_feat_loss, use_vgg_loss, use_smooth_l1, stage_1_feat_l2):
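        # Build a filter that keeps only the loss terms enabled by the options,
        # so the returned loss list stays aligned with self.loss_names.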
flags = (True, True, use_gan_feat_loss, use_vgg_loss, True, True, use_smooth_l1, stage_1_feat_l2)
def loss_filter(g_feat_l2, g_gan, g_gan_feat, g_vgg, d_real, d_fake, smooth_l1, stage_1_feat_l2):
return [
l
for (l, f) in zip(
(g_feat_l2, g_gan, g_gan_feat, g_vgg, d_real, d_fake, smooth_l1, stage_1_feat_l2), flags
)
if f
]
return loss_filter
def initialize(self, opt):
BaseModel.initialize(self, opt)
if opt.resize_or_crop != "none" or not opt.isTrain:
torch.backends.cudnn.benchmark = True
self.isTrain = opt.isTrain
input_nc = opt.label_nc if opt.label_nc != 0 else opt.input_nc
##### define networks
# Generator network
netG_input_nc = input_nc
self.netG_A = networks.GlobalGenerator_DCDCv2(
netG_input_nc,
opt.output_nc,
opt.ngf,
opt.k_size,
opt.n_downsample_global,
networks.get_norm_layer(norm_type=opt.norm),
opt=opt,
)
self.netG_B = networks.GlobalGenerator_DCDCv2(
netG_input_nc,
opt.output_nc,
opt.ngf,
opt.k_size,
opt.n_downsample_global,
networks.get_norm_layer(norm_type=opt.norm),
opt=opt,
)
if opt.non_local == "Setting_42" or opt.NL_use_mask:
if opt.mapping_exp == 1:
self.mapping_net = Mapping_Model_with_mask_2(
min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc),
opt.map_mc,
n_blocks=opt.mapping_n_block,
opt=opt,
)
else:
self.mapping_net = Mapping_Model_with_mask(
min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc),
opt.map_mc,
n_blocks=opt.mapping_n_block,
opt=opt,
)
else:
self.mapping_net = Mapping_Model(
min(opt.ngf * 2 ** opt.n_downsample_global, opt.mc),
opt.map_mc,
n_blocks=opt.mapping_n_block,
opt=opt,
)
self.mapping_net.apply(networks.weights_init)
if opt.load_pretrain != "":
self.load_network(self.mapping_net, "mapping_net", opt.which_epoch, opt.load_pretrain)
if not opt.no_load_VAE:
self.load_network(self.netG_A, "G", opt.use_vae_which_epoch, opt.load_pretrainA)
self.load_network(self.netG_B, "G", opt.use_vae_which_epoch, opt.load_pretrainB)
for param in self.netG_A.parameters():
param.requires_grad = False
for param in self.netG_B.parameters():
param.requires_grad = False
self.netG_A.eval()
self.netG_B.eval()
if opt.gpu_ids:
self.netG_A.cuda(opt.gpu_ids[0])
self.netG_B.cuda(opt.gpu_ids[0])
self.mapping_net.cuda(opt.gpu_ids[0])
if not self.isTrain:
self.load_network(self.mapping_net, "mapping_net", opt.which_epoch)
# Discriminator network
if self.isTrain:
use_sigmoid = opt.no_lsgan
netD_input_nc = opt.ngf * 2 if opt.feat_gan else input_nc + opt.output_nc
if not opt.no_instance:
netD_input_nc += 1
self.netD = networks.define_D(netD_input_nc, opt.ndf, opt.n_layers_D, opt, opt.norm, use_sigmoid,
opt.num_D, not opt.no_ganFeat_loss, gpu_ids=self.gpu_ids)
# set loss functions and optimizers
if self.isTrain:
if opt.pool_size > 0 and (len(self.gpu_ids)) > 1:
raise NotImplementedError("Fake Pool Not Implemented for MultiGPU")
self.fake_pool = ImagePool(opt.pool_size)
self.old_lr = opt.lr
# define loss functions
self.loss_filter = self.init_loss_filter(not opt.no_ganFeat_loss, not opt.no_vgg_loss, opt.Smooth_L1,
opt.use_two_stage_mapping)
self.criterionGAN = networks.GANLoss(use_lsgan=not opt.no_lsgan, tensor=self.Tensor)
self.criterionFeat = torch.nn.L1Loss()
self.criterionFeat_feat = torch.nn.L1Loss() if opt.use_l1_feat else torch.nn.MSELoss()
if self.opt.image_L1:
self.criterionImage = torch.nn.L1Loss()
else:
self.criterionImage = torch.nn.SmoothL1Loss()
print(self.criterionFeat_feat)
if not opt.no_vgg_loss:
self.criterionVGG = networks.VGGLoss_torch(self.gpu_ids)
# Names so we can breakout loss
self.loss_names = self.loss_filter('G_Feat_L2', 'G_GAN', 'G_GAN_Feat', 'G_VGG', 'D_real', 'D_fake',
'Smooth_L1', 'G_Feat_L2_Stage_1')
# initialize optimizers
# optimizer G
if opt.no_TTUR:
beta1, beta2 = opt.beta1, 0.999
G_lr, D_lr = opt.lr, opt.lr
else:
beta1, beta2 = 0, 0.9
G_lr, D_lr = opt.lr / 2, opt.lr * 2
if not opt.no_load_VAE:
params = list(self.mapping_net.parameters())
self.optimizer_mapping = torch.optim.Adam(params, lr=G_lr, betas=(beta1, beta2))
# optimizer D
params = list(self.netD.parameters())
self.optimizer_D = torch.optim.Adam(params, lr=D_lr, betas=(beta1, beta2))
print("---------- Optimizers initialized -------------")
def encode_input(self, label_map, inst_map=None, real_image=None, feat_map=None, infer=False):
if self.opt.label_nc == 0:
input_label = label_map.data.cuda()
else:
# create one-hot vector for label map
size = label_map.size()
oneHot_size = (size[0], self.opt.label_nc, size[2], size[3])
input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_()
input_label = input_label.scatter_(1, label_map.data.long().cuda(), 1.0)
if self.opt.data_type == 16:
input_label = input_label.half()
# get edges from instance map
if not self.opt.no_instance:
inst_map = inst_map.data.cuda()
edge_map = self.get_edges(inst_map)
input_label = torch.cat((input_label, edge_map), dim=1)
input_label = Variable(input_label, volatile=infer)
# real images for training
if real_image is not None:
real_image = Variable(real_image.data.cuda())
return input_label, inst_map, real_image, feat_map
def discriminate(self, input_label, test_image, use_pool=False):
input_concat = torch.cat((input_label, test_image.detach()), dim=1)
if use_pool:
fake_query = self.fake_pool.query(input_concat)
return self.netD.forward(fake_query)
else:
return self.netD.forward(input_concat)
def forward(self, label, inst, image, feat, pair=True, infer=False, last_label=None, last_image=None):
# Encode Inputs
input_label, inst_map, real_image, feat_map = self.encode_input(label, inst, image, feat)
# Fake Generation
input_concat = input_label
label_feat = self.netG_A.forward(input_concat, flow='enc')
# print('label:')
# print(label_feat.min(), label_feat.max(), label_feat.mean())
# label_feat = label_feat / 16.0
if self.opt.NL_use_mask:
label_feat_map = self.mapping_net(label_feat.detach(), inst)
else:
label_feat_map = self.mapping_net(label_feat.detach())
fake_image = self.netG_B.forward(label_feat_map, flow='dec')
image_feat = self.netG_B.forward(real_image, flow='enc')
loss_feat_l2_stage_1 = 0
loss_feat_l2 = self.criterionFeat_feat(label_feat_map, image_feat.data) * self.opt.l2_feat
if self.opt.feat_gan:
# Fake Detection and Loss
pred_fake_pool = self.discriminate(label_feat.detach(), label_feat_map, use_pool=True)
loss_D_fake = self.criterionGAN(pred_fake_pool, False)
# Real Detection and Loss
pred_real = self.discriminate(label_feat.detach(), image_feat)
loss_D_real = self.criterionGAN(pred_real, True)
# GAN loss (Fake Passability Loss)
pred_fake = self.netD.forward(torch.cat((label_feat.detach(), label_feat_map), dim=1))
loss_G_GAN = self.criterionGAN(pred_fake, True)
else:
# Fake Detection and Loss
pred_fake_pool = self.discriminate(input_label, fake_image, use_pool=True)
loss_D_fake = self.criterionGAN(pred_fake_pool, False)
# Real Detection and Loss
if pair:
pred_real = self.discriminate(input_label, real_image)
else:
pred_real = self.discriminate(last_label, last_image)
loss_D_real = self.criterionGAN(pred_real, True)
# GAN loss (Fake Passability Loss)
pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1))
loss_G_GAN = self.criterionGAN(pred_fake, True)
# GAN feature matching loss
loss_G_GAN_Feat = 0
if not self.opt.no_ganFeat_loss and pair:
feat_weights = 4.0 / (self.opt.n_layers_D + 1)
D_weights = 1.0 / self.opt.num_D
for i in range(self.opt.num_D):
for j in range(len(pred_fake[i]) - 1):
tmp = self.criterionFeat(pred_fake[i][j], pred_real[i][j].detach()) * self.opt.lambda_feat
loss_G_GAN_Feat += D_weights * feat_weights * tmp
else:
loss_G_GAN_Feat = torch.zeros(1).to(label.device)
# VGG feature matching loss
loss_G_VGG = 0
if not self.opt.no_vgg_loss:
loss_G_VGG = self.criterionVGG(fake_image, real_image) * self.opt.lambda_feat if pair else torch.zeros(
1).to(label.device)
smooth_l1_loss = 0
if self.opt.Smooth_L1:
smooth_l1_loss = self.criterionImage(fake_image, real_image) * self.opt.L1_weight
return [self.loss_filter(loss_feat_l2, loss_G_GAN, loss_G_GAN_Feat, loss_G_VGG, loss_D_real, loss_D_fake,
smooth_l1_loss, loss_feat_l2_stage_1), None if not infer else fake_image]
def inference(self, label, inst):
use_gpu = len(self.opt.gpu_ids) > 0
if use_gpu:
input_concat = label.data.cuda()
inst_data = inst.cuda()
else:
input_concat = label.data
inst_data = inst
label_feat = self.netG_A.forward(input_concat, flow="enc")
if self.opt.NL_use_mask:
if self.opt.inference_optimize:
label_feat_map = self.mapping_net.inference_forward(label_feat.detach(), inst_data)
else:
label_feat_map = self.mapping_net(label_feat.detach(), inst_data)
else:
label_feat_map = self.mapping_net(label_feat.detach())
fake_image = self.netG_B.forward(label_feat_map, flow="dec")
return fake_image
class InferenceModel(Pix2PixHDModel_Mapping):
def forward(self, label, inst):
return self.inference(label, inst)

# ===== end of zeroscratches / erasescratches/models/mapping_model.py =====
import logging
import torch.nn as nn
from . import networks
class Mapping_Model_with_mask(nn.Module):
def __init__(self, nc, mc=64, n_blocks=3, norm="instance", padding_type="reflect", opt=None):
super(Mapping_Model_with_mask, self).__init__()
norm_layer = networks.get_norm_layer(norm_type=norm)
activation = nn.ReLU(True)
model = []
tmp_nc = 64
n_up = 4
for i in range(n_up):
ic = min(tmp_nc * (2 ** i), mc)
oc = min(tmp_nc * (2 ** (i + 1)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
self.before_NL = nn.Sequential(*model)
if opt.NL_res:
self.NL = networks.NonLocalBlock2D_with_mask_Res(
mc,
mc,
opt.NL_fusion_method,
opt.correlation_renormalize,
opt.softmax_temperature,
opt.use_self,
opt.cosin_similarity,
)
print("You are using NL + Res")
model = []
for i in range(n_blocks):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
for i in range(n_up - 1):
ic = min(64 * (2 ** (4 - i)), mc)
oc = min(64 * (2 ** (3 - i)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
model += [nn.Conv2d(tmp_nc * 2, tmp_nc, 3, 1, 1)]
if opt.feat_dim > 0 and opt.feat_dim < 64:
model += [norm_layer(tmp_nc), activation, nn.Conv2d(tmp_nc, opt.feat_dim, 1, 1)]
# model += [nn.Conv2d(64, 1, 1, 1, 0)]
self.after_NL = nn.Sequential(*model)
def forward(self, input, mask):
x1 = self.before_NL(input)
del input
x2 = self.NL(x1, mask)
del x1, mask
x3 = self.after_NL(x2)
del x2
return x3
class Mapping_Model_with_mask_2(nn.Module): # Multi-Scale Patch Attention
def __init__(self, nc, mc=64, n_blocks=3, norm="instance", padding_type="reflect", opt=None):
super(Mapping_Model_with_mask_2, self).__init__()
norm_layer = networks.get_norm_layer(norm_type=norm)
activation = nn.ReLU(True)
model = []
tmp_nc = 64
n_up = 4
for i in range(n_up):
ic = min(tmp_nc * (2 ** i), mc)
oc = min(tmp_nc * (2 ** (i + 1)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
for i in range(2):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
logging.info("Mapping: You are using multi-scale patch attention, conv combine + mask input")
self.before_NL = nn.Sequential(*model)
if opt.mapping_exp == 1:
self.NL_scale_1 = networks.Patch_Attention_4(mc, mc, 8)
model = []
for i in range(2):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
self.res_block_1 = nn.Sequential(*model)
if opt.mapping_exp == 1:
self.NL_scale_2 = networks.Patch_Attention_4(mc, mc, 4)
model = []
for i in range(2):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
self.res_block_2 = nn.Sequential(*model)
if opt.mapping_exp == 1:
self.NL_scale_3 = networks.Patch_Attention_4(mc, mc, 2)
# self.NL_scale_3=networks.Patch_Attention_2(mc,mc,2)
model = []
for i in range(2):
model += [
networks.ResnetBlock(
mc,
padding_type=padding_type,
activation=activation,
norm_layer=norm_layer,
opt=opt,
dilation=opt.mapping_net_dilation,
)
]
for i in range(n_up - 1):
ic = min(64 * (2 ** (4 - i)), mc)
oc = min(64 * (2 ** (3 - i)), mc)
model += [nn.Conv2d(ic, oc, 3, 1, 1), norm_layer(oc), activation]
model += [nn.Conv2d(tmp_nc * 2, tmp_nc, 3, 1, 1)]
if opt.feat_dim > 0 and opt.feat_dim < 64:
model += [norm_layer(tmp_nc), activation, nn.Conv2d(tmp_nc, opt.feat_dim, 1, 1)]
# model += [nn.Conv2d(64, 1, 1, 1, 0)]
self.after_NL = nn.Sequential(*model)
def forward(self, input, mask):
x1 = self.before_NL(input)
x2 = self.NL_scale_1(x1, mask)
x3 = self.res_block_1(x2)
x4 = self.NL_scale_2(x3, mask)
x5 = self.res_block_2(x4)
x6 = self.NL_scale_3(x5, mask)
x7 = self.after_NL(x6)
return x7
def inference_forward(self, input, mask):
x1 = self.before_NL(input)
del input
x2 = self.NL_scale_1.inference_forward(x1, mask)
del x1
x3 = self.res_block_1(x2)
del x2
x4 = self.NL_scale_2.inference_forward(x3, mask)
del x3
x5 = self.res_block_2(x4)
del x4
x6 = self.NL_scale_3.inference_forward(x5, mask)
del x5
x7 = self.after_NL(x6)
del x6
return x7

# ===== end of zeroscratches / erasescratches/models/NonLocal_feature_mapping_model.py =====
# Label Agnostic Pre-training for Zero-shot Text Classification
This repository contains the code and data for the Findings of ACL'23 paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***.
In this paper, we investigate the task of zero-shot text classification with the aim of improving the ability of PLMs to generalize to both seen and unseen data across domains without the need for additional training. We introduce two new simple yet effective training strategies, *Implicit training* & *Explicit pre-training*, which specifically inject aspect-level understanding into the model at train time. To evaluate this, we release UTCD, a new benchmark dataset for evaluating text classification in zero-shot settings. **Models, data & paper coming soon!**
## Universal Text Classification Dataset (UTCD)
UTCD is a compilation of 18 classification datasets spanning 3 categories of Sentiment, Intent/Dialogue and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~ 6M/800K train/test examples.
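As a rough illustration of the data format (a hypothetical example, not copied from the released files; see the dataset card for the exact schema), each UTCD example pairs a text with one or more descriptive, natural-language labels drawn from the aspect of its source dataset:

```python
# Hypothetical UTCD-style example for illustration only; field names follow the conventions
# used in this repo's preprocessing code, not necessarily the released schema.
sample = {
    "text": "my card was charged twice for the same purchase",
    "labels": ["card payment fee charged"],  # descriptive label(s) in natural language
    "aspect": "intent",                      # one of: sentiment, intent, topic
    "dataset_name": "banking77",
}
```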
UTCD Datasets & Principles:
- Sentiment
- GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
- TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
- Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
- Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
- Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- Intent/Dialogue
- Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
- Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
- SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
- Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
- Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
- NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
- Topic
- AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
- Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
- MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
- BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
- Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)
In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:
- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized so that they describe the text in natural language.
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, and Legal, each comprising sequences of varied length (long and short). The datasets are listed above.
## User’s Guide (HuggingFace)
The [UTCD dataset](https://huggingface.co/datasets/claritylab/UTCD) and [trained models](https://huggingface.co/models?other=zeroshot_classifier) are available on HuggingFace. Please refer to the instructions there.
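For programmatic access, a minimal sketch with the `datasets` library could look like the following; the hub ID comes from the link above, but the exact configuration and split names are whatever the dataset card specifies, so treat this call as an assumption to verify:

```python
# Minimal sketch; assumes the hub ID linked above, check the dataset card for config/split names.
from datasets import load_dataset

utcd = load_dataset("claritylab/UTCD")
print(utcd)  # available splits and features as defined on the hub
```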
## User’s Guide (Local)
### Setup environment
OS: UNIX; Python version `3.8.10`; CUDA version `11.6`.
Create conda environment:
```bash
conda create -n zs-cls python=3.8.10 pip
```
Move to project root directory, install python packages:
```bash
pip3 install -r requirements.txt
```
Add current directory for python to look for our local package:
```bash
export PYTHONPATH=$PYTHONPATH:`pwd`
```
### Note
With the package directory at system path `<BASE_PATH>/zero-shot-text-classification`, all trained models will be saved to `<BASE_PATH>/models` and all evaluation CSV files to `<BASE_PATH>/eval`.
Below we include command line arguments and example train/eval commands for models in our paper.
### BERT Sequence Classifier
**Arguments**
- `dataset`: Dataset to train/evaluate the model on, pass `all` for all datasets
- `domain`: One of [`in`, `out`], the domain of dataset(s) to train/evaluate on
- `normalize_aspect`: If true, datasets are normalized by aspect, ==TODO add==
- `learning_rate`: Learning rate for training
- `batch_size`: Batch size for training/evaluation
- `epochs`: #epochs for training
- `model_name_or_path`: File system path or HuggingFace model name for model evaluation, ==TODO test==
**Train**
- Train solely on in-domain dataset `go_emotion`
- ```bash
python zeroshot_classifier/models/bert.py train --domain in --dataset go_emotion
```
- Train solely on out-of-domain dataset `consumer_finance`
- ```bash
python zeroshot_classifier/models/bert.py train --domain out --dataset consumer_finance
```
- Train on all in-domain datasets
- ```bash
python zeroshot_classifier/models/bert.py train --domain in --dataset all
```
**Eval**
- Evaluate a local model on out-of-domain dataset `multi_eurlex`
- ```bash
python zeroshot_classifier/models/bert.py test --domain out --dataset multi_eurlex --model_name_or_path models/2022-06-15_21-23-57_BERT-Seq-CLS-out-multi_eurlex/trained
```
### Binary & Dual Encoding Zero-shot Classification
**Arguments**
- `mode`: Training strategy, one of [`vanilla`, `implicit-on-text-encode-sep`, `explicit`]
- `normalize_aspect`: If true, datasets are normalized by aspect, ==TODO add==
- `learning_rate`: Learning rate for training
- `batch_size`: Batch size for training/evaluation
- `epochs`: #epochs for training
- `init_model_name_or_path`: File system path or HuggingFace model name to initialize model weights for explicit training, ==TODO test==
- `output_dir`: Directory name postfix for trained model
- `domain`: One of [`in`, `out`], the domain of datasets to evaluate on
- `model_name_or_path`: Directory name or HuggingFace model name for evaluation
**Train**
- Vanilla training on Binary BERT
- ```bash
python zeroshot_classifier/models/binary_bert.py train --mode vanilla --batch_size 32 --epochs 8 --learning_rate 2e-5 --output_dir '{a=2e-5}'
```
- Explicit training on Bi-Encoder
- ```bash
python zeroshot_classifier/models/bi-encoder.py train --mode explicit --model_init '2022-11-21_18-58-54_Aspect-Pretrain-Binary-BERT_{md=exp, na=T}_{a=3e-05}/trained'
```
**Eval**
- Evaluate implicitly-trained model on all in-domain datasets
- ```bash
python zeroshot_classifier/models/binary_bert.py test --mode implicit-on-text-encode-sep --domain in --model_dir_nm 2022-10-12_01-21-08_Binary-BERT-implicit-on-text-encode-sep-rand-aspect-norm
```
#### Explicit Pretraining
**Arguments**
- `output_dir`: Directory name postfix for trained model
- `normalize_aspect`: If true, datasets are normalized by aspect
- `learning_rate`: Learning rate for training
- `batch_size`: Batch size for training/evaluation
- `epochs`: #epochs for training
**Train**
- Train with learning rate 2e-5, ==TODO verify working==
- ```bash
python zeroshot_classifier/models/explicit/binary_bert_pretrain.py --learning_rate 2e-5 --output_dir '{a=2e-5}'
```
### Generative Classification
**Arguments**
- `mode`: Training strategy, one of [`vanilla`, `implicit`, `explicit`]
- `normalize_aspect`: If true, datasets are normalized by aspect
- `learning_rate`: Learning rate for training
- `batch_size`: Batch size for training/evaluation
- `gradient_accumulation_steps`: #gradient accumulation steps for training
- `epochs`: #epochs for training
- `ddp`: DDP training flag, intended for proper logging during training
- `init_model_name_or_path`: File system path or HuggingFace model name to initialize model weights for explicit training, ==TODO verify working==
- `output_dir`: Directory name postfix for trained model
- `model_name_or_path`: Directory name for model evaluation
==TODO, verify command args==
**Train**
- Implicit training on GPT with DDP
- ```bash
torchrun --nproc_per_node=4 zeroshot_classifier/models/gpt2.py train --mode implicit
```
- Explicit training on GPT
- ```bash
python zeroshot_classifier/models/gpt2.py train --mode explicit --model_init '2022-11-27_17-39-06_Aspect-Pretrain-NVIDIA-GPT2_{md=exp, na=T}_{a=2e-05}'
```
**Eval**
- Evaluate model with vanilla training on all out-of-domain datasets
- ```bash
python zeroshot_classifier/models/gpt2.py test --mode vanilla --model_dir_nm '2022-11-29_19-37-13_NVIDIA-GPT2_{md=van, na=T}_{a=3e-05}'
```
#### Explicit Pretraining
**Arguments**
- `output_dir`: Directory name postfix for trained model
- `normalize_aspect`: If true, datasets are normalized by aspect
- `learning_rate`: Learning rate for training
- `batch_size`: Batch size for training/evaluation
- `gradient_accumulation_steps`: #gradient accumulation steps for training
- `epochs`: #epochs for training
**Train**
- Train with learning rate 4e-5, ==TODO verify working==
- ```bash
python zeroshot_classifier/models/explicit/gpt2_pretrain.py --learning_rate 4e-5 --output_dir '{a=4e-5}'
```
# ===== end of zeroshot-classifier-0.2.3 / README.md =====
import os
import re
import math
import itertools
from os.path import join as os_join
from typing import List, Tuple, Dict, Union, Any
from warnings import warn
from argparse import ArgumentParser
from collections import defaultdict, OrderedDict
import numpy as np
import pandas as pd
from scipy.stats import norm
import torch
from torch import nn
from sklearn.metrics import classification_report
import transformers
from transformers import (
BatchEncoding, AutoConfig,
GPT2TokenizerFast, GPT2Model, GPT2LMHeadModel,
Trainer, TrainingArguments, SchedulerType, DataCollatorForLanguageModeling
)
from transformers.file_utils import ModelOutput
from transformers.training_args import OptimizerNames
from sentence_transformers import SentenceTransformer, util as sbert_util
import datasets
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm.auto import tqdm
from stefutil import *
from zeroshot_classifier.util import *
from zeroshot_classifier.util.training import MyEvalPrediction
import zeroshot_classifier.util.utcd as utcd_util
from zeroshot_classifier.preprocess import get_dataset
MODEL_NAME = 'NVIDIA-GPT2'
HF_MODEL_NAME = 'gpt2-medium'
__all__ = ['ZsGPT2Tokenizer', 'ZsGPT2LMHeadModel']
logger = get_logger(MODEL_NAME)
class ZsGPT2Tokenizer(GPT2TokenizerFast):
"""
A wrapper around GPT2 tokenizer for 0-shot classification tokenizing
"""
SPEC_TOKS = OrderedDict([
('pref_ques', '<|question|>'), # Word embeddings
('pref_text', '<|text|>'),
('pref_answ', '<|answer|>'),
('sep_answ', '<|answer_sep|>'), # Separation between answers if multiple answers
('type_ques', '[QUES]'), # Type embeddings
('type_text', '[TEXT]'),
('type_answ', '[ANSW]')
])
_aspect_sep_token = '[ASPECT_SEP]' # for implicit training, passing in aspect as part of `text`
pad_token_ = '[PAD]'
class Cache(dict):
"""
Wrapper around caching dict, that loads metadata on corresponding dataset
"""
def __init__(self, tokenizer: 'ZsGPT2Tokenizer'):
super().__init__()
self.tokenizer = tokenizer
self.tpl_grouped = re.compile(rf'^(?P<dataset_name>.?)-label-grouped$')
def __getitem__(self, key: Tuple[str, str]):
"""
:param key: 2-tuple of (dataset_name, split)
Needed cos huggingface may load cached dataset, internal cache is gone
.. note:: works for local disk dataset only
"""
dataset_name, split = key
key = f'{dataset_name}-{split}'
if key not in self:
path = os_join(utcd_util.get_base_path(), PROJ_DIR, DSET_DIR, 'processed', dataset_name)
dset = datasets.load_from_disk(path)[split]
# See `zeroshot_classifier.util.util.py::process_utcd_dataset`
feats = dset.features['labels'].feature
n_cls = feats.num_classes
assert feats.names == sconfig(f'UTCD.datasets.{dataset_name}.splits.{split}.labels') # sanity check
label2description: Dict[int, str] = {i: desc for i, desc in enumerate(feats.names)} # label is index
self[key] = dict(
n_classes=n_cls, label2description=label2description,
max_label_id_length=max(len(self.tokenizer._call_paren(lb)) for lb in feats.names)
)
return super().__getitem__(key)
def __init__(self, form: str = 'vanilla', verbose: bool = False, **kwargs):
"""
:param form: One of [`vanilla`, `implicit`, `explicit`]
See `binary_bert::modes`
`implicit` is `implicit-on-text-encode-sep`
"""
super().__init__(**kwargs)
# Pad token cannot be `self.eos_token`
# cos otherwise `DataCollatorForLanguageModeling` would override normal eos tokens
spec_toks = list(ZsGPT2Tokenizer.SPEC_TOKS.values())
if form == 'explicit':
added_vocab = set(self.get_added_vocab().keys())
assert utcd_util.EOT_TOKEN in added_vocab and ZsGPT2Tokenizer.pad_token_ in added_vocab
# TODO: when re-loaded, PAD token doesn't seem to be added...
else:
spec_toks.append(utcd_util.EOT_TOKEN) # SGD end of turn
ca(gpt2_training_strategy=form)
self.form = form
self.did2aspect, aspect_sep_token = None, None
if form == 'implicit':
self.aspect_sep_token = ZsGPT2Tokenizer._aspect_sep_token
spec_toks.append(self.aspect_sep_token)
did2nm = sconfig('UTCD.dataset_id2name')
self.did2aspect = {did: sconfig(f'UTCD.datasets.{dnm}.aspect') for did, dnm in enumerate(did2nm)}
self.add_special_tokens(dict(pad_token=ZsGPT2Tokenizer.pad_token_, additional_special_tokens=spec_toks))
self.templates = sconfig('baselines.gpt2-nvidia.templates')
# Mapping from dataset name to label for non-UTCD cases
self.cache = ZsGPT2Tokenizer.Cache(self)
self.cache_utcd = None
self.boq_token, self.bot_token, self.boa_token = ( # begin of (question, text, answer) tokens
ZsGPT2Tokenizer.SPEC_TOKS[k] for k in ('pref_ques', 'pref_text', 'pref_answ')
) # Special tokens
self.ques_sep_token = ZsGPT2Tokenizer.SPEC_TOKS['sep_answ']
self.question_type_token, self.text_type_token, self.answer_type_token = (
ZsGPT2Tokenizer.SPEC_TOKS[k] for k in ('type_ques', 'type_text', 'type_answ')
) # Type tokens
self.warned_desc = set()  # Warning for each dataset happens once
self.verbose = verbose
self.logger = get_logger(self.__class__.__qualname__)
if verbose:
d_log = dict(form=form, added_vocab=list(self.get_added_vocab().keys()), vocab_size=self.vocab_size)
self.logger.info(f'{pl.i(self.__class__.__qualname__)} initialized with {pl.i(d_log)}')
@property
def max_len_single_sentence(self) -> int:
return self.model_max_length - 2 * 3 # 3 pairs of (special start token, eos token)
def _call_paren(self, s: str, **kwargs) -> List[int]:
return super().__call__(s, **kwargs)['input_ids']
def enc_spec(self, tok: str) -> int:
"""
Encode special tokens with sanity check
"""
id_ = self.encode(tok)
assert len(id_) == 1
return id_[0] # Intended for special tokens
def __call__(
self, samples: Dict[str, Union[List, str, int]],
dataset_name: str = 'UTCD', split: str = 'train', mode: str = 'train',
**kwargs
):
"""
:param samples: Data sample(s) with keys [`dataset_id`, `labels`, `text`]
Each value an element or a list of elements
:param split: One of [`train`, `test`]
Shouldn't matter for UTCD datasets, see `process_utcd_dataset`
:param mode: one of [`train`, `inference`, `stats`, `inference-sample`],
If `inference`, the answer part is not tokenized
the text portion is truncated such that the label with largest # of ids may be generated;
the batch is not padded
i.e. Intended for prediction, see `evaluate_trained`
If `stats`, the entire sample is tokenized without truncation
"""
ca.check_mismatch('Tokenization mode', mode, ['train', 'inference', 'stats', 'inference-sample'])
max_length = kwargs.get('max_length', None)
is_batched = isinstance(samples['text'], (tuple, list))
if max_length is None:
max_length = self.model_max_length
n_token = self.model_max_length  # Intended number of token positions as in the actual architecture
ln = len(samples['text'])
idxs_tpl = np.random.randint(len(self.templates), size=ln)
def call_single(
i, dataset_id: int = None, text: str = None, labels: List[int] = None, label_options: List[str] = None,
aspect: str = None
):
dset_nm: str = None if mode == 'inference-sample' else sconfig('UTCD.dataset_id2name')[dataset_id]
if mode == 'inference-sample':
assert label_options is not None
n_cls = len(label_options)
def lb_int2desc(lb: int) -> str:
return label_options[lb]
answers = []
elif 'UTCD' in dataset_name:
descs = sconfig(f'UTCD.datasets.{dset_nm}.splits.{split}.labels') # Descriptive labels
n_cls = len(descs)
# `label` is shared across all datasets, map to local label within dataset
if self.cache_utcd is None:
path = os_join(utcd_util.get_base_path(), u.proj_dir, u.dset_dir, 'processed', dataset_name)
# cos `Sequential`; each split, the label is the same
self.cache_utcd = datasets.load_from_disk(path)[split].features['labels'].feature
# The ordering indicates int<=>str label mapping, i.e., index is int label,
# see `process_utcd_dataset`
def lb_int2desc(lb: int) -> str:
"""
Map from local dataset label ordinal, in range(n_cls) to the descriptor
"""
return descs[lb]
answers = [self.cache_utcd.int2str(lb) for lb in labels]
else:
n_cls, label2description = (self.cache[dset_nm, split][k] for k in ('n_classes', 'label2description'))
def lb_int2desc(lb: int) -> str:
return label2description[lb]
if mode == 'inference': # getting the answer doesn't matter here, see `evaluate_trained`
answers = []
else:
raise NotImplementedError('Tokenization for non-UTCD datasets is not implemented yet')
answers = label2description[labels]
idx_lbs = np.arange(n_cls)
np.random.shuffle(idx_lbs)
strs_lb = ' , '.join(f'" {lb_int2desc(idx)} "' for idx in idx_lbs)
question = self.templates[idxs_tpl[i]].format(strs_lb)
n_answs = len(answers)
if n_answs > 1:
idx_answs = np.arange(n_answs)
np.random.shuffle(idx_answs)
answers = [answers[idx] for idx in idx_answs]
ids_ques = self._call_paren(question, **kwargs)
ids_text = self._call_paren(text, **kwargs)
if self.form == 'implicit':
if dataset_id is None:
assert aspect is not None
else:
assert aspect is None
aspect = self.did2aspect[dataset_id]
ids_asp = self._call_paren(aspect, **kwargs)
ids_text = ids_asp + [self.enc_spec(self.aspect_sep_token)] + ids_text
id_sep = self.enc_spec(self.ques_sep_token)
ids_answ = [self._call_paren(a, **kwargs) for a in answers]
ids_answ = sum(join_it(ids_answ, [id_sep]), start=[])
ln_q, ln_t, ln_a = len(ids_ques), len(ids_text), len(ids_answ)
if mode == 'inference':
assert dset_nm is not None # sanity check not `inference-sample`
# If text sample is so long that we need to truncate, leave room for one label only
ln_cont = (1+ln_q+1) + (1+ln_t+1) + 1 # for `pref_answ`
max_label_id_length = self.cache[dset_nm, split]['max_label_id_length']
# The maximum number of tokens that could fit for context/prompt
room = self.model_max_length-1 - max_label_id_length # Also needs to generate `EOS`
if ln_cont > room:
# Crop the text portion so that the longest label can be generated
ln_t_ = room - ((1+ln_q+1) + (1+1) + 1)
assert ln_t_ > 0
self.logger.warning(f'{pl.i(dset_nm)} sample w/o answer longer than model max sequence length and '
f'labels: {pl.i(ln_cont)} > {pl.i(self.model_max_length)} - '
f'Text portion cropped for inference: ({pl.i(ln_t)} > {pl.i(ln_t_)})')
ids_text = ids_text[:ln_t_]
elif mode == 'train':
ln_ids = ln_q + ln_t + ln_a
if ln_ids > self.max_len_single_sentence:
# Crop the text portion, keep question and label intact,
# i.e., ensure no classification label is cropped
ln_t_ = self.max_len_single_sentence - (ln_q + ln_a)
assert ln_t_ > 0
self.logger.warning(f'{pl.i(dset_nm)} sample w/ answer longer than model sequence length'
f': {pl.i(ln_ids+6)} > {pl.i(self.model_max_length)} - '
f'Text portion cropped for training: ({pl.i(ln_t)} => {pl.i(ln_t_)})')
ids_text = ids_text[:ln_t_]
# else, `stats`, no truncation
# Number of context tokens, up until answer token, inclusive
n_ques, n_text, n_answ = (1+len(ids_ques)+1), (1+len(ids_text)+1), (1+len(ids_answ)+1)
n_cont = n_ques + n_text + 1
ids = [
self.enc_spec(self.boq_token), *ids_ques, self.enc_spec(self.eos_token),
self.enc_spec(self.bot_token), *ids_text, self.enc_spec(self.eos_token),
self.enc_spec(self.boa_token), *ids_answ, self.enc_spec(self.eos_token)
]
tids = [self.enc_spec(self.question_type_token)] * n_ques + \
[self.enc_spec(self.text_type_token)] * n_text + \
[self.enc_spec(self.answer_type_token)] * n_answ
if mode in ['inference', 'inference-sample']:
ids, tids = ids[:-(n_answ-1)], tids[:-(n_answ-1)]
assert len(ids) == (n_ques+n_text+1) # sanity check
msks = [1] * len(ids) # Encode ids are attended for CLM
# Context position ids, followed by output position ids
# adding `n_token` offset for the modified positional embeddings, see `ZsGPT2Model`
pids = list(range(n_cont)) + [i + n_token for i in range(len(ids)-n_cont)]
assert all(len(lst_ids) == len(ids) for lst_ids in (ids, tids, msks, pids)) # Sanity check
def pad(ints: List[int], name) -> List[int]:
"""
Pad to max_length, truncate if necessary
"""
if name == 'attention_mask':
int_pad = 0 # Ignore in attention
elif name == 'position_ids':
# Arbitrary, since will be ignored, but needs to be within `n_token` for embedding mapping
int_pad = 0
else:
# `input_id`s set to `pad_token` will be ignored by `DataCollatorForLanguageModeling`
int_pad = self.enc_spec(self.pad_token)
return ints[:max_length] if len(ints) > max_length else (ints + [int_pad] * (max_length - len(ints)))
out = {k: (pad(ints, k) if mode == 'train' else ints) for k, ints in ((
('input_ids', ids), ('attention_mask', msks), ('token_type_ids', tids), ('position_ids', pids)
))}
if dataset_id is not None:
out['dataset_id'] = dataset_id # For computing zero-shot classification accuracy
if mode == 'stats': # the number of tokens for just the text part
out['ids_text'] = ids_text
return out
# See `zeroshot_classifier.util.util.py::process_utcd_dataset`
keys_ = ['dataset_id', 'text', 'labels', 'label_options', 'aspect']
if mode == 'inference-sample':
assert not is_batched, f'Batched {pl.i("inference-sample")} not supported'
else:
assert 'label_options' not in samples, \
f'{pl.i("label_options")} supported for {pl.i("inference-sample")} only'
if is_batched:
ds = [call_single(i, d_id, txt, lb) for i, (d_id, txt, lb) in enumerate(zip(
*[samples[k] for k in keys_ if k in samples] # only `label_options` may not be required
))]
return BatchEncoding({k: [d[k] for d in ds] for k in ds[0]}) # Stack all the ids
else:
return BatchEncoding(call_single(0, *[samples.get(k, None) for k in keys_]))
class ZsGPT2Model(GPT2Model):
"""
Modifying the `GPT2Model` for 0-shot classification paper
"""
def __init__(self, config_):
super().__init__(config_)
# Override internal state, instead of adding internal state, so that forward pass stays untouched
# Double the positional embedding matrix, as if stacking the context & output embedding matrices together
# See positional id assignment in `ZsGPT2Tokenizer`
self.wpe = nn.Embedding(config_.max_position_embeddings*2, self.embed_dim)
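# How the doubled table above is indexed (see the `pids = ...` line in `ZsGPT2Tokenizer.__call__`):
# context (question + text) tokens get positions 0..n_cont-1, while answer tokens get
# n_ctx, n_ctx+1, ...; e.g. with n_ctx = 1024, a 10-token context followed by 3 answer tokens
# yields position ids [0..9, 1024, 1025, 1026], so the two halves of `wpe` never collide.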
def pprint_gpt2_input(tokenizer: ZsGPT2Tokenizer, d: Dict[str, torch.Tensor]):
"""
Prints to console the encoded ids, positional ids and type ids as sanity check
"""
n_ct, n_dnm, n_wd = 3, 10, 13
n_pad = n_ct + n_dnm + 3
ids, pids, tids, dids = (d[k].detach() for k in ('input_ids', 'position_ids', 'token_type_ids', 'dataset_id'))
pad = tokenizer.enc_spec(tokenizer.pad_token)
id2name = sconfig('UTCD.dataset_id2name')
for i, (ids_, did, pids_, tids_) in enumerate(zip(ids, dids, pids, tids)):
msk = (ids_ != pad)
ids_, pids_, tids_ = ids_[msk], pids_[msk], tids_[msk]
print(f'{i:>{n_ct}}: {id2name[did.item()]:>{n_dnm}}', end=' ')
for id_ in ids_:
tok = tokenizer.decode(id_)
print(f'{tok:>{n_wd}}', end='')
print()
print(' ' * n_pad, end='')
for pid in pids_:
print(f'{pid.item():>{n_wd}}', end='')
print()
print(' ' * n_pad, end='')
for tid in tids_:
print(f'{tokenizer.decode(tid):>{n_wd}}', end='')
print('\n')
class ZsGPT2LMHeadModel(GPT2LMHeadModel):
"""
So that `ZsGPT2Model` is loaded
"""
def __init__(self, config_):
super().__init__(config_)
self.transformer = ZsGPT2Model(config_) # Override internal state
def forward(self, dataset_id=None, **kwargs):
# Function override to ignore `dataset_id`, not need in learning; Just need to pass value for evaluation
return super().forward(**kwargs)
@classmethod
def from_pretrained(cls, *args, is_zs_gpt2: bool = True, **kwargs):
"""
:param is_zs_gpt2: If True, loads a local `ZsGPT2LMHeadModel`; otherwise, expects a GPT2 model
"""
if is_zs_gpt2:
return super().from_pretrained(*args, **kwargs)
else:
md_ = super().from_pretrained(*args, **kwargs) # Loads the GPT2LMHeadModel while ignoring `wpe.weight`
md_ori = GPT2LMHeadModel.from_pretrained(*args, **kwargs)
weight_pretrained = md_ori.transformer.wpe.state_dict()['weight']
# Check `vars(md_ori.transformer.wpe)`, weight is the only parameter
del md_ori
# Crude loading the pretrained weights, to each half of the doubled positional embedding
with torch.no_grad():
n_tok = md_.transformer.wpe.weight.shape[0]
if n_tok == 1024 * 2:
md_.transformer.wpe.weight[:1024, :] = weight_pretrained
md_.transformer.wpe.weight[1024:, :] = weight_pretrained
else:
warn('Wrong model size, positional not loaded. This is expected in debugging')
return md_
@staticmethod
def prepare_inputs_for_generation(input_ids, past=None, **kwargs):
"""
The original implementation is fine,
cos in the 1st generation forward call, the positional ids are range(n) anyway
but modify anyway just to be sure
"""
token_type_ids = kwargs.get("token_type_ids", None)
# only last token for inputs_ids if past is defined in kwargs
if past:
input_ids = input_ids[:, -1].unsqueeze(-1)
if token_type_ids is not None:
token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
attention_mask = kwargs.get("attention_mask", None)
position_ids = kwargs.get("position_ids", None)
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if past:
position_ids = position_ids[:, -1].unsqueeze(-1)
# ========================== Begin of modified ==========================
# return {
# "input_ids": input_ids,
# "past_key_values": past,
# "use_cache": kwargs.get("use_cache"),
# "position_ids": position_ids,
# "attention_mask": attention_mask,
# "token_type_ids": token_type_ids,
# }
ret = {
"input_ids": input_ids,
"past_key_values": past,
"use_cache": kwargs.get("use_cache"),
"position_ids": position_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
}
if 'dataset_id' in kwargs: # only case it doesn't exist: `inference-sample` mode
ret['dataset_id'] = kwargs['dataset_id']
return ret
# ========================== End of modified ==========================
def _update_model_kwargs_for_generation(
self, outputs: ModelOutput, model_kwargs: Dict[str, Any], is_encoder_decoder: bool = False
) -> Dict[str, Any]:
# update past
if "past_key_values" in outputs:
model_kwargs["past"] = outputs.past_key_values
elif "mems" in outputs:
model_kwargs["past"] = outputs.mems
elif "past_buckets_states" in outputs:
model_kwargs["past"] = outputs.past_buckets_states
else:
model_kwargs["past"] = None
# update token_type_ids with last value
if "token_type_ids" in model_kwargs:
token_type_ids = model_kwargs["token_type_ids"]
model_kwargs["token_type_ids"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1)
# update attention mask
if not is_encoder_decoder:
if "attention_mask" in model_kwargs:
attention_mask = model_kwargs["attention_mask"]
model_kwargs["attention_mask"] = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
# ========================== Begin of added ==========================
assert 'position_ids' in model_kwargs
position_ids = model_kwargs['position_ids']
is_1st_call = position_ids[0, 0] == 0 # 1st call to prepping inputs, should start the answer position_ids
if is_1st_call:
assert torch.all(position_ids[:, 0] == 0).item() # Sanity check
new_col = position_ids[:, -1]+1 # Increment the last position
if is_1st_call:
new_col = torch.zeros_like(new_col) + self.config.n_ctx # Per the paper, generating answer now
# Integrate with `past_key_values`,
# Inspired by `GPT2LMHeadModel.prepare_inputs_for_generation`, looks like keep only the new column
model_kwargs['position_ids'] = new_col.unsqueeze(-1)
# ========================== End of added ==========================
return model_kwargs
class Tokenize:
def __init__(
self, tokenizer: ZsGPT2Tokenizer, dataset_name='ag_news', max_length=None,
split: str = 'train', mode: str = 'train', **kwargs
):
self.tokenizer = tokenizer
self.dataset_name = dataset_name
self.max_length = max_length
self.split = split
self.mode = mode
self.kwargs = kwargs
def __call__(self, sample: Dict[str, List]):
"""
:param sample: A batch of data samples
"""
if 'UTCD' not in self.dataset_name:
sample['dataset_id'] = [sconfig('UTCD.dataset_name2id')[self.dataset_name]] * len(sample['text'])
# Otherwise, `dataset_id` already part of input
args = dict(dataset_name=self.dataset_name, max_length=self.max_length, split=self.split, mode=self.mode)
return self.tokenizer(sample, **args, **self.kwargs)
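# Typical use (sketch): an instance is passed as a `map_func` to `get_dataset` and applied
# batched over a HuggingFace dataset, roughly equivalent to
#   dset.map(Tokenize(tokenizer, dataset_name='ag_news', split='train'), batched=True)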
def get_model_n_tokenizer(model_name='gpt2', form: str = 'vanilla', save_gpu_memory: bool = True) -> Tuple[
ZsGPT2LMHeadModel, ZsGPT2Tokenizer, DataCollatorForLanguageModeling
]:
if 'debug' in model_name: # Try a smaller model for training sanity check
if 'large' in model_name:
n_token = 128
else:
n_token = 4
model_name = 'gpt2'
conf = AutoConfig.from_pretrained(model_name)
# If using cpu, must be debugging and hence no `gradient_checkpointing`, see `get_train_setup`
conf.update(dict(n_ctx=n_token, n_positions=n_token, use_cache=not torch.cuda.is_available()))
model_ = ZsGPT2LMHeadModel.from_pretrained(model_name, config=conf, ignore_mismatched_sizes=True)
model_max_length = n_token
else:
model_max_length = 1024 # Keep max seq len of 1024, instead of 512 in paper, for longer texts & more labels
conf = AutoConfig.from_pretrained(model_name)
# `use_cache` is incompatible with `gradient_checkpointing`, see `get_train_setup`
conf.update(dict(use_cache=not (torch.cuda.is_available() and save_gpu_memory)))
# Keep the 1024 token length, reducing to 512 tokens involves loading part of pretrained weights, complicated
model_ = ZsGPT2LMHeadModel.from_pretrained(model_name, config=conf, ignore_mismatched_sizes=True)
tokenizer_ = ZsGPT2Tokenizer.from_pretrained(
model_name, use_fast=True, model_max_length=model_max_length, form=form
)
model_.resize_token_embeddings(len(tokenizer_))
model_.tokenizer = tokenizer_
return model_, tokenizer_, DataCollatorForLanguageModeling(tokenizer=tokenizer_, mlm=False)
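# Example call (sketch): the three return values plug into the Trainer setup in `get_all_setup`,
#   model, tokenizer, collator = get_model_n_tokenizer('gpt2-medium', form='implicit')
# 'debug' model names shrink the context window for quick sanity checks.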
def get_train_setup(
model_name='gpt2', do_eval=True, dir_name: str = None, train_args: Dict = None,
save_gpu_memory: bool = True, normalize_aspect: bool = False
) -> TrainingArguments:
d_train_args = {
'debug': dict(
learning_rate=1e-4,
batch_size=4,
weight_decay=1e-2,
num_train_epochs=4,
lr_scheduler_type=SchedulerType.CONSTANT,
),
'debug-large': dict(
learning_rate=5e-5,
batch_size=4,
weight_decay=1e-2,
num_train_epochs=40,
lr_scheduler_type=SchedulerType.CONSTANT,
),
'gpt2': dict(
learning_rate=3e-5,
batch_size=32,
weight_decay=1e-2,
num_train_epochs=5,
lr_scheduler_type=SchedulerType.COSINE,
),
'gpt2-medium': dict(
learning_rate=4e-5,
train_batch_size=128,
eval_batch_size=64,
gradient_accumulation_steps=1,
weight_decay=1e-2,
num_train_epochs=10,
lr_scheduler_type=SchedulerType.COSINE,
)
}
name_ = model_name
if name_ not in d_train_args:
name_ = 'gpt2-medium'
lr, bsz, decay, n_ep, sch, gas = (d_train_args[name_].get(k, None) for k in [
'learning_rate', 'batch_size', 'weight_decay',
'num_train_epochs', 'lr_scheduler_type', 'gradient_accumulation_steps'
])
if bsz is None:
bsz_tr, bsz_vl = (d_train_args[name_].get(k, None) for k in ('train_batch_size', 'eval_batch_size'))
assert bsz_tr is not None and bsz_vl is not None
else:
bsz_tr = bsz_vl = bsz
dir_nm = dir_name or f'{now(for_path=True)}_{MODEL_NAME}-{model_name}'
args = dict(
output_dir=os_join(utcd_util.get_base_path(), PROJ_DIR, MODEL_DIR, dir_nm),
do_train=True,
do_eval=do_eval,
evaluation_strategy='epoch' if do_eval else 'no',
per_device_train_batch_size=bsz_tr,
per_device_eval_batch_size=bsz_vl,
gradient_accumulation_steps=gas,
eval_accumulation_steps=128, # Saves GPU memory
# Adam's beta1, beta2, epsilon taken from the GPT2 config in
# https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
learning_rate=lr,
weight_decay=decay,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
max_grad_norm=1,
num_train_epochs=n_ep,
lr_scheduler_type=sch,
warmup_ratio=1e-2,
log_level='warning',
log_level_replica='info',
logging_strategy='steps',
logging_steps=1,
save_strategy='epoch',
fp16=torch.cuda.is_available(),
fp16_full_eval=False,
optim=OptimizerNames.ADAMW_TORCH,
disable_tqdm=True,
# Pass dataset name information down to `compute_loss` for computing text classification accuracy
remove_unused_columns=False,
report_to='none',
# Set to True on CPU gives warning; Enable for fitting in `clarity1` memory
gradient_checkpointing=torch.cuda.is_available() and save_gpu_memory
)
if normalize_aspect:
args.update(dict(
load_best_model_at_end=True,
metric_for_best_model='eval_loss',
greater_is_better=False
))
if train_args is None:
train_args = dict()
args = {k: v for k, v in args.items() if v is not None}
args.update(train_args)
return TrainingArguments(**args)
def compute_metrics(eval_pred: MyEvalPrediction):
"""
Will be called on eval data only, **during training**
"""
# Intended to work with `CustomTrainer.prediction_step`
if not hasattr(compute_metrics, 'metric'):
compute_metrics.metric = datasets.load_metric('accuracy')
# Labels are per-sample already, see `MyTrainer::prediction_step`
preds, trues, dids = eval_pred.predictions, eval_pred.label_ids, eval_pred.dataset_ids
return dict(cls_acc=compute_metrics.metric.compute(predictions=preds, references=trues)['accuracy'])
def get_all_setup(
model_name: str = None, dataset_name: str = 'ag_news', form: str = 'vanilla',
n_sample=None, random_seed=None, do_eval=True, custom_logging=True,
train_args: Dict = None, dataset_args: Dict = None, trainer_args: Dict = None,
is_ddp: Union[bool, int] = False, use_tqdm: bool = True # so that my own logging is correct
) -> Tuple[GPT2LMHeadModel, Union[GPT2TokenizerFast, ZsGPT2Tokenizer], Trainer]:
dataset_args = dataset_args or dict()
normalize_aspect = dataset_args.get('normalize_aspect', None)
if model_name == 'debug-gpt-ori': # Sanity check: As if keep training GPT-2, with padding for simplicity
conf = AutoConfig.from_pretrained('gpt2')
conf.update(dict(use_cache=False))
model = GPT2LMHeadModel.from_pretrained('gpt2', config=conf)
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
data_collator_ = None
train_args_ = get_train_setup(model_name, do_eval=do_eval)
def group_texts(examples):
examples = tokenizer(examples['text'])
# Taken from
# https://github.com/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb
# block_size = tokenizer_.model_max_length
block_size = 512 # To fit in memory
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
total_length = (total_length // block_size) * block_size
result = {
k: [t[i: i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result['labels'] = result['input_ids'].copy()
return result
tr_map_func = vl_map_func = ts_map_func = group_texts
else:
# Gradient checkpointing still needed - otherwise doesn't fit in 44G GPU
save_gpu_mem = 'arc-ts' not in get_hostname()
model, tokenizer, data_collator_ = get_model_n_tokenizer(model_name, form=form, save_gpu_memory=save_gpu_mem)
_md_nm = None if form == 'explicit' else model_name # cos the filesystem path is too long
dir_nm = map_model_dir_nm(
model_name=MODEL_NAME, name=_md_nm, mode=form,
sampling=None, normalize_aspect=dataset_args.get('normalize_aspect', None)
)
train_args_ = get_train_setup(
model_name, do_eval=do_eval, dir_name=dir_nm, train_args=train_args,
save_gpu_memory=save_gpu_mem, normalize_aspect=normalize_aspect
)
tr_map_func = Tokenize(tokenizer, dataset_name=dataset_name, split='train')
# Evaluation set has the same set of labels as training set by construction,
# see `load_data:dataset2train_eval_split`
# All `Tokenize` care about is the corresponding set of labels
vl_map_func = Tokenize(tokenizer, dataset_name=dataset_name, split='train')
ts_map_func = Tokenize(tokenizer, dataset_name=dataset_name, split='test')
splits = ('train', 'eval', 'test') if normalize_aspect else ('train', 'test')
get_dset_args = dict(
dataset_name=dataset_name,
map_func=dict(train=tr_map_func, eval=vl_map_func, test=ts_map_func), remove_columns=['text', 'labels'],
n_sample=n_sample, shuffle_seed=random_seed, pbar=True, splits=splits,
fast='debug' not in model_name
)
get_dset_args.update(dataset_args)
dsets = get_dataset(**get_dset_args)
_trainer_args = dict(
model=model, args=train_args_, data_collator=data_collator_,
train_dataset=dsets['train'], eval_dataset=dsets.get('eval', None), compute_metrics=compute_metrics
)
_trainer_args.update(trainer_args or dict())
trainer = GPT2Trainer(
tokenizer=tokenizer, custom_logging=custom_logging,
is_ddp=is_ddp, with_tqdm=use_tqdm,
**_trainer_args
)
return model, tokenizer, trainer
def plot_dataset_token_length_stats(domain: str = 'in'):
ca(dataset_domain=domain)
tokenizer = get_model_n_tokenizer('gpt2-medium')[1]
# `split` shouldn't matter
func = Tokenize(tokenizer=tokenizer, dataset_name=f'UTCD-{domain}', split='train', mode='stats')
did2nm = sconfig('UTCD.dataset_id2name')
def map_func(examples):
tokenized = func(examples)
return dict(
n_token=[len(ids) for ids in tokenized['input_ids']],
n_token_text=[len(ids) for ids in tokenized['ids_text']],
dataset_name=[did2nm[i] for i in tokenized['dataset_id']]
)
dset_tr, dset_vl = get_dataset(
dataset_name=f'UTCD-{domain}', map_func=map_func, remove_columns=['text', 'labels'], fast=True
)
# discard training set for out-of-domain
dset = datasets.concatenate_datasets([dset_tr, dset_vl]) if domain == 'in' else dset_tr
df = pd.DataFrame(dset[:])
fig, axes = plt.subplots(2, 2, figsize=(16, 9))
args_bar = dict(kde=True, kde_kws=dict(bw_adjust=0.5, gridsize=2048))
args_cum = dict(cumulative=True, fill=False, element='step')
for i_row, i_col in itertools.product(range(2), range(2)):
ax = axes[i_row, i_col]
legend = i_row == 1 and i_col == 0
args = dict(palette='husl', legend=legend, common_norm=False, ax=ax, stat='density')
args.update(args_bar if i_col == 0 else args_cum)  # dict merge; `|=` would require Python 3.9+
x = 'n_token' if i_row == 0 else 'n_token_text'
if i_col == 0:
n_bin = df[x].max() - df[x].min() + 1
args['bins'] = n_bin
sns.histplot(data=df, x=x, hue='dataset_name', **args)
ax.set(xlabel='#token' if i_row == 0 else '#token for text', ylabel=None)
p = norm().cdf(3.5)  # dynamic upper bound; quantile by std
mi, ma = df[x].min(), math.ceil(df[x].quantile(p))
ax.set_xlim([mi, ma])
title = f'GPT2 token length distribution for UTCD {domain}-domain'
plt.suptitle(title)
fig.supylabel('density')
output_dir = os_join(BASE_PATH, PROJ_DIR, 'plot')
os.makedirs(output_dir, exist_ok=True)
plt.savefig(os_join(output_dir, f'{title}, {now(for_path=True)}.png'), dpi=300)
def load_trained(
form: str = 'vanilla', epoch: int = 3, normalize_aspect: bool = False, model_name_or_path: str = None
) -> Tuple[ZsGPT2LMHeadModel, ZsGPT2Tokenizer, str]:
ca(gpt2_training_strategy=form)
d_log = dict(form=form, epoch=epoch, normalize_aspect=normalize_aspect)
logger.info(f'Loading model with {pl.i(d_log)}... ')
if model_name_or_path:
path = os_join(get_base_path(), u.proj_dir, u.model_dir, model_name_or_path, 'trained')
if os.path.exists(path):
md_nm = path
else:
md_nm = model_name_or_path
else:
raise NotImplementedError('For obsolete local models')
md_nm = os_join(get_base_path(), u.proj_dir, u.model_dir, dir_nm, 'trained')
logger.info(f'Loading model from {pl.i(md_nm)}... ')
model = ZsGPT2LMHeadModel.from_pretrained(md_nm, is_zs_gpt2=True) # with caching
tokenizer_args = dict(form=form, use_fast=True, model_max_length=model.config.n_ctx)
tokenizer = ZsGPT2Tokenizer.from_pretrained(md_nm, **tokenizer_args)
return model, tokenizer, md_nm
def evaluate(
domain: str = 'in', batch_size: int = 48, form: str = 'vanilla', load_model_args: Dict = None,
embed_sim: bool = False
):
"""
Run evaluation, on potentially multi-label datasets
:param domain: Dataset domain
:param form: training strategy
:param batch_size: model generation batch size
:param load_model_args: arguments for `load_trained`
:param embed_sim: If true, when the model generates text that is not a valid label, the semantically most similar label is used as the prediction
"""
form_ = load_model_args.get('form', None)
if form_:
assert form_ == form
else:
load_model_args['form'] = form
ca(dataset_domain=domain)
model, tokenizer, eval_output_dir_nm = load_trained(**(load_model_args or dict()))
conf, model_cnm = model.config, model.__class__.__qualname__
# To disable warning `Setting `pad_token_id` to `eos_token_id` for open-end generation.`
model_size = conf.max_length = conf.n_ctx
conf.pad_token_id = conf.eos_token_id
model.eval()
model = model.to('cuda')
model.tokenizer = tokenizer  # See ZsGPT2LMHeadModel.forward() sanity check
encoder = None
if embed_sim:
encoder = SentenceTransformer('all-mpnet-base-v2', device='cuda' if torch.cuda.is_available() else 'cpu')
split = 'test'
output_path = os_join(u.eval_path, eval_output_dir_nm, domain2eval_dir_nm(domain))
os.makedirs(output_path, exist_ok=True)
dataset_names = utcd_util.get_dataset_names(domain)
d_model = OrderedDict({'model name': model_cnm, 'model size': model_size, 'training_strategy': form})
d_eval = dict(batch_size=batch_size, datasets=dataset_names, embed_similarity=embed_sim)
domain = 'in-domain' if domain == 'in' else 'out-of-domain'
logger_name = 'GPT2-NVIDIA Evaluation'
logger_fl = get_logger(
f'{logger_name} file-write', kind='file-write',
file_path=os_join(output_path, f'{now(for_path=True)}_{logger_name}, bsz={batch_size}, {domain}.log')
)
logger.info(f'Running eval {pl.i(domain)} on model {pl.i(d_model)}, with {pl.i(d_eval)}... ')
logger_fl.info(f'Running eval {domain} on model {pl.nc(d_model)}, with {pl.nc(d_eval)}... ')
for dnm_ in dataset_names:
d_info = sconfig(f'UTCD.datasets.{dnm_}.splits.{split}')
lb2id = defaultdict(lambda: -1) # If generated invalid descriptive label, will return -1
labels = d_info['labels']
# predictions and label descriptions all to lower case to be more lenient
lb2id.update({lb.lower(): i for i, lb in enumerate(labels)})
dset = get_dataset( # Get evaluation set only
dataset_name=dnm_, splits='test',
map_func=dict(test=Tokenize(tokenizer, dataset_name=dnm_, split='test', mode='inference')),
remove_columns='text', n_sample=None, from_disk=True, # keeps the `labels`
)['test']
label_embeds = None
if embed_sim:
label_embeds = encoder.encode(labels, batch_size=batch_size) # not necessarily lowercase
# Batched generation that **doesn't use padding** is not supported by HuggingFace
n_dset = len(dset)
trues, preds = np.empty(n_dset, dtype=int), np.empty(n_dset, dtype=int)
len_ids = np.array([len(ids) for ids in dset[:]['input_ids']])
uniq_lens = np.unique(len_ids)
# Batches of likely different batch sizes
ln2idxs = [np.where(len_ids == ln)[0] for ln in uniq_lens]
idxs_batches = sum( # Get batches of same length, with max batch size of `batch_size`
(np.split(idxs, range(batch_size, idxs.size, batch_size)) if idxs.size > batch_size else [idxs]
for idxs in ln2idxs),
start=[]
)
n_bch = len(idxs_batches)
logger.info(f'Running evaluation on dataset {pl.i(dnm_)}, with labels {pl.i(labels)}, '
f'of {pl.i(len(dset))} unique texts in {pl.i(n_bch)} batches... ')
logger_fl.info(f'Running evaluation on dataset {dnm_}, with labels {labels}, '
f'of {len(dset)} unique texts in {n_bch} batches... ')
n_computed = 0
it = tqdm(idxs_batches, desc=pl.i(dnm_), unit='ba')
for step, idxs in enumerate(it): # Each batch has input samples of the same token length
idxs = [int(idx) for idx in idxs] # `Dataset.select` works with `int` indices only
inputs = {  # No need to pad; the labels are not needed and would only complicate the forward pass
k: torch.tensor(v, device='cuda') for k, v in dset[idxs].items()
if k != 'labels'  # Convert `dataset_id` too so that it fits into HuggingFace APIs
}
outputs = model.generate(**inputs) # Greedy decoding
outputs_str = tokenizer.batch_decode(outputs, skip_special_tokens=False)
n_computed += len(idxs)
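# Parsing sketch for the helper below, based on the special tokens this file already references
# (`boa_token`, `eos_token`, `ques_sep_token`); the exact template string is defined by the tokenizer
# elsewhere in the project, so the layout shown is an assumption for illustration only:
#   <prompt ...> <boa_token> <answer 1> <ques_sep_token> <answer 2> ... <eos_token> <eos_token> ...
# The helper takes everything after the first `boa_token`, cuts at the first `eos_token` if present,
# then splits on `ques_sep_token` to recover (possibly multiple) predicted label strings.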
def set_pred_n_true(generated: str, i_sample: int) -> Tuple[int, int]:
idxs_boa = get_substr_indices(generated, s_sub=tokenizer.boa_token)
assert len(idxs_boa) >= 1 # there will be at least one index, as in prompt
# **try to be as lenient**: try to extract the text part if possible
answer_with_eos = generated[idxs_boa[0] + len(tokenizer.boa_token):]
multiple_boa = len(idxs_boa) > 1
if multiple_boa:  # Should be extremely rare; the model is not following the template
logger.warning(f'{pl.i(model_cnm)} generated {pl.i(len(idxs_boa))} boa_token '
f'instead of {pl.i(1)} with [{pl.i(answer_with_eos)}]')
logger_fl.warning(f'{model_cnm} generated {len(idxs_boa)} boa_token '
f'instead of {1} with [{answer_with_eos}]')
mic(generated, answer_with_eos, idxs_boa)
idxs_eos = get_substr_indices(answer_with_eos, s_sub=tokenizer.eos_token)
# GPT2 would generate multiple `eos_token` for the samples in the batch that terminate early
if len(idxs_eos) == 0: # Still, **try to be as lenient**
logger.warning(f'{pl.i(model_cnm)} didn\'t finish generating answer '
f'with [{pl.i(answer_with_eos)}]')
logger_fl.warning(f'{model_cnm} didn\'t finish generating answer with [{answer_with_eos}]')
answer = answer_with_eos
else:
answer = answer_with_eos[:idxs_eos[0]] # until the 1st eos
idxs_sep = get_substr_indices(answer, s_sub=tokenizer.ques_sep_token)
if len(idxs_sep) > 0:
answers = [answer[:idxs_sep[0]]]
for i, idx in enumerate(idxs_sep[:-1]):
answers.append(answer[idx + len(tokenizer.ques_sep_token):idxs_sep[i+1]])
answers.append(answer[idxs_sep[-1] + len(tokenizer.ques_sep_token):])
else:
answers = [answer]
if multiple_boa: # should hardly happen anyway
answs = []
for a in answers:
answs.extend(a.split(tokenizer.boa_token))
answers = answs
ids_pred: List[int] = [lb2id[a.lower()] for a in answers]
assert len(ids_pred) >= 1 # sanity check
if embed_sim and all(i == -1 for i in ids_pred): # all generated answer are non-label
logger.warning(f'Generated {pl.i(answers)}, not a valid label option ')
logger_fl.warning(f'Generated {answers}, not a valid label option ')
ids_pred = []
answ_embeds = encoder.encode(answers, batch_size=batch_size)
for v_ans in answ_embeds:
scores = [sbert_util.cos_sim(v_lb, v_ans).item() for v_lb in label_embeds]
ids_pred.append(int(np.argmax(scores)))
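# i.e., when none of the generated strings exactly match a label (case-insensitive), fall back to
# picking, for each generated string, the label whose sentence embedding has the highest cosine
# similarity to it - this is the behavior the `embed_sim` flag in the signature enables.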
ids_true: List[int] = dset[i_sample]['labels']
matched = set(ids_pred) & set(ids_true)
if len(matched) > 0:
# predicted label is one of the correct labels, pick that label so that prediction is correct
id_true = id_pred = next(iter(matched))
else:
# prediction incorrect, pick a single label arbitrarily
# This renders class-level performance inaccurate; TODO?
id_pred, id_true = -1, ids_true[0]
preds[i_sample], trues[i_sample] = id_pred, id_true
return id_pred, id_true
preds_batch, trues_batch = zip(*[
set_pred_n_true(out, i_sample) for out, i_sample in zip(outputs_str, idxs)
])
d_log: Dict[str, Any] = dict(
progress=f'{n_computed:>{len(str(n_dset))}}/{n_dset}',
sequence_length=len(inputs['input_ids'][0]),
batch_size=f'{len(idxs):>{len(str(batch_size))}}/{batch_size}',
n_acc=sum(p == t for p, t in zip(preds_batch, trues_batch))
)
it.set_postfix({k: pl.i(v) for k, v in d_log.items()})
d_log.update(dict(ids_pred=list(preds_batch), ids_true=list(trues_batch)))
logger_fl.info(pl.nc(d_log))
def check_labels_filled(lbs): # sanity check, every index is assigned a label
return np.all((-1 <= lbs) & (lbs < len(labels)))
assert check_labels_filled(trues) and check_labels_filled(preds)
# note `-1` is not an actual label (its support is 0) - included so sklearn gets the full label specification
# **note** because of the -1 label, the `macro avg` row is not accurate;
# it's included only for computing the global accuracy
args = dict(
labels=[-1, *range(len(labels))], target_names=['Label not in dataset', *labels],
zero_division=0, output_dict=True # disables warning
)
report = classification_report(trues, preds, **args)
acc = f'{report["accuracy"]:.3f}'
logger.info(f'{pl.i(dnm_)} Classification Accuracy: {pl.i(acc)}')
logger_fl.info(f'{dnm_} Classification Accuracy: {acc}')
df = pd.DataFrame(report).transpose()
path = os_join(output_path, f'{dnm_}.csv')
df.to_csv(path)
logger.info(f'Evaluation on {pl.i(dnm_)} written to CSV at {pl.i(path)}')
logger_fl.info(f'Evaluation on {dnm_} written to CSV at {path}')
def gpt2_inference(text: str, label_options: List[str]) -> str:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model, tokenizer, _ = load_trained(epoch=3)
model = model.to(device)
model.config.pad_token_id = model.config.eos_token_id
model.eval()
# `dataset_name` is only needed so the call goes through; its value is irrelevant here
tokenize_fn = Tokenize(tokenizer, dataset_name='UTCD', mode='inference-sample')
inputs = tokenize_fn(dict(text=text, dataset_id=-1, labels=-1, label_options=label_options))
inputs = {k: torch.tensor(v).to(device).unsqueeze(0) for k, v in inputs.items()} # add dummy batch dim
outputs = model.generate(**inputs)
return tokenizer.batch_decode(outputs, skip_special_tokens=False)
def parse_args():
modes = ['vanilla', 'implicit', 'explicit']
parser = ArgumentParser()
subparser = parser.add_subparsers(dest='command')
parser_train = subparser.add_parser('train')
parser_test = subparser.add_parser('test')
parser_train.add_argument('--mode', type=str, choices=modes, default='vanilla')
parser_train.add_argument('--normalize_aspect', type=bool, default=True)
parser_train.add_argument('--learning_rate', type=float, default=2e-5)
parser_train.add_argument('--batch_size', type=int, default=4)
parser_train.add_argument('--gradient_accumulation_steps', type=int, default=32)
parser_train.add_argument('--epochs', type=int, default=8)
parser_train.add_argument('--ddp', type=int, default=None)
parser_train.add_argument('--init_model_name_or_path', type=str, default=HF_MODEL_NAME)
parser_train.add_argument('--output_dir', type=str, default=None)
# set test arguments
parser_test.add_argument('--domain', type=str, choices=['in', 'out'], required=True)
parser_test.add_argument('--mode', type=str, choices=modes, default='vanilla')
parser_test.add_argument('--batch_size', type=int, default=32)
parser_test.add_argument('--model_name_or_path', type=str, required=True)
return parser.parse_args()
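# Example invocations for the parser above (the flag names come from the definitions above; the
# module path is assumed from this file's location, so adjust to how the package is actually invoked):
#   python zeroshot_classifier/models/gpt2.py train --mode vanilla --learning_rate 2e-5 --batch_size 4
#   python zeroshot_classifier/models/gpt2.py test --domain in --mode vanilla --model_name_or_path <trained-model-dir>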
if __name__ == '__main__':
mic.output_width = 256
seed = sconfig('random-seed')
def train(
mode: str = 'vanilla', normalize_aspect: bool = True,
learning_rate: float = 2e-5, batch_size: int = 4, gradient_accumulation_steps: int = 32, epochs: int = 8,
ddp: int = None,
init_model_name_or_path: str = HF_MODEL_NAME, output_dir: str = None
):
transformers.set_seed(seed)
dnm = 'UTCD-in'
md_nm = init_model_name_or_path
mic(dnm, md_nm, mode)
if mode == 'explicit':
assert init_model_name_or_path != HF_MODEL_NAME
if init_model_name_or_path != HF_MODEL_NAME:
path = os_join(get_base_path(), u.proj_dir, u.model_dir, init_model_name_or_path, 'trained')
if os.path.exists(path):
md_nm = path
logger.info(f'Loading model from local path {pl.i(path)}... ')
lr, bsz, gas, n_ep = learning_rate, batch_size, gradient_accumulation_steps, epochs
output_dir = output_dir or f'{{a={lr}}}'
dataset_args = dict(pbar=False)
if normalize_aspect:
dataset_args['normalize_aspect'] = seed
path = map_model_output_path(
model_name=MODEL_NAME.replace(' ', '-'), mode=mode,
sampling=None, normalize_aspect=normalize_aspect, output_dir=output_dir
)
train_args = dict( # Distribute among GPUs & fit in memory; Effectively batch size 128 as in paper
output_dir=path,
num_train_epochs=n_ep,
learning_rate=lr,
warmup_ratio=1e-1,
per_device_train_batch_size=bsz,
per_device_eval_batch_size=bsz,
gradient_accumulation_steps=gas,
dataloader_num_workers=4
)
if normalize_aspect:
train_args.update(dict(
load_best_model_at_end=True,
metric_for_best_model='eval_loss',
greater_is_better=False
))
mic(n_ep, lr, normalize_aspect, ddp)
mic(train_args)
model, tokenizer, trainer = get_all_setup(
model_name=md_nm, dataset_name=dnm, form=mode, do_eval=True, custom_logging=True,
random_seed=seed,
train_args=train_args, dataset_args=dataset_args, trainer_args=dict(compute_cls_acc=True),
is_ddp=ddp
)
save_path = os_join(trainer.args.output_dir, 'trained')
trainer.train()
trainer.save_model(save_path)
tokenizer.save_pretrained(save_path)
os.listdir(save_path)
# train()
def run_eval(domain: str = 'in', mode: str = 'vanilla', batch_size: int = 32, model_name_or_path: str = None):
transformers.set_seed(seed)  # because the explicit-mode 3-epoch checkpoint doesn't generate the BOA token otherwise...
# n_ep = 3
# n_ep = 5
n_ep = 8
if mode == 'vanilla':
# dnm = '2022-11-29_12-12-56_NVIDIA-GPT2_{md=van, na=T}_{a=1e-05}'
# dnm = '2022-11-29_19-15-44_NVIDIA-GPT2_{md=van, na=T}_{a=2e-05}'
dnm = '2022-11-29_19-37-13_NVIDIA-GPT2_{md=van, na=T}_{a=3e-05}'
# dnm = '2022-11-29_19-43-32_NVIDIA-GPT2_{md=van, na=T}_{a=4e-05}'
elif mode == 'implicit':
# dnm = '2022-12-03_10-43-47_NVIDIA-GPT2_{md=imp, na=T}_{a=1e-05}'
# dnm = '2022-12-03_14-47-52_NVIDIA-GPT2_{md=imp, na=T}_{a=2e-05}'
# dnm = '2022-12-03_15-03-14_NVIDIA-GPT2_{md=imp, na=T}_{a=3e-05}'
dnm = '2022-12-02_21-33-18_NVIDIA-GPT2_{md=imp, na=T}_{a=4e-05}'
else: # `explicit`
# dnm = '2022-12-05_16-13-20_NVIDIA-GPT2_{md=exp, na=T}_{a=1e-05}'
# dnm = '2022-12-05_16-25-57_NVIDIA-GPT2_{md=exp, na=T}_{a=2e-05}'
# dnm = '2022-12-05_16-52-24_NVIDIA-GPT2_{md=exp, na=T}_{a=3e-05}'
dnm = '2022-12-05_17-12-33_NVIDIA-GPT2_{md=exp, na=T}_{a=4e-05}'
md_args = dict(epoch=n_ep, model_name_or_path=model_name_or_path or dnm)
mic(domain, mode, dnm)
evaluate(domain=domain, batch_size=batch_size, form=mode, load_model_args=md_args, embed_sim=True)
# run_eval()
def sanity_check_trained_generate():
text = 'hello world'
label_options = ['happy', 'sad', 'angry', 'fearful', 'surprised']
mic(text, label_options)
mic(gpt2_inference(text, label_options))
# sanity_check_trained_generate()
# plot_dataset_token_length_stats(domain='in')
def command_prompt():
args = parse_args()
cmd = args.command
if cmd == 'train':
train(
mode=args.mode, normalize_aspect=args.normalize_aspect,
learning_rate=args.learning_rate, batch_size=args.batch_size,
gradient_accumulation_steps=args.gradient_accumulation_steps, epochs=args.epochs,
ddp=args.ddp, init_model_name_or_path=args.init_model_name_or_path, output_dir=args.output_dir
)
else:
assert cmd == 'test' # sanity check
run_eval(
domain=args.domain, mode=args.mode, batch_size=args.batch_size,
model_name_or_path=args.model_name_or_path
)
command_prompt()
| zeroshot-classifier | /zeroshot-classifier-0.2.3.tar.gz/zeroshot_classifier/models/gpt2.py | gpt2.py
import os
from os.path import join as os_join
from typing import List, Tuple, Dict, Any, Union
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM
from tqdm.auto import tqdm
from stefutil import *
from zeroshot_classifier.util import *
import zeroshot_classifier.util.utcd as utcd_util
from zeroshot_classifier.preprocess import get_dataset
from zeroshot_classifier.models.gpt3 import PromptMap
HF_MODEL_NAME = 'EleutherAI/gpt-neo-2.7B'
logger = get_logger('GPT-NEO')
def evaluate(
model_name: str = HF_MODEL_NAME, domain: str = 'in', batch_size: int = 16, dataset_name: str = 'all',
subsample: Union[bool, int] = False, subsample_seed: int = 77, max_tokens: int = 32
):
"""
:param model_name: Name of the GPT-Neo model
:param domain: Dataset domain
:param batch_size: Batch size in a generation forward pass
:param dataset_name: Name of the dataset to evaluate
:param subsample: Whether to subsample the dataset. If an int, the number of samples to subsample.
:param subsample_seed: Seed for random subsampling
:param max_tokens: Number of tokens reserved for the generated answer
"""
ca(dataset_domain=domain)
tokenizer = GPT2Tokenizer.from_pretrained(HF_MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = GPTNeoForCausalLM.from_pretrained(HF_MODEL_NAME)
conf = model.config
# conf.pad_token_id = conf.eos_token_id # for generation
conf.max_length = 2048 # As long as the model supports
# from transformers import GPT2LMHeadModel
# model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
# mic(type(model))
model.eval()
mic(model.device)
import sys
mic(fmt_sizeof(sys.getsizeof(model)))
mic(get_model_num_trainable_parameter(model))
if torch.cuda.is_available():
model = model.to('cuda')
split = 'test'
_model_str = model_name.split('/')[-1]
output_dir_nm = f'{now(for_path=True)}_Zeroshot-GPT-NEO-{_model_str}'
output_path = os_join(u.eval_path, output_dir_nm, domain2eval_dir_nm(domain))
os.makedirs(output_path, exist_ok=True)
if dataset_name == 'all' and subsample:
raise NotImplementedError('Subsampling intended for single dataset')
dataset_names = utcd_util.get_eval_dataset_names(domain=domain, dataset_name=dataset_name)
log_fnm = f'{now(for_path=True)}_GPT-NEO_{_model_str}_{domain}_{dataset_name}_Eval'
logger_fl = get_logger('GPT-NEO Eval', kind='file-write', file_path=os_join(output_path, f'{log_fnm}.log'))
d_log = dict(
model=model_name, domain=domain, dataset_names=dataset_names, batch_size=batch_size, output_path=output_path
)
logger.info(f'Evaluating GPT-NEO model w/ {pl.i(d_log)}... ')
logger_fl.info(f'Evaluating GPT-NEO model w/ {d_log}... ')
for dnm in dataset_names:
if subsample:
n_tgt = subsample if isinstance(subsample, int) else 5000
dset = utcd_util.subsample_dataset(dataset_name=dnm, split='test', n_tgt=n_tgt, seed=subsample_seed)
else:
dset = get_dataset(dnm, splits='test')['test']
pm = PromptMap(dataset_name=dnm, logger_fl=logger_fl)
# Add prompt to each text example
dset = dset.map(lambda examples: dict(text=[pm(t) for t in examples['text']]), batched=True)
# mic(dset, dset[0])
# exit(1)
map_args = dict(truncation=True, max_length=conf.max_length - max_tokens)
dset = dset.map(lambda examples: tokenizer(examples['text'], **map_args), batched=True)
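# Truncating to `conf.max_length - max_tokens` is what leaves room for generation: with the 2048-token
# context set above and the default `max_tokens=32`, prompts are capped at 2016 tokens so the model can
# still produce up to ~32 answer tokens without overflowing its context window (these numbers just
# restate the defaults above, not additional configuration).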
# d_info = sconfig(f'UTCD.datasets.{dnm_}.splits.{split}')
# lb2id = defaultdict(lambda: -1) # If generated invalid descriptive label, will return -1
# labels = d_info['labels']
# # predictions and label descriptions all to lower case to be more lenient
# lb2id.update({lb.lower(): i for i, lb in enumerate(labels)})
n_dset = len(dset) # See gpt2 eval, to batches of the same input id lengths
trues, preds = np.empty(n_dset, dtype=int), np.empty(n_dset, dtype=int)
len_ids = np.array([len(ids) for ids in dset[:]['input_ids']])
uniq_lens = np.unique(len_ids)
ln2idxs = [np.where(len_ids == ln)[0] for ln in uniq_lens]
idxs_batches = sum(
(np.split(idxs, range(batch_size, idxs.size, batch_size)) if idxs.size > batch_size else [idxs]
for idxs in ln2idxs),
start=[]
)
n_computed = 0
it = tqdm(idxs_batches, desc=f'Evaluating {pl.i(dnm)}', unit='ba')
for step, idxs in enumerate(it):
idxs = [int(idx) for idx in idxs]
inputs = {
k: torch.tensor(v, device='cuda') for k, v in dset[idxs].items()
if k not in ['text', 'labels']
}
outputs = model.generate(**inputs)
outputs_str = tokenizer.batch_decode(outputs, skip_special_tokens=False)
n_computed += len(idxs)
# mic(outputs_str)
# exit(1)
def eval_single(generated: str = None, idx: int = None):
idxs_eos = get_substr_indices(generated, s_sub=tokenizer.eos_token)  # renamed: these are EOS positions, not BOA
idx_answ_start = len(dset[idx]['text'])
if len(idxs_eos):
answer = generated[:idxs_eos[-1]]
else:  # Generation did not terminate with EOS; keep a bit more of the continuation as the answer
answer = generated[:idx_answ_start + max_tokens*2]
mic(generated, answer)
[eval_single(g, i) for g, i in zip(outputs_str, idxs)]
exit(1)
def set_pred_n_true(generated: str, i_sample: int) -> Tuple[int, int]:
idxs_boa = get_substr_indices(generated, s_sub=tokenizer.boa_token)
# there will be at least one index, as in prompt
if not len(idxs_boa) >= 1:
ids = dset[i_sample]['input_ids']
txt = tokenizer.decode(ids)
mic(generated, idxs_boa, txt)
assert len(idxs_boa) >= 1
# **try to be as lenient**: try to extract the text part if possible
answer_with_eos = generated[idxs_boa[-1] + len(tokenizer.boa_token):]
if len(idxs_boa) > 1:
logger.warning(f'{pl.i(model_cnm)} generated {pl.i(len(idxs_boa))} boa_token '
f'instead of {pl.i(1)} with [{pl.i(answer_with_eos)}]')
logger_fl.warning(f'{model_cnm} generated {len(idxs_boa)} boa_token '
f'instead of {1} with [{answer_with_eos}]')
assert len(idxs_boa) == 1
idxs_eos = get_substr_indices(answer_with_eos, s_sub=tokenizer.eos_token)
# GPT2 would generate multiple `eos_token` for the samples in the batch that terminate early
if len(idxs_eos) == 0: # Still, **try to be as lenient**
logger.warning(f'{pl.i(model_cnm)} didn\'t finish generating answer '
f'with [{pl.i(answer_with_eos)}]')
logger_fl.warning(f'{model_cnm} didn\'t finish generating answer with [{answer_with_eos}]')
answer = answer_with_eos
else:
answer = answer_with_eos[:idxs_eos[0]] # until the 1st eos
# answer = answer.lower()
idxs_sep = get_substr_indices(answer, s_sub=tokenizer.ques_sep_token)
if len(idxs_sep) > 0:
answers = [answer[:idxs_sep[0]]]
for i, idx in enumerate(idxs_sep[:-1]):
answers.append(answer[idx + len(tokenizer.ques_sep_token):idxs_sep[i+1]])
answers.append(answer[idxs_sep[-1] + len(tokenizer.ques_sep_token):])
else:
answers = [answer]
ids_pred: List[int] = [lb2id[a.lower()] for a in answers]
assert len(ids_pred) >= 1 # sanity check
if embed_sim and all(i == -1 for i in ids_pred): # all generated answer are non-label
logger.warning(f'Generated {pl.i(answers)}, not a valid label option ')
logger_fl.warning(f'Generated {answers}, not a valid label option ')
ids_pred = []
answ_embeds = encoder.encode(answers, batch_size=batch_size)
for v_ans in answ_embeds:
scores = [sbert_util.cos_sim(v_lb, v_ans).item() for v_lb in label_embeds]
ids_pred.append(int(np.argmax(scores)))
ids_true: List[int] = dset[i_sample]['labels']
matched = set(ids_pred) & set(ids_true)
if len(matched) > 0:
# predicted label is one of the correct labels, pick that label so that prediction is correct
id_true = id_pred = next(iter(matched))
else:
# prediction incorrect, pick a single label arbitrarily
# This renders class-level performance inaccurate; TODO?
id_pred, id_true = -1, ids_true[0]
preds[i_sample], trues[i_sample] = id_pred, id_true
return id_pred, id_true
preds_batch, trues_batch = zip(*[
set_pred_n_true(out, i_sample) for out, i_sample in zip(outputs_str, idxs)
])
d_log: Dict[str, Any] = dict(
progress=f'{n_computed:>{len(str(n_dset))}}/{n_dset}',
sequence_length=len(inputs['input_ids'][0]),
batch_size=f'{len(idxs):>{len(str(batch_size))}}/{batch_size}',
n_acc=sum(p == t for p, t in zip(preds_batch, trues_batch))
)
it.set_postfix({k: pl.i(v) for k, v in d_log.items()})
d_log.update(dict(ids_pred=list(preds_batch), ids_true=list(trues_batch)))
logger_fl.info(pl.nc(d_log))
def check_labels_filled(lbs): # sanity check, every index is assigned a label
return np.all((-1 <= lbs) & (lbs < len(labels)))
assert check_labels_filled(trues) and check_labels_filled(preds)
# note `-1` is not an actual label (its support is 0) - included so sklearn gets the full label specification
# **note** because of the -1 label, the `macro avg` row is not accurate;
# it's included only for computing the global accuracy
args = dict(
labels=[-1, *range(len(labels))], target_names=['Label not in dataset', *labels],
zero_division=0, output_dict=True # disables warning
)
report = classification_report(trues, preds, **args)
acc = f'{report["accuracy"]:.3f}'
logger.info(f'{pl.i(dnm_)} Classification Accuracy: {pl.i(acc)}')
logger_fl.info(f'{dnm_} Classification Accuracy: {acc}')
df = pd.DataFrame(report).transpose()
path = os_join(output_path, f'{dnm_}.csv')
df.to_csv(path)
logger.info(f'Evaluation on {pl.i(dnm_)} written to CSV at {pl.i(path)}')
logger_fl.info(f'Evaluation on {dnm_} written to CSV at {path}')
if __name__ == '__main__':
evaluate(domain='in', dataset_name='emotion')
| zeroshot-classifier | /zeroshot-classifier-0.2.3.tar.gz/zeroshot_classifier/models/gpt_neo.py | gpt_neo.py
import math
import pickle
import random
from typing import List, Dict
from os.path import join as os_join
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from tqdm import tqdm
from stefutil import *
from zeroshot_classifier.util import *
from zeroshot_classifier.util.load_data import get_datasets, binary_cls_format
import zeroshot_classifier.util.utcd as utcd_util
from zeroshot_classifier.models.architecture import BinaryBertCrossEncoder
from zeroshot_classifier.models._bert_based_models import HF_MODEL_NAME, parse_args
MODEL_NAME = 'Binary BERT'
if __name__ == '__main__':
import os
import numpy as np
import transformers
seed = sconfig('random-seed')
args = parse_args()
cmd = args.command
log_nm = f'{MODEL_NAME} {args.command.capitalize()}'
logger = get_logger(log_nm)
if cmd == 'train':
output_path, output_dir, sampling, mode = args.output, args.output_dir, args.sampling, args.mode
normalize_aspect = args.normalize_aspect
lr, bsz, n_ep = args.learning_rate, args.batch_size, args.epochs
init_model_name_or_path = args.init_model_name_or_path
# best_metric = 'accuracy'
best_metric = 'loss'
output_path = map_model_output_path(
model_name=MODEL_NAME.replace(' ', '-'), output_path=output_path, output_dir=output_dir,
mode=mode, sampling=sampling, normalize_aspect=normalize_aspect
)
logger_fl = get_logger(log_nm, kind='file-write', file_path=os_join(output_path, 'training.log'))
dset_args = dict(normalize_aspect=seed) if normalize_aspect else dict()
data = get_datasets(domain='in', **dset_args)
dataset_names = [dnm for dnm, d_dset in sconfig('UTCD.datasets').items() if d_dset['domain'] == 'in']
logger.info(f'Processing datasets {pl.i(dataset_names)} for training... ')
logger_fl.info(f'Processing datasets {pl.nc(dataset_names)} for training... ')
train, val, test = [], [], []
it = tqdm(dataset_names, desc=f'Formatting into Binary CLS w/ {pl.i(dict(sampling=sampling, mode=mode))}')
for dataset_name in it:
dset = data[dataset_name]
fmt_args = dict(sampling=sampling, mode=mode)  # renamed to avoid shadowing the parsed CLI `args`
for split, ds in zip(['train', 'val', 'test'], [train, val, test]):
it.set_postfix(dnm=f'{pl.i(dataset_name)}-{pl.i(split)}')
ds.extend(binary_cls_format(dset, **fmt_args, split=split))
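# `binary_cls_format` turns each multi-class example into text-label pairs for binary classification.
# Illustrative sketch (hypothetical sample, not taken from the datasets; the real pair/prompt format
# is defined in `zeroshot_classifier.util.load_data`):
#   text = 'i am so happy today', gold label = 'joy', label options = ['joy', 'anger', 'sadness']
#   -> ('i am so happy today', 'joy')    label 1  (match / positive)
#   -> ('i am so happy today', 'anger')  label 0  (non-match / negative), possibly subsampled per `sampling`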
d_log = dict(init_model_name_or_path=init_model_name_or_path)
md_nm = init_model_name_or_path
if mode == 'explicit':
assert init_model_name_or_path != HF_MODEL_NAME # sanity check
if init_model_name_or_path != HF_MODEL_NAME:
# loading from explicit pre-training local weights,
# the classification head would be ignored for classifying 3 classes
path = os_join(get_base_path(), u.proj_dir, u.model_dir, init_model_name_or_path)
if os.path.exists(path):
md_nm = path
d_log['files'] = os.listdir(path)
logger.info(f'Loading model with {pl.i(d_log)}...')
logger_fl.info(f'Loading model with {pl.nc(d_log)}...')
model = BinaryBertCrossEncoder(md_nm, num_labels=2, automodel_args=dict(ignore_mismatched_sizes=True))
add_tok_arg = utcd_util.get_add_special_tokens_args(model.tokenizer, train_strategy=mode)
if add_tok_arg:
logger.info(f'Adding special tokens {pl.i(add_tok_arg)} to tokenizer... ')
logger_fl.info(f'Adding special tokens {pl.nc(add_tok_arg)} to tokenizer... ')
model.tokenizer.add_special_tokens(special_tokens_dict=add_tok_arg)
model.model.resize_token_embeddings(len(model.tokenizer))
transformers.logging.set_verbosity_error() # disables `longest_first` warning
random.seed(seed)
random.shuffle(train)
train_dataloader = DataLoader(train, shuffle=True, batch_size=bsz)
val_dataloader = DataLoader(val, shuffle=False, batch_size=bsz)
warmup_steps = math.ceil(len(train_dataloader) * n_ep * 0.1) # 10% of train data for warm-up
d_log = {
'#data': len(train), 'learning_rate': lr, 'batch size': bsz, 'epochs': n_ep, 'warmup steps': warmup_steps,
'best_model_metric': best_metric, 'output path': output_path
}
logger.info(f'Training w/ {pl.i(d_log)}... ')
logger_fl.info(f'Training w/ {pl.nc(d_log)}... ')
transformers.set_seed(seed)
model.fit(
train_dataloader=train_dataloader,
val_dataloader=val_dataloader,
epochs=n_ep,
optimizer_params=dict(lr=lr),
warmup_steps=warmup_steps,
output_path=output_path,
logger_fl=logger_fl,
best_model_metric=best_metric
)
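# Example invocations (hedged: flag spellings follow the attribute names read off `parse_args` in
# `_bert_based_models`, which isn't shown here, so treat the exact flags as an assumption):
#   python zeroshot_classifier/models/binary_bert.py train --mode vanilla --sampling <strategy> --learning_rate 2e-5
#   python zeroshot_classifier/models/binary_bert.py test --mode vanilla --domain in --model_name_or_path <trained-model-dir>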
elif cmd == 'test':
WITH_EVAL_LOSS = False
mode, domain, model_name_or_path, bsz = args.mode, args.domain, args.model_name_or_path, args.batch_size
split = 'test'
out_path = os_join(u.eval_path, model_name_or_path, domain2eval_dir_nm(domain))
os.makedirs(out_path, exist_ok=True)
data = get_datasets(domain=domain)
model_path = os_join(get_base_path(), u.proj_dir, u.model_dir, model_name_or_path)
if not os.path.exists(model_path):
model_path = model_name_or_path # A huggingface model
logger.info(f'Loading model from path {pl.i(model_path)}... ')
model = BinaryBertCrossEncoder(model_path) # load model
logger = get_logger(f'{MODEL_NAME} Eval')
d_log = dict(mode=mode, domain=domain, batch_size=bsz, model_name_or_path=model_name_or_path)
logger.info(f'Evaluating Binary Bert with {pl.i(d_log)} and saving to {pl.i(out_path)}... ')
eval_loss: Dict[str, np.ndarray] = dict()  # gives a sense of how badly the model errs on each prediction
dataset_names = [dnm for dnm, d_dset in sconfig('UTCD.datasets').items() if d_dset['domain'] == domain]
for dnm in dataset_names: # loop through all datasets
dset = data[dnm]
pairs, aspect = dset[split], dset['aspect']
d_dset = sconfig(f'UTCD.datasets.{dnm}.splits.{split}')
label_options, multi_label = d_dset['labels'], d_dset['multi_label']
n_options = len(label_options)
label2id = {lbl: i for i, lbl in enumerate(label_options)}
n_txt = sconfig(f'UTCD.datasets.{dnm}.splits.{split}.n_text')
d_log = {'#text': n_txt, '#label': n_options, 'labels': label_options}
logger.info(f'Evaluating {pl.i(dnm)} with {pl.i(d_log)}...')
arr_preds, arr_labels = np.empty(n_txt, dtype=int), np.empty(n_txt, dtype=int)
arr_loss = torch.empty(n_txt, dtype=torch.float32) if WITH_EVAL_LOSS else None
txt_n_lbs2query = TrainStrategy2PairMap(train_strategy=mode)(aspect)
gen = group_n(pairs.items(), n=bsz)
# loop through each test example
it = tqdm(gen, desc=F'Evaluating {pl.i(dnm)}', unit='group', total=math.ceil(n_txt/bsz))
for i_grp, group in enumerate(it):
txts_, lst_labels = zip(*group)
lst_labels: List[List[int]] = [[label2id[lb] for lb in labels] for labels in lst_labels]
query = sum([txt_n_lbs2query(t, label_options) for t in txts_], start=[]) # (n_options x bsz, 2)
# probability for positive class
logits = model.predict(query, batch_size=bsz, apply_softmax=True, convert_to_tensor=True)[:, 1]
logits = logits.reshape(-1, n_options)
preds = logits.argmax(axis=1)
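# Shape sketch for the two lines above: `query` holds n_options pairs per text, so `logits` starts as
# (bsz * n_options,) positive-class probabilities; reshaping to (bsz, n_options) and taking argmax per
# row picks, for every text, the label whose (text, label) pair the binary model scores highest.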
trues = torch.empty_like(preds)
for i, pred, labels in zip(range(bsz), preds, lst_labels):
# if false prediction, pick one of the correct labels arbitrarily
trues[i] = pred if pred in labels else labels[0]
idx_strt = i_grp*bsz
arr_preds[idx_strt:idx_strt+bsz], arr_labels[idx_strt:idx_strt+bsz] = preds.cpu(), trues.cpu()
if WITH_EVAL_LOSS:
if multi_label and any(len(lbs) > 1 for lbs in lst_labels):
# vectorizing is complicated here; run each sample separately since this is an edge case anyway
for i, lbs in enumerate(lst_labels):
target = torch.tensor(lbs, device=logits.device)
if len(lbs) > 1:
loss = max(F.cross_entropy(logits[i].repeat(len(lbs), 1), target, reduction='none'))
else:
loss = F.cross_entropy(logits[None, i], target) # dummy batch dimension
arr_loss[idx_strt+i] = loss
else:
arr_loss[idx_strt:idx_strt+bsz] = F.cross_entropy(logits, trues, reduction='none')
if WITH_EVAL_LOSS:
eval_loss[dnm] = arr_loss.numpy()
args = dict(zero_division=0, target_names=label_options, output_dict=True) # disables warning
df, acc = eval_res2df(arr_labels, arr_preds, report_args=args)
logger.info(f'{pl.i(dnm)} Classification Accuracy: {pl.i(acc)}')
df.to_csv(os_join(out_path, f'{dnm}.csv'))
if WITH_EVAL_LOSS:
with open(os_join(out_path, 'eval_loss.pkl'), 'wb') as f:
pickle.dump(eval_loss, f)
| zeroshot-classifier | /zeroshot-classifier-0.2.3.tar.gz/zeroshot_classifier/models/binary_bert.py | binary_bert.py