# zcode system discount

```bash
pip3 install zcode-system-discount
```
zcode-system-discount
/zcode%20system%20discount-1.tar.gz/zcode system discount-1/README.md
README.md
# zcoinbase

A simple Python client that implements simple interfaces to the Coinbase Pro API. This project uses minimal libraries to interface with the Coinbase API directly; it does not depend on any other Coinbase Python libraries (it directly interfaces with the REST, Websocket and FIX APIs).

## Coinbase API

If you plan on using this API, you should familiarize yourself with the Coinbase Pro API here: https://docs.pro.coinbase.com/

## Using

zcoinbase is hosted on PyPI, so the easiest way to get started is to run: `pip install zcoinbase`

## Notable Features

* Easy-to-use, function-based Websocket client with support for functional programming of websocket messages.
* Features a Websocket-based real-time order book on the websocket API.
* Historical Data Downloader, which should make it easy to download historical data from the markets.

### Examples

Examples of how to use zcoinbase can be found in the `examples` directory.

## Warning

This API is in a highly experimental, developmental state; use at your own risk.

## Under Development

In order of priority, here are the TODOs for this project:

- Simple client for dealing with Websocket messages (real-time market)
  - The idea is to provide a real-time interface to the market that does not require *any* knowledge of how the Websocket API works
- FIX API
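For orientation, here is a minimal sketch of the kind of direct REST call that zcoinbase wraps, using only `requests`. The endpoint and product id follow the (now legacy) Coinbase Pro docs linked above; they are assumptions for illustration, not part of zcoinbase's own interface.

```python
# Illustrative only: a direct REST call of the kind zcoinbase wraps.
# Endpoint and product id are taken from the linked Coinbase Pro docs
# (a legacy API), not from zcoinbase itself.
import requests

resp = requests.get(
    "https://api.pro.coinbase.com/products/BTC-USD/ticker", timeout=10)
resp.raise_for_status()
ticker = resp.json()
print(ticker.get("price"))
```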
zcoinbase
/zcoinbase-0.0.9.tar.gz/zcoinbase-0.0.9/README.md
README.md
ZCollection
===========

This project is a Python library allowing manipulating data partitioned into a **collection** of `Zarr <https://zarr.readthedocs.io/en/stable/>`_ groups.

This collection allows dividing a dataset into several partitions to facilitate acquisitions or updates made from new products. Possible data partitioning is: by **date** (hour, day, month, etc.) or by **sequence**.

A collection partitioned by date, with a monthly resolution, may look like this on disk:

.. code-block:: text

    collection/
    ├── year=2022
    │   ├── month=01/
    │   │   ├── time/
    │   │   │   ├── 0.0
    │   │   │   ├── .zarray
    │   │   │   └── .zattrs
    │   │   ├── var1/
    │   │   │   ├── 0.0
    │   │   │   ├── .zarray
    │   │   │   └── .zattrs
    │   │   ├── .zattrs
    │   │   ├── .zgroup
    │   │   └── .zmetadata
    │   └── month=02/
    │       ├── time/
    │       │   ├── 0.0
    │       │   ├── .zarray
    │       │   └── .zattrs
    │       ├── var1/
    │       │   ├── 0.0
    │       │   ├── .zarray
    │       │   └── .zattrs
    │       ├── .zattrs
    │       ├── .zgroup
    │       └── .zmetadata
    └── .zcollection

Partition updates can be set to overwrite existing data with new ones or to update them using different **strategies**.

The `Dask library <https://dask.org/>`_ handles the data so that processing scales efficiently.

It is possible to create views on a reference collection, to add and modify variables contained in a reference collection, which remains accessible read-only.

This library can store data on POSIX, S3, or any other file system supported by the Python library `fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`_. Note, however, that only POSIX and S3 file systems have been tested.
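Concretely, the workflow boils down to the following minimal sketch, condensed from the bundled ``examples/ex_collection.py`` script shipped with this package (an in-memory filesystem and the library's own test dataset are used so the snippet is self-contained):

.. code-block:: python

    # Minimal sketch following examples/ex_collection.py: create a
    # monthly-partitioned collection on an in-memory filesystem.
    import dask.distributed
    import fsspec

    import zcollection
    import zcollection.tests.data

    fs = fsspec.filesystem('memory')
    client = dask.distributed.Client(
        dask.distributed.LocalCluster(processes=False))

    # Test dataset shipped with the library, used here only for illustration.
    zds = next(zcollection.tests.data.create_test_dataset_with_fillvalue())

    collection = zcollection.create_collection(
        'time', zds, zcollection.partitioning.Date(('time', ), resolution='M'),
        '/my_collection', filesystem=fs)
    collection.insert(zds)
    print(collection.load())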
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/README.rst
README.rst
Installation
============

Required dependencies
---------------------

- Python (3.8 or later)
- setuptools
- `dask <https://dask.pydata.org/>`_
- `distributed <https://distributed.dask.org/en/stable/>`_
- `fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`_
- `numcodecs <https://numcodecs.readthedocs.io/en/stable/>`_
- `numpy <https://numpy.org/>`_
- `pyarrow <https://arrow.apache.org/docs/python/>`_
- `xarray <http://xarray.pydata.org/en/stable/>`_
- `zarr <https://zarr.readthedocs.io/en/stable/>`_

.. note::

   `pyarrow` is optional, but required if you want to use the indexing API.

Instructions
------------

Installation via conda and sources
##################################

It is possible to install the latest version from source. First, install the dependencies using conda::

    $ conda install dask distributed fsspec numcodecs numpy pandas pyarrow xarray zarr

Then, clone the repository::

    $ git clone [email protected]:CNES/zcollection.git
    $ cd zcollection

Finally, install the library using pip (it is possible to check out a different branch before installing)::

    $ pip install .

Installation via pip
####################

::

    $ pip install zcollection

Testing
-------

To run the test suite after installing the library, install (via pypi or conda) `pytest <https://pytest.org>`__ and run ``pytest`` in the root directory of the cloned repository.

The unit test process can be modified using options implemented for this project, in addition to the options provided by ``pytest``. The available user options are:

- **s3**: Enable tests on the local S3 server driven by minio. (default: False)
- **memory**: Use a file system in memory instead of the local file system. (default: False)
- **threads_per_worker**: Number of threads for each Dask worker. (default: the number of logical cores of the target platform)
- **n_workers**: Number of cores for each Dask worker. (default: the number of cores of the target platform)

To run the tests using a local S3 server, driven by the ``minio`` software, it is necessary to install the following optional requirements:

- `s3fs <https://github.com/fsspec/s3fs/>`_
- `requests <https://docs.python-requests.org/en/latest/>`_

You will also need to install the ``minio`` program. You can find more information on this web `page <https://min.io/download>`_.

Documentation
-------------

The documentation uses Sphinx and Google-style docstrings. To build the documentation, run ``make html`` in the ``docs`` directory.
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/docs/source/install.rst
install.rst
ZCollection
===========

This project is a Python library manipulating data split into a :py:class:`collection <zcollection.collection.Collection>` of groups stored in `Zarr format <https://zarr.readthedocs.io/en/stable/>`_.

This collection allows dividing a dataset into several partitions to facilitate acquisitions or updates made from new products. Possible data partitioning is: by :py:class:`date <zcollection.partitioning.date.Date>` (hour, day, month, etc.) or by :py:class:`sequence <zcollection.partitioning.sequence.Sequence>`.

A collection partitioned by date, with a monthly resolution, may look like this on disk::

    collection/
    ├── year=2022
    │   ├── month=01/
    │   │   ├── time/
    │   │   │   ├── 0.0
    │   │   │   ├── .zarray
    │   │   │   └── .zattrs
    │   │   ├── var1/
    │   │   │   ├── 0.0
    │   │   │   ├── .zarray
    │   │   │   └── .zattrs
    │   │   ├── .zattrs
    │   │   ├── .zgroup
    │   │   └── .zmetadata
    │   └── month=02/
    │       ├── time/
    │       │   ├── 0.0
    │       │   ├── .zarray
    │       │   └── .zattrs
    │       ├── var1/
    │       │   ├── 0.0
    │       │   ├── .zarray
    │       │   └── .zattrs
    │       ├── .zattrs
    │       ├── .zgroup
    │       └── .zmetadata
    └── .zcollection

Partition updates can be set to overwrite existing data with new ones or to update them using different :py:mod:`strategies <zcollection.merging>`.

The `Dask library <https://dask.org/>`_ handles the data so that processing scales efficiently.

It is possible to create views on a reference collection, to add and modify variables contained in a reference collection, which remains accessible read-only.

This library can store data on POSIX, S3, or any other file system supported by the Python library `fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`_. Note, however, that only POSIX and S3 file systems have been tested.

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   install
   auto_examples/index.rst
   api
   release

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/docs/source/index.rst
index.rst
Release notes
=============

2023.5.0
--------

* Add missing copyrights.
* Modularise code to reduce the number of lines per module.
* Writing variables is limited to the worker being used.
* Improve test coverage.
* #9: Read the version attribute directly from the ``version.py`` module.
* #8: Incomplete overlaps with more than one worker.
* #7: Fix a bug in the update method: if the user has selected multiple partitions, the selected variables must contain the updated variables.
* #6: The parameter name for specifying the number of concurrent inserts is incorrect.
* #3: Add a trim argument to the ``update`` method, like Dask's ``map_overlap``.
* Update the documentation.
* Refactor the code.
* Loading data using Dask or Numpy.
* Variable adds attributes to partitions.

2023.3.2
--------

* Writing a partition with many variables is slow.
* Writing metadata only in the collection's configuration.
* Adding an inter-process lock.
* If a variable has been modified since its initialization, the library throws a specific exception to warn the user.

2023.3.1
--------

* Fixed a compatibility issue with fsspec 2023.3.0.

2023.3.0
--------

* Apply an optional mask before querying an indexer.

2023.2.0
--------

* Synchronize the view with the reference collection.
* Support for Python 3.11.
* Bug fixes.
* Optimization of the insertion of new partitions.
* Copy collections over different file systems.
* Export Dataset to Zarr group.

2022.12.0/2022.12.1
-------------------

Released on December 2, 2022

* Write immutable variables of a dataset into a single group.
* Possibility to update partitions using neighbor partitions (useful for filtering, for example).
* Refactor methods overlapping partitions.
* Update documentation.

2022.10.2/2022.10.1
-------------------

Released on October 13, 2022

* Add compatibility with Python 3.8.

2022.10.0
---------

Released on October 7, 2022

* Added an option to the method ``drop_partitions`` to drop partitions older than a specified time delta relative to the current time.

2022.8.0
--------

Released on August 14, 2022

* Support Python starting with 3.9.
* Refactor convenience functions.
* Refactor dataset & variables modules.
* The indexer can return only the partition keys.
* Optimization of dataset handling.
* Bug fixes.

0.2 / 2020-04-04
----------------

Released on April 4, 2020

* Installation from PyPi.
* Unsigned integers are not handled.

0.1 / 2020-03-30
----------------

Released on March 30, 2020

* First public version.
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/docs/source/release.rst
release.rst
{{ fullname | escape | underline }}

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}
   :show-inheritance:

{% block methods %}
{%- set attr = [] -%}
{%- set meth = [] -%}
{%- set private = [] -%}
{%- set protected = [] -%}
{%- set special = [] -%}
{%- set inherited_meth = [] -%}
{%- set skip = ['__abstractmethods__', '__annotations__', '__dict__',
                '__doc__', '__entries__', '__hash__', '__init__',
                '__members__', '__module__', '__slots__', '__slotnames__',
                '__weakref__'] -%}
{%- for item in methods if not item in skip -%}
{%- if item in inherited_members -%}
{{ inherited_meth.append(item) or "" }}
{%- else -%}
{{ meth.append(item) or "" }}
{%- endif -%}
{%- endfor -%}
{%- for item in members if not item in inherited_members and not item in skip -%}
{%- if item.startswith('__') and item.endswith('__') -%}
{{ special.append(item) or "" }}
{%- elif item.startswith('__') -%}
{{ private.append(item) or "" }}
{%- elif item.startswith('_') -%}
{{ protected.append(item) or "" }}
{%- endif -%}
{%- endfor %}

{%- if attributes %}
.. rubric:: {{ _('Attributes') }}

.. autosummary::
   :toctree:
{% for item in attributes %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{% endif -%}

{%- if meth %}
.. rubric:: {{ _('Public Methods') }}

.. autosummary::
   :toctree:
{% for item in meth %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{% endif -%}

{%- if protected %}
.. rubric:: {{ _('Protected Methods') }}

.. autosummary::
   :toctree:
{% for item in protected %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{% endif -%}

{%- if private %}
.. rubric:: {{ _('Private Methods') }}

.. autosummary::
   :toctree:
{% for item in private %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{% endif -%}

{%- if special %}
.. rubric:: {{ _('Special Methods') }}

.. autosummary::
   :toctree:
{% for item in special %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{%- endif -%}

{%- if inherited_meth %}
.. rubric:: {{ _('Inherited Methods') }}

.. autosummary::
   :toctree:
{% for item in inherited_meth %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{%- endif -%}
{%- endblock -%}
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/docs/source/_templates/autosummary/class.rst
class.rst
{{ fullname | escape | underline }}

.. automodule:: {{ fullname }}

{% block attributes -%}
{% if attributes %}
.. rubric:: {{ ('Module Attributes') }}

.. autosummary::
   :toctree:
{% for item in attributes %}
   {{ item }}
{%- endfor %}
{% endif -%}
{% endblock -%}

{% block classes -%}
{% if classes %}
.. rubric:: {{ ('Classes') }}

.. autosummary::
   :toctree:
{% for item in classes %}
   {{ item }}
{%- endfor %}
{% endif -%}
{% endblock -%}

{% block exceptions -%}
{% if exceptions %}
.. rubric:: {{ ('Exceptions') }}

.. autosummary::
   :toctree:
{% for item in exceptions %}
   {{ item }}
{%- endfor %}
{% endif -%}
{% endblock -%}

{% block functions -%}
{% if functions %}
.. rubric:: {{ ('Functions') }}

.. autosummary::
   :toctree:
{% for item in functions %}
   {{ item }}
{%- endfor %}
{% endif -%}
{% endblock -%}
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/docs/source/_templates/autosummary/module.rst
module.rst
from typing import Iterator, List, Optional, Tuple, Union
import pathlib
import pprint

import dask.distributed
import fsspec
import numpy

import zcollection
import zcollection.indexing
import zcollection.partitioning.tests.data

# %%
# Initialization of the environment
# ---------------------------------
fs = fsspec.filesystem('memory')
cluster = dask.distributed.LocalCluster(processes=False)
client = dask.distributed.Client(cluster)

# %%
# A collection can be indexed. This allows quick access to the data without
# having to browse the entire dataset.
#
# Creating the test collection.
# -----------------------------
#
# For this last example, we will index another data set. This one contains
# measurements of a fictitious satellite on several half-orbits.
zds: zcollection.Dataset = zcollection.Dataset.from_xarray(
    zcollection.partitioning.tests.data.create_test_sequence(5, 20, 10))
print(zds)

# %%
collection: zcollection.Collection = zcollection.create_collection(
    'time',
    zds,
    zcollection.partitioning.Date(('time', ), 'M'),
    partition_base_dir='/one_other_collection',
    filesystem=fs)
collection.insert(zds, merge_callable=zcollection.merging.merge_time_series)

# %%
# Here we have created a collection partitioned by month.
pprint.pprint(fs.listdir('/one_other_collection/year=2000'))


# %%
# Class to implement
# ------------------
#
# The idea of the implementation is to calculate, for each visited partition,
# the slice of data over which a quantity is constant. In our example, we rely
# on the cycle and pass number information. The first method we implement
# detects these constant parts from two vectors containing the cycle and pass
# numbers.
def split_half_orbit(
    cycle_number: numpy.ndarray,
    pass_number: numpy.ndarray,
) -> Iterator[Tuple[int, int]]:
    """Calculate the indexes of the start and stop of each half-orbit.

    Args:
        cycle_number: Cycle numbers.
        pass_number: Pass numbers.

    Returns:
        Iterator of start and stop indexes.
    """
    assert pass_number.shape == cycle_number.shape
    pass_idx = numpy.where(numpy.roll(pass_number, 1) != pass_number)[0]
    cycle_idx = numpy.where(numpy.roll(cycle_number, 1) != cycle_number)[0]
    half_orbit = numpy.unique(
        numpy.concatenate(
            (pass_idx, cycle_idx,
             numpy.array([pass_number.size], dtype='int64'))))
    del pass_idx, cycle_idx

    yield from tuple(zip(half_orbit[:-1], half_orbit[1:]))


# %%
# Now we will compute these constant parts from a dataset contained in a
# partition.
def _half_orbit(
    zds: zcollection.Dataset,
    *args,
    **kwargs,
) -> numpy.ndarray:
    """Return the indexes of the start and stop of each half-orbit.

    Args:
        zds: Dataset stored in a partition to be indexed.

    Returns:
        Structured array of start and stop indexes for each half-orbit.
    """
    pass_number_varname = kwargs.pop('pass_number', 'pass_number')
    cycle_number_varname = kwargs.pop('cycle_number', 'cycle_number')
    pass_number = zds.variables[pass_number_varname].values
    cycle_number = zds.variables[cycle_number_varname].values

    generator = ((
        i0,
        i1,
        cycle_number[i0],
        pass_number[i0],
    ) for i0, i1 in split_half_orbit(cycle_number, pass_number))

    return numpy.fromiter(generator, numpy.dtype(HalfOrbitIndexer.dtype()))


# %%
# Finally, we implement our indexing class. The base class
# (:py:class:`zcollection.indexing.Indexer<zcollection.indexing.abc.Indexer>`)
# implements the index update and the associated queries.
class HalfOrbitIndexer(zcollection.indexing.Indexer):
    """Index collection by half-orbit."""
    #: Column name of the cycle number.
    CYCLE_NUMBER = 'cycle_number'

    #: Column name of the pass number.
    PASS_NUMBER = 'pass_number'

    @classmethod
    def dtype(cls, /, **kwargs) -> List[Tuple[str, str]]:
        """Return the columns of the index.

        Returns:
            A list of (name, type) pairs.
        """
        return super().dtype() + [
            (cls.CYCLE_NUMBER, 'uint16'),
            (cls.PASS_NUMBER, 'uint16'),
        ]

    @classmethod
    def create(
        cls,
        path: Union[pathlib.Path, str],
        zds: zcollection.Collection,
        filesystem: Optional[fsspec.AbstractFileSystem] = None,
        **kwargs,
    ) -> 'HalfOrbitIndexer':
        """Create a new index.

        Args:
            path: The path to the index.
            zds: The collection to be indexed.
            filesystem: The filesystem to use.

        Returns:
            The created index.
        """
        return super()._create(path,
                               zds,
                               meta=dict(attribute=b'value'),
                               filesystem=filesystem)  # type: ignore

    def update(
        self,
        zds: zcollection.Collection,
        partition_size: Optional[int] = None,
        npartitions: Optional[int] = None,
        **kwargs,
    ) -> None:
        """Update the index.

        Args:
            zds: New data stored in the collection to be indexed.
            partition_size: The length of each bag partition.
            npartitions: The number of desired bag partitions.
            cycle_number: The name of the cycle number variable stored in the
                collection. Defaults to "cycle_number".
            pass_number: The name of the pass number variable stored in the
                collection. Defaults to "pass_number".
        """
        super()._update(zds, _half_orbit, partition_size, npartitions,
                        **kwargs)


# %%
# Using the index
# ---------------
#
# Now we can create our index and fill it.
indexer: HalfOrbitIndexer = HalfOrbitIndexer.create('/index.parquet',
                                                    collection,
                                                    filesystem=fs)
indexer.update(collection)

# The following command allows us to view the information stored in our index:
# the first and last indexes of the partition associated with the registered
# half-orbit number and the identifier of the indexed partition.
indexer.table.to_pandas()

# %%
# This index can now be used to load a part of a collection.
selection: zcollection.Dataset | None = collection.load(
    indexer=indexer.query(dict(pass_number=[1, 2])),
    delayed=False,
)
assert selection is not None
selection.to_xarray()

# %%
# Close the local cluster to avoid printing warning messages in the other
# examples.
client.close()
cluster.close()
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/examples/ex_indexing.py
ex_indexing.py
from __future__ import annotations

from typing import Iterator
import datetime
import pprint

import dask.distributed
import fsspec
import numpy

import zcollection
import zcollection.tests.data


# %%
# Initialization of the environment
# ---------------------------------
#
# Before we create our first collection, we will create a dataset to record.
def create_dataset() -> zcollection.Dataset:
    """Create a dataset to record."""
    generator: Iterator[zcollection.Dataset] = \
        zcollection.tests.data.create_test_dataset_with_fillvalue()
    return next(generator)


zds: zcollection.Dataset | None = create_dataset()
assert zds is not None
zds.to_xarray()

# %%
# We will create the file system that we will use. In this example, a file
# system in memory.
fs: fsspec.AbstractFileSystem = fsspec.filesystem('memory')

# %%
# Finally we create a local dask cluster using only threads in order to work
# with the file system stored in memory.
cluster = dask.distributed.LocalCluster(processes=False)
client = dask.distributed.Client(cluster)

# %%
# Creation of the partitioning
# ----------------------------
#
# Before creating our collection, we define the partitioning of our dataset.
# In this example, we will partition the data by ``month`` using the variable
# ``time``.
partition_handler = zcollection.partitioning.Date(('time', ), resolution='M')

# %%
# Finally, we create our collection:
collection: zcollection.Collection = zcollection.create_collection(
    'time', zds, partition_handler, '/my_collection', filesystem=fs)

# %%
# .. note::
#
#    The collection created can be accessed using the following command ::
#
#        >>> collection = zcollection.open_collection("/my_collection",
#        >>>                                          filesystem=fs)
#
# When the collection has been created, a configuration file is created. This
# file contains all the metadata to ensure that all future inserted data will
# have the same features as the existing data (data consistency).
pprint.pprint(collection.metadata.get_config())

# %%
# Now that the collection has been created, we can insert new records.
collection.insert(zds)

# %%
# .. note::
#
#    When inserting, it's possible to specify the :ref:`merge strategy of a
#    partition <merging_datasets>`. By default, the last inserted data
#    overwrite the existing ones. Other strategies can be defined, for
#    example, to update existing data (overwrite the updated data, while
#    keeping the existing ones). This last strategy allows incrementally
#    updating an existing partition. ::
#
#        >>> import zcollection.merging
#        >>> collection.insert(
#        ...     ds, merge_callable=zcollection.merging.merge_time_series)
#
# Let's look at the different partitions thus created.
pprint.pprint(fs.listdir('/my_collection/year=2000'))

# %%
# This collection is composed of several partitions, but it is always handled
# as a single data set.
#
# Loading data
# ------------
#
# To load the dataset, call the method
# :py:meth:`load<zcollection.collection.Collection.load>` on the instance. By
# default, the method loads all partitions stored in the collection.
collection.load(delayed=True)

# %%
# .. note::
#
#    By default, the data is loaded as a :py:class:`dask.array<da.Array>`. It
#    is possible to load the data as a :py:class:`numpy.ndarray` by
#    specifying the parameter ``delayed=False``.
#
# You can also filter the partitions to be considered by filtering the
# partitions using keywords used for partitioning in a valid Python
# expression.
collection.load(filters='year == 2000 and month == 2')

# %%
# You can also use a callback function to filter partitions with a complex
# condition.
collection.load(
    filters=lambda keys: datetime.date(2000, 2, 15) <= datetime.date(
        keys['year'], keys['month'], 1) <= datetime.date(2000, 3, 15))

# %%
# Note that the :py:meth:`load<zcollection.collection.Collection.load>`
# function may return None if no partition has been selected.
assert collection.load(filters='year == 2002 and month == 2') is None

# %%
# Editing variables
# -----------------
#
# .. note::
#
#    The functions for modifying collections are not usable if the collection
#    is :py:meth:`open<zcollection.open_collection>` in read-only mode.
#
# It's possible to delete a variable from a collection.
collection.drop_variable('var2')
collection.load()

# %%
# The variable used for partitioning cannot be deleted.
try:
    collection.drop_variable('time')
except ValueError as exc:
    print(exc)

# %%
# The :py:meth:`add_variable<zcollection.collection.Collection.add_variable>`
# method allows you to add a new variable to the collection.
collection.add_variable(zds.metadata().variables['var2'])

# %%
# The newly created variable is initialized with its default value.
zds = collection.load()
assert zds is not None
zds.variables['var2'].values


# %%
# Finally it's possible to
# :py:meth:`update<zcollection.collection.Collection.update>` the existing
# variables.
#
# In this example, we will alter the variable ``var2`` by setting it to 1
# anywhere the variable ``var1`` is defined.
def ones(zds) -> dict[str, numpy.ndarray]:
    """Returns a variable with ones everywhere."""
    return dict(var2=zds.variables['var1'].values * 0 + 1)


collection.update(ones)  # type: ignore[arg-type]

zds = collection.load()
assert zds is not None
zds.variables['var2'].values


# %%
# .. note::
#
#    The method :py:meth:`update<zcollection.collection.Collection.update>`
#    supports the ``delayed`` parameter. If ``delayed=True``, the function
#    ``ones`` is applied to each partition using a Dask array as container
#    for the variables data stored in the provided dataset. This is the
#    default behavior. If ``delayed=False``, the function ``ones`` is applied
#    to each partition using a Numpy array as container.
#
# Sometimes it is important to know the values of the neighboring partitions.
# This can be done using the
# :py:meth:`update<zcollection.collection.Collection.update>` method with the
# ``depth`` argument. In this example, we will set the variable ``var2`` to 2
# everywhere the processed partition is surrounded by at least one partition,
# -1 if the left partition is missing and -2 if the right partition is
# missing.
#
# .. note::
#
#    ``partition_info`` contains information about the target partition: a
#    tuple with the partitioned dimension and the slice to select the
#    partition. If the start of the slice is 0, it means that the left
#    partition is missing. If the stop of the slice is equal to the length of
#    the given dataset, it means that the right partition is missing.
def twos(ds, partition_info: tuple[str, slice]) -> dict[str, numpy.ndarray]:
    """Returns a variable with twos everywhere if the partition is surrounded
    by partitions on both sides, -1 if the left partition is missing and -2
    if the right partition is missing."""
    data = numpy.zeros(ds.variables['var1'].shape, dtype='int8')
    dim, indices = partition_info
    assert dim == 'num_lines'
    if indices.start != 0:
        data[:] = -1
    elif indices.stop != data.shape[0]:
        data[:] = -2
    else:
        data[:] = 2
    return dict(var2=data)


collection.update(twos, depth=1)  # type: ignore[arg-type]

zds = collection.load()
assert zds is not None
zds.variables['var2'].values

# %%
# Map a function over the collection
# ----------------------------------
#
# It's possible to map a function over the partitions of the collection.
for partition, array in collection.map(lambda ds: (  # type: ignore[arg-type]
        ds['var1'].values + ds['var2'].values)).compute():
    print(f' * partition = {partition}: mean = {array.mean()}')

# %%
# .. note::
#
#    The :py:meth:`map<zcollection.collection.Collection.map>` method is
#    lazy. To compute the result, you need to call the method ``compute``
#    on the returned object.
#
# It's also possible to map a function over the partitions with a number of
# neighboring partitions, like the
# :py:meth:`update<zcollection.collection.Collection.update>` method. To do
# so, use the
# :py:meth:`map_overlap<zcollection.collection.Collection.map_overlap>`
# method.
for partition, array in collection.map_overlap(
        lambda ds: (  # type: ignore[arg-type]
            ds['var1'].values + ds['var2'].values),
        depth=1).compute():
    print(f' * partition = {partition}: mean = {array.mean()}')

# %%
# Close the local cluster to avoid printing warning messages in the other
# examples.
client.close()
cluster.close()
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/examples/ex_collection.py
ex_collection.py
from typing import Iterator
import pprint

import dask.distributed
import fsspec
import numpy

import zcollection
import zcollection.tests.data


# %%
# Initialization of the environment
# ---------------------------------
#
# As in the example of handling
# :ref:`collections <sphx_glr_auto_examples_ex_collection.py>`, we will
# create the test environment and a collection.
def create_dataset() -> zcollection.Dataset:
    """Create a dataset to record."""
    generator: Iterator[zcollection.Dataset] = \
        zcollection.tests.data.create_test_dataset_with_fillvalue()
    return next(generator)


cluster = dask.distributed.LocalCluster(processes=False)
client = dask.distributed.Client(cluster)

zds: zcollection.Dataset | None = create_dataset()
assert zds is not None
fs: fsspec.AbstractFileSystem = fsspec.filesystem('memory')
collection: zcollection.Collection = zcollection.create_collection(
    'time',
    zds,
    zcollection.partitioning.Date(('time', ), resolution='M'),
    '/view_reference',
    filesystem=fs)
collection.insert(zds, merge_callable=zcollection.merging.merge_time_series)

# %%
# Creation of views
# -----------------
#
# A :py:class:`view<zcollection.view.View>` allows you to extend a collection
# (:py:class:`a view reference<zcollection.view.ViewReference>`) that you are
# not allowed to modify.
view: zcollection.View = zcollection.create_view(
    '/my_view',
    zcollection.view.ViewReference('/view_reference', fs),
    filesystem=fs)

# %%
# .. note::
#
#    The created view can be accessed using the following command ::
#
#        >>> view = zcollection.open_view("/my_view", filesystem=fs)
#
# Editing variables
# -----------------
#
# When the view is created, it has no data of its own; it uses all the
# partitions defined in the reference view. You can select the partitions
# used from the reference collection by specifying the keyword argument
# ``filters`` during the creation of the view.
pprint.pprint(fs.listdir('/my_view'))

# %%
# It's not yet possible to read data from the view, as it does not yet have
# any data. To minimize the risk of mismatches with the reference view, the
# data present in the view drives the range of data that can be read.
try:
    view.load()
except ValueError as err:
    print(err)

# %%
# Such a state of the view is not very interesting. But it is possible to
# :py:meth:`add<zcollection.view.View.add_variable>` and modify variables in
# order to enhance the view.
var3_template: zcollection.meta.Variable = zds.metadata().variables['var2']
var3_template.name = 'var3'
view.add_variable(var3_template)
del var3_template

# %%
# This step creates all necessary partitions for the new variable.
pprint.pprint(fs.listdir('/my_view/year=2000'))

# %%
# The new variable is not initialized.
zds = view.load()
assert zds is not None
zds.variables['var3'].values

# %%
# The same principle used by the collection allows to
# :py:meth:`update<zcollection.view.View.update>` the variables.
view.update(
    lambda ds: dict(var3=ds['var1'].values * 0 + 1))  # type: ignore[arg-type]

# %%
# Like the :py:meth:`update<zcollection.collection.Collection.update>` method
# of the collection, the update method of the view allows selecting the
# neighboring partitions with the keyword argument ``depth``.

# %%
zds = view.load()
assert zds is not None
var3: numpy.ndarray = zds['var3'].values
print(var3)

# %%
# **Warning**: The variables of the reference collection cannot be edited.
try:
    view.update(
        lambda ds: dict(var2=ds['var2'].values * 0))  # type: ignore[arg-type]
except ValueError as exc:
    print(str(exc))


# %%
# Sync the view with the reference
# --------------------------------
#
# The view may not be readable anymore if the number of elements in the
# reference collection and in the view is not identical. To avoid this
# problem, the view is automatically synchronized when it is opened. But only
# if the reference collection has been completed (adding new data after the
# existing data) are the data already present in the view kept. The existing
# tables in the view are resized and filled with the defined fill values. If
# you want to know which partitions are synchronized, you have to use the
# following data flow: open the view and ask not to synchronize it
# (``resync=False``), then call the ``sync`` method of the view class to
# obtain a filter allowing selecting all the partitions that have been
# modified.
#
# Let's illustrate this data flow with an example.
#
# First, we create a utility function to resize a dataset.
def resize(ds: zcollection.Dataset, dim: str,
           size: int) -> zcollection.Dataset:
    """Resize a dataset."""

    def new_shape(
        var: zcollection.Variable,
        selected_dim: str,
        new_size: int,
    ) -> tuple[int, ...]:
        """Compute the new shape of a variable."""
        return tuple(new_size if dim == selected_dim else size
                     for dim, size in zip(var.dimensions, var.shape))

    return zcollection.Dataset([
        zcollection.Array(
            name,
            numpy.resize(var.array.compute(), new_shape(var, dim, size)),
            var.dimensions,
            attrs=var.attrs,
            compressor=var.compressor,
            fill_value=var.fill_value,
            filters=var.filters,
        ) for name, var in ds.variables.items()
    ])


# %%
# We then modify the last partition of the reference collection. We start by
# opening the reference collection and loading the last partition.
collection = zcollection.open_collection('/view_reference',
                                         filesystem=fs,
                                         mode='w')
zds = collection.load(
    filters=lambda keys: keys['month'] == 6 and keys['year'] == 2000)
assert zds is not None

# %%
# We create a new time variable, resize the dataset and insert the new time
# values.
time: numpy.ndarray = numpy.arange(
    numpy.datetime64('2000-06-01T00:00:00'),
    numpy.datetime64('2000-06-30T23:59:59'),
    numpy.timedelta64(1, 'h'),
)
zds = resize(zds, 'num_lines', len(time))
zds['time'].values = time

# %%
# Finally, we update the partition in the reference collection.
collection.insert(zds)

# %%
# Now we cannot load the view, because the shape of the last partition is no
# longer consistent between the reference collection and the view.
try:
    view.load()
except ValueError as err:
    print(err)

# %%
# We call the ``sync`` method to resynchronize the view.
filters = view.sync()

# %%
# The method returns a callable that can be used to filter the partitions
# that have been synchronized. You can use this information to perform an
# :py:meth:`update<zcollection.view.View.update>` of the view on the
# synchronized partitions: ::
#
#     view.update(
#         lambda ds: dict(var3=ds['var1'].values * 0 + 1),
#         filters=filters)
#     print(tuple(view.partitions(filters=filters)))

# %%
# The view is now synchronized and can be loaded.
zds = view.load()
assert zds is not None
zds.variables['var3'].values

# %%
# Map a function over the view
# ----------------------------
#
# It's possible to map a function over the partitions of the view.
for partition, array in view.map(lambda ds: (  # type: ignore[arg-type]
        ds['var1'].values + ds['var2'].values)).compute():
    print(f' * partition = {partition}: mean = {array.mean()}')

# %%
# .. seealso::
#
#    See the :py:meth:`map_overlap<zcollection.view.View.map_overlap>` method
#    to apply a function over the partitions of the view while selecting the
#    neighboring partitions.
#
# Drop a variable
# ---------------
#
# A method allows you to
# :py:meth:`drop_variable<zcollection.view.View.drop_variable>` variables
# from the view.
view.drop_variable('var3')

try:
    view.load()
except ValueError as err:
    # The view no longer has a variable.
    print(err)

# %%
# **Warning**: The variables of the reference collection cannot be dropped.
try:
    view.drop_variable('var2')
except ValueError as exc:
    print(str(exc))

# %%
# Close the local cluster to avoid printing warning messages in the other
# examples.
client.close()
cluster.close()
zcollection
/zcollection-2023.5.0.tar.gz/zcollection-2023.5.0/examples/ex_view.py
ex_view.py
[^_^]: Title: Small Inventions Series - zfind

# Background

We use the `find` command all the time, but `find` is really not user-friendly, so I wrote a Python script that wraps `find` to make it friendlier. It can greatly improve efficiency: it lets us complete queries that used to be complex with far less typing, and it prints the underlying command it generates. Let's look at a few examples.

# Examples

Previously, to use `find` to search the current directory for "documents whose name contains XX and whose extension is XX", case-insensitively and following directory symlinks, I had to write very long arguments.

## Case 1: a first taste

For example, to find markdown files in the current directory whose name contains "make", I had to write:

`find -L . -iname "*make*.md" -type f`

By contrast, now I only have to write: `zfind make`

The actual output looks like this:

```bash
➜ interview zfind make
the command is: find -L . -iname "*make*.md" -type f
./writings/cpp_rank/25_0_什么是Cmake_28.md
./writings/cpp-interview/cpp_rank/25_0_什么是Cmake_28.md
./writings/cpp-interview/cpp_basic/18_0_make的用法_11298.md
./writings/cpp-interview/cpp_basic/12_0_make的用法_11291.md
./htmls/cpp-html/make/@makefile写法.htm.md
./htmls/cpp-html/make/@make 命令零基础教程.html.md
./htmls/cpp-html/make/@CMake Tutorial.htm.md
./cpp-interview/cpp_rank/25_0_什么是Cmake_28.md
./cpp-interview/cpp_basic/18_0_make的用法_11298.md
./cpp-interview/cpp_basic/12_0_make的用法_11291.md
```

## Case 2: querying a specific file extension

For example, to find html files in the current directory whose name contains "make", I had to write `find -L . -iname "*make*.html" -type f`.

By contrast, now I only have to write: `zfind make -s html`

The actual output looks like this:

```bash
➜ interview zfind make -s html
the command is: find -L . -iname "*make*.html" -type f
./htmls/cpp-html/make/Make 命令零基础教程.html
./htmls/cpp-html/make/makefile - What is the difference between _make_ and _make all__ - Stack Overflow.html
```

## Case 3: querying multiple file extensions

To find html and htm files whose name contains "make", I had to write two commands: `find -L . -iname "*make*.html" -type f` and `find -L . -iname "*make*.htm" -type f`.

By contrast, now I only have to write: `zfind make -s html+htm`

## Case 4: excluding a specific path

Sometimes I don't want to search a certain path; I can use `-e` to exclude it (e is the first letter of exclude). It always matches fuzzily. If I write:

`zfind find -e blog`

it actually generates this rather complex command:

`the command is: find -L . -iname "*find*.md" -type f -print -o -path "*blog*" -prune`

# Usage

## Method 1

I provide the script below. You only need to:

1. Name it zfind, without the `.py` suffix, and make it executable with `chmod a+x zfind`
2. Put it somewhere on your executable search path
3. Run `zfind KEYWORD` and enjoy

<font color=red>Because I like writing documents in markdown, I make the find command search markdown files by default.</font>

## Method 2

pip install zcommands-zx

# The script

```python
#!/Users/zxzx/.conda/envs/scrapy/bin/python
# change the line above to your local Python interpreter path
# coding=utf-8
import os
import sys

help_txt = """
Use zfind -h to get help
Use zfind KEYWORD to search the current directory for md documents containing KEYWORD
Use zfind KEYWORD -d PATH to search the given directory for md documents containing KEYWORD
Use zfind KEYWORD -d PATH -s SUFFIX to search the given directory for documents with the given suffix containing KEYWORD
"""

######################## prepare variables
search_dir = ''
keyword = ''
suffix = ''
type = ''

args = sys.argv
if len(args) > 1 and args[1] == '-h':
    print(help_txt)
if len(args) >= 2:
    keyword = sys.argv[1]

# capture the remaining optional arguments
opt_args = args[2:]
for i in range(len(opt_args)):
    if opt_args[i] == '-s':
        suffix = opt_args[i + 1]
    if opt_args[i] == '-d':
        search_dir = opt_args[i + 1]
    if opt_args[i] == '-t':
        type = opt_args[i + 1]

######### build the command
search_dir = search_dir or '.'
suffix = suffix or 'md'
type = type or 'f'
if type == 'd':
    suffix = ''
else:
    suffix = '.' + suffix

command = 'find -L {} -iname "*{}*{}" -type {}'.format(search_dir, keyword, suffix, type)
# example: find . -iname "*@make*.md"

######### run the query (follows symlinks)
print("the command is: ", command)
ret = os.popen(command).readlines()
for line in ret:
    print(line, end='')
```

# Shortcomings

None found so far.
zcommands-zx
/zcommands-zx-0.0.3.tar.gz/zcommands-zx-0.0.3/README.md
README.md
# Standard Readme, Python Edition

[![standard-readme compliant](https://img.shields.io/badge/readme%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme)

The Standard Readme style. A README file is usually the first thing people see. It should tell people why they should use your code, how to install it, and how to use it. Standardizing README files makes creating and maintaining them easier. After all, writing a good document is not easy.

This repository contains:

1. A `Python` version of the [generator](https://github.com/RichardLitt/generator-standard-readme) for creating a standard README. (It may still have many problems, but more powerful features are under development.)

## Table of Contents

- [Background](#background)
- [Install](#install)
- [Usage](#usage)
- [Badge](#badge)
- [Example](#example)
- [Related Repositories](#related-repositories)
- [Maintainers](#maintainers)
- [Contributing](#contributing)
- [License](#license)

## Background

If your documentation is complete, people who use your code don't have to read the code itself. This is very important. It lets you separate the interface documentation from the implementation, which means you can change the implementation while keeping the interface and documentation unchanged.

> Remember: it is the documentation, not the code, that defines what a module does. — [Ken Williams, Perl Hackers](http://mathforum.org/ken/perl_modules.html#document)

Writing a README is somewhat hard, and maintaining one over time is even more commendable. If you can shorten this process, you make writing and modifying code easier, you make it clearer whether a change needs to be noted in the documentation, you spend less time wondering whether your original documentation needs updating, and you can devote more time to writing code rather than maintaining documentation.

At the same time, standardization has benefits elsewhere. With a standard, users spend less time searching for the information they need, and tools can be built to gather information from the description, run example code automatically, check licenses, and so on.

The goals of this repository are:

1. A **generator** to quickly scaffold a new README.

## Install

This project uses Python 3.

```sh
$ pip install md2-notfresh
```

> Why the name md2-notfresh? notfresh is my GitHub ID, and md is short for markdown. I originally wanted `md` as the command entry point, but md = mkdir was already taken, so I use `md2` instead.

## Usage

This is a README generation tool. You can use it to quickly generate a README document for an open-source repository. Essentially, it is a customizable markdown template generator; more features are under development.

```sh
Use md2 -h to get help
Use md2 readme to generate a README file in the current directory
Use md2 update-url filename to update the web links in an md file
Use md2 update-space filename to fix the spacing between Latin letters and Chinese characters in an md file
# Prints out the standard-readme spec
```

I also made two helper commands, `md2 update-url` and `md2 update-space`. Here is a quick look at how to use them.

### Helper command usage

- `md2 update-url filename`

If you like writing documents in markdown, one of the annoying things is writing `[anchor text](actual url)` by hand; we prefer to just paste a bare URL. The downside is that when such markdown is published to a blog site, users can't click the link and have to copy the address into the address bar. So I wrote this command to detect bare URLs and update them automatically.

For example, if a.md contains

```
https://pypi.org/project/md2-notfresh/
```

running `md2 update-url a.md` turns its content into

```
[https://pypi.org/project/md2-notfresh/](https://pypi.org/project/md2-notfresh/)
```

If a URL already follows the convention, it is not modified again, so running the command multiple times is safe. It works by detecting content that starts with http://; we generally assume anything starting with http or https is a URL.

- `md2 update-space filename`

If you like writing documents in markdown, another annoyance is the spacing between Chinese and Latin characters. For example, in `Hello张三`, there should be a space between `张三` and `Hello`. This command inserts those spaces automatically. If the text already follows the convention, it is not modified again, so running it multiple times is safe. Use this command with care; make sure you really want a space between Chinese and Latin text.

## Badge

If your project follows Standard-Readme and is hosted on GitHub, we would very much like you to add this badge to your project. It helps more people find the project and adopt Standard-Readme.

Adding the badge is **not mandatory**.

[![standard-readme compliant](https://img.shields.io/badge/readme%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme)

To add the badge to markdown text, use the following code:

```
[![standard-readme compliant](https://img.shields.io/badge/readme%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme)
```

## Example

To see how we recommend applying the spec, see [example-readmes](example-readmes/).

## Related Repositories

- [Art of Readme](https://github.com/noffle/art-of-readme) — 💌 The art of writing a high-quality README.
- [open-source-template](https://github.com/davidbgk/open-source-template/) — A README template that encourages open-source participation.

## Maintainers

[@notfresh](https://github.com/notfresh).

## Contributing

Just email me at [email protected]

Standard Readme follows the [Contributor Covenant](http://contributor-covenant.org/version/1/3/0/) code of conduct.

### Contributors

Thanks to everyone who has participated in the project:

<a href="graphs/contributors"><img src="https://opencollective.com/standard-readme/contributors.svg?width=890&button=false" /></a>

## License

[MIT](LICENSE) © notfresh
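The bare-URL rewriting that `md2 update-url` performs can be sketched in a few lines of Python. This is only an illustration of the behavior described above, not the package's actual implementation; the regex and the function name are my own.

```python
import re

def update_url(text: str) -> str:
    """Wrap bare http(s) URLs in markdown [text](url) links.

    Illustrative sketch of the behavior described above, not the actual
    md2 implementation. URLs already inside a markdown link are skipped,
    so running it repeatedly is safe.
    """
    # Match URLs not immediately preceded by '(' or '[' (i.e. not already
    # part of a markdown link).
    pattern = re.compile(r'(?<![(\[])(https?://\S+)')
    return pattern.sub(r'[\1](\1)', text)

print(update_url('https://pypi.org/project/md2-notfresh/'))
# -> [https://pypi.org/project/md2-notfresh/](https://pypi.org/project/md2-notfresh/)
```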
zcommands-zx
/zcommands-zx-0.0.3.tar.gz/zcommands-zx-0.0.3/README-backup.md
README-backup.md
import os
import sys


def show_help():
    help_txt = """
    Use zfind -h to get help
    Use zfind KEYWORD to search the current directory for md documents containing KEYWORD
    Use zfind KEYWORD -d PATH to search the given directory for md documents containing KEYWORD
    Use zfind KEYWORD -d PATH -s SUFFIX to search the given directory for documents with the given suffix
    Use zfind KEYWORD -d PATH -s SUFFIX -t TYPE (d: directory, f: file) to search the given directory by type
    Use zfind KEYWORD -e PATH to exclude paths you do not want to search

    Tip: to search several suffixes at once, join them with `+` without spaces.
    For example, zfind ok -s html+htm+md generates three commands:
        (1) find -L . -iname "*ok*.html" -type f
        (2) find -L . -iname "*ok*.htm" -type f
        (3) find -L . -iname "*ok*.md" -type f

    Known issue: dangling symlinks cannot be detected yet.
    """
    print(help_txt)


######################## prepare variables
search_dir = ''
keyword = ''
suffix = ''
type = ''
exclude = ''

args = sys.argv
if len(args) == 1:
    show_help()
    exit(0)
if len(args) > 1 and args[1] == '-h':
    show_help()
    exit(0)
if len(args) >= 2:
    keyword = sys.argv[1]

# capture the remaining optional arguments
opt_args = args[2:]
for i in range(len(opt_args)):
    if opt_args[i] == '-s':
        suffix = opt_args[i + 1]
    if opt_args[i] == '-d':
        search_dir = opt_args[i + 1]
    if opt_args[i] == '-t':
        type = opt_args[i + 1]
    if opt_args[i] == '-e':
        exclude = opt_args[i + 1]

######### prepare the command arguments
search_dir = search_dir or '.'
suffix = suffix or 'md'
type = type or 'f'
if type == 'd':
    # Directories are matched by name only, without a suffix.
    suffix = ''
else:
    if '+' in suffix:
        suffixes = suffix.split('+')
        suffixes = list(set(suffixes))  # de-duplicate
        suffixes = ['.' + item for item in suffixes]
        suffix = suffixes
    else:
        suffix = '.' + suffix


def islist(a):
    from typing import List
    return isinstance(a, List)


def make_command(keyword, search_dir, suffix, type, exclude):
    # Build one find command per suffix. `-L` follows symlinks and
    # `-iname` makes the match case-insensitive.
    # example: find -L . -iname "*@make*.md" -type f -print
    # Note: suffixes arrive dot-prefixed (or empty for directories), so the
    # pattern is "*{keyword}*{suffix}" without an extra dot.
    command = []
    if islist(suffix):
        for suffix_part in suffix:
            command_part = 'find -L {} -iname "*{}*{}" -type {} -print '.format(
                search_dir, keyword, suffix_part, type)
            if exclude:
                command_part += ' -o -path "*{}*" -prune'.format(exclude)
            command.append(command_part)
    else:
        command = 'find -L {} -iname "*{}*{}" -type {} -print '.format(
            search_dir, keyword, suffix, type)
        if exclude:
            command += ' -o -path "*{}*" -prune'.format(exclude)
    return command


def exec_command(command):
    if islist(command):
        ret = []
        for command_part in command:
            print("the command is: ", command_part)
            ret_part = os.popen(command_part).readlines()
            ret.extend(ret_part)
    else:
        print("the command is: ", command)
        ret = os.popen(command).readlines()
    return ret


def main():
    command = make_command(keyword, search_dir, suffix, type, exclude)
    ret = exec_command(command)
    for line in ret:
        print(line, end='')
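A hypothetical usage sketch of `make_command` from this module. Note that the module parses `sys.argv` at import time (and exits when no arguments are given), so a fake argv is set before importing; the keyword and exclude path here are made-up values for illustration.

```python
# Hypothetical usage sketch; the module parses sys.argv at import time,
# so we set a fake argv before importing it.
import sys
sys.argv = ['zfind', 'ok']

from zfind import make_command

# Suffixes are passed dot-prefixed, as the module prepares them.
for cmd in make_command('ok', '.', ['.html', '.htm'], 'f', 'blog'):
    print(cmd)
# Generated commands look like:
# find -L . -iname "*ok*.html" -type f -print  -o -path "*blog*" -prune
```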
zcommands-zx
/zcommands-zx-0.0.3.tar.gz/zcommands-zx-0.0.3/src/zfind/__init__.py
__init__.py
# zcommon.py

A collection of common methods and utils.

## Note

The methods and utils in this package are used in other repositories and are packaged to allow sharing of base code. Utils in this package, though well tested, do not come with any help documentation. If there are items in this repo that would serve you better as a standalone package, please let me know.

Some packages were already exported, such as:

1. https://github.com/LamaAni/TicTocTimer
1. https://github.com/LamaAni/MatchPattern
1. https://github.com/LamaAni/zthreading.py

# Install

```shell
pip install zcommon
```

## From the git repo directly

To install from the master branch:

```shell
pip install git+https://github.com/LamaAni/zcommon.py@master
```

To install from a release (tag):

```shell
pip install git+https://github.com/LamaAni/zcommon.py@[tag]
```

# Contribution

Feel free to ping me in issues or directly on LinkedIn to contribute.

# Licence

Copyright © `Zav Shotan` and other [contributors](https://github.com/LamaAni/zcommon.py/graphs/contributors).
It is free software, released under the MIT licence, and may be redistributed under the terms specified in `LICENSE`.
zcommon
/zcommon-0.1.2.tar.gz/zcommon-0.1.2/README.md
README.md
==================================
Change log for zconfig_watchedfile
==================================

2.0 (2023-08-17)
================

- Drop support for all Python versions older than 3.8.
- Add support for Python 3.9, 3.10, 3.11.

1.2 (2019-12-04)
================

- Migrated to github.
- Add support for Python 3.7 and 3.8.

1.1 (2019-01-25)
================

- Make `setup.py` compatible with newer `setuptools` versions.
- Drop support for Python 2.6.

1.0 (2013-11-29)
================

- Initial release.
zconfig-watchedfile
/zconfig_watchedfile-2.0.tar.gz/zconfig_watchedfile-2.0/CHANGES.rst
CHANGES.rst
===================
zconfig_watchedfile
===================

Provides a ZConfig statement to register a logging handler that uses a
`WatchedFileHandler`_, which is helpful for integrating with an external
logrotate service::

    %import zconfig_watchedfile
    <logger>
      name example
      <watchedfile>
        path /path/to/logfile.log
      </watchedfile>
    </logger>

The ``<watchedfile>`` supports both the default ZConfig settings for handlers
(formatter, dateformat, level) and the parameters of `WatchedFileHandler`_
(mode, encoding, delay).

This package is compatible with Python versions 3.8 up to 3.11.

.. _`WatchedFileHandler`: https://docs.python.org/3.11/library/logging.handlers.html#watchedfilehandler
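For reference, here is a plain standard-library sketch of what the ZConfig statement above wires up; this bypasses ZConfig entirely and is only meant to show the underlying handler (the path and format string are placeholders):

.. code-block:: python

    # Stdlib-only sketch of the configuration above: a logger whose file
    # handler reopens the file when logrotate moves it.
    import logging
    import logging.handlers

    handler = logging.handlers.WatchedFileHandler(
        '/path/to/logfile.log', mode='a', encoding='utf-8', delay=False)
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

    logger = logging.getLogger('example')
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.info('logging to a watched file')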
zconfig-watchedfile
/zconfig_watchedfile-2.0.tar.gz/zconfig_watchedfile-2.0/README.rst
README.rst
import configparser
import collections

DEFAULT_ZSEP = ' : '
REVERSED = ('.',)


class ZDictError(Exception):
    """Base class for `ZDict` Exceptions."""


class ZKeyError(ZDictError, KeyError):
    """Raised when no zkey is found."""

    def __init__(self, key):
        msg = "No key %r found in shortnames and longnames." % (key,)
        super().__init__(msg)


class DuplicateZKeyError(ZDictError):
    """Raised when duplicate zkeys are found."""

    def __init__(self, new, old):
        msg = "Duplicate zkeys: %r. %r already exists." % (new, old)
        super().__init__(msg)


class RecursiveZkeyError(ZDictError):
    """Raised when a circular zkey structure is found.

    cf. '[aa : bb]', '[bb : cc]', '[cc : aa]'
    """

    def __init__(self, key):
        msg = "Recursive zkey detected: %r." % (key,)
        super().__init__(msg)


class Error(Exception):
    """Base class for `ZConfigParser` Exceptions."""


class NoZSectionError(Error, configparser.NoSectionError):
    """Raised when no zsection is found."""

    def __init__(self, section):
        super().__init__('No zsection: %r' % (section,))


class NoZOptionError(Error, configparser.NoOptionError):
    """Raised when no option in a zsection is found."""

    def __init__(self, option, section):
        configparser.Error.__init__(
            self, 'No zoption: %r in zsection: %r' % (option, section))
        self.option = option
        self.section = section
        self.args = (option, section)


# not used
# class DuplicateZSectionError(Error, configparser.DuplicateSectionError):
#     """Raised when duplicate zsections are found.
#
#     cf. `ConfigParser.DuplicateSectionError` provides
#     'source' and 'lineno' information when reading from a file.
#     """


class ZDict(collections.OrderedDict):
    """A custom dictionary used in `ZConfigParser`.

    It creates and keeps internal zsection dependency dictionaries.
    (They are ordinary ``dict``). Without this, `ZConfigParser` has to
    search all sections all the time and is very slow.
    """

    def __init__(self, *args, **kwargs):
        self.ZSEP = kwargs.pop('ZSEP', DEFAULT_ZSEP)
        self.zdata = dict()
        self._zparents = dict()  # used for validation
        super().__init__(*args, **kwargs)

    def _zsplit(self, key):
        keys = key.split(self.ZSEP)
        if self.ZSEP in REVERSED:
            keys.reverse()
        return keys

    def _zkey(self, key):
        if key in self:
            return key
        if key in self.zdata:
            return self.zdata[key]
        raise ZKeyError(key)

    def _get_shortnames(self, key, collected=None):
        if collected is None:
            collected = []
        longname = self._zkey(key)
        shortnames = self._zsplit(longname)
        shortname = shortnames[0]
        if shortname in collected:
            raise RecursiveZkeyError(shortname)
        collected.append(shortname)
        if len(shortnames) > 1:
            for key in shortnames[1:]:
                self._get_shortnames(key, collected)
        return collected

    def __setitem__(self, key, value):
        """When setting, the dictionary memorizes the zsection structure.

        Keys must be longnames.
        Used when `ConfigParser` reads files, dicts, etc..
        """
        shortnames = self._zsplit(key)
        shortname = shortnames[0]
        if len(shortnames) > 1:
            old = self._zparents.get(shortname)
            if old and not old == shortnames:
                raise DuplicateZKeyError(shortnames, old)
            self.zdata[shortname] = key
            self._zparents[shortname] = shortnames
        super().__setitem__(key, value)

    def zget(self, key):
        all_shortnames = self._get_shortnames(key)
        longnames = [self._zkey(s) for s in all_shortnames]
        values = [self[lo] for lo in longnames]
        return values

    def zkeys(self):
        """Collect all shortnames and longnames.

        Two dictionary keys are combined (sections and zsections),
        so the key ordering of the `OrderedDict` part is not preserved.
        """
        # `KeysView` is a subclass of `set`.
        keys = self.keys() | self.zdata.keys()
        return collections.abc.KeysView(keys)

    def zcontains(self, key):
        return key in self.zkeys()

    def __repr__(self):
        return super().__repr__()


class ZDictGen(object):
    """A supplement class needed to create a `ZSEP` pre-initialized `ZDict`.

    To adjust for the `ConfigParser` initialization argument `dict_type`.
    """

    def __init__(self, *args, **kwargs):
        self._ZSEP = kwargs.pop('ZSEP', DEFAULT_ZSEP)
        self._args = args
        self._kwargs = kwargs

    def __call__(self):
        return ZDict(*self._args, ZSEP=self._ZSEP, **self._kwargs)


class ZConfigParser(configparser.ConfigParser):
    """ConfigParser, plus a section inheritance feature.

    E.g. section ``[aa : bb]`` becomes ``[aa]``, and inherits and overrides
    section [bb]. The default separator word is ' : ', exactly one space
    before and after ':'.
    """

    def __init__(self, *args, **kwargs):
        if len(args) > 1 or 'dict_type' in kwargs:
            msg = "you cannot assign 'dict_type' in ZConfigParser"
            raise ValueError(msg)
        self.ZSEP = kwargs.pop('ZSEP', DEFAULT_ZSEP)
        zd = ZDictGen(ZSEP=self.ZSEP)
        super().__init__(*args, dict_type=zd, **kwargs)

    def get(self, section, option, **kwargs):
        """Override `ConfigParser`'s method.

        `ZConfigParser` only wraps Exceptions in `get`. Other 'get' methods
        (`getint` etc.) might leak (raise) `ConfigParser`'s Exceptions.
        """
        try:
            return super().get(section, option, **kwargs)
        except configparser.NoSectionError:
            raise NoZSectionError(section)
        except configparser.NoOptionError:
            raise NoZOptionError(option, section)

    def _unify_values(self, section, vars):
        """Override `ConfigParser`'s method.

        ConfigParser's `.get()` calls this function. The code is mostly the
        same as the original, just inserting a list of dictionaries instead
        of a single dictionary (sectiondict).
        """
        sectiondict = [{}]
        try:
            sectiondict = self._sections.zget(section)
        except ZKeyError:
            if section != self.default_section:
                raise NoZSectionError(section)
        vardict = {}
        if vars:
            for key, value in vars.items():
                if value is not None:
                    value = str(value)
                vardict[self.optionxform(key)] = value
        return collections.ChainMap(vardict, *sectiondict, self._defaults)

    def zsections(self):
        """Return all section shortnames and longnames."""
        return self._sections.zkeys()

    def has_zsection(self, section):
        """Check a section name (whether short or long)."""
        return self._sections.zcontains(section)

    def has_zoption(self, section, option):
        """Check an option name in a zsection (whether short or long)."""
        try:
            self.get(section, option)
            return True
        except (NoZSectionError, NoZOptionError, ZKeyError):
            return False
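A minimal usage sketch, grounded in the `ZConfigParser` docstring above: `[dev : base]` inherits and overrides `[base]`. Section and option names here are made up for illustration.

```python
# Minimal sketch of the section inheritance described in the docstring.
from zconfigparser import ZConfigParser

cfg = ZConfigParser()
cfg.read_string("""
[base]
host = localhost
port = 8080

[dev : base]
port = 9090
""")

print(cfg.get('dev', 'host'))  # 'localhost', inherited from [base]
print(cfg.get('dev', 'port'))  # '9090', overridden in [dev]
```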
zconfigparser
/zconfigparser-0.1.0.tar.gz/zconfigparser-0.1.0/zconfigparser.py
zconfigparser.py
======== ZContact ======== The Online Contact Manager -------------------------- ZContact is an online contact management application built on the Zope3 web application framework. Below are instructions for managing ZContact on Ubuntu Linux. With some tweaks, this might even work on Mac OSX and Windows. Quick Start =========== Follow these instructions to install ZContact and create a default server setup. 0. Install dependencies if they are not installed already (most of these dependencies are from Zope 3):: $ sudo apt-get install build-essential python-all python-all-dev libc6-dev libicu-dev python-setuptools 1. Install ZContact:: $ sudo easy_install-2.4 zcontact #. Create an "instance" of zcontact (including server configuration, log files and database) called "MyZContactServer". Feel free to replace MyZContactServer with whatever you want, or leave it blank and it will default to just "zcontact":: $ paster make-config zcontact MyZContactServer #. Go to the newly created configuration area for your zcontact instance and start the server:: $ cd MyZContactServer $ paster serve deploy.ini #. ZContact will now be available at http://localhost:8080 . Updating Your ZContact Installation =================================== To update your ZContact application, simply run the following command and restart your server. $ sudo easy_install-2.4 -U zcontact (the -U stands for "Update"). Running ZContact as a Daemon ============================ To run ZContact as a daemon, go to the directory where your ZContact instance is located and type: $ paster serve deploy.ini --daemon The running daemon can be stopped with: $ paster serve deploy.ini stop Migrating Data ============== To migrate data from one zcontact server to another follow these steps: 1. Make sure both zcontact instances are **not** running. #. Copy the database file you want to migrate to the new instance. The database file is located in the var/ directory of the ZContact instance and is called Data.fs. You do not need to move any of the Data.fs.* files. #. Restart your ZContact instance. Developer Installation ====================== If you want to setup ZContact as a developer (i.e. from a repository checkout) rather than installing it as an egg on your system, follow these steps: 1. Grab a branch of the latest ZContact code from Launchpad:: $ bzr branch http://bazaar.launchpad.net/~pcardune/zcontact/zcontact-lp (Note: you can also use bzr checkout instead of bzr branch if you don't want to get all the revision information) #. Change to the directory where you just create the branch:: $ cd zcontact-lp #. Run make:: $ make (Note: This will run the bootstrap.py script which sets up buildout, and it will invoke buildout which downloads all the necessary eggs to the eggs/ directory. If you have a common place where you have development eggs available, you should modify buildout.cfg before running make.) #. Run the tests:: $ make test #. Create the configuration:: $ make install (This adds the var and log directories along with a deploy.ini, site.zcml, and zope.conf in the checkout) #. Start the server:: $ make run #. 
Generate test coverage reports::

     $ make coverage

NOTE: if you get errors about setuptools not being the right version, then
you need to install the easy_install script and run::

  $ sudo easy_install-2.4 -U setuptools

(The -U option forces setuptools to look online for the latest updates.)

If you don't like using make, or you are not on a Linux system, then try
the following::

  $ python bootstrap.py
  $ ./bin/buildout -vN

A note to the wise: It seems to be the consensus of the Zope community
that one should never use the standard system Python to run your
software, because you might screw it up, and screwing up the system
Python is not a good idea if you can avoid it. So to really do this
properly, you should install your own Python by downloading the source,
compiling it, and installing it to some place like /opt/mypython. Then,
when you install the checkout, use::

  $ /opt/mypython/bin/python bootstrap.py
  $ ./bin/buildout -vN

And that will be best.

Getting More Information
========================

Contact me on chat.freenode.net. My most common username is pcardune
and I hang around #schooltool and #zope3-dev. Otherwise, email me at
paul_at_carduner_dot_net.

Please send me requests for other instructions you would like to see in
this README file.
zcontact
/zcontact-0.1.0a3.tar.gz/zcontact-0.1.0a3/README.txt
README.txt
=======
History
=======

0.1.4 (2020.12.01)
------------------

- New feature: new option `-c <collection URL, ...>` to download works from collections (favorites)

0.1.3 (2020.07.22)
------------------

- First release on PyPI
- Fixed an issue where not all images could be fetched and downloaded from dynamically loaded pages
- Added a sequence number to saved image file names to preserve the original order
- Added comments and polished code details

2020.03.25
----------

- Improved terminal output, using different text colors as indicators
- Fixed an issue where images could not be downloaded on slow connections, and sped up overall downloading

0.1.2 (2020.03.24)
------------------

New features:

- Added downloading of ultra-high-definition originals (default option, a few MB each); use the `--thumbnail` option to download thumbnails instead (max width 1280px, about 500KB)
- Added support for downloading images in JPG, PNG, GIF and BMP formats

0.1.1 (2019.12.09)
------------------

New features:

- Specific topics of a user can be selected for download
- Multiple user names or IDs can be entered at once

Bug fixes:

- Fixed a download error when a user has not uploaded any images

0.1.0 (2019.09.09)
------------------

Main features:

- Fast downloads: multi-threaded asynchronous downloading, with a configurable number of threads
- Retry on failure: with enough retries, no image is left behind \(^o^)/
- Incremental downloads: when a designer/user uploads new works, just run the program again O(∩_∩)O
- Proxy support: a proxy can be configured (since 0.1.3, the system proxy is read automatically)
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/HISTORY.rst
HISTORY.rst
================
ZCool Downloader
================

.. image:: https://img.shields.io/pypi/v/zcooldl.svg
        :target: https://pypi.python.org/pypi/zcooldl

.. image:: https://img.shields.io/travis/lonsty/zcooldl.svg
        :target: https://travis-ci.org/lonsty/zcooldl

.. image:: https://readthedocs.org/projects/zcooldl/badge/?version=latest
        :target: https://zcooldl.readthedocs.io/en/latest/?badge=latest
        :alt: Documentation Status

.. image:: https://pyup.io/repos/github/lonsty/zcooldl/shield.svg
     :target: https://pyup.io/repos/github/lonsty/zcooldl/
     :alt: Updates

ZCool picture crawler: download pictures, photos and illustrations from ZCool (https://www.zcool.com.cn/) designers or users.

* Free software: MIT license
* Documentation: https://zcooldl.readthedocs.io.

Features
--------

* Fast downloads: multi-threaded asynchronous downloading, with a configurable number of threads
* Retry on failure: with enough retries, no image is left behind \(^o^)/!
* Incremental downloads: when a designer/user uploads new works, just run the program again O(∩_∩)O
* Topic selection: download only specific topics of a user instead of all of that user's content
* Collection download `New`: use `-c <collection URL, ...>` to download works from collections (collections can be created freely)

Quickstart
----------

Install zcooldl via pip:

.. code-block:: console

    $ pip install -U zcooldl

Download all of a user's pictures to the current directory:

.. code-block:: console

    $ zcooldl -u <username>

Credits
-------

This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/README.rst
README.rst
.. highlight:: shell ============ Contributing ============ Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. You can contribute in many ways: Types of Contributions ---------------------- Report Bugs ~~~~~~~~~~~ Report bugs at https://github.com/lonsty/zcooldl/issues. If you are reporting a bug, please include: * Your operating system name and version. * Any details about your local setup that might be helpful in troubleshooting. * Detailed steps to reproduce the bug. Fix Bugs ~~~~~~~~ Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it. Implement Features ~~~~~~~~~~~~~~~~~~ Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it. Write Documentation ~~~~~~~~~~~~~~~~~~~ ZCool Downloader could always use more documentation, whether as part of the official ZCool Downloader docs, in docstrings, or even on the web in blog posts, articles, and such. Submit Feedback ~~~~~~~~~~~~~~~ The best way to send feedback is to file an issue at https://github.com/lonsty/zcooldl/issues. If you are proposing a feature: * Explain in detail how it would work. * Keep the scope as narrow as possible, to make it easier to implement. * Remember that this is a volunteer-driven project, and that contributions are welcome :) Get Started! ------------ Ready to contribute? Here's how to set up `zcooldl` for local development. 1. Fork the `zcooldl` repo on GitHub. 2. Clone your fork locally:: $ git clone [email protected]:your_name_here/zcooldl.git 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: $ mkvirtualenv zcooldl $ cd zcooldl/ $ python setup.py develop 4. Create a branch for local development:: $ git checkout -b name-of-your-bugfix-or-feature Now you can make your changes locally. 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: $ flake8 zcooldl tests $ python setup.py test or pytest $ tox To get flake8 and tox, just pip install them into your virtualenv. 6. Commit your changes and push your branch to GitHub:: $ git add . $ git commit -m "Your detailed description of your changes." $ git push origin name-of-your-bugfix-or-feature 7. Submit a pull request through the GitHub website. Pull Request Guidelines ----------------------- Before you submit a pull request, check that it meets these guidelines: 1. The pull request should include tests. 2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst. 3. The pull request should work for Python 3.5, 3.6, 3.7 and 3.8, and for PyPy. Check https://travis-ci.com/lonsty/zcooldl/pull_requests and make sure that the tests pass for all supported Python versions. Tips ---- To run a subset of tests:: $ python -m unittest tests.test_zcooldl Deploying --------- A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:: $ bump2version patch # possible: major / minor / patch $ git push $ git push --tags Travis will then deploy to PyPI if tests pass.
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/CONTRIBUTING.rst
CONTRIBUTING.rst
===== Usage ===== To use ZCool Downloader in terminal: * Download all images of an **user** .. code-block:: console $ zcooldl -u <username> * Download all images of a **collection** .. code-block:: console $ zcooldl -c <collection URL> Type `zcooldl --help` to get full usage: .. code-block:: none Usage: zcooldl [OPTIONS] ZCool picture crawler, download pictures, photos and illustrations of ZCool (https://zcool.com.cn/). Visit https://github.com/lonsty/scraper. Options: -u, --usernames TEXT One or more user names, separated by commas. -i, --ids TEXT One or more user IDs, separated by commas. -c, --collections TEXT One or more collection URLs, separated by commas. -t, --topics TEXT Specific topics to download, separated by commas. -d, --destination TEXT Destination to save images. -R, --retries INTEGER Repeat download for failed images. [default: 3] -r, --redownload TEXT Redownload images from failed records (PATH of the .json file). -o, --overwrite Override the existing files. --thumbnail Download thumbnails with a maximum width of 1280px. --max-pages INTEGER Maximum pages to download. --max-topics INTEGER Maximum topics per page to download. --max-workers INTEGER Maximum thread workers. [default: 20] --help Show this message and exit.
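
For example, combining the options above, you can download several users' works into a specific folder with more worker threads, or retry images recorded in a failure file (the user names and file paths below are placeholders):

.. code-block:: console

    $ zcooldl -u user1,user2 -d ~/Pictures/zcool --max-workers 30
    $ zcooldl -r failed_records.json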
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/docs/usage.rst
usage.rst
.. highlight:: shell ============ Installation ============ Stable release -------------- To install ZCool Downloader, run this command in your terminal: .. code-block:: console $ pip install zcooldl This is the preferred method to install ZCool Downloader, as it will always install the most recent stable release. If you don't have `pip`_ installed, this `Python installation guide`_ can guide you through the process. .. _pip: https://pip.pypa.io .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/ From sources ------------ The sources for ZCool Downloader can be downloaded from the `Github repo`_. You can either clone the public repository: .. code-block:: console $ git clone git://github.com/lonsty/zcooldl Or download the `tarball`_: .. code-block:: console $ curl -OJL https://github.com/lonsty/zcooldl/tarball/master Once you have a copy of the source, you can install it with: .. code-block:: console $ python setup.py install .. _Github repo: https://github.com/lonsty/zcooldl .. _tarball: https://github.com/lonsty/zcooldl/tarball/master
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/docs/installation.rst
installation.rst
zcooldl package =============== Submodules ---------- zcooldl.cli module ------------------ .. automodule:: zcooldl.cli :members: :undoc-members: :show-inheritance: zcooldl.utils module -------------------- .. automodule:: zcooldl.utils :members: :undoc-members: :show-inheritance: zcooldl.zcooldl module ---------------------- .. automodule:: zcooldl.zcooldl :members: :undoc-members: :show-inheritance: Module contents --------------- .. automodule:: zcooldl :members: :undoc-members: :show-inheritance:
zcooldl
/zcooldl-0.1.4.tar.gz/zcooldl-0.1.4/docs/zcooldl.rst
zcooldl.rst
from com.zoho.oauth.common.Utility import Logger
import mysql.connector


class ZohoOAuthPersistenceHandler(object):
    '''
    This class deals with persistence of OAuth related tokens
    '''

    def saveOAuthTokens(self, oAuthTokens):
        connection = None
        cursor = None
        try:
            self.deleteOAuthTokens(oAuthTokens.userEmail)
            connection = self.getDBConnection()
            cursor = connection.cursor()
            # Parameterized query; never interpolate values into SQL directly.
            sqlQuery = "INSERT INTO oauthtokens(useridentifier,accesstoken,refreshtoken,expirytime) VALUES(%s,%s,%s,%s)"
            data = (oAuthTokens.userEmail, oAuthTokens.accessToken, oAuthTokens.refreshToken, oAuthTokens.expiryTime)
            cursor.execute(sqlQuery, data)
            connection.commit()
        except Exception as ex:
            import logging
            Logger.addLog("Exception occurred while saving oauthtokens into DB", logging.ERROR, ex)
        finally:
            if cursor is not None:
                cursor.close()
            if connection is not None:
                connection.close()

    def getOAuthTokens(self, userEmail):
        connection = None
        cursor = None
        try:
            connection = self.getDBConnection()
            cursor = connection.cursor()
            # Select explicit columns (parameterized) so the row order is known.
            sqlQuery = "SELECT useridentifier,accesstoken,refreshtoken,expirytime FROM oauthtokens WHERE useridentifier=%s"
            cursor.execute(sqlQuery, (userEmail,))
            row = cursor.fetchone()
            if row is None:
                return None
            # Import here to avoid a circular import (OAuthClient imports this module).
            from com.zoho.oauth.client.OAuthClient import ZohoOAuthTokens
            return ZohoOAuthTokens(row[2], row[1], row[3], row[0])
        except Exception as ex:
            import logging
            Logger.addLog("Exception occurred while fetching oauthtokens from DB", logging.ERROR, ex)
        finally:
            if cursor is not None:
                cursor.close()
            if connection is not None:
                connection.close()

    def deleteOAuthTokens(self, userEmail):
        connection = None
        cursor = None
        try:
            connection = self.getDBConnection()
            cursor = connection.cursor()
            sqlQuery = "DELETE FROM oauthtokens WHERE useridentifier=%s"
            cursor.execute(sqlQuery, (userEmail,))
            connection.commit()
        except Exception as ex:
            import logging
            Logger.addLog("Exception occurred while deleting oauthtokens from DB", logging.ERROR, ex)
        finally:
            if cursor is not None:
                cursor.close()
            if connection is not None:
                connection.close()

    def getDBConnection(self):
        connection = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='zohooauth')
        return connection
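
# --- Illustrative usage (a sketch, not part of the original module) ---
# Assumes a running MySQL server with the 'zohooauth' database and the
# 'oauthtokens' table; the email address below is a hypothetical placeholder.
if __name__ == '__main__':
    handler = ZohoOAuthPersistenceHandler()
    tokens = handler.getOAuthTokens('[email protected]')
    if tokens is not None:
        print(tokens.accessToken)
    handler.deleteOAuthTokens('[email protected]')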
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/oauth/clientapp/Persistence.py
Persistence.py
import logging

from com.zoho.oauth.common.Utility import Logger, ZohoOAuthConstants, ZohoOAuthException, ZohoOAuthHTTPConnector, \
    ZohoOAuthParams
from com.zoho.oauth.clientapp.Persistence import ZohoOAuthPersistenceHandler


class ZohoOAuth(object):
    '''
    This class is to load OAuth configurations and provide OAuth request URIs
    '''
    configProperties = {}
    iamURL = 'https://accounts.localzoho.com'

    def __init__(self):
        '''
        Constructor
        '''

    @staticmethod
    def initialize():
        try:
            from PathIdentifier import PathIdentifier
            import os
            resources_path = os.path.join(PathIdentifier.getClientLibraryRoot(), 'resources', 'oauth_configuration.properties')
            filePointer = open(resources_path, "r")
            ZohoOAuth.configProperties = ZohoOAuth.getFileContentAsDictionary(filePointer)
            oAuthParams = ZohoOAuthParams.getInstance(ZohoOAuth.configProperties[ZohoOAuthConstants.CLIENT_ID],
                                                      ZohoOAuth.configProperties[ZohoOAuthConstants.CLIENT_SECRET],
                                                      ZohoOAuth.configProperties[ZohoOAuthConstants.REDIRECT_URL])
            ZohoOAuthClient.getInstance(oAuthParams)
        except Exception as ex:
            Logger.addLog('Exception occurred while reading OAuth configurations', logging.ERROR, ex)

    @staticmethod
    def getFileContentAsDictionary(filePointer):
        dictionary = {}
        for line in filePointer:
            line = line.rstrip()
            keyValue = line.split("=")
            if not keyValue[0].startswith('#'):
                dictionary[keyValue[0]] = keyValue[1]
        filePointer.close()
        return dictionary

    @staticmethod
    def getGrantURL():
        return ZohoOAuth.iamURL + "/oauth/v2/auth"

    @staticmethod
    def getTokenURL():
        return ZohoOAuth.iamURL + "/oauth/v2/token"

    @staticmethod
    def getRefreshTokenURL():
        return ZohoOAuth.iamURL + "/oauth/v2/token"

    @staticmethod
    def getRevokeTokenURL():
        return ZohoOAuth.iamURL + "/oauth/v2/token/revoke"

    @staticmethod
    def getUserInfoURL():
        return ZohoOAuth.iamURL + "/oauth/user/info"

    @staticmethod
    def getClientInstance():
        oAuthClientIns = ZohoOAuthClient.getInstance()
        if oAuthClientIns is None:
            raise ZohoOAuthException('ZohoOAuth.initialize() must be called before this')
        return oAuthClientIns


class ZohoOAuthClient(object):
    '''
    This class is to generate OAuth related tokens
    '''
    oAuthParams = None
    oAuthClientIns = None

    def __init__(self, oauthParams):
        '''
        Constructor
        '''
        ZohoOAuthClient.oAuthParams = oauthParams

    @staticmethod
    def getInstance(param=None):
        if param is not None and ZohoOAuthClient.oAuthClientIns is None:
            ZohoOAuthClient.oAuthClientIns = ZohoOAuthClient(param)
        return ZohoOAuthClient.oAuthClientIns

    def getAccessToken(self, userEmail):
        try:
            handler = ZohoOAuthPersistenceHandler()
            oAuthTokens = handler.getOAuthTokens(userEmail)
            try:
                return oAuthTokens.accessToken
            except Exception as e:
                Logger.addLog("Access token expired, hence refreshing", logging.INFO, e)
                oAuthTokens = self.refreshAccessToken(oAuthTokens.refreshToken, userEmail)
                return oAuthTokens.accessToken
        except Exception as ex:
            Logger.addLog("Exception occurred while fetching oauthtoken from DB", logging.ERROR, ex)

    def refreshAccessToken(self, refreshToken, userEmail):
        if refreshToken is None:
            raise ZohoOAuthException("Refresh token not provided!")
        try:
            connector = self.getConnector(ZohoOAuth.getRefreshTokenURL())
            connector.addHttpRequestParams(ZohoOAuthConstants.GRANT_TYPE, ZohoOAuthConstants.GRANT_TYPE_REFRESH)
            connector.addHttpRequestParams(ZohoOAuthConstants.REFRESH_TOKEN, refreshToken)
            connector.setHttpRequestMethod(ZohoOAuthConstants.REQUEST_METHOD_POST)
            response = connector.triggerRequest()
            responseJSON = response.json()
            if ZohoOAuthConstants.ACCESS_TOKEN in responseJSON:
                oAuthTokens = self.getTokensFromJson(responseJSON)
                oAuthTokens.setUserEmail(userEmail)
                ZohoOAuthPersistenceHandler().saveOAuthTokens(oAuthTokens)
                # Return the refreshed tokens so callers (getAccessToken) can use them.
                return oAuthTokens
            else:
                raise ZohoOAuthException("Exception occurred while refreshing access token; Response is:" + response.text)
        except ZohoOAuthException as ex:
            Logger.addLog("Exception occurred while refreshing oauthtoken", logging.ERROR, ex)
            raise

    def generateAccessToken(self, grantToken):
        if grantToken is None:
            raise ZohoOAuthException("Grant token not provided!")
        try:
            connector = self.getConnector(ZohoOAuth.getTokenURL())
            connector.addHttpRequestParams(ZohoOAuthConstants.GRANT_TYPE, ZohoOAuthConstants.GRANT_TYPE_AUTH_CODE)
            connector.addHttpRequestParams(ZohoOAuthConstants.CODE, grantToken)
            connector.setHttpRequestMethod(ZohoOAuthConstants.REQUEST_METHOD_POST)
            response = connector.triggerRequest()
            responseJSON = response.json()
            if ZohoOAuthConstants.ACCESS_TOKEN in responseJSON:
                oAuthTokens = self.getTokensFromJson(responseJSON)
                oAuthTokens.setUserEmail(self.getUserEmailFromIAM(oAuthTokens.accessToken))
                ZohoOAuthPersistenceHandler().saveOAuthTokens(oAuthTokens)
                return oAuthTokens
            else:
                raise ZohoOAuthException("Exception occurred while fetching access token from grant token; Response is:" + response.text)
        except ZohoOAuthException as ex:
            Logger.addLog("Exception occurred while generating access token", logging.ERROR, ex)
            raise

    def getTokensFromJson(self, responseJson):
        expiresIn = responseJson[ZohoOAuthConstants.EXPIRES_IN]
        expiresIn = expiresIn + ZohoOAuthTokens.getCurrentTimeInMillis()
        accessToken = responseJson[ZohoOAuthConstants.ACCESS_TOKEN]
        refreshToken = None
        if ZohoOAuthConstants.REFRESH_TOKEN in responseJson:
            refreshToken = responseJson[ZohoOAuthConstants.REFRESH_TOKEN]
        oAuthTokens = ZohoOAuthTokens(refreshToken, accessToken, expiresIn)
        return oAuthTokens

    def getConnector(self, url):
        connector = ZohoOAuthHTTPConnector.getInstance(url, {})
        connector.addHttpRequestParams(ZohoOAuthConstants.CLIENT_ID, ZohoOAuthClient.oAuthParams.clientID)
        connector.addHttpRequestParams(ZohoOAuthConstants.CLIENT_SECRET, ZohoOAuthClient.oAuthParams.clientSecret)
        connector.addHttpRequestParams(ZohoOAuthConstants.REDIRECT_URL, ZohoOAuthClient.oAuthParams.redirectUri)
        return connector

    def getUserEmailFromIAM(self, accessToken):
        header = {ZohoOAuthConstants.AUTHORIZATION: (ZohoOAuthConstants.OAUTH_HEADER_PREFIX + accessToken)}
        connector = ZohoOAuthHTTPConnector.getInstance(ZohoOAuth.getUserInfoURL(), None, header, None, ZohoOAuthConstants.REQUEST_METHOD_GET)
        response = connector.triggerRequest()
        return response.json()['Email']


class ZohoOAuthTokens(object):
    '''
    This class is to encapsulate the OAuth tokens
    '''

    def __init__(self, refresh_token, access_token, expiry_time, user_email=None):
        '''
        Constructor
        '''
        self.refreshToken = refresh_token
        self.accessToken = access_token
        self.expiryTime = expiry_time
        self.userEmail = user_email

    def getAccessToken(self):
        if (self.expiryTime - self.getCurrentTimeInMillis()) > 10:
            return self.accessToken
        else:
            # Raise (rather than return) the exception so callers can handle expiry.
            raise ZohoOAuthException("Access token got expired!")

    @staticmethod
    def getCurrentTimeInMillis():
        import time
        return int(round(time.time() * 1000))

    def setUserEmail(self, userEmail):
        self.userEmail = userEmail
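
# --- Illustrative usage (a sketch, not part of the original module) ---
# Assumes resources/oauth_configuration.properties is populated; the grant
# token below is a hypothetical placeholder from the Zoho developer console.
if __name__ == '__main__':
    ZohoOAuth.initialize()
    client = ZohoOAuth.getClientInstance()
    tokens = client.generateAccessToken('hypothetical_grant_token')
    print(tokens.accessToken)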
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/oauth/client/OAuthClient.py
OAuthClient.py
class ZohoOAuthConstants(object): ''' OAuth constants ''' IAM_URL="iamURL"; SCOPES="scope"; STATE="state"; STATE_OBTAINING_GRANT_TOKEN="OBTAIN_GRANT_TOKEN"; RESPONSE_TYPE="response_type"; RESPONSE_TYPE_CODE="code"; CLIENT_ID="client_id"; CLIENT_SECRET="client_secret"; REDIRECT_URL="redirect_uri"; ACCESS_TYPE="access_type"; ACCESS_TYPE_OFFLINE="offline"; ACCESS_TYPE_ONLINE="online"; PROMPT="prompt"; PROMPT_CONSENT="consent"; GRANT_TYPE="grant_type"; GRANT_TYPE_AUTH_CODE="authorization_code"; GRANT_TYPE_REFRESH="refresh_token"; CODE="code"; GRANT_TOKEN="grant_token"; ACCESS_TOKEN="access_token"; REFRESH_TOKEN="refresh_token"; EXPIRES_IN = "expires_in"; EXPIRIY_TIME = "expiry_time"; PERSISTENCE_HANDLER_CLASS = "persistence_handler_class"; TOKEN = "token"; DISPATCH_TO = "dispatchTo"; OAUTH_TOKENS_PARAM = "oauth_tokens"; OAUTH_HEADER_PREFIX="Zoho-oauthtoken "; AUTHORIZATION="Authorization"; REQUEST_METHOD_GET="GET"; REQUEST_METHOD_POST="POST"; RESPONSECODE_OK=200; class ZohoOAuthException(Exception): ''' This is the custom exception class for handling Client Library OAuth related exceptions ''' def __init__(self, errMessage): self.message=errMessage Exception.__init__(self,errMessage) def __str__(self): return self.message class ZohoOAuthHTTPConnector(object): ''' This module is to make HTTP connections, trigger the requests and receive the response ''' @staticmethod def getInstance(url,params=None,headers=None,body=None,method=None): return ZohoOAuthHTTPConnector(url,params,headers,body,method) def __init__(self, url,params,headers,body,method): ''' Constructor ''' self.url=url self.reqHeaders=headers self.reqMethod=method self.reqParams=params self.reqBody=body def triggerRequest(self): response=None import requests,json if(self.reqMethod == ZohoOAuthConstants.REQUEST_METHOD_GET): response=requests.get(self.url,params=self.reqParams, headers=self.reqHeaders,allow_redirects=False) elif(self.reqMethod==ZohoOAuthConstants.REQUEST_METHOD_POST): response=requests.post(self.url,data=json.dumps(self.reqBody), params=self.reqParams,headers=self.reqHeaders,allow_redirects=False) return response def setUrl(self,url): self.url=url def getUrl(self): return self.url def addHttpHeader(self,key,value): self.reqHeaders[key]=value def getHttpHeaders(self): return self.reqHeaders def setHttpRequestMethod(self,method): self.reqMethod=method def getHttpRequestMethod(self): return self.reqMethod def setRequestBody(self,reqBody): self.reqBody=reqBody def getRequestBody(self): return self.reqBody def addHttpRequestParams(self,key,value): self.reqParams[key]=value def getHttpRequestParams(self): return self.reqParams class ZohoOAuthParams(object): ''' This class is to OAuth related params(i.e. client_id,client_secret,..) 
    '''

    def __init__(self, client_id, client_secret, redirect_uri):
        '''
        Constructor
        '''
        self.clientID = client_id
        self.clientSecret = client_secret
        self.redirectUri = redirect_uri

    @staticmethod
    def getInstance(client_id, client_secret, redirect_uri):
        return ZohoOAuthParams(client_id, client_secret, redirect_uri)


import logging

logger = logging.getLogger('Client_Library_OAUTH')


class Logger(object):
    '''
    This class is to log the exceptions onto console and file
    '''

    @staticmethod
    def addLog(message, level, exception=None):
        logger.setLevel(logging.DEBUG)
        # Attach handlers only once; re-adding them on every call would
        # duplicate each log line on the console and in the file.
        if not logger.handlers:
            consoleHandler = logging.StreamHandler()
            consoleHandler.setLevel(logging.DEBUG)
            fileHandler = logging.FileHandler("oauth.log")
            fileHandler.setLevel(logging.DEBUG)
            # create formatter and add it to the handlers
            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            consoleHandler.setFormatter(formatter)
            fileHandler.setFormatter(formatter)
            # add the handlers to the logger
            logger.addHandler(consoleHandler)
            logger.addHandler(fileHandler)
        if exception is not None:
            message += '; Exception Message::' + exception.__str__()
        if level == logging.ERROR:
            logger.error(message)
        elif level == logging.INFO:
            logger.info(message)
        elif level == logging.WARNING:
            logger.warning(message)
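
# --- Illustrative usage (a sketch, not part of the original module) ---
# The messages below are hypothetical examples.
if __name__ == '__main__':
    Logger.addLog("client library initialized", logging.INFO)
    Logger.addLog("sample failure", logging.ERROR, ZohoOAuthException("hypothetical error"))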
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/oauth/common/Utility.py
Utility.py
from com.zoho.crm.library.exception.Exception import ZCRMException from com.zoho.crm.library.common.Utility import APIConstants class CommonAPIResponse(object): def __init__(self,response,statusCode,url,apiKey=None): ''' Constructor ''' self.response=response self.statusCode=statusCode self.apiKey=apiKey self.url=url self.setResponseJson() self.processResponse() ''' def getStatusCode(self): return self.statusCode def getResponseJson(self): return self.responseJson def getResponse(self): return self.response def getAPIName(self): return self.apiName def getResponseHeaders(self): return self.responseHeaders def setMessage(self,message): self.responseMessage=message def getMessage(self): return self.responseMessage def setDetails(self,details): self.responseDetails=details def getDetails(self): return self.responseDetails ''' def setResponseJson(self): self.responseJson=self.response.json() self.responseHeaders=self.response.headers def processResponse(self): if(self.statusCode in APIConstants.FAULTY_RESPONSE_CODES): self.handleFaultyResponses() else: self.processResponseData() def handleFaultyResponses(self): return def processResponseData(self): return class APIResponse(CommonAPIResponse): ''' classdocs ''' def __init__(self, response,statusCode,url,apiKey): ''' Constructor ''' super(APIResponse,self).__init__(response,statusCode,url,apiKey) def handleFaultyResponses(self): if(self.statusCode==APIConstants.RESPONSECODE_NO_CONTENT): errorMsg=APIConstants.INVALID_DATA+"-"+APIConstants.INVALID_ID_MSG exception=ZCRMException(self.url,self.statusCode,errorMsg,APIConstants.NO_CONTENT,None,errorMsg) raise exception else: responseJSON=self.responseJson exception=ZCRMException(self.url,self.statusCode,responseJSON[APIConstants.MESSAGE],responseJSON[APIConstants.CODE],responseJSON[APIConstants.DETAILS],responseJSON[APIConstants.MESSAGE]) raise exception def processResponseData(self): respJson=self.responseJson if(self.apiKey in respJson): respJson=self.responseJson[self.apiKey] if(isinstance(respJson, list)): respJson=respJson[0] if(APIConstants.STATUS in respJson and (respJson[APIConstants.STATUS]==APIConstants.STATUS_ERROR)): exception=ZCRMException(self.url,self.statusCode,respJson[APIConstants.MESSAGE],respJson[APIConstants.CODE],respJson[APIConstants.DETAILS],respJson[APIConstants.MESSAGE]) raise exception elif(APIConstants.STATUS in respJson and (respJson[APIConstants.STATUS]==APIConstants.STATUS_SUCCESS)): self.status=respJson[APIConstants.STATUS] self.code=respJson[APIConstants.CODE] self.message=respJson[APIConstants.MESSAGE] self.details=respJson[APIConstants.DETAILS] class BulkAPIResponse(CommonAPIResponse): ''' This class is to store the Bulk APIs responses ''' def __init__(self, response,statusCode,url,apiKey): ''' Constructor ''' super(BulkAPIResponse,self).__init__(response,statusCode,url,apiKey) def handleFaultyResponses(self): if(self.statusCode==APIConstants.RESPONSECODE_NO_CONTENT): errorMsg=APIConstants.INVALID_DATA+"-"+APIConstants.INVALID_ID_MSG exception=ZCRMException(self.url,self.statusCode,errorMsg,APIConstants.NO_CONTENT,None,errorMsg) raise exception else: responseJSON=self.responseJson exception=ZCRMException(self.url,self.statusCode,responseJSON['message'],responseJSON['code'],responseJSON['details'],responseJSON['message']) raise exception def processResponseData(self): if(APIConstants.DATA in self.responseJson): dataList=self.responseJson[APIConstants.DATA] self.bulkEntityResponse=[] for eachRecord in dataList: if(APIConstants.STATUS in eachRecord): 
                    self.bulkEntityResponse.append(EntityResponse(eachRecord))


class EntityResponse(object):
    '''
    This class is to store each entity response of the Bulk APIs response
    '''

    def __init__(self, entityResponse):
        '''
        Constructor
        '''
        self.responseJson = entityResponse
        self.code = entityResponse[APIConstants.CODE]
        self.message = entityResponse[APIConstants.MESSAGE]
        self.status = entityResponse[APIConstants.STATUS]
        if APIConstants.DETAILS in entityResponse:
            self.details = entityResponse[APIConstants.DETAILS]
        if APIConstants.ACTION in entityResponse:
            self.upsertAction = entityResponse[APIConstants.ACTION]
        if APIConstants.DUPLICATE_FIELD in entityResponse:
            self.upsertDuplicateField = entityResponse[APIConstants.DUPLICATE_FIELD]

    @staticmethod
    def getInstance(entityResponse):
        # Static factory; a static method takes no 'self' parameter.
        return EntityResponse(entityResponse)
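
# --- Illustrative usage (a sketch, not part of the original module) ---
# The field values below are hypothetical; the keys mirror APIConstants.
if __name__ == '__main__':
    sample = {"code": "SUCCESS", "status": "success",
              "message": "record added", "details": {"id": "1000000000001"}}
    entity = EntityResponse.getInstance(sample)
    print(entity.status, entity.message)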
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/crm/library/api/APIResponse.py
APIResponse.py
from com.zoho.crm.library.common.Utility import ZCRMConfigUtil, APIConstants,HTTPConnector from com.zoho.crm.library.exception.Exception import ZCRMException from com.zoho.crm.library.api.APIResponse import APIResponse, BulkAPIResponse class APIRequest(object): ''' This class is to wrap the API request related stuff like request params,headers,body,..etc ''' def __init__(self, apiHandlerIns): ''' Constructor ''' self.constructAPIUrl() self.url+=apiHandlerIns.requestUrlPath if(not self.url.startswith("http")): self.url="https://"+self.url self.requestBody=apiHandlerIns.requestBody self.requestHeaders=apiHandlerIns.requestHeaders self.requestParams=apiHandlerIns.requestParams self.requestMethod=apiHandlerIns.requestMethod self.requestAPIKey=apiHandlerIns.requestAPIKey def constructAPIUrl(self): self.url=ZCRMConfigUtil.getAPIBaseUrl()+"/crm/"+ZCRMConfigUtil.getAPIVersion()+"/" def authenticateRequest(self): accessToken=ZCRMConfigUtil.getInstance().getAccessToken() if(self.requestHeaders==None): self.requestHeaders={APIConstants.AUTHORIZATION:APIConstants.OAUTH_HEADER_PREFIX+accessToken} else: self.requestHeaders[APIConstants.AUTHORIZATION]=APIConstants.OAUTH_HEADER_PREFIX+accessToken def getAPIResponse(self): try: self.authenticateRequest() connector=HTTPConnector.getInstance(self.url, self.requestParams, self.requestHeaders, self.requestBody, self.requestMethod, self.requestAPIKey, False) response=connector.triggerRequest() return APIResponse(response,response.status_code,self.url,self.requestAPIKey) except ZCRMException as ex: raise ex def getBulkAPIResponse(self): try: self.authenticateRequest() connector=HTTPConnector.getInstance(self.url, self.requestParams, self.requestHeaders, self.requestBody, self.requestMethod, self.requestAPIKey, False) response=connector.triggerRequest() return BulkAPIResponse(response,response.status_code,self.url,self.requestAPIKey) except ZCRMException as ex: raise ex
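
# --- Illustrative note (a sketch, not part of the original module) ---
# APIRequest reads its request data from a handler instance; a hypothetical
# stand-in exposing the attributes used in __init__ could look like:
#
#     class FakeHandler(object):
#         requestUrlPath = 'Leads'
#         requestBody = None
#         requestHeaders = None
#         requestParams = {}
#         requestMethod = 'GET'
#         requestAPIKey = 'data'
#
#     request = APIRequest(FakeHandler())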
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/crm/library/api/APIRequest.py
APIRequest.py
from com.zoho.crm.library.setup.ZCRMRestClient import ZCRMRestClient
from com.zoho.crm.library.crud.Operations import ZCRMModule
import threading

threadLocal = threading.local()


class MyThread(threading.Thread):
    def __init__(self, email):
        super(MyThread, self).__init__()
        self.local = threadLocal
        self.email = email

    def run(self):
        # Store the email in thread-local storage and show it from within the thread.
        self.local.email = self.email
        print(self.local.email)


class MyClass(object):
    '''
    A scratch test driver for the client library
    '''

    def test(self):
        t1 = MyThread('sumanth')
        t1.start()
        t1.join()
        print(t1.email)
        print(threading.current_thread().name)
        ZCRMRestClient.initialize()
        ZCRMModule.getInstance('Leads').getRecord(440872000000162010)


obj = MyClass()
obj.test()
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/crm/library/common/Test.py
Test.py
import requests
import json

from com.zoho.oauth.client.OAuthClient import ZohoOAuth
from com.zoho.crm.library.exception.Exception import ZCRMException


class HTTPConnector(object):
    '''
    This module is to make HTTP connections, trigger the requests and receive the response
    '''

    @staticmethod
    def getInstance(url, params, headers, body, method, apiKey, isBulkReq):
        return HTTPConnector(url, params, headers, body, method, apiKey, isBulkReq)

    def __init__(self, url, params, headers, body, method, apiKey, isBulkReq):
        '''
        Constructor
        '''
        self.url = url
        self.reqHeaders = headers
        self.reqMethod = method
        self.reqParams = params
        self.reqBody = body
        self.apiKey = apiKey
        self.isBulkReq = isBulkReq

    def triggerRequest(self):
        response = None
        if self.reqMethod == APIConstants.REQUEST_METHOD_GET:
            response = requests.get(self.url, headers=self.reqHeaders, params=self.reqParams, allow_redirects=False)
        elif self.reqMethod == APIConstants.REQUEST_METHOD_PUT:
            response = requests.put(self.url, data=self.reqBody, params=self.reqParams, headers=self.reqHeaders, allow_redirects=False)
        elif self.reqMethod == APIConstants.REQUEST_METHOD_POST:
            response = requests.post(self.url, data=json.dumps(self.reqBody), params=self.reqParams, headers=self.reqHeaders, allow_redirects=False)
        elif self.reqMethod == APIConstants.REQUEST_METHOD_DELETE:
            response = requests.delete(self.url, headers=self.reqHeaders, params=self.reqParams, allow_redirects=False)
        return response

    def getRequestParamsAsString(self, params):
        # Join key=value pairs with '&' so the result is a valid query string.
        return '&'.join(key + '=' + str(params[key]) for key in params)

    def setUrl(self, url):
        self.url = url

    def getUrl(self):
        return self.url

    def addHttpHeader(self, key, value):
        # dict has no put(); use item assignment.
        self.reqHeaders[key] = value

    def getHttpHeaders(self):
        return self.reqHeaders

    def setHttpRequestMethod(self, method):
        self.reqMethod = method

    def getHttpRequestMethod(self):
        return self.reqMethod

    def setRequestBody(self, reqBody):
        self.reqBody = reqBody

    def getRequestBody(self):
        return self.reqBody

    def addHttpRequestParams(self, key, value):
        self.reqParams[key] = value

    def getHttpRequestParams(self):
        return self.reqParams


class APIConstants(object):
    '''
    This module holds the constants required for the client library
    '''
    ERROR = "error"
    REQUEST_METHOD_GET = "GET"
    REQUEST_METHOD_POST = "POST"
    REQUEST_METHOD_PUT = "PUT"
    REQUEST_METHOD_DELETE = "DELETE"
    OAUTH_HEADER_PREFIX = "Zoho-oauthtoken "
    AUTHORIZATION = "Authorization"
    API_NAME = "api_name"
    INVALID_ID_MSG = "The given id seems to be invalid."
    API_MAX_RECORDS_MSG = "Cannot process more than 100 records at a time."
    INVALID_DATA = "INVALID_DATA"
    CODE_SUCCESS = "SUCCESS"
    STATUS_SUCCESS = "success"
    STATUS_ERROR = "error"
    LEADS = "Leads"
    ACCOUNTS = "Accounts"
    CONTACTS = "Contacts"
    DEALS = "Deals"
    QUOTES = "Quotes"
    SALESORDERS = "SalesOrders"
    INVOICES = "Invoices"
    PURCHASEORDERS = "PurchaseOrders"
    PER_PAGE = "per_page"
    PAGE = "page"
    COUNT = "count"
    MORE_RECORDS = "more_records"
    MESSAGE = "message"
    CODE = "code"
    STATUS = "status"
    DETAILS = "details"
    DATA = "data"
    INFO = "info"
    RESPONSECODE_OK = 200
    RESPONSECODE_CREATED = 201
    RESPONSECODE_ACCEPTED = 202
    RESPONSECODE_NO_CONTENT = 204
    RESPONSECODE_MOVED_PERMANENTLY = 301
    RESPONSECODE_MOVED_TEMPORARILY = 302
    RESPONSECODE_NOT_MODIFIED = 304
    RESPONSECODE_BAD_REQUEST = 400
    RESPONSECODE_AUTHORIZATION_ERROR = 401
    RESPONSECODE_FORBIDDEN = 403
    RESPONSECODE_NOT_FOUND = 404
    RESPONSECODE_METHOD_NOT_ALLOWED = 405
    RESPONSECODE_REQUEST_ENTITY_TOO_LARGE = 413
    RESPONSECODE_UNSUPPORTED_MEDIA_TYPE = 415
    RESPONSECODE_TOO_MANY_REQUEST = 429
    RESPONSECODE_INTERNAL_SERVER_ERROR = 500
    DOWNLOAD_FILE_PATH = "../../../../../../resources"
    USER_EMAIL_ID = "user_email_id"
    ACTION = "action"
    DUPLICATE_FIELD = "duplicate_field"
    NO_CONTENT = "No Content"
    FAULTY_RESPONSE_CODES = [RESPONSECODE_NO_CONTENT, RESPONSECODE_NOT_FOUND, RESPONSECODE_AUTHORIZATION_ERROR,
                             RESPONSECODE_BAD_REQUEST, RESPONSECODE_FORBIDDEN, RESPONSECODE_INTERNAL_SERVER_ERROR,
                             RESPONSECODE_METHOD_NOT_ALLOWED, RESPONSECODE_MOVED_PERMANENTLY,
                             RESPONSECODE_MOVED_TEMPORARILY, RESPONSECODE_REQUEST_ENTITY_TOO_LARGE,
                             RESPONSECODE_TOO_MANY_REQUEST, RESPONSECODE_UNSUPPORTED_MEDIA_TYPE]


class ZCRMConfigUtil(object):
    '''
    This class is to deal with configuration related things
    '''
    configPropDict = {}

    @staticmethod
    def getInstance():
        return ZCRMConfigUtil()

    @staticmethod
    def initialize(isToInitializeOAuth):
        from com.PathIdentifier import PathIdentifier
        import os
        dirSplit = os.path.split(PathIdentifier.getClientLibraryRoot())
        resources_path = os.path.join(dirSplit[0], 'resources', 'configuration.properties')
        filePointer = open(resources_path, "r")
        ZCRMConfigUtil.configPropDict = CommonUtil.getFileContentAsDictionary(filePointer)
        if isToInitializeOAuth:
            ZohoOAuth.initialize()

    @staticmethod
    def getAPIBaseUrl():
        return ZCRMConfigUtil.configPropDict["apiBaseUrl"]

    @staticmethod
    def getAPIVersion():
        return ZCRMConfigUtil.configPropDict["apiVersion"]

    def getAccessToken(self):
        from com.zoho.crm.library.setup.ZCRMRestClient import ZCRMRestClient
        userEmail = ZCRMRestClient.getInstance().getCurrentUserEmailID()
        if userEmail is None and ZCRMConfigUtil.configPropDict['currentUserEmail'] is None:
            raise ZCRMException('fetching current user email', 400, 'Current user should either be set in ZCRMRestClient or in configuration.properties file')
        elif userEmail is None:
            userEmail = ZCRMConfigUtil.configPropDict['currentUserEmail']
        clientIns = ZohoOAuth.getClientInstance()
        # Return the fetched token; APIRequest.authenticateRequest expects a value.
        return clientIns.getAccessToken(userEmail)


class CommonUtil(object):
    '''
    This class is to provide utility methods
    '''

    @staticmethod
    def getFileContentAsDictionary(filePointer):
        dictionary = {}
        for line in filePointer:
            line = line.rstrip()
            keyValue = line.split("=")
            if not keyValue[0].startswith('#'):
                dictionary[keyValue[0]] = keyValue[1]
        filePointer.close()
        return dictionary
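
# --- Illustrative note (a sketch, not part of the original module) ---
# getFileContentAsDictionary parses simple 'key=value' lines and skips
# '#'-prefixed comments. The keys read by ZCRMConfigUtil above suggest a
# configuration.properties file of roughly this (hypothetical) shape:
#
#   apiBaseUrl=www.zohoapis.com
#   apiVersion=v2
#   currentUserEmail=[email protected]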
zcrm-python-cl
/zcrm-python-cl-0.0.3.tar.gz/zcrm-python-cl-0.0.3/src/com/zoho/crm/library/common/Utility.py
Utility.py
# ZOHO CRM PYTHON SDK 2.1 ## Table Of Contents * [Overview](#overview) * [Registering a Zoho Client](#registering-a-zoho-client) * [Environmental Setup](#environmental-setup) * [Including the SDK in your project](#including-the-sdk-in-your-project) * [Persistence](#token-persistence) * [DataBase Persistence](#database-persistence) * [File Persistence](#file-persistence) * [Custom Persistence](#custom-persistence) * [Configuration](#configuration) * [Initialization](#initializing-the-application) * [Class Hierarchy](#class-hierarchy) * [Responses And Exceptions](#responses-and-exceptions) * [Threading](#threading-in-the-python-sdk) * [Multithreading in a Multi-User App](#multithreading-in-a-multi-user-app) * [Multi-threading in a Single User App](#multi-threading-in-a-single-user-app) * [Sample Code](#sdk-sample-code) ## Overview Zoho CRM PYTHON SDK offers a way to create client Python applications that can be integrated with Zoho CRM. ## Registering a Zoho Client Since Zoho CRM APIs are authenticated with OAuth2 standards, you should register your client app with Zoho. To register your app: - Visit this page [https://api-console.zoho.com](https://api-console.zoho.com) - Click on `ADD CLIENT`. - Choose a `Client Type`. - Enter **Client Name**, **Client Domain** or **Homepage URL** and **Authorized Redirect URIs** then click `CREATE`. - Your Client app would have been created and displayed by now. - Select the created OAuth client. - Generate grant token by providing the necessary scopes, time duration (the duration for which the generated token is valid) and Scope Description. ## Environmental Setup Python SDK is installable through **pip**. **pip** is a tool for dependency management in Python. SDK expects the following from the client app. - Client app must have Python(version 3 and above) - Python SDK must be installed into client app through **pip**. ## Including the SDK in your project You can include the SDK to your project using: - Install **Python** from [python.org](https://www.python.org/downloads/) (if not installed). - Install **Python SDK** - Navigate to the workspace of your client app. - Run the command below: ```sh pip install zcrmsdk==4.x.xb3 ``` - The Python SDK will be installed in your client application. ## Token Persistence Token persistence refers to storing and utilizing the authentication tokens that are provided by Zoho. There are three ways provided by the SDK in which persistence can be utilized. They are DataBase Persistence, File Persistence and Custom Persistence. ### Table of Contents - [DataBase Persistence](#database-persistence) - [File Persistence](#file-persistence) - [Custom Persistence](#custom-persistence) ### Implementing OAuth Persistence Once the application is authorized, OAuth access and refresh tokens can be used for subsequent user data requests to Zoho CRM. Hence, they need to be persisted by the client app. The persistence is achieved by writing an implementation of the inbuilt Abstract Base Class **[TokenStore](zcrmsdk/src/com/zoho/api/authenticator/store/token_store.py)**, which has the following callback methods. - **get_token(self, user, [token](zcrmsdk/src/com/zoho/api/authenticator/token.py))** - invoked before firing a request to fetch the saved tokens. This method should return implementation of inbuilt **Token Class** object for the library to process it. - **save_token(self, user, [token](zcrmsdk/src/com/zoho/api/authenticator/token.py))** - invoked after fetching access and refresh tokens from Zoho. 
- **delete_token(self, [token](zcrmsdk/src/com/zoho/api/authenticator/token.py))** - invoked before saving the latest tokens. - **get_tokens(self)** - The method to get the all the stored tokens. - **delete_tokens(self)** - The method to delete all the stored tokens. Note: - user is an instance of UserSignature Class. - token is an instance of Token Class. ### DataBase Persistence In case the user prefers to use default DataBase persistence, **MySQL** can be used. - The database name should be **zohooauth**. - There must be a table name **oauthtoken** with the following columns. - id int(11) - user_mail varchar(255) - client_id varchar(255) - client_secret varchar(255) - refresh_token varchar(255) - access_token varchar(255) - grant_token varchar(255) - expiry_time varchar(20) - redirect_url varchar(255) #### MySQL Query ```sql CREATE TABLE oauthtoken ( id varchar(255) NOT NULL, user_mail varchar(255) NOT NULL, client_id varchar(255), client_secret varchar(255), refresh_token varchar(255), access_token varchar(255), grant_token varchar(255), expiry_time varchar(20), redirect_url varchar(255), primary key (id) ) alter table oauthtoken auto_increment = 1; ``` #### Create DBStore object ```python from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore """ DBStore takes the following parameters 1 -> DataBase host name. Default value "localhost" 2 -> DataBase name. Default value "zohooauth" 3 -> DataBase user name. Default value "root" 4 -> DataBase password. Default value "" 5 -> DataBase port number. Default value "3306" 6-> DataBase table name . Default value "oauthtoken" """ store = DBStore() store = DBStore(host='host_name', database_name='database_name', user_name='user_name', password='password', port_number='port_number', table_name = "table_name") ``` ### File Persistence In case of File Persistence, the user can persist tokens in the local drive, by providing the absolute file path to the FileStore object. - The File contains - id - user_mail - client_id - client_secret - refresh_token - access_token - grant_token - expiry_time - redirect_url #### Create FileStore object ```python from zcrmsdk.src.com.zoho.api.authenticator.store import FileStore """ FileStore takes the following parameter 1 -> Absolute file path of the file to persist tokens """ store = FileStore(file_path='/Users/username/Documents/python_sdk_token.txt') ``` ### Custom Persistence To use Custom Persistence, the user must implement the Abstract Base Class **[TokenStore](zcrmsdk/src/com/zoho/api/authenticator/store/token_store.py)** and override the methods. ```python from zcrmsdk.src.com.zoho.api.authenticator.store import TokenStore class CustomStore(TokenStore): def __init__(self): pass def get_token(self, user, token): """ Parameters: user (UserSignature) : A UserSignature class instance. token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance """ # Add code to get the token return None def save_token(self, user, token): """ Parameters: user (UserSignature) : A UserSignature class instance. 
token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance """ # Add code to save the token def delete_token(self, token): """ Parameters: token (Token) : A Token (zcrmsdk.src.com.zoho.api.authenticator.OAuthToken) class instance """ # Add code to delete the token def get_tokens(self): """ Returns: list: List of stored tokens """ # Add code to get all the stored tokens def delete_tokens(self): # Add code to delete all the stored tokens def get_token_by_id(id, token): """ The method to get id token details. Parameters: id (String) : A String id. token (Token) : A Token class instance. Returns: Token : A Token class instance representing the id token details. """ ``` ## Configuration Before you get started with creating your Python application, you need to register your client and authenticate the app with Zoho. - Create an instance of **Logger** Class to log exception and API information. ```python from zcrmsdk.src.com.zoho.api.logger import Logger """ Create an instance of Logger Class that takes two parameters 1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." and choose any level from the list displayed. 2 -> Absolute file path, where messages need to be logged. """ logger = Logger.get_instance(level=Logger.Levels.INFO, file_path="/Users/user_name/Documents/python_sdk_log.log") ``` - Create an instance of **UserSignature** Class that identifies the current user. ```python from zcrmsdk.src.com.zoho.crm.api.user_signature import UserSignature # Create an UserSignature instance that takes user Email as parameter user = UserSignature(email='[email protected]') ``` - Configure API environment which decides the domain and the URL to make API calls. ```python from zcrmsdk.src.com.zoho.crm.api.dc import USDataCenter """ Configure the environment which is of the pattern Domain.Environment Available Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenter Available Environments: PRODUCTION(), DEVELOPER(), SANDBOX() """ environment = USDataCenter.PRODUCTION() ``` - Create an instance of OAuthToken with the information that you get after registering your Zoho client. ```python from zcrmsdk.src.com.zoho.api.authenticator.oauth_token import OAuthToken """ Create a Token instance that takes the following parameters 1 -> OAuth client id. 2 -> OAuth client secret. 3 -> Grant token. 4 -> Refresh token. 5 -> OAuth redirect URL. Default value is None 6 -> id """ token = OAuthToken(client_id='clientId', client_secret='clientSecret', grant_token='grant_token', refresh_token="refresh_token", redirect_url='redirectURL', id="id") ``` - Create an instance of [TokenStore](zcrmsdk/src/com/zoho/api/authenticator/store/token_store.py) to persist tokens, used for authenticating all the requests. ```python from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore, FileStore """ DBStore takes the following parameters 1 -> DataBase host name. Default value "localhost" 2 -> DataBase name. Default value "zohooauth" 3 -> DataBase user name. Default value "root" 4 -> DataBase password. Default value "" 5 -> DataBase port number. Default value "3306" 6 -> DataBase table name. 
Default value "oauthtoken" """ store = DBStore() #store = DBStore(host='host_name', database_name='database_name', user_name='user_name', password='password', port_number='port_number', table_name = "table_name") """ FileStore takes the following parameter 1 -> Absolute file path of the file to persist tokens """ #store = FileStore(file_path='/Users/username/Documents/python_sdk_tokens.txt') ``` - Create an instance of **[SDKConfig](zcrmsdk/src/com/zoho/crm/api/sdk_config.py)** containing the SDK Configuration. ```python from zcrmsdk.src.com.zoho.crm.api.sdk_config import SDKConfig """ auto_refresh_fields (Default value is False) if True - all the modules' fields will be auto-refreshed in the background, every hour. if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler(zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py) pick_list_validation (Default value is True) A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list. if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error. if False - the SDK does not validate the input and makes the API request with the user’s input to the pick list connect_timeout (Default value is None) A Float field to set connect timeout read_timeout (Default value is None) A Float field to set read timeout """ config = SDKConfig(auto_refresh_fields=True, pick_list_validation=False, connect_timeout=None, read_timeout=None) ``` - The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules. ```python resource_path = '/Users/user_name/Documents/python-app' ``` - Create an instance of RequestProxy containing the proxy properties of the user. ```python from zcrmsdk.src.com.zoho.crm.api.request_proxy import RequestProxy """ RequestProxy takes the following parameters 1 -> Host 2 -> Port Number 3 -> User Name. Default value is None 4 -> Password. Default value is an empty string """ request_proxy = RequestProxy(host='proxyHost', port=80) request_proxy = RequestProxy(host='proxyHost', port=80, user='userName', password='password') ``` ## Initializing the Application Initialize the SDK using the following code. ```python from zcrmsdk.src.com.zoho.crm.api.user_signature import UserSignature from zcrmsdk.src.com.zoho.crm.api.dc import USDataCenter from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore, FileStore from zcrmsdk.src.com.zoho.api.logger import Logger from zcrmsdk.src.com.zoho.crm.api.initializer import Initializer from zcrmsdk.src.com.zoho.api.authenticator.oauth_token import OAuthToken from zcrmsdk.src.com.zoho.crm.api.sdk_config import SDKConfig class SDKInitializer(object): @staticmethod def initialize(): """ Create an instance of Logger Class that takes two parameters 1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." and choose any level from the list displayed. 2 -> Absolute file path, where messages need to be logged. 
""" logger = Logger.get_instance(level=Logger.Levels.INFO, file_path='/Users/user_name/Documents/python_sdk_log.log') # Create an UserSignature instance that takes user Email as parameter user = UserSignature(email='[email protected]') """ Configure the environment which is of the pattern Domain.Environment Available Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenter Available Environments: PRODUCTION(), DEVELOPER(), SANDBOX() """ environment = USDataCenter.PRODUCTION() """ Create a Token instance that takes the following parameters 1 -> OAuth client id. 2 -> OAuth client secret. 3 -> Grant token. 4 -> Refresh token. 5 -> OAuth redirect URL. 6 -> id """ token = OAuthToken(client_id='clientId', client_secret='clientSecret', grant_token='grant_token', refresh_token="refresh_token", redirect_url='redirectURL', id="id") """ Create an instance of TokenStore 1 -> Absolute file path of the file to persist tokens """ store = FileStore(file_path='/Users/username/Documents/python_sdk_tokens.txt') """ Create an instance of TokenStore 1 -> DataBase host name. Default value "localhost" 2 -> DataBase name. Default value "zohooauth" 3 -> DataBase user name. Default value "root" 4 -> DataBase password. Default value "" 5 -> DataBase port number. Default value "3306" 6-> DataBase table name . Default value "oauthtoken" """ store = DBStore() store = DBStore(host='host_name', database_name='database_name', user_name='user_name', password='password',port_number='port_number', table_name = "table_name") """ auto_refresh_fields (Default value is False) if True - all the modules' fields will be auto-refreshed in the background, every hour. if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler(zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py) pick_list_validation (Default value is True) A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list. if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error. if False - the SDK does not validate the input and makes the API request with the user’s input to the pick list connect_timeout (Default value is None) A Float field to set connect timeout read_timeout (Default value is None) A Float field to set read timeout """ config = SDKConfig(auto_refresh_fields=True, pick_list_validation=False, connect_timeout=None, read_timeout=None) """ The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules. """ resource_path = '/Users/user_name/Documents/python-app' """ Create an instance of RequestProxy class that takes the following parameters 1 -> Host 2 -> Port Number 3 -> User Name. Default value is None 4 -> Password. Default value is None """ request_proxy = RequestProxy(host='host', port=8080) request_proxy = RequestProxy(host='host', port=8080, user='user', password='password') """ Call the static initialize method of Initializer class that takes the following arguments 1 -> UserSignature instance 2 -> Environment instance 3 -> Token instance 4 -> TokenStore instance 5 -> SDKConfig instance 6 -> resource_path 7 -> Logger instance. Default value is None 8 -> RequestProxy instance. 
Default value is None
        """

        Initializer.initialize(user=user, environment=environment, token=token, store=store, sdk_config=config, resource_path=resource_path, logger=logger, proxy=request_proxy)

SDKInitializer.initialize()
```

- You can now access the functionalities of the SDK. Refer to the sample codes to make various API calls through the SDK.

## Class Hierarchy

![classdiagram](class_hierarchy.png)

## Responses and Exceptions

All SDK methods return an instance of the APIResponse class.

After a successful API request, the **get_object()** method returns an instance of the **ResponseWrapper** (for **GET**) or the **ActionWrapper** (for **POST, PUT, DELETE**).

Whenever the API returns an error response, **get_object()** returns an instance of the **APIException** class.

**ResponseWrapper** (for **GET** requests) and **ActionWrapper** (for **POST, PUT, DELETE** requests) are the expected objects for Zoho CRM APIs' responses. However, some specific operations have different expected objects, such as the following:

- Operations involving records in Tags - **RecordActionWrapper**
- Getting Record Count for a specific Tag operation - **CountWrapper**
- Operations involving BaseCurrency - **BaseCurrencyActionWrapper**
- Lead convert operation - **ConvertActionWrapper**
- Retrieving Deleted records operation - **DeletedRecordsWrapper**
- Record image download operation - **FileBodyWrapper**
- MassUpdate record operations
    - **MassUpdateActionWrapper**
    - **MassUpdateResponseWrapper**

### GET Requests

- The **get_object()** returns an instance of one of the following classes, based on the return type.
    - For **application/json** responses
        - **ResponseWrapper**
        - **CountWrapper**
        - **DeletedRecordsWrapper**
        - **MassUpdateResponseWrapper**
        - **APIException**
    - For **file download** responses
        - **FileBodyWrapper**
        - **APIException**

### POST, PUT, DELETE Requests

- The **get_object()** returns an instance of one of the following classes
    - **ActionWrapper**
    - **RecordActionWrapper**
    - **BaseCurrencyActionWrapper**
    - **MassUpdateActionWrapper**
    - **ConvertActionWrapper**
    - **APIException**
- These wrapper classes may contain one or a list of instances of the following classes, depending on the response.
    - **SuccessResponse Class**, if the request was successful.
    - **APIException Class**, if the request was erroneous.

For example, when you insert two records and one of them is inserted successfully while the other one fails, the ActionWrapper will contain one instance each of the SuccessResponse and APIException classes.

All other exceptions, such as SDK anomalies and other unexpected behaviours, are thrown under the SDKException class.

## Threading in the Python SDK

Threads in a Python program help you achieve parallelism. By using multiple threads, you can make a Python program run faster and do multiple things simultaneously.

The **Python SDK** (from version 3.x.x) supports both single-user and multi-user apps.

### Multithreading in a Multi-user App

Multi-threading for multi-users is achieved using Initializer's static **switch_user()** method.

switch_user() reuses the values previously initialized for user, environment, token and sdk_config in case None (the default) is passed. For request_proxy, if intended, the value has to be passed again; otherwise None (the default) is used.
```python # without proxy Initializer.switch_user(user=user, environment=environment, token=token, sdk_config=sdk_config_instance) # with proxy Initializer.switch_user(user=user, environment=environment, token=token, sdk_config=sdk_config_instance, proxy=request_proxy) ``` Here is a sample code to depict multi-threading for a multi-user app. ```python import threading from zcrmsdk.src.com.zoho.crm.api.user_signature import UserSignature from zcrmsdk.src.com.zoho.crm.api.dc import USDataCenter, EUDataCenter from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore from zcrmsdk.src.com.zoho.api.logger import Logger from zcrmsdk.src.com.zoho.crm.api.initializer import Initializer from zcrmsdk.src.com.zoho.api.authenticator.oauth_token import OAuthToken from zcrmsdk.src.com.zoho.crm.api.record import * from zcrmsdk.src.com.zoho.crm.api.request_proxy import RequestProxy from zcrmsdk.src.com.zoho.crm.api.sdk_config import SDKConfig class MultiThread(threading.Thread): def __init__(self, environment, token, user, module_api_name, sdk_config, proxy=None): super().__init__() self.environment = environment self.token = token self.user = user self.module_api_name = module_api_name self.sdk_config = sdk_config self.proxy = proxy def run(self): try: Initializer.switch_user(user=self.user, environment=self.environment, token=self.token, sdk_config=self.sdk_config, proxy=self.proxy) print('Getting records for User: ' + Initializer.get_initializer().user.get_email()) response = RecordOperations().get_records(self.module_api_name) if response is not None: # Get the status code from response print('Status Code: ' + str(response.get_status_code())) if response.get_status_code() in [204, 304]: print('No Content' if response.get_status_code() == 204 else 'Not Modified') return # Get object from response response_object = response.get_object() if response_object is not None: # Check if expected ResponseWrapper instance is received. 
if isinstance(response_object, ResponseWrapper): # Get the list of obtained Record instances record_list = response_object.get_data() for record in record_list: for key, value in record.get_key_values().items(): print(key + " : " + str(value)) # Check if the request returned an exception elif isinstance(response_object, APIException): # Get the Status print("Status: " + response_object.get_status().get_value()) # Get the Code print("Code: " + response_object.get_code().get_value()) print("Details") # Get the details dict details = response_object.get_details() for key, value in details.items(): print(key + ' : ' + str(value)) # Get the Message print("Message: " + response_object.get_message().get_value()) except Exception as e: print(e) @staticmethod def call(): logger = Logger.get_instance(level=Logger.Levels.INFO, file_path="/Users/user_name/Documents/python_sdk_log.log") user1 = UserSignature(email="[email protected]") token1 = OAuthToken(client_id="clientId1", client_secret="clientSecret1", grant_token="Grant Token", refresh_token="refresh_token", id="id") environment1 = USDataCenter.PRODUCTION() store = DBStore() sdk_config_1 = SDKConfig(auto_refresh_fields=True, pick_list_validation=False) resource_path = '/Users/user_name/Documents/python-app' user1_module_api_name = 'Leads' user2_module_api_name = 'Contacts' environment2 = EUDataCenter.SANDBOX() user2 = UserSignature(email="[email protected]") sdk_config_2 = SDKConfig(auto_refresh_fields=False, pick_list_validation=True) token2 = OAuthToken(client_id="clientId2", client_secret="clientSecret2",grant_token="GRANT Token", refresh_token="refresh_token", redirect_url="redirectURL", id="id") request_proxy_user_2 = RequestProxy("host", 8080) Initializer.initialize(user=user1, environment=environment1, token=token1, store=store, sdk_config=sdk_config_1, resource_path=resource_path, logger=logger) t1 = MultiThread(environment1, token1, user1, user1_module_api_name, sdk_config_1) t2 = MultiThread(environment2, token2, user2, user2_module_api_name, sdk_config_2, request_proxy_user_2) t1.start() t2.start() t1.join() t2.join() MultiThread.call() ``` - The program execution starts from **call()**. - The details of **user1** are given in the variables user1, token1, environment1. - Similarly, the details of another user **user2** are given in the variables user2, token2, environment2. - For each user, an instance of **MultiThread class** is created. - When the **start()** is called which in-turn invokes the **run()**, the details of user1 are passed to the **switch_user** method through the **MultiThread object**. Therefore, this creates a thread for user1. - Similarly, When the **start()** is invoked again, the details of user2 are passed to the **switch_user** function through the **MultiThread object**. Therefore, this creates a thread for user2. ### Multi-threading in a Single User App Here is a sample code to depict multi-threading for a single-user app. 
```python import threading from zcrmsdk.src.com.zoho.crm.api.user_signature import UserSignature from zcrmsdk.src.com.zoho.crm.api.dc import USDataCenter from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore from zcrmsdk.src.com.zoho.api.logger import Logger from zcrmsdk.src.com.zoho.crm.api.initializer import Initializer from zcrmsdk.src.com.zoho.api.authenticator.oauth_token import OAuthToken from zcrmsdk.src.com.zoho.crm.api.sdk_config import SDKConfig from zcrmsdk.src.com.zoho.crm.api.record import * class MultiThread(threading.Thread): def __init__(self, module_api_name): super().__init__() self.module_api_name = module_api_name def run(self): try: print("Calling Get Records for module: " + self.module_api_name) response = RecordOperations().get_records(self.module_api_name) if response is not None: # Get the status code from response print('Status Code: ' + str(response.get_status_code())) if response.get_status_code() in [204, 304]: print('No Content' if response.get_status_code() == 204 else 'Not Modified') return # Get object from response response_object = response.get_object() if response_object is not None: # Check if expected ResponseWrapper instance is received. if isinstance(response_object, ResponseWrapper): # Get the list of obtained Record instances record_list = response_object.get_data() for record in record_list: for key, value in record.get_key_values().items(): print(key + " : " + str(value)) # Check if the request returned an exception elif isinstance(response_object, APIException): # Get the Status print("Status: " + response_object.get_status().get_value()) # Get the Code print("Code: " + response_object.get_code().get_value()) print("Details") # Get the details dict details = response_object.get_details() for key, value in details.items(): print(key + ' : ' + str(value)) # Get the Message print("Message: " + response_object.get_message().get_value()) except Exception as e: print(e) @staticmethod def call(): logger = Logger.get_instance(level=Logger.Levels.INFO, file_path="/Users/user_name/Documents/python_sdk_log.log") user = UserSignature(email="[email protected]") token = OAuthToken(client_id="clientId", client_secret="clientSecret", grant_token="grant_token", refresh_token="refresh_token", redirect_url="redirectURL", id="id") environment = USDataCenter.PRODUCTION() store = DBStore() sdk_config = SDKConfig() resource_path = '/Users/user_name/Documents/python-app' Initializer.initialize(user=user, environment=environment, token=token, store=store, sdk_config=sdk_config, resource_path=resource_path, logger=logger) t1 = MultiThread('Leads') t2 = MultiThread('Quotes') t1.start() t2.start() t1.join() t2.join() MultiThread.call() ``` - The program execution starts from **call()** where the SDK is initialized with the details of the user. - When the **start()** is called which in-turn invokes the run(), the module_api_name is switched through the MultiThread object. Therefore, this creates a thread for the particular MultiThread instance. 
## SDK Sample code ```python from datetime import datetime from zcrmsdk.src.com.zoho.crm.api.user_signature import UserSignature from zcrmsdk.src.com.zoho.crm.api.dc import USDataCenter from zcrmsdk.src.com.zoho.api.authenticator.store import DBStore from zcrmsdk.src.com.zoho.api.logger import Logger from zcrmsdk.src.com.zoho.crm.api.initializer import Initializer from zcrmsdk.src.com.zoho.api.authenticator.oauth_token import OAuthToken from zcrmsdk.src.com.zoho.crm.api.record import * from zcrmsdk.src.com.zoho.crm.api import HeaderMap, ParameterMap from zcrmsdk.src.com.zoho.crm.api.sdk_config import SDKConfig class Record(object): def __init__(self): pass @staticmethod def get_records(): """ Create an instance of Logger Class that takes two parameters 1 -> Level of the log messages to be logged. Can be configured by typing Logger.Levels "." and choose any level from the list displayed. 2 -> Absolute file path, where messages need to be logged. """ logger = Logger.get_instance(level=Logger.Levels.INFO, file_path="/Users/user_name/Documents/python_sdk_log.log") # Create an UserSignature instance that takes user Email as parameter user = UserSignature(email="[email protected]") """ Configure the environment which is of the pattern Domain.Environment Available Domains: USDataCenter, EUDataCenter, INDataCenter, CNDataCenter, AUDataCenter Available Environments: PRODUCTION(), DEVELOPER(), SANDBOX() """ environment = USDataCenter.PRODUCTION() """ Create a Token instance that takes the following parameters 1 -> OAuth client id. 2 -> OAuth client secret. 3 -> Grant token. 4 -> Refresh token. 5 -> OAuth redirect URL. 6 -> id """ token = OAuthToken(client_id="clientId", client_secret="clientSecret", grant_token="grant_token", refresh_token="refresh_token", redirect_url="redirectURL", id="id") """ Create an instance of TokenStore 1 -> DataBase host name. Default value "localhost" 2 -> DataBase name. Default value "zohooauth" 3 -> DataBase user name. Default value "root" 4 -> DataBase password. Default value "" 5 -> DataBase port number. Default value "3306" 6-> DataBase table name . Default value "oauthtoken" """ store = DBStore() """ auto_refresh_fields (Default value is False) if True - all the modules' fields will be auto-refreshed in the background, every hour. if False - the fields will not be auto-refreshed in the background. The user can manually delete the file(s) or refresh the fields using methods from ModuleFieldsHandler(zcrmsdk/src/com/zoho/crm/api/util/module_fields_handler.py) pick_list_validation (Default value is True) A boolean field that validates user input for a pick list field and allows or disallows the addition of a new value to the list. if True - the SDK validates the input. If the value does not exist in the pick list, the SDK throws an error. if False - the SDK does not validate the input and makes the API request with the user’s input to the pick list connect_timeout (Default value is None) A Float field to set connect timeout read_timeout (Default value is None) A Float field to set read timeout """ config = SDKConfig(auto_refresh_fields=True, pick_list_validation=False, connect_timeout=None, read_timeout=None) """ The path containing the absolute directory path (in the key resource_path) to store user-specific files containing information about fields in modules. 
""" resource_path = '/Users/user_name/Documents/python-app' """ Call the static initialize method of Initializer class that takes the following arguments 1 -> UserSignature instance 2 -> Environment instance 3 -> Token instance 4 -> TokenStore instance 5 -> SDKConfig instance 6 -> resource_path 7 -> Logger instance """ Initializer.initialize(user=user, environment=environment, token=token, store=store, sdk_config=config, resource_path=resource_path, logger=logger) try: module_api_name = 'Leads' param_instance = ParameterMap() param_instance.add(GetRecordsParam.converted, 'both') param_instance.add(GetRecordsParam.cvid, '12712717217218') header_instance = HeaderMap() header_instance.add(GetRecordsHeader.if_modified_since, datetime.now()) response = RecordOperations().get_records(module_api_name, param_instance, header_instance) if response is not None: # Get the status code from response print('Status Code: ' + str(response.get_status_code())) if response.get_status_code() in [204, 304]: print('No Content' if response.get_status_code() == 204 else 'Not Modified') return # Get object from response response_object = response.get_object() if response_object is not None: # Check if expected ResponseWrapper instance is received. if isinstance(response_object, ResponseWrapper): # Get the list of obtained Record instances record_list = response_object.get_data() for record in record_list: # Get the ID of each Record print("Record ID: " + record.get_id()) # Get the createdBy User instance of each Record created_by = record.get_created_by() # Check if created_by is not None if created_by is not None: # Get the Name of the created_by User print("Record Created By - Name: " + created_by.get_name()) # Get the ID of the created_by User print("Record Created By - ID: " + created_by.get_id()) # Get the Email of the created_by User print("Record Created By - Email: " + created_by.get_email()) # Get the CreatedTime of each Record print("Record CreatedTime: " + str(record.get_created_time())) if record.get_modified_time() is not None: # Get the ModifiedTime of each Record print("Record ModifiedTime: " + str(record.get_modified_time())) # Get the modified_by User instance of each Record modified_by = record.get_modified_by() # Check if modified_by is not None if modified_by is not None: # Get the Name of the modified_by User print("Record Modified By - Name: " + modified_by.get_name()) # Get the ID of the modified_by User print("Record Modified By - ID: " + modified_by.get_id()) # Get the Email of the modified_by User print("Record Modified By - Email: " + modified_by.get_email()) # Get the list of obtained Tag instance of each Record tags = record.get_tag() if tags is not None: for tag in tags: # Get the Name of each Tag print("Record Tag Name: " + tag.get_name()) # Get the Id of each Tag print("Record Tag ID: " + tag.get_id()) # To get particular field value print("Record Field Value: " + str(record.get_key_value('Last_Name'))) print('Record KeyValues: ') for key, value in record.get_key_values().items(): print(key + " : " + str(value)) # Check if the request returned an exception elif isinstance(response_object, APIException): # Get the Status print("Status: " + response_object.get_status().get_value()) # Get the Code print("Code: " + response_object.get_code().get_value()) print("Details") # Get the details dict details = response_object.get_details() for key, value in details.items(): print(key + ' : ' + str(value)) # Get the Message print("Message: " + response_object.get_message().get_value()) except 
Exception as e: print(e) Record.get_records() ```
zcrmsdk
/zcrmsdk-4.0.0b3.tar.gz/zcrmsdk-4.0.0b3/README.md
README.md
# ZCross

ZCross is a python library used to read low pressure gas cross sections from various sources like [LXCat](https://lxcat.net/home/).

## Installation

To install this package, just use pip:

``` shell
pip install zcross
```

Cross section databases are not provided with `ZCross`; however, it is possible to download the cross section tables of interest from the [download section](https://nl.lxcat.net/data/download.php) of [LXCat](https://www.lxcat.net).
Once you have downloaded the cross sections in `XML` format, save them somewhere (we suggest under `/opt/zcross_data`) and define an environment variable pointing to that path:

``` bash
export ZCROSS_DATA=/opt/zcross_data
```

(you can add it to your `.profile` file)

## Examples

List the available databases:

``` python
import zcross

zs = zcross.load_all() # be patient, it will take a while ...
for z in zs:
    print(z.database)
```

Show the groups and references of a specific database:

``` python
import zcross

z = zcross.load_by_name('ccc')

for group in z.database:
    print(group)

for reference in z.database.references:
    print('[{}]:'.format(reference.type))
    for k,v in reference.items():
        print('  {:<10} : {}'.format(k,v))
```

Show the processes of a specific group:

``` python
import zcross

z = zcross.load_by_name('itikawa')

group = z.database[0]

for process in group:
    print("Process {}: {}".format(process.id, process.get_simple_type()))
    print("Comment: {}\n".format(process.comment))
```

Show the cross section table of a specific process:

``` python
import zcross

z = zcross.load_by_name('phelps')

process = z.database['H2O'][5]

print('Reaction:')
print(process.get_reaction())

print('Energy [{}],\tArea [{}]'.format(process.energy_units, process.cross_section_units))
for energy, area in process:
    print('{:8.2f}\t{:e}'.format(energy, area))
```
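Once the `(energy, area)` pairs are in hand, they plug directly into standard numerical tools. Below is a minimal sketch, assuming `numpy` is installed; the interpolation step is illustrative and not part of the ZCross API, while the database and attribute names are the ones used in the examples above:

``` python
import numpy as np
import zcross

z = zcross.load_by_name('phelps')
process = z.database['H2O'][5]

# Collect the (energy, cross-section) pairs exposed by iterating the process
energies, areas = zip(*[(energy, area) for energy, area in process])

# Linearly interpolate the cross section at an arbitrary energy
sigma = np.interp(10.0, energies, areas)
print('sigma at 10 {}: {:e} {}'.format(process.energy_units, sigma, process.cross_section_units))
```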
zcross
/zcross-0.0.20.tar.gz/zcross-0.0.20/README.md
README.md
from __future__ import annotations

import logging
from typing import Any

import requests
from requests.exceptions import HTTPError

_LOGGER = logging.getLogger(__name__)


class ZcsAzzurroInverter:
    """Class implementing the ZCS Azzurro API for inverters."""

    ENDPOINT = "https://third.zcsazzurroportal.com:19003"
    AUTH_KEY = "Authorization"
    AUTH_VALUE = "Zcs eHWAeEq0aYO0"
    CLIENT_AUTH_KEY = "client"
    CONTENT_TYPE = "application/json"
    REQUEST_TIMEOUT = 5

    HISTORIC_DATA_KEY = "historicData"
    HISTORIC_DATA_COMMAND = "historicData"
    REALTIME_DATA_KEY = "realtimeData"
    REALTIME_DATA_COMMAND = "realtimeData"
    DEVICES_ALARMS_KEY = "deviceAlarm"
    DEVICES_ALARMS_COMMAND = "deviceAlarm"
    COMMAND_KEY = "command"
    PARAMS_KEY = "params"
    PARAMS_THING_KEY = "thingKey"
    PARAMS_REQUIRED_VALUES_KEY = "requiredValues"
    PARAMS_START_KEY = "start"
    PARAMS_END_KEY = "end"
    RESPONSE_SUCCESS_KEY = "success"
    RESPONSE_VALUES_KEY = "value"

    # Values of required values
    REQUIRED_VALUES_ALL = "*"
    REQUIRED_VALUES_SEP = ","

    def __init__(self, client: str, thing_serial: str, name: str | None = None) -> None:
        """Class initialization."""
        self.client = client
        self._thing_serial = thing_serial
        self.name = name or self._thing_serial

    def _post_request(self, data: dict) -> requests.Response:
        """Send a POST request to the portal.

        The client identifier is set in the headers; data is the dictionary
        sent as JSON. Returns the response from the request.
        """
        headers = {
            ZcsAzzurroInverter.AUTH_KEY: ZcsAzzurroInverter.AUTH_VALUE,
            ZcsAzzurroInverter.CLIENT_AUTH_KEY: self.client,
            "Content-Type": ZcsAzzurroInverter.CONTENT_TYPE,
        }
        _LOGGER.debug(
            "post_request called with client %s, data %s. headers are %s",
            self.client,
            data,
            headers,
        )
        response = requests.post(
            ZcsAzzurroInverter.ENDPOINT,
            headers=headers,
            json=data,
            timeout=ZcsAzzurroInverter.REQUEST_TIMEOUT,
        )
        if response.status_code == 401:
            raise HTTPError(f"{response.status_code}: Authentication Error")
        return response

    def realtime_data_request(
        self,
        required_values: list[str] | None = None,
    ) -> dict:
        """Request realtime data."""
        if not required_values:
            required_values = [ZcsAzzurroInverter.REQUIRED_VALUES_ALL]
        data = {
            ZcsAzzurroInverter.REALTIME_DATA_KEY: {
                ZcsAzzurroInverter.COMMAND_KEY: ZcsAzzurroInverter.REALTIME_DATA_COMMAND,
                ZcsAzzurroInverter.PARAMS_KEY: {
                    ZcsAzzurroInverter.PARAMS_THING_KEY: self._thing_serial,
                    ZcsAzzurroInverter.PARAMS_REQUIRED_VALUES_KEY: ZcsAzzurroInverter.REQUIRED_VALUES_SEP.join(
                        required_values
                    ),
                },
            }
        }
        response = self._post_request(data)
        if not response.ok:
            raise ConnectionError(f"Request error: {response.status_code}")
        response_data: dict[str, Any] = response.json()[ZcsAzzurroInverter.REALTIME_DATA_KEY]
        _LOGGER.debug("fetched realtime data %s", response_data)
        if not response_data[ZcsAzzurroInverter.RESPONSE_SUCCESS_KEY]:
            raise ConnectionError("Response did not return correctly")
        return response_data[ZcsAzzurroInverter.PARAMS_KEY][
            ZcsAzzurroInverter.RESPONSE_VALUES_KEY
        ][0][self._thing_serial]

    def alarms_request(self) -> dict:
        """Request alarms."""
        required_values = [ZcsAzzurroInverter.REQUIRED_VALUES_ALL]
        data = {
            ZcsAzzurroInverter.DEVICES_ALARMS_KEY: {
                ZcsAzzurroInverter.COMMAND_KEY: ZcsAzzurroInverter.DEVICES_ALARMS_COMMAND,
                ZcsAzzurroInverter.PARAMS_KEY: {
                    ZcsAzzurroInverter.PARAMS_THING_KEY: self._thing_serial,
                    ZcsAzzurroInverter.PARAMS_REQUIRED_VALUES_KEY: ZcsAzzurroInverter.REQUIRED_VALUES_SEP.join(
                        required_values
                    ),
                },
            }
        }
        response = self._post_request(data)
        if not response.ok:
            raise ConnectionError("Response did not return correctly")
        response_data: dict[str, Any] = response.json()[
            ZcsAzzurroInverter.DEVICES_ALARMS_KEY
        ]
        _LOGGER.debug("fetched alarms data %s", response_data)
        if not response_data[ZcsAzzurroInverter.RESPONSE_SUCCESS_KEY]:
            raise ConnectionError("Response did not return correctly")
        return response_data[ZcsAzzurroInverter.PARAMS_KEY][
            ZcsAzzurroInverter.RESPONSE_VALUES_KEY
        ][0][self._thing_serial]

    @property
    def identifier(self) -> str:
        """Object identifier."""
        return f"{self.client}_{self._thing_serial}"

    def check_connection(self) -> bool:
        """Return True if a realtime data request succeeds, False otherwise."""
        try:
            self.realtime_data_request([])
            return True
        except ConnectionError:
            return False
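# --- Usage sketch (illustrative only): the client key and serial number below
# are placeholders, not values documented by this module; real ones come from
# the ZCS Azzurro portal. ---
if __name__ == "__main__":
    inverter = ZcsAzzurroInverter(client="my-client-key", thing_serial="ZM1ES030N8AB1234")
    if inverter.check_connection():
        # Pass an empty list (or None) to request every available value ("*")
        print(inverter.realtime_data_request())
        print(inverter.alarms_request())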
zcs-azzurro-api
/zcs_azzurro_api-2023.3.2-py3-none-any.whl/zcs_azzurro_api/zcs_azzurro_inverter.py
zcs_azzurro_inverter.py
from __future__ import annotations

import logging
from typing import Any

import requests

from .const import (
    ENDPOINT,
    AUTH_KEY,
    AUTH_VALUE,
    CLIENT_AUTH_KEY,
    CONTENT_TYPE,
    REQUEST_TIMEOUT,
    REALTIME_DATA_KEY,
    REALTIME_DATA_COMMAND,
    DEVICES_ALARMS_KEY,
    DEVICES_ALARMS_COMMAND,
    COMMAND_KEY,
    PARAMS_KEY,
    PARAMS_THING_KEY,
    PARAMS_REQUIRED_VALUES_KEY,
    RESPONSE_SUCCESS_KEY,
    RESPONSE_VALUES_KEY,
    REQUIRED_VALUES_ALL,
    REQUIRED_VALUES_SEP,
)
from .errors import DeviceOfflineError, HttpRequestError

_LOGGER = logging.getLogger(__name__)


class ZcsBase:
    """Class implementing the basic ZCS Azzurro API functions."""

    def __init__(self, client: str, thing_serial: str, name: str | None = None) -> None:
        """Class initialization."""
        self.client = client
        self._thing_serial = thing_serial
        self.name = name or self._thing_serial

    def _post_request(self, data: dict) -> requests.Response:
        """Send a POST request to the endpoint.

        The client identifier is set in the headers; data is the dictionary
        sent as JSON. Returns the response from the request.
        """
        headers = {
            AUTH_KEY: AUTH_VALUE,
            CLIENT_AUTH_KEY: self.client,
            "Content-Type": CONTENT_TYPE,
        }
        _LOGGER.debug(
            "post_request called with client %s, data %s. headers are %s",
            self.client,
            data,
            headers,
        )
        response = requests.post(
            ENDPOINT,
            headers=headers,
            json=data,
            timeout=REQUEST_TIMEOUT,
        )
        if response.status_code == 401:
            raise HttpRequestError(
                f"{response.status_code}: Authentication Error",
                status_code=response.status_code)
        return response

    def realtime_data_request(
        self,
        required_values: list[str] | None = None,
    ) -> dict:
        """Request realtime data."""
        if not required_values:
            required_values = [REQUIRED_VALUES_ALL]
        data = {
            REALTIME_DATA_KEY: {
                COMMAND_KEY: REALTIME_DATA_COMMAND,
                PARAMS_KEY: {
                    PARAMS_THING_KEY: self._thing_serial,
                    PARAMS_REQUIRED_VALUES_KEY: REQUIRED_VALUES_SEP.join(
                        required_values
                    ),
                },
            }
        }
        response = self._post_request(data)
        if not response.ok:
            raise HttpRequestError(
                f"Request error: {response.status_code}",
                status_code=response.status_code)
        response_data: dict[str, Any] = response.json()[REALTIME_DATA_KEY]
        _LOGGER.debug("fetched realtime data %s", response_data)
        if not response_data[RESPONSE_SUCCESS_KEY]:
            raise DeviceOfflineError("Device request did not succeed")
        return response_data[PARAMS_KEY][
            RESPONSE_VALUES_KEY
        ][0][self._thing_serial]

    def alarms_request(self) -> dict:
        """Request alarms."""
        required_values = [REQUIRED_VALUES_ALL]
        data = {
            DEVICES_ALARMS_KEY: {
                COMMAND_KEY: DEVICES_ALARMS_COMMAND,
                PARAMS_KEY: {
                    PARAMS_THING_KEY: self._thing_serial,
                    PARAMS_REQUIRED_VALUES_KEY: REQUIRED_VALUES_SEP.join(
                        required_values
                    ),
                },
            }
        }
        response = self._post_request(data)
        if not response.ok:
            raise HttpRequestError(
                "Response did not return correctly",
                status_code=response.status_code)
        response_data: dict[str, Any] = response.json()[
            DEVICES_ALARMS_KEY
        ]
        _LOGGER.debug("fetched alarms data %s", response_data)
        if not response_data[RESPONSE_SUCCESS_KEY]:
            raise DeviceOfflineError("Device request did not succeed")
        return response_data[PARAMS_KEY][
            RESPONSE_VALUES_KEY
        ][0][self._thing_serial]

    @property
    def identifier(self) -> str:
        """Object identifier."""
        return f"{self.client}_{self._thing_serial}"

    def check_connection(self) -> bool:
        """Raise on failure; return True if a realtime data request succeeds."""
        self.realtime_data_request([])
        return True
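# --- Companion-module sketch (assumption): this file imports DeviceOfflineError
# and HttpRequestError from .errors, which is not shown in this package listing.
# A minimal errors.py consistent with the calls above could look like:
#
#     class HttpRequestError(Exception):
#         """HTTP-level failure; keeps the status code for callers."""
#
#         def __init__(self, message, status_code=None):
#             super().__init__(message)
#             self.status_code = status_code
#
#
#     class DeviceOfflineError(Exception):
#         """Raised when the portal reports success=False for a device."""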
zcs-azzurro-api
/zcs_azzurro_api-2023.3.2-py3-none-any.whl/zcs_azzurro_api/zcs_base.py
zcs_base.py
from __future__ import annotations import logging from typing import Any import requests from .errors import DeviceOfflineError, HttpRequestError _LOGGER = logging.getLogger(__name__) from .const import ( ENDPOINT, AUTH_KEY, AUTH_VALUE, CLIENT_AUTH_KEY, CONTENT_TYPE, REQUEST_TIMEOUT, REALTIME_DATA_KEY, REALTIME_DATA_COMMAND, DEVICES_ALARMS_KEY, DEVICES_ALARMS_COMMAND, COMMAND_KEY, PARAMS_KEY, PARAMS_THING_KEY, PARAMS_REQUIRED_VALUES_KEY, RESPONSE_SUCCESS_KEY, RESPONSE_VALUES_KEY, REQUIRED_VALUES_ALL, REQUIRED_VALUES_SEP ) class Inverter: """Class implementing ZCS Azzurro API for inverters.""" def __init__(self, client: str, thing_serial: str, name: str | None = None) -> None: """Class initialization.""" self.client = client self._thing_serial = thing_serial self.name = name or self._thing_serial def _post_request(self, data: dict) -> requests.Response: """client: the client to set in header. data: the dictionary to be sent as json return: the response from request. """ headers = { AUTH_KEY: AUTH_VALUE, CLIENT_AUTH_KEY: self.client, "Content-Type": CONTENT_TYPE, } _LOGGER.debug( "post_request called with client %s, data %s. headers are %s", self.client, data, headers, ) response = requests.post( ENDPOINT, headers=headers, json=data, timeout=REQUEST_TIMEOUT, ) if response.status_code == 401: raise HttpRequestError( f"{response.status_code}: Authentication Error", status_code=response.status_code) return response def realtime_data_request( self, required_values: list[str] | None = None, ) -> dict: """Request realtime data.""" if not required_values: required_values = [REQUIRED_VALUES_ALL] data = { REALTIME_DATA_KEY: { COMMAND_KEY: REALTIME_DATA_COMMAND, PARAMS_KEY: { PARAMS_THING_KEY: self._thing_serial, PARAMS_REQUIRED_VALUES_KEY: REQUIRED_VALUES_SEP.join( required_values ), }, } } response = self._post_request(data) if not response.ok: raise HttpRequestError( f"Request error: {response.status_code}", status_code=response.status_code) response_data: dict[str, Any] = response.json()[REALTIME_DATA_KEY] _LOGGER.debug("fetched realtime data %s", response_data) if not response_data[RESPONSE_SUCCESS_KEY]: raise DeviceOfflineError("Device request did not succeed") return response_data[PARAMS_KEY][ RESPONSE_VALUES_KEY ][0][self._thing_serial] def alarms_request(self) -> dict: """Request alarms.""" required_values = [REQUIRED_VALUES_ALL] data = { DEVICES_ALARMS_KEY: { COMMAND_KEY: DEVICES_ALARMS_COMMAND, PARAMS_KEY: { PARAMS_THING_KEY: self._thing_serial, PARAMS_REQUIRED_VALUES_KEY: REQUIRED_VALUES_SEP.join( required_values ), }, } } response = self._post_request(data) if not response.ok: raise HttpRequestError( "Response did not return correctly", status_code=response.status_code) response_data: dict[str, Any] = response.json()[ DEVICES_ALARMS_KEY ] _LOGGER.debug("fetched realtime data %s", response_data) if not response_data[RESPONSE_SUCCESS_KEY]: raise DeviceOfflineError("Device request did not succeed") return response_data[PARAMS_KEY][ RESPONSE_VALUES_KEY ][0][self._thing_serial] @property def identifier(self) -> str: """object identifier.""" return f"{self.client}_{self._thing_serial}" def check_connection(self) -> bool | None: self.realtime_data_request([]) return True
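# --- Companion-module sketch (assumption): the constants imported from .const
# are not shown in this package listing. Values mirroring the class attributes
# in the sibling zcs_azzurro_inverter.py would look like:
#
#     ENDPOINT = "https://third.zcsazzurroportal.com:19003"
#     AUTH_KEY = "Authorization"
#     AUTH_VALUE = "Zcs eHWAeEq0aYO0"
#     CLIENT_AUTH_KEY = "client"
#     CONTENT_TYPE = "application/json"
#     REQUEST_TIMEOUT = 5
#     REALTIME_DATA_KEY = REALTIME_DATA_COMMAND = "realtimeData"
#     DEVICES_ALARMS_KEY = DEVICES_ALARMS_COMMAND = "deviceAlarm"
#     COMMAND_KEY = "command"
#     PARAMS_KEY = "params"
#     PARAMS_THING_KEY = "thingKey"
#     PARAMS_REQUIRED_VALUES_KEY = "requiredValues"
#     RESPONSE_SUCCESS_KEY = "success"
#     RESPONSE_VALUES_KEY = "value"
#     REQUIRED_VALUES_ALL = "*"
#     REQUIRED_VALUES_SEP = ","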
zcs-azzurro-api
/zcs_azzurro_api-2023.3.2-py3-none-any.whl/zcs_azzurro_api/inverter.py
inverter.py
# zcs

**`zcs`** is short for "<strong>Z</strong> <strong>C</strong>onfiguration <strong>S</strong>ystem": a **flexible and powerful** configuration management system built by combining the strengths of `argparse` and `yacs`.

## Comparison with alternatives

| configuration system | Drawbacks | Strengths |
| :-- | -- | -- |
| `argparse` | No config-file support, no hierarchical structure, hard to dump and reproduce experiment parameters | The powerful default, type and help parameters of `add_argument` are easy to use |
| `yacs` | The type system supports only a few types, does not support `None`, and its type checking is confusing | 1. Flexible, easy-to-use hierarchical configuration; 2. Convenient dump and load, making it easy to record and reproduce experiment parameters |

To get the best of both, I combined `argparse` and `yacs` into **`zcs`**:

1. It defines arguments with the same interface as `argparse.add_argument`, supports NoneType and custom types, and is easy to use
1. It offers the same hierarchical configuration management as `yacs`, plus convenient dump and load, so experiment parameters can be recorded and reproduced effectively
1. Its usage is fully compatible with `yacs` and `argparse`, so the learning cost is low

## Example

An example of managing configs scientifically with `zcs`; the example code lives in [zcs/example](./example)

**File structure:**

```bash
example/
├── configs             # optional config files
│   ├── resnet_50.py
│   └── senet_152.yaml
├── defaults.py         # config template
└── main.py
```

1. First, define a config template: **defaults.py**

```python
from zcs.config import CfgNode as CN  # zcs has the same interface and usage as yacs
from zcs import argument

cfg = CN()

cfg.LR = 1e-3  # fully compatible with yacs-style automatic type inference

cfg.OUTPUT = argument(default=None, type=str, help="Output dir")
# use argument to configure default, type and help.
# argument works just like parser.add_argument

cfg.MODEL = CN()  # create a new node

cfg.MODEL.LAYERS = argument(101, int, "How many layers of model")
# equivalent to parser.add_argument(default=101, type=int, help="...")

# supports choices and many other parser.add_argument options
cfg.MODEL.BACKBONE = argument(
    default='resnet',
    choices=['resnet', 'shufflenet', 'senet'],
    help="Backbone of the model",
)
```

2. Next, write a config file for each experiment:

```yaml
# configs/senet_152.yaml
OUTPUT: 'outputs/senet_152'
MODEL:
  BACKBONE: 'senet'
  LAYERS: 152
```

Of course, a config file can also be a Python file, which is more flexible and smarter:

```python
# configs/resnet_50.py
from zcs.config import CfgNode as CN

cfg = CN()
cfg.OUTPUT = 'outputs/resnet_50'

cfg.MODEL = CN()
cfg.MODEL.LAYERS = 50
```

3. In **main.py**, merge the configs from each level and build the final cfg

```python
import os
import argparse
from defaults import cfg

parser = argparse.ArgumentParser()
parser.add_argument(
    '--config',
    default="",
    metavar="FILE",
    help="Path to config file",
)
parser.add_argument(
    "opts",
    help="Modify config options using the command-line",
    default=[],
    nargs=argparse.REMAINDER,
)

if __name__ == "__main__":
    args = parser.parse_args()
    # clone the cfg
    cfg = cfg.clone()
    # merge the config file given by args.config
    cfg.merge_from_file(args.config)
    # merge key-value pairs from the command line
    cfg.merge_from_list(args.opts)
    # dump the parameters of every run so the experiment can be reproduced
    cfg.dump(os.path.join(cfg.OUTPUT, 'dump.yaml'))
    print(cfg)
```

4. For example, to run an experiment based on senet_152 with the learning rate raised to 0.005

```bash
$ python main.py --config configs/senet_152.yaml LR 0.005 OUTPUT outputs/senet_152_lr0.005

LR: 0.005
MODEL:
  BACKBONE: senet
  LAYERS: 152
OUTPUT: outputs/senet_152_lr0.005
```
zcs
/zcs-0.1.25.tar.gz/zcs-0.1.25/README.md
README.md
# ZCSCommonLibrary ![GitHub top language](https://img.shields.io/github/languages/top/Zandercraft/ZCSCommonLibrary) <a href="https://pypi.org/project/zcscommonlib/"> ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/zcscommonlib) </a> ![Python package](https://github.com/Zandercraft/ZCSCommonLibrary/workflows/Python%20package/badge.svg) ![CodeQL](https://github.com/Zandercraft/ZCSCommonLibrary/workflows/CodeQL/badge.svg) <a href = "https://pypi.org/project/zcscommonlib/"> ![PyPi Package Deployment](https://github.com/Zandercraft/ZCSCommonLibrary/workflows/Upload%20Python%20Package/badge.svg) </a> <a href="https://commonlib.zandercraft.ca"> ![Website](https://img.shields.io/website?down_message=offline&label=Website&up_message=online&url=https%3A%2F%2Fcommonlib.zandercraft.ca) </a> A Common Library For Use In Computer Science Projects **PIP Package**: zcscommonlib <br /> **Current Version:** <a href = "https://pypi.org/project/zcscommonlib/"> ![PyPI](https://img.shields.io/pypi/v/zcscommonlib) </a> <br /> **License**: Mozilla Public License Version 2.0 [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md) [![Contribution Guide](https://img.shields.io/badge/Contributions%20Guide-Click%20Here!-limegreen.svg)](CONTRIBUTING.md) ### Importing The Library ```python from zcscommonlib import functions as zcs # Then use the functions as zcs.function() ``` ### Build The Library Prepare the library for development and build it. ```commandline pip install -r requirements.txt python setup.py bdist_wheel pip install ./dist/zcscommonlib-VERSION-py3-none-any.whl ``` ### Running Tests Run tests on the functions listed in `test_functions.py`. ```commandline python setup.py pytest ``` ### Wiki/Documentation All documentation for ZCSCommonLibrary is available [here](https://github.com/Zandercraft/ZCSCommonLibrary/wiki).
zcscommonlib
/zcscommonlib-0.3.4.tar.gz/zcscommonlib-0.3.4/README.md
README.md
from __future__ import print_function import argparse import os import re import sys import textwrap import basic_data_analysis import report VFDB_PATH=os.path.join(sys.path[0], "VFDB_setA_nt.v2.fas") # ---- 2a Nanopore-only Assembly ---- # Step1: Canu assembly def canu_assembly(fq_file, genome_size, outdir, prefix='canu_assembly', corrected_error_rate=0.144, minReadLength=1000, minOverlapLength=500): try: shell_path = os.path.join(outdir, '2-1.canu_assembly.sh') outdir = os.path.join(outdir, 'canu_assembly') if not os.path.isdir(outdir): os.makedirs(outdir) with open(shell_path, 'w') as f: f.write('canu -p {} -d {} genomesize={} correctedErrorRate={} useGrid=false -nanopore-raw {} minReadLength={} minOverlapLength={}\n'.format(prefix, outdir, genome_size, corrected_error_rate, fq_file, minReadLength, minOverlapLength)) print('\n---- (2) Start Nanopore-only Assembly ----\n\n---- Step2-1: Canu assembly ----') os.system('sh {}'.format(shell_path)) except Exception as e: raise e # Step2b: Polishing (Racon) def polish_racon(fa_file, fq_file, outdir, thread=16): try: shell_path = os.path.join(outdir, '2-2.polish_racon.sh') old_outdir = outdir outdir = os.path.join(outdir, 'racon_result') if not os.path.isdir(outdir): os.makedirs(outdir) with open(shell_path, 'w') as f: # bwa bam_path = os.path.join(outdir, 'reads.sorted.bam') sam_path = re.sub('.bam', '.sam', bam_path) tmp_fa_path = os.path.join(outdir, 'racon_tmp.fa') final_Fa_path = os.path.join(outdir, 'racon_polished.fa') f.write('bwa index {}\n'.format(fa_file)) f.write('bwa mem -x ont2d -t {} {} {} | samtools sort -o {} -T reads.tmp -\n'.format(thread, fa_file, fq_file, bam_path)) f.write('samtools view -h {} > {}\n'.format(bam_path, sam_path)) # racon consensus f.write('racon {} {} {} -t {} > {} \n'.format(fq_file, sam_path, fa_file, thread, tmp_fa_path)) f.write('grep -v "racon" {} | dos2unix > {}\n'.format(tmp_fa_path, final_Fa_path)) f.write('cp {} {}/assembly.fasta\n'.format(final_Fa_path, old_outdir)) f.write('rm {}\n'.format(tmp_fa_path)) print('---- Step2-2: Polishing using Racon ----') os.system('sh {}'.format(shell_path)) except Exception as e: raise e # ---- 2b Hybrid Assembly ---- # Step1: Hybrid assembly (Unicycler) def unicycler_hybrid_assembly(fq_file, short_r1, short_r2, outdir, mode='normal'): try: if not os.path.isdir(outdir): os.makedirs(outdir) shell_path = os.path.join(outdir, '2-1.unicycler_hybrid_assembly.sh') with open(shell_path, 'w') as f: f.write('unicycler -1 {} -2 {} -l {} --mode {} -o {}\n'.format(short_r1, short_r2, fq_file, mode, outdir)) print('\n---- (2) Start Nanopore-NGS Hybrid Assembly ----\n\n---- Step2-1: Unicycler Hybrid Assembly ----') os.system('sh {}'.format(shell_path)) except Exception as e: raise e def assembly_one(nanopore_fq_file, strain_info, outdir, thread=16, genomesize=None, minReadLength=1000, minOverlapLength=500): for key in strain_info.keys(): if len(strain_info[key]) == 2: if genomesize is None: print('Error: parameter "genomesize" is required when using canu software for long-read-only assembly.\n') sys.exit(-1) canu_assembly_dir = os.path.join(outdir, strain_info[key][1]) canu_assembly(nanopore_fq_file, genomesize, outdir=canu_assembly_dir, minReadLength=minReadLength, minOverlapLength=minOverlapLength) polish_racon(fa_file='{}/*.contigs.fasta'.format(canu_assembly_dir), fq_file=nanopore_fq_file, outdir=outdir, thread=thread) elif len(strain_info[key]) == 4: if os.path.exists(strain_info[key][2]) and os.path.exists(strain_info[key][3]): unicycler_hybrid_assembly(nanopore_fq_file, 
strain_info[key][2], strain_info[key][3], outdir) else: print('Error: please check the Illuminia fastq path: {} or {}'.format(strain_info[key][2], strain_info[key][3])) sys.exit(-1) def assembly_all(nanopore_fq_path, strain_info, outdir, thread=16, genomesize=None, minReadLength=1000, minOverlapLength=500): for key in strain_info.keys(): nanopore_fq_file = os.path.join(nanopore_fq_path, 'filtered_trimmed_{}.fastq.gz'.format(strain_info[key][1])) if len(strain_info[key]) == 2: if genomesize is None: print('Error: parameter "genomesize" is required when using canu software for long-read-only assembly.\n') sys.exit(-1) canu_assembly_dir = os.path.join(outdir, strain_info[key][1]) canu_assembly(nanopore_fq_file, genomesize, outdir=canu_assembly_dir, minReadLength=minReadLength, minOverlapLength=minOverlapLength) polish_racon(fa_file='{}/*.contigs.fasta'.format(os.path.join(canu_assembly_dir, 'canu_assembly')), fq_file=nanopore_fq_file, outdir=canu_assembly_dir, thread=thread) elif len(strain_info[key]) == 4: if os.path.exists(strain_info[key][2]) and os.path.exists(strain_info[key][3]): unicycler_hybrid_assembly(nanopore_fq_file, strain_info[key][2], strain_info[key][3], os.path.join(outdir, strain_info[key][1])) else: print('Error: please check the Illuminia fastq path: {} or {}'.format(strain_info[key][2], strain_info[key][3])) sys.exit(-1) # ---- 3 Data Analysis after assembly ---- # Step1: Genomic Anotation def prokka_annotation(fasta_file, prefix, outdir): outdir = os.path.join(outdir, '03.Prokka', prefix) shell_path = os.path.join(outdir, '3-1.prokka.sh') if not os.path.isdir(outdir): os.makedirs(outdir) with open(shell_path, 'w') as f: f.write('prokka --force -outdir {} --prefix {} {}\n'.format(outdir, prefix, fasta_file)) os.system('sh {}'.format(shell_path)) # Step2: Circlator Analysis and plot # Step3: AMR gene Identification def card_annotation(fasta_file, prefix, outdir): outdir = os.path.join(outdir, '04.Resistance_genes', prefix) shell_path = os.path.join(outdir, '4-1.rgi.sh') if not os.path.exists(outdir): os.makedirs(outdir) with open(shell_path, 'w') as f: f.write('rgi main -i {} -o {}/{} --clean'.format(fasta_file, outdir, prefix)) os.system('sh {}'.format(shell_path)) # Step4: Virulence Factors Identification def vfdb_annotation(fasta_file, prefix, outdir): outdir = os.path.join(outdir, '05.Virulence_genes', prefix) shell_path = os.path.join(outdir, '5-1.vfdb.sh') if not os.path.isdir(outdir): os.makedirs(outdir) with open(shell_path, 'w') as f: f.write('blastn -query {} -db {} -outfmt "6 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore staxids stitle qcovs qcovhsp" -out {}/{}.vfdb.m6.out\n'.format(fasta_file, VFDB_PATH, outdir, prefix)) os.system('sh {}'.format(shell_path)) # filter vfdb blast result: identity >= 90%, hit_len/total_gene_len >= 90% and qcovs >= 90% out = open('{}/{}.vfdb.xls'.format(outdir, prefix), 'w') with open('{}/{}.vfdb.m6.out'.format(outdir, prefix)) as f: out.write('Contig\tStart\tEnd\tVFDB ID\tGene Name\tVirulence Factor\tGene Function\tSpecies\tIdentity\tHit Length\n') for line in f.readlines(): t = line.strip().split('\t') s = t[-3].split('|') sub_len = int((s[3].split('-'))[-1]) if (float(t[3]) / sub_len >= 0.9) and (float(t[2]) >= 90): s[4] = re.sub('ARO:', '', s[4]) out.write('{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\n'.format(t[0],t[6],t[7],s[4],s[5],s[6],s[8],s[7],t[2],t[3])) out.close() def annotation(strain_info, outdir): for key in strain_info.keys(): prefix = strain_info[key][1] fasta_file = 
os.path.join(outdir, '02.Assembly', prefix, 'assembly.fasta') prokka_annotation(fasta_file, prefix, outdir) card_annotation(fasta_file, prefix, outdir) vfdb_annotation(fasta_file, prefix, outdir)
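# --- Usage sketch (illustrative; all paths, names and sizes are placeholders).
# strain_info maps a barcode to a tuple: 2 items (barcode, strain name) selects
# the Canu + Racon long-read-only branch (genomesize required), while 4 items
# (barcode, strain name, Illumina R1, Illumina R2) selects the Unicycler hybrid
# branch, matching the logic of assembly_one/assembly_all above.
if __name__ == '__main__':
    strain_info = {
        'barcode01': ('barcode01', 'strainA'),
        'barcode02': ('barcode02', 'strainB',
                      '/data/strainB_R1.fastq.gz', '/data/strainB_R2.fastq.gz'),
    }
    assembly_all('output/01.Basic_Data_Analysis', strain_info,
                 outdir='output/02.Assembly', genomesize='4.8m')
    annotation(strain_info, outdir='output')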
zctestpy
/zctestpy-0.2.0.tar.gz/zctestpy-0.2.0/zcpy/assembly.py
assembly.py
import argparse import os import re import sys from version import __version__ import basic_data_analysis import assembly import pre import report def get_arguments(): """ Parse the command line arguments. """ parser = argparse.ArgumentParser(usage='Use "python %(prog)s -h/--help" for more information') parser.add_argument('-i', '--input', help='Fastq file path or Folder path containing fast5 or fastq files', required=True) parser.add_argument('-s', '--strain_list', help='2 columns or 4 columns, a tab-delimited list containing barcodes, strain names and/or PE reads1 path and reads2 path', required=True) parser.add_argument('-b', '--barcoding', help='Search for barcodes to demultiplex sequencing data', action='store_true') # parser.add_argument('-1', '--short_read1', help='FASTQ file of first short reads in each pair (optional)') # parser.add_argument('-2', '--short_read2', help='FASTQ file of second short reads in each pair (optional)') parser.add_argument('-g', '--genomesize', help='genome size, <number>[g|m|k], for example: 4800000, 48000k, 4.8m', default=None) # parser.add_argument('-p', '--prefix', default='sample', help='Prefix name (default: sample)') # parser.add_argument('--format', help='Input file format, fast5 or fastq (default: fastq)', choices=('fast5', 'fastq'), default='fastq') parser.add_argument('-t', '--thread', help='use NUM threads (default: 16)', type=int, default=16) parser.add_argument('-o', '--outdir', help='Output dir (default: None)', default=None) parser.add_argument('--step', help='Analysis steps: only basic data analysis [1], only assembly [2], or basic data analysis and assembly [all]', choices=('1', '2', 'all'), default='all') parser.add_argument('-f', '--flowcell', help='Flowcell used during the sequencing run (default: FLO-MIN106)', default='FLO-MIN106') parser.add_argument('-k', '--kit', help='Kit used during the sequencing run (default: SQK-LSK109)', default='SQK-LSK109') parser.add_argument('--barcode_kit', help='Barcode Kits used during the sequencing run (default: EXP-NBD104)', default='EXP-NBD104') parser.add_argument('-q', '--min_quality', help='Filter on a minimum average read quality score (default: 7)', default=7, type=int) parser.add_argument('-l', '--min_length', help='Filter on a minimum read length (default: 2000)', default=2000, type=int) parser.add_argument('-m', '--max_length', help='Filter on a maximum read length (default: 100000000)', default=100000000, type=int) # parser.add_argument('--minreadlength', help='Filter on a maximum read length (default: 100000000)', default=100000000, type=int) parser.add_argument('--minoverlaplength', help='Canu assembly: ignore read-to-read overlaps shorter than "number" bases long (default: 500)', default=500, type=int) parser.add_argument('-v', '--version', action='version', version=__version__, help="Show program's version number and exit") args = parser.parse_args() return args def main(): args = get_arguments() strain_info, columns = pre.pre_list(args.strain_list) if columns == 2: if args.genomesize is None: print('Error: parameter "genomesize" is required when using canu software for long-read-only assembly.\n') sys.exit(-1) if args.step == '1': # Step1: Basci_Data_Analysis basic_data_analysis_outdir = os.path.join(args.outdir, '01.Basic_Data_Analysis') basic_data_analysis.basic_data_analysis(input=args.input, strain_barcode_dict=strain_info, outdir=basic_data_analysis_outdir, flowcell=args.flowcell, kit=args.kit, barcode_kits=args.barcode_kit, thread=args.thread, barcoding=args.barcoding, 
min_quality=args.min_quality, min_length=args.min_length, max_length=args.max_length) os.system('cp {} {}'.format(os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'logo.jpg'), basic_data_analysis_outdir)) report.main(basic_data_analysis_outdir, min_quality=args.min_quality, min_len=args.min_length, max_len=args.max_length) elif args.step == '2': # Setp1: Assembly (Skip Basic Data Analysis) assembly_outdir = os.path.join(args.outdir, '02.Assembly') if not os.path.isdir(assembly_outdir): os.makedirs(assembly_outdir) assembly.assembly_one(args.input, strain_info=strain_info, thread=args.thread, genomesize=args.genomesize, outdir=assembly_outdir, minReadLength=args.min_length, minOverlapLength=args.minoverlaplength) # Step3: assembly.annotation(strain_info, args.outdir) else: # Step1: Basci_Data_Analysis basic_data_analysis_outdir = os.path.join(args.outdir, '01.Basic_Data_Analysis') basic_data_analysis.basic_data_analysis(input=args.input, strain_barcode_dict=strain_info, outdir=basic_data_analysis_outdir, flowcell=args.flowcell, kit=args.kit, barcode_kits=args.barcode_kit, thread=args.thread, barcoding=args.barcoding, min_quality=args.min_quality, min_length=args.min_length, max_length=args.max_length) os.system('cp {} {}'.format(os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'logo.jpg'), basic_data_analysis_outdir)) report.main(basic_data_analysis_outdir, min_quality=args.min_quality, min_len=args.min_length, max_len=args.max_length) # Step2: Assembly assembly_outdir = os.path.join(args.outdir, '02.Assembly') if not os.path.isdir(assembly_outdir): os.makedirs(assembly_outdir) if args.barcoding: nanopore_fq_file = os.path.join(basic_data_analysis_outdir, 'barcode_demultiplexing_data') else: nanopore_fq_file = basic_data_analysis_outdir assembly.assembly_all(nanopore_fq_file, strain_info=strain_info, thread=args.thread, genomesize=args.genomesize, outdir=assembly_outdir, minReadLength=args.min_length, minOverlapLength=args.minoverlaplength) # Step3: assembly.annotation(strain_info, args.outdir) if __name__ == '__main__': main()
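# --- Example invocations (illustrative; paths are placeholders) ---
# Full pipeline (basic data analysis + assembly) with barcode demultiplexing:
#     python main.py -i fastq_dir -s strain_list.txt -b -g 4.8m -t 16 -o results --step all
# Basic data analysis only:
#     python main.py -i fastq_dir -s strain_list.txt -o results --step 1
# Assembly only, starting from an already filtered fastq file:
#     python main.py -i filtered_trimmed_strainA.fastq.gz -s strain_list.txt -g 4.8m -o results --step 2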
zctestpy
/zctestpy-0.2.0.tar.gz/zctestpy-0.2.0/zcpy/main.py
main.py
from __future__ import print_function from collections import defaultdict import os import pandas import sys import time import re reload(sys) sys.setdefaultencoding('utf8') TEMPLATE = ''' <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Nanopore测序数据统计分析报告</title> </head> <body> {body} </body> </html> ''' def buildDOC(body): """ :param body: body of TEMPLATE """ try: html = TEMPLATE.format( body=body ) # 向模板中填充数据 return html except Exception as err: raise err def write_to_file(html, file): """ write html to file """ with open(file, 'w') as f: f.write(html) def statistics(outdir, prefix=None): stat_raw = defaultdict(lambda: defaultdict(lambda: [])) stat_filtered = defaultdict(lambda: defaultdict(lambda: [])) len_dist = [] if os.path.isdir(os.path.join(outdir, 'quality_control')): for item in os.listdir(os.path.join(outdir, 'quality_control')): if item.endswith('_QCstat.out') and not item.startswith("filtered_"): prefix = item.replace('_QCstat.out', '') if prefix == 'none': continue with open(os.path.join(outdir, 'quality_control', item)) as f: for line in f: if line.startswith('# Fastq stats for'): read_len = int((line.strip().split('reads >= '))[-1].replace('bp', '')) len_dist.append(read_len) for i in range(10): line = next(f) tmp = line.strip().split(': ')[-1] if line.startswith('%') or line.startswith('meanLen'): tmp = round(float(tmp), 2) stat_raw[prefix][read_len].append(tmp) elif item.endswith('_QCstat.out') and item.startswith("filtered_"): prefix = item.replace('_QCstat.out', '').replace('filtered_', '') if prefix == 'none': continue with open(os.path.join(outdir, 'quality_control', item)) as f: for line in f: if line.startswith('# Fastq stats for'): read_len = int((line.strip().split('reads >= '))[-1].replace('bp', '')) len_dist.append(read_len) for i in range(10): line = next(f) tmp = line.strip().split(': ')[-1] if line.startswith('%') or line.startswith('meanLen'): tmp = round(float(tmp), 2) stat_filtered[prefix][read_len].append(tmp) elif os.path.exists(os.path.join(outdir, 'QCstat.out')) and os.path.exists(os.path.join(outdir, 'filtered_QCstat.out')): if prefix is None: prefix = 'SAMPLE' with open(os.path.join(outdir, 'QCstat.out')) as f: for line in f: if line.startswith('# Fastq stats for'): read_len = int((line.strip().split('reads >= '))[-1].replace('bp', '')) len_dist.append(read_len) for i in range(10): line = next(f) tmp = line.strip().split(': ')[-1] if line.startswith('%') or line.startswith('meanLen'): tmp = round(float(tmp), 2) stat_raw[prefix][read_len].append(tmp) with open(os.path.join(outdir, 'filtered_QCstat.out')) as f: for line in f: if line.startswith('# Fastq stats for'): read_len = int((line.strip().split('reads >= '))[-1].replace('bp', '')) len_dist.append(read_len) for i in range(10): line = next(f) tmp = line.strip().split(': ')[-1] if line.startswith('%') or line.startswith('meanLen'): tmp = round(float(tmp), 2) stat_filtered[prefix][read_len].append(tmp) else: print('Not found "quality_control" dir or stat file: "QCstat.out" and "filtered_QCstat.out" in dir {}.'.format(outdir)) sys.exit(-1) with open(os.path.join(outdir, 'raw_reads_stat.xls'), 'w') as f: f.write('sample\treads >=\tnumReads\t%totalNumReads\tnumBasepairs\t%totalBasepairs\tmeanLen\tmedianLen\tminLen\tmaxLen\tN50\tL50\n') for key in sorted(stat_raw.keys()): for subkey in sorted(set(len_dist)): if subkey in stat_raw[key].keys(): f.write('{}\t>={}kb\t{}\n'.format(key, round(subkey/1000, 2), '\t'.join(map(str, stat_raw[key][subkey])))) else: f.write('{}\t>={}kb\t{}\n'.format(key, 
round(subkey/1000, 2), '\t'.join(['0'] * 10))) with open(os.path.join(outdir, 'filtered_reads_stat.xls'), 'w') as f: f.write('sample\treads >=\tnumReads\t%totalNumReads\tnumBasepairs\t%totalBasepairs\tmeanLen\tmedianLen\tminLen\tmaxLen\tN50\tL50\n') for key in sorted(stat_filtered.keys()): for subkey in sorted(set(len_dist)): if subkey in stat_filtered[key].keys(): f.write('{}\t>={}kb\t{}\n'.format(key, round(subkey/1000, 2), '\t'.join(map(str, stat_filtered[key][subkey])))) else: f.write('{}\t>={}kb\t{}\n'.format(key, round(subkey/1000, 2), '\t'.join(['0'] * 10))) def read_table(file): """ convert to html format table """ data = {} # df = pd.DataFrame(data) with open(file, 'r') as f: header = f.readline().replace('\n', '').split('\t') for item in header: data[item] = [] for line in f.readlines(): line = line.replace('\n', '') temp = line.split('\t') for i in range(len(temp)): data[header[i]].append(temp[i]) return data, header def to_html(raw_stat, filtered_stat, min_quality=7, min_len=2000, max_len=None, png=None): localtime = time.strftime("%Y-%m-%d", time.localtime()) body = '<div style="text-align: center;vertical-align: middle;">' if max_len is not None: start = '2)过滤低质量read:(a) 剔除平均质量值Q小于{}的read;(b) 剔除长度小于{} bp或长度大于{} bp的read。'.format(min_quality, min_len, max_len) else: start = '2)过滤低质量read:(a) 剔除平均质量值Q小于{}的read;(b) 剔除长度小于{} bp的read。'.format(min_quality, min_len) body += ('<img src="./logo.jpg" height="8%" width="8%" style="float:left;"><h4 style="float:right;">{}</h4>\n' '<h1 align="center">Nanopore测序数据基本统计分析报告</h1>\n' '<hr style="FILTER: alpha(opacity=100,finishopacity=0,style=2)" width="100%" color=#A2CD5A SIZE=10>\n' '<h2>===== 质控分析简介 ====</h2>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="80%" >\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<p align="left" style="font-size:120%">\n' '<br />{}本流程针对Nanopore原始下机数据进行处理,支持fast5或fastq两种格式。基本步骤包括:' '<br />{}1)去接头,拆分barcode标签(可选):利用Qcat软件去除接头序列,同时利用Guppy对不同样本进行拆分,得到各样本的测序数据(如果测序过程中添加了barcode标签)。\n' '<br />{}{}\n' '<br />{}3)统计基本信息:包括原始测序数据和过滤后的测序数据两部分,具体统计信息如下。</p>\n' '<h2>===== 原始测序数据 ====</h2>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="80%" >\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<h3>1. 基本信息统计表</h3>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="50%">\n' '<hr style="border:1px dashed #A2CD5A" width="50%" color=#A2CD5A SIZE=3>\n' ).format(localtime, '&nbsp' * 30, '&nbsp' * 30, '&nbsp' * 30, '&nbsp' * 30, start, '&nbsp' * 30) raw_data, raw_header = read_table(raw_stat) raw_df = pandas.DataFrame(raw_data, columns=raw_header) # body += raw_df.to_html(index=False) tmp = raw_df.to_html(index=False) tmp = re.sub('<table border="1" class="dataframe">', '<table border="1" width="80%" class="dataframe" style="text-align: right; margin:auto"><caption>表1:原始测序数据基本信息统计表</caption>', tmp) body += tmp body += ('<h3>2. 
Reads长度-质量值分布图</h3>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="50%">\n' '<hr style="border:1px dashed #A2CD5A" width="50%" color=#A2CD5A SIZE=3>\n') # body = '<style> img {width: 100px;} </style>\n' if os.path.isdir(os.path.join(os.path.dirname(raw_stat), 'quality_control')): qc_path = os.path.join(os.path.dirname(raw_stat), 'quality_control') for item in os.listdir(qc_path): if item.endswith('.png') and not item.startswith('filtered_'): if item.startswith('none_') or item.startswith('filtered_none_'): continue # body += '<img src=\"{}\">\n'.format(os.path.join(qc_path, item)) body += '<img src=\"./{}\" height="40%" width="40%"><br />\n'.format(os.path.join('quality_control', item)) else: qc_path = os.path.dirname(raw_stat) for item in os.listdir(qc_path): if item.endswith('.png') and not item.startswith('filtered_'): if item.startswith('none_') or item.startswith('filtered_none_'): continue # body += '<img src=\"{}\">\n'.format(os.path.join(qc_path, item)) body += '<img src=\"./{}\" height="40%" width="40%">\n'.format(item) body += ( '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="80%" >\n' '<h2>===== 过滤后测序数据 ====</h2>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="80%" >\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<hr style="border:1px dashed #A2CD5A" width="80%" color=#A2CD5A SIZE=3>\n' '<h3>1. 基本信息统计表</h3>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="50%">\n' '<hr style="border:1px dashed #A2CD5A" width="50%" color=#A2CD5A SIZE=3>\n') filtered_data, filtered_header = read_table(filtered_stat) filtered_df = pandas.DataFrame(filtered_data, columns=filtered_header) # body += filtered_df.to_html(index=False) tmp = filtered_df.to_html(index=False) tmp = re.sub('<table border="1" class="dataframe">', '<table border="1" width="80%" class="dataframe" style="text-align: right; margin:auto"><caption>表2:过滤后测序数据基本信息统计表</caption>', tmp) body += tmp body += ('<h3>2. Reads长度-质量值分布图</h3>\n' '<hr style="border:3px dashed #A2CD5A; height:3px" SIZE=3 width="50%">\n' '<hr style="border:1px dashed #A2CD5A" width="50%" color=#A2CD5A SIZE=3>\n') if os.path.isdir(os.path.join(os.path.dirname(raw_stat), 'quality_control')): qc_path = os.path.join(os.path.dirname(raw_stat), 'quality_control') for item in os.listdir(qc_path): if item.endswith('.png') and item.startswith('filtered_'): if item.startswith('none_') or item.startswith('filtered_none_'): continue # body += '<img src=\"{}\">\n'.format(os.path.join(qc_path, item)) body += '<img src=\"./{}\" height="40%" width="40%"><br />\n'.format(os.path.join('quality_control', item)) else: qc_path = os.path.dirname(raw_stat) for item in os.listdir(qc_path): if item.endswith('.png') and item.startswith('filtered_'): if item.startswith('none_') or item.startswith('filtered_none_'): continue # body += '<img src=\"{}\">\n'.format(os.path.join(qc_path, item)) body += '<img src=\"./{}\" height="40%" width="40%">\n'.format(item) body += '</div>' html_out = buildDOC(body) write_to_file(html_out, os.path.join(os.path.dirname(raw_stat), 'reads_stat.html')) def main(outdir, min_quality=7, min_len=2000, max_len=None): statistics(outdir) to_html(os.path.join(outdir, 'raw_reads_stat.xls'), os.path.join(outdir, 'filtered_reads_stat.xls'), min_quality, min_len, max_len) if __name__ == '__main__': main(sys.argv[1])
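# --- Usage sketch (illustrative; the directory is a placeholder). Generates
# raw_reads_stat.xls, filtered_reads_stat.xls and reads_stat.html for a
# finished basic-data-analysis output directory, using the same thresholds
# that were applied upstream:
#
#     main('results/01.Basic_Data_Analysis', min_quality=7, min_len=2000, max_len=100000000)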
zctestpy
/zctestpy-0.2.0.tar.gz/zctestpy-0.2.0/zcpy/report.py
report.py
from __future__ import print_function import os import re import sys def trim_adaptors(fastq_dir, strain_barcode_dict, barcode_kits='EXP-NBD104', outdir='', thread=16, demultiplexing=False): """ Trim adaptors and demultiplexing using guppy_barcoder and qcat """ try: with open(os.path.join(outdir, '1.demultiplexing_trimming.sh'), 'w') as f: flag = 0 if os.path.isdir(fastq_dir): for item in os.listdir(fastq_dir): if item.endswith('.fastq.gz') or item.endswith('.fastq') or item.endswith('.fq.gz') or item.endswith('.fq'): flag = 1 break if flag == 0: print('Error: Not found ".fastq(.gz)" or ".fq(.gz)" format files in dir {}.'.format(fastq_dir)) sys.exit(-1) if demultiplexing is False: for key in strain_barcode_dict.keys(): tmp_path = os.path.join(outdir, 'tmp.fastq') prefix = os.path.join(outdir, 'trimmed_{}'.format(strain_barcode_dict[key][1])) f.write('cat {} >{} | qcat --trim --detect-middle -f {} -o {}.fastq\nrm {}\n'.format(os.path.join(fastq_dir, '*.fastq'), tmp_path, tmp_path, prefix, tmp_path)) # f.write('cat {} >{} | qcat --trim --detect-middle -f {} -o {}/trimmed_sample.fastq\nrm {}\n'.format(os.path.join(fastq_dir, '*.fastq'), os.path.join(outdir, 'tmp.fastq'), outdir, os.path.join(outdir, 'tmp.fastq'), os.path.join(outdir, 'tmp.fastq'))) else: f.write('guppy_barcoder -i {} --barcode_kits {} -s {}/barcode_demultiplexing_data --min_score 80 -q 100000000 -t {}\n'.format(fastq_dir, barcode_kits, outdir, thread)) if barcode_kits == 'EXP-NBD103' or barcode_kits == 'EXP-NBD104': barcode_kits_qcat = 'NBD103/NBD104' elif barcode_kits == 'EXP-NBD114': barcode_kits_qcat = 'NBD114' else: barcode_kits_qcat = 'NBD104/NBD114' for key in strain_barcode_dict.keys(): print('qcat -k {} --trim --detect-middle -f {}/barcode_demultiplexing_data/{}/*.fastq -o {}/barcode_demultiplexing_data/trimmed_{}.fastq\n'.format(barcode_kits_qcat, outdir, key, outdir, strain_barcode_dict[key][1])) f.write('qcat -k {} --trim --detect-middle -f {}/barcode_demultiplexing_data/{}/*.fastq -o {}/barcode_demultiplexing_data/trimmed_{}.fastq\n'.format(barcode_kits_qcat, outdir, key, outdir, strain_barcode_dict[key][1])) print('####### Demultiplexing Using Guppy and Trimming Adaptors Using Qcat ########') os.system('sh {}/1.demultiplexing_trimming.sh'.format(outdir)) except Exception as e: raise e def quality_control_pauvre(outdir='', prefix=None): """ Quality control using pauvre """ try: with open(os.path.join(outdir, '3.quality_control_pauvre.sh'), 'w') as f: if os.path.isdir('{}/barcode_demultiplexing_data'.format(outdir)): if not os.path.exists(os.path.join(outdir, 'quality_control')): os.makedirs(os.path.join(outdir, 'quality_control')) for item in os.listdir('{}/barcode_demultiplexing_data'.format(outdir)): if item.endswith('.fastq.gz') or item.endswith('.fastq') or item.endswith('.fq.gz') or item.endswith('.fq'): prefix = re.sub('.fastq.gz|.fastq|.fq.gz|.fq|.gz', '', os.path.basename(item)) tmp_name = '{} read length vs mean quality'.format(prefix) f.write('pauvre marginplot -f {}/barcode_demultiplexing_data/{} -y -t \"{}\" -o {}_QCstat >{}/quality_control/{}_QCstat.out && mv *.png {}/quality_control\n'.format(outdir, item, tmp_name, prefix, outdir, prefix, outdir)) else: for item in os.listdir(outdir): if item.startswith('trimmed_') and (item.endswith('.fastq.gz') or item.endswith('.fastq') or item.endswith('.fq.gz') or item.endswith('.fq')): prefix = re.sub('.fastq.gz|.fastq|.fq.gz|.fq|.gz', '', os.path.basename(item)) tmp_name = '{} read length vs mean quality'.format(prefix) f.write('pauvre marginplot -f {}/{} -y 
-t \"{}\" -o QCstat >{}/QCstat.out && mv *.png {}\n'.format(outdir, item, tmp_name, outdir, outdir)) elif item.startswith('filtered_') and (item.endswith('.fastq.gz') or item.endswith('.fastq') or item.endswith('.fq.gz') or item.endswith('.fq')): prefix = re.sub('.fastq.gz|.fastq|.fq.gz|.fq|.gz', '', os.path.basename(item)) tmp_name = '{} read length vs mean quality'.format(prefix) f.write('pauvre marginplot -f {}/{} -y -t \"{}\" -o filtered_QCstat >{}/filtered_QCstat.out && mv *.png {}\n'.format(outdir, item, tmp_name, outdir, outdir)) # f.write('pauvre marginplot -f {}/trimmed_sample.fastq -y -o QCstat >{}/QCstat.out && mv *.png {}\n'.format(outdir, outdir, outdir)) # f.write('pauvre marginplot -f {}/filtered_trimmed_sample.fastq -y -o filtered_QCstat >{}/filtered_QCstat.out && mv *.png {}\n'.format(outdir, outdir, outdir)) print('####### Quality Control Using Pauvre ########') os.system('sh {}/3.quality_control_pauvre.sh'.format(outdir)) except Exception as e: raise e def filter_reads(outdir='', min_quality=7, min_length=500, max_length=10000000): try: with open(os.path.join(outdir, '2.filter_reads.sh'), 'w') as f: if os.path.isdir('{}/barcode_demultiplexing_data'.format(outdir)): for item in os.listdir('{}/barcode_demultiplexing_data'.format(outdir)): prefix = os.path.join(outdir, 'barcode_demultiplexing_data', 'filtered_{}'.format(item)) if item.endswith('.fastq.gz') or item.endswith('.fq.gz'): f.write('gunzip -c {}/barcode_demultiplexing_data/{} | NanoFilt -q {} -l {} --maxlength {} |gzip > {}\n'.format(outdir, item, min_quality, min_length, max_length, prefix)) elif item.endswith('.fastq') or item.endswith('.fq'): f.write('cat {}/barcode_demultiplexing_data/{} | NanoFilt -q {} -l {} --maxlength {} |gzip > {}.gz\n'.format(outdir, item, min_quality, min_length, max_length, prefix)) else: for item in os.listdir(outdir): prefix = os.path.join(outdir, 'filtered_{}'.format(item)) if item.endswith('.fastq.gz') or item.endswith('.fq.gz'): f.write('gunzip -c {}/{} | NanoFilt -q {} -l {} --maxlength {} |gzip > {}\n'.format(outdir, item, min_quality, min_length, max_length, prefix)) elif item.endswith('.fastq') or item.endswith('.fq'): f.write('cat {}/{} | NanoFilt -q {} -l {} --maxlength {} |gzip > {}.gz\n'.format(outdir, item, min_quality, min_length, max_length, prefix)) # f.write('cat {}/trimmed_sample.fastq | NanoFilt -q {} -l {} --maxlength {} > {}/filtered_trimmed_sample.fastq\n'.format(outdir, min_quality, min_length, max_length, outdir, outdir)) print('####### Filter Low Quality and Short Reads Using NanoFilt ########') os.system('sh {}/2.filter_reads.sh'.format(outdir)) except Exception as e: raise e def basic_data_analysis(input, strain_barcode_dict, outdir=None, flowcell='FLO-MIN106', kit='SQK-LSK109', barcode_kits='EXP-NBD104', thread=16, barcoding=False, min_quality=7, min_length=500, max_length=1000000000): if outdir is None or outdir == '': outdir = '.' else: outdir = outdir if outdir.endswith('/'): outdir = re.sub('/$', '', outdir) if not os.path.isdir(outdir): os.makedirs(outdir) trim_adaptors(fastq_dir=input, strain_barcode_dict=strain_barcode_dict, barcode_kits=barcode_kits, outdir=outdir, thread=thread, demultiplexing=barcoding) filter_reads(outdir, min_quality, min_length, max_length) quality_control_pauvre(outdir)
zctestpy
/zctestpy-0.2.0.tar.gz/zctestpy-0.2.0/zcpy/basic_data_analysis.py
basic_data_analysis.py
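A minimal invocation sketch for the pipeline above (a sketch only: the import path follows the package layout shown, the dict shape is inferred from how `strain_barcode_dict[key][1]` is used as a sample label, and guppy_barcoder/qcat/NanoFilt/pauvre must be on PATH):

```python
# Hypothetical driver for basic_data_analysis; paths and barcode mapping are
# placeholders, and the (strain, sample_label) tuple shape is an assumption.
from zcpy.basic_data_analysis import basic_data_analysis

strain_barcodes = {
    "barcode01": ("strainA", "sampleA"),
    "barcode02": ("strainB", "sampleB"),
}

basic_data_analysis(
    input="fastq_pass/",              # dir of basecalled .fastq(.gz) files
    strain_barcode_dict=strain_barcodes,
    outdir="analysis_out",
    barcode_kits="EXP-NBD104",
    barcoding=True,                   # demultiplex with guppy_barcoder first
    min_quality=7,
    min_length=500,
)
```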
import json import boto3 import atexit import logging from functools import partial from datadog import DogStatsd from zocdoc.canoe.timehelp import to_utc logger = logging.getLogger(__name__) def to_lambda_metric_format(metric_type, timestamp, metric_name, value=1, tags=None): time_str = (to_utc(timestamp) if not hasattr(timestamp, 'strftime') else timestamp).strftime('%s') return 'MONITORING|{epoch}|{value}|{metric_type}|{metric_name}|#{tags}'.format( epoch=time_str, value=value, metric_type=metric_type, metric_name=metric_name, tags=','.join(tags or []) ) def cleanup_client(client): try: client.flush() except Exception as e: logger.info('{} failed on atexit cleanup {}'.format(type(client), e)) class MetricLambdaClient(object): @staticmethod def get_default_lambda_client(): return boto3.client('lambda') @staticmethod def create(lambda_name=None, lambda_client=None): return MetricLambdaClient( lambda_name=lambda_name, lambda_client=(lambda_client or MetricLambdaClient.get_default_lambda_client()) ) def __init__(self, lambda_client=None, lambda_name=None): self.lambda_client = lambda_client self.lambda_name = lambda_name self._staged_messages = [] atexit.register(cleanup_client, self) def _send_message(self, message): return self.lambda_client.invoke( FunctionName=self.lambda_name, InvocationType='Event', Payload=json.dumps(dict(payload=message)) ) def _stage_message(self, message): self._staged_messages.append(message) def flush(self): if self._staged_messages: self._send_message(self._staged_messages) self._staged_messages = [] def send_metric(self, *args, **kwargs): self._send_message(to_lambda_metric_format(*args, **kwargs)) def stage_metric(self, *args, **kwargs): self._stage_message(to_lambda_metric_format(*args, **kwargs)) class MockDogStatsd(object): def __init__(self, **kwargs): logger.debug('Create mock DogStatsd {}'.format(str(kwargs))) def __getattr__(self, item): def wrapper(*args, **kwargs): logger.debug('Calling {} with args {} and kwargs {}'.format(item, str(args), str(kwargs))) return wrapper def get_datadog_client(app_config): if app_config.deployment_environment == 'dev': return MockDogStatsd() else: return DogStatsd( host=app_config.datadog_host, port=app_config.datadog_port, namespace=app_config.datadog_namespace, use_ms=True )
zd.testt
/zd.testt-0.1.0-py3-none-any.whl/zd/test/metrics/__init__.py
__init__.py
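A small usage sketch for the helpers above, assuming the module is importable as `zd.test.metrics` (per the wheel path) and that AWS credentials exist for the Lambda invoke; the Lambda function name is a placeholder:

```python
from datetime import datetime
from zd.test.metrics import to_lambda_metric_format, MetricLambdaClient

# Format a metric line; the epoch comes from strftime('%s'), which is
# platform-dependent (works on Linux).
line = to_lambda_metric_format(
    "count", datetime(2020, 1, 1), "ingest.files", value=3, tags=["env:dev"]
)
# -> 'MONITORING|<epoch>|3|count|ingest.files|#env:dev'

client = MetricLambdaClient.create(lambda_name="metrics-forwarder")  # hypothetical
client.stage_metric("count", datetime.utcnow(), "ingest.files")
client.flush()  # staged lines are sent in one Lambda 'Event' invocation
```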
import uuid import math import json import boto3 import logging from itertools import chain from collections import namedtuple from zocdoc.canoe.tools import list_to_batches logger = logging.getLogger(__name__) class SqsMessageWrapper(object): def __init__(self, message_object): self.message_object = message_object self.__body = None @property def body(self): if self.__body is None: try: self.__body = json.loads(self.message_object.body) except: self.__body = self.message_object.body return self.__body def __getattr__(self, item): return getattr(self.message_object, item) class SqsMessageEventRecord(SqsMessageWrapper): def __init__(self, event_record, *args, **kwargs): super(SqsMessageEventRecord, self).__init__(*args, **kwargs) self.event_record = event_record def queue_approximate_number_of_messages(queue_resource): return int(queue_resource.attributes.get('ApproximateNumberOfMessages') or '0') def query_sqs_messages(queue, max_message_count=10): BOTO3_MAXIMUM_INPUT = 10 # boto3 maximum input min_number_of_tries = int(math.ceil(max_message_count / 10.0)) try_count, new_msgs = 0, [] while ( len(new_msgs) < max_message_count and try_count < min_number_of_tries and queue_approximate_number_of_messages(queue)): try: try_count += 1 num_of_messages_get = min(BOTO3_MAXIMUM_INPUT, max_message_count, max_message_count - len(new_msgs)) msgs = queue.receive_messages(MaxNumberOfMessages=num_of_messages_get) new_msgs.extend(SqsMessageWrapper(m) for m in msgs) except Exception as e: logger.info('Error on pinging queue - ' + str(e)) logger.error(e) print(e) return new_msgs def to_sqs_delete_entries(messages): return [{'Id': str(i), 'ReceiptHandle': msg.receipt_handle} for i, msg in enumerate(messages)] def delete_sqs_messages(queue, entries): """ can only delete 10 per batch. that sucks. i want more. :param queue: the boto3 queue resource :param entries: [{'Id': 'str id', 'ReceiptHandle': 'str'}, ...] """ processed, response = 0, {'Successful': [], 'Failed': []} while processed < len(entries): try: next_batch = entries[processed:min((processed+10), len(entries))] next_resp = queue.delete_messages(Entries=next_batch) response['Successful'].extend(next_resp.get('Successful')) response['Failed'].extend(next_resp.get('Failed', [])) processed += len(next_batch) except Exception as e: logger.info( 'Message: Failure on SQS.Queue batch message delete - Error: {err} - QueueUrl: {queue_url}'.format( err=str(type(e)) + str(e), queue_url=queue.url ) ) raise e return response def send_sqs_messages(queue, messages, is_fifo_queue=False): """ can only send 10 per batch. that sucks. i want more. 
:param queue: the boto3 queue resource :param entries: list of dictionaries each of the form { 'message_body': (optional) json serializable blob, default = whole blob 'id': (optional) str, default = str(int corresponding to input list index) 'message_deduplication_id': (optional) str, default = guid 'message_group_id': (optional) str, default = '1' so all messages in same group } """ BOTO3_MAXIMUM_INPUT = 10 def to_message_parameters(id, message_blob, is_fifo): input_params = dict( Id=message_blob.get('id') or id, MessageBody=json.dumps(message_blob.get('message_body') or message_blob) ) if is_fifo: input_params.update( MessageDeduplicationId=message_blob.get('message_deduplication_id') or str(uuid.uuid4()), MessageGroupId=message_blob.get('message_group_id') or '1', ) return input_params response = dict(Successful=[], Failed=[]) try: for batch_number, message_batch in enumerate(list_to_batches(messages, BOTO3_MAXIMUM_INPUT)): as_valid_input_params = [ to_message_parameters(str(batch_number*BOTO3_MAXIMUM_INPUT + i), message, is_fifo_queue) for i, message in enumerate(message_batch) ] batch_response = queue.send_messages(Entries=as_valid_input_params) response['Successful'].extend(batch_response.get('Successful') or []) response['Failed'].extend(batch_response.get('Failed') or []) except Exception as e: logger.info( 'Message: Failure on SQS.Queue batch send message - Error: {err} - QueueUrl: {queue_url}'.format( err=str(type(e)) + str(e), queue_url=queue.url ) ) raise e return response _object_info_fields = [ 'bucket_name', 'bucket_arn', 'object_key', 'object_eTag', 'object_sequencer', 'object_size', 'region', 'event_name', 'event_version', 'event_time' ] S3EventObjectInfo = namedtuple('S3EventObjectInfo', _object_info_fields) def sqs_s3_messages(sqs_message): def to_s3_event_summary(record): # full message format https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html s3_bucket = record.get('s3').get('bucket') s3_object = record.get('s3').get('object') return S3EventObjectInfo( s3_bucket.get('name'), s3_bucket.get('arn'), s3_object.get('key'), s3_object.get('eTag'), s3_object.get('sequencer'), s3_object.get('size'), record.get('awsRegion'), record.get('eventName'), record.get('eventVersion'), record.get('eventTime') ) filtered_fields_messages = [SqsMessageEventRecord(to_s3_event_summary(rec), sqs_message) for rec in json.loads(sqs_message.body['Message']).get('Records', []) if 's3' in rec] return filtered_fields_messages def latest_s3_message_by_key(sqs_s3_msgs): if not sqs_s3_msgs: return [] kv = {(msg.event_record.object_key, msg.event_record.object_sequencer): msg for msg in sqs_s3_msgs} list_agg = {u_k: [] for u_k in set(k for k, _ in kv.keys())} for k, v in kv.keys(): list_agg[k].append(v) latest = [(k, sorted(v)[-1]) for k, v in list_agg.items()] return [kv[latest_kv] for latest_kv in latest] class SqsQueue(object): def __init__(self, queue_name, message_transformer=None, sqs_resource=None): self.queue_name = queue_name self.sqs_resource = sqs_resource or boto3.resource('sqs') self.queue = None self.message_transformer = message_transformer def get_queue(self): self.queue = self.sqs_resource.get_queue_by_name(QueueName=self.queue_name) return self def approximate_number_of_messages(self): return queue_approximate_number_of_messages(self.queue) def fifo_queue(self): return (self.queue.attributes.get('FifoQueue') or '').lower() == 'true' def has_messages(self): return self.approximate_number_of_messages() > 0 def receive_messages(self, **kwargs): update_func 
= self.message_transformer if self.message_transformer else (lambda m: [m]) return list(chain(*[update_func(msg) for msg in query_sqs_messages(queue=self.queue, **kwargs)])) def delete_messages(self, messages): try: resp = delete_sqs_messages(self.queue, entries=to_sqs_delete_entries(messages)) successful = resp.get('Successful', []) success_ct = len(successful) log_msg = 'Message: Removed messages from queue - DeletedMessageCount: {} - DeletedAllMessages: {}' logger.info(log_msg.format(success_ct, len(messages) == success_ct)) failed = resp.get('Failed', []) if failed: log_msg = 'Message: Subset of SQS.Queue batch delete calls failed - FailedDeletes: {}' logger.info(log_msg.format([dict(fail, receipt_handle=messages[int(fail['Id'])].receipt_handle) for fail in failed])) return resp except Exception as e: logger.error('Failed to delete SQS messages: {}'.format(e)) def send_messages(self, messages): return send_sqs_messages(self.queue, messages, self.fifo_queue())
zd.testt
/zd.testt-0.1.0-py3-none-any.whl/zd/test/aws/sqs.py
sqs.py
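A usage sketch for the queue wrapper above; the queue name is a placeholder, AWS credentials are assumed, and `sqs_s3_messages` is wired in as the `message_transformer` so each received SQS message expands into one `SqsMessageEventRecord` per S3 event record:

```python
from zd.test.aws.sqs import SqsQueue, sqs_s3_messages, latest_s3_message_by_key

queue = SqsQueue("my-s3-events-queue", message_transformer=sqs_s3_messages).get_queue()
if queue.has_messages():
    events = queue.receive_messages(max_message_count=20)
    for msg in latest_s3_message_by_key(events):  # newest event per object key
        print(msg.event_record.object_key, msg.event_record.event_name)
    queue.delete_messages(events)
```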
========== Change log ========== 4.4 (2022-12-02) ================ - Add support for Python 3.8, 3.9, 3.10, 3.11. - Drop support for Python 3.4. - Drop support for ``python setup.py test`` to run the tests. (#23) - Drop support for installing this package without having ``setuptools``. 4.3 (2018-10-30) ================ - Add support for Python 3.6 and 3.7. - Drop support for Python 3.3. 4.2.0 (2016-12-07) ================== - Add support for Python 3.5. - Drop support for Python 2.6 and 3.2. 4.1.0 (2015-04-16) ================== - Add ``--version`` command line option (fixes https://github.com/zopefoundation/zdaemon/issues/4). - ``kill`` now accepts signal names, not just numbers (https://github.com/zopefoundation/zdaemon/issues/11). - Restore ``logreopen`` as an alias for ``kill USR2`` (removed in version 3.0.0 due to lack of tests): https://github.com/zopefoundation/zdaemon/issues/10. - Make ``logreopen`` also reopen the transcript log: https://github.com/zopefoundation/zdaemon/issues/9. - Reopen event log on ``logreopen`` or ``reopen_transcript``: https://github.com/zopefoundation/zdaemon/issues/8. - Help message for ``reopen_transcript`` (https://github.com/zopefoundation/zdaemon/issues/5). - Fix race condition where ``stop`` would be ignored if the daemon manager was waiting before respawning a crashed program. https://github.com/zopefoundation/zdaemon/issues/13. - Partially fix delayed deadlock when the transcript file runs into a full disk (https://github.com/zopefoundation/zdaemon/issues/1). - Fix test suite leaving stale processes behind (https://github.com/zopefoundation/zdaemon/issues/7). 4.0.1 (2014-12-26) ================== - Add support for PyPy. (PyPy3 is pending release of a fix for: https://bitbucket.org/pypy/pypy/issue/1946) - Add support for Python 3.4. - Add ``-t/--transcript`` command line option. - zdaemon can now be invoked as a module as in ``python -m zdaemon ...`` 4.0.0 (2013-05-10) ================== - Add support for Python 3.2. 4.0.0a1 (2013-02-15) ==================== - Add tox support and MANIFEST.in for proper releasing. - Add Python 3.3 support. - Drop Python 2.4 and 2.5 support. 3.0.5 (2012-11-27) ================== - Fixed: the status command didn't return a non-zero exit status when the program wasn't running. This made it impossible for other software (e.g. Puppet) to tell if a process was running. 3.0.4 (2012-07-30) ================== - Fixed: The start command exited with a zero exit status even when the program being started failed to start (or exited immediately). 3.0.3 (2012-07-10) ================== - Fixed: programs started with zdaemon couldn't, themselves, invoke zdaemon. 3.0.2 (2012-07-10) ================== Fail :( 3.0.1 (2012-06-08) ================== - Fixed: The change in 2.0.6 to set a user's supplemental groups broke common configurations in which the effective user was set via ``su`` or ``sudo -u`` prior to invoking zdaemon. Now, zdaemon doesn't set groups or the effective user if the effective user is already set to the configured user. 3.0.0 (2012-06-08) ================== - Added an option, ``start-test-program`` to supply a test command to test whether the program managed by zdaemon is up and operational, rather than just running. When starting a program, the start command doesn't return until the test passes. You could, for example, use this to wait until a web server is actually accepting connections. - Added a ``start-timeout`` option to error if a program takes too long to start.
This is especially useful in combination with the ``start-test-program`` option. - Added an option, stop-timeout, to control how long to wait for a graceful shutdown. Previously, this was controlled by backoff-limit, which didn't make much sense. - Several undocumented, untested, and presumably unused features were removed. 2.0.6 (2012-06-07) ================== - Fixed: When the ``user`` option was used to run as a particular user, supplemental groups weren't set to the user's supplemental groups. 2.0.5 (2012-06-07) ================== (Accidental release. Please ignore.) 2.0.4 (2009-04-20) ================== - Version 2.0.3 broke support for relative paths to the socket (``-s`` option and ``socket-name`` parameter), now relative paths work again as in version 2.0.2. - Fixed change log format, made table of contents nicer. - Fixed author's email address. - Removed zpkg stuff. 2.0.3 (2009-04-11) ================== - Added support to bootstrap on Jython. - If the run directory does not exist it will be created. This allows using `/var/run/mydaemon` as the run directory when /var/run is a tmpfs (LP #318118). Bugs Fixed ---------- - No longer uses a hard-coded file name (/tmp/demo.zdsock) in unit tests. This lets you run the tests on Python 2.4 and 2.5 simultaneously without spurious errors. - make -h work again for both runner and control scripts. Help is now taken from the __doc__ of the options class used by the zdaemon script being run. 2.0.2 (2008-04-05) ================== Bugs Fixed ---------- - Fixed backwards incompatible change in handling of environment option. 2.0.1 (2007-10-31) ================== Bugs Fixed ---------- - Fixed test renormalizer that did not work in certain cases where the environment was complex. 2.0.0 (2007-07-19) ================== - Final release for 2.0.0. 2.0a6 (2007-01-11) ================== Bugs Fixed ---------- - When the user option was used, it only affected running the daemon. 2.0a3, 2.0a4, 2.0a5 (2007-01-10) ================================ Bugs Fixed ---------- - The new (2.0) mechanism used by zdaemon to start the daemon manager broke some applications that extended zdaemon. - Added extra checks to deal with programs that extend zdaemon and copy the schema and thus don't see updates to the ZConfig schema. 2.0a2 (2007-01-10) ================== New Features ------------ - Added support for setting environment variables in the configuration file. This is useful when zdaemon is used to run programs that need environment variables set (e.g. LD_LIBRARY_PATH). - Added a command to rotate the transcript log. 2.0a1 (2006-12-21) ================== Bugs Fixed ---------- - In non-daemon mode, start hung, producing annoying dots when the program exited. - The start command hung producing annoying dots if the daemon failed to start. - foreground and start had different semantics because one used os.system and another used os.spawn. New Features ------------ - Documentation - Command-line arguments can now be supplied to the start and foreground (fg) commands - zdctl now invokes itself to run zdrun. This means that it's no longer necessary to generate a separate zdrun script. This is especially helpful when the magic techniques to find and run zdrun using directory sniffing fail to set the path correctly. - The daemon mode is now enabled by default. To get non-daemon mode, you have to use a configuration file and set daemon to off there. The old -d option is kept for backward compatibility, but is a no-op.
1.4a1 (2005-11-21) ================== - Fixed a bug in the distribution setup file. 1.4a1 (2005-11-05) ================== - First semi-formal release. After some unknown release(???) =============================== - Made 'zdaemon.zdoptions' not fail for --help when __main__.__doc__ is None. After 1.1 ========= - Updated test 'testRunIgnoresParentSignals': o Use 'mkdtemp' to create a temporary directory to hold the test socket rather than creating the test socket in the test directory. Hopefully this will be more robust. Sometimes the test directory has a path so long that the test socket can't be created. o Changed management of 'donothing.sh'. This script is now created by the test in the temporary directory with the necessary permissions. This avoids possible mangling of permissions leading to spurious test failures. It also avoids management of a file in the source tree, which is a bonus. - Rearranged source tree to conform to more usual zpkg-based layout: o Python package lives under 'src'. o Dependencies added to 'src' as 'svn:externals'. o Unit tests can now be run from a checkout. - Made umask-based test failures due to running as root emit a more forceful warning. 1.1 (2005-06-09) ================ - SVN tag: svn://svn.zope.org/repos/main/zdaemon/tags/zdaemon-1.1 - Tagged to make better 'svn:externals' linkage possible. To-Dos ====== More docs: - Document/demonstrate some important features, such as: - working directory Bugs: - help command
zdaemon
/zdaemon-4.4.tar.gz/zdaemon-4.4/CHANGES.rst
CHANGES.rst
# zdairi zdairi is a Zeppelin CLI tool which wraps the Zeppelin REST API to control notebooks and interpreters. For the Zeppelin REST API, see https://zeppelin.apache.org/docs/0.7.0/rest-api/rest-notebook.html ## Support version * Zeppelin 0.6 * Zeppelin 0.7 ## Prerequisites * Python 2.7 # Install python setup.py install or pip install zdairi # Configuration Using zdairi with a YAML-format config. ```bash $ zdairi COMMAND #Using default path '~/.zdari.yml' $ zdairi -f /tmp/zdari.yml COMMAND #Using specified path ``` Config example: ``` zeppelin_url: http://your_zeppelin_url # Required # Options zeppelin_auth: true #Default is false zeppelin_user: user_name zeppelin_password: user_password ``` We support logging in to Zeppelin as a specified user. # Usage Support commands: * Notebook * list * run * print * create * delete * save * Interpreter * list * restart ## Notebook commands ### LIST command List notebook ids and names ``` $ zdairi notebook list [--notebook ${notebook_id|notebook_name}] ``` Output example ``` $ zdairi notebook list id:[2C3XP3FS1], name:[my notebook1] id:[2C9327A66], name:[my notebook2] id:[2CFGUBJX2], name:[my notebook3] ``` ``` $ zdairi notebook list --notebook "my notebook3" id:[20170410-113013_1011211975], status:[FINISHED] id:[20170410-113020_981608729], status:[FINISHED] ``` ### RUN command Run a Zeppelin notebook/paragraph by id or name ``` $ zdairi notebook run --notebook ${notebook_id|$notebook_name} [--paragraph ${paragraph_id|$paragraph_name}] [--parameters ${json}] ``` Example ``` $ zdairi notebook run --notebook mynotebook --paragraph myparagraph --parameters '{ "params":{"forecastDate":"yoo"}}' ``` ### PRINT command Print a Zeppelin notebook as JSON ``` $ zdairi notebook print --notebook ${notebook_id|$notebook_name} ``` ### CREATE command Create a Zeppelin notebook from .json/.nb ``` $ zdairi notebook create --filepath ${filepath} ``` We support creating notebooks from Zeppelin JSON format or our DSL format.
The format is as below: ``` # ${notebook name} ############################################################ ${paragraph title} ############################################################ ${paragraph context} ############################################################ ${paragraph title} ############################################################ ${paragraph context} ``` ``` # Test Notebook ############################################################ test_1 ############################################################ %spark import org.apache.commons.io.IOUtils import java.net.URL import java.nio.charset.Charset // load bank data val bankText = sc.parallelize( IOUtils.toString( new URL("https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv"), Charset.forName("utf8")).split("\n")) case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer) val bank = bankText.map(s => s.split(";")).filter(s => s(0) != "\"age\"").map( s => Bank(s(0).toInt, s(1).replaceAll("\"", ""), s(2).replaceAll("\"", ""), s(3).replaceAll("\"", ""), s(5).replaceAll("\"", "").toInt ) ).toDF() bank.registerTempTable("bank2") ############################################################ test_2 ############################################################ %pyspark import os print(os.environ['PYTHONPATH']) count = sc.parallelize(range(1, 10000 + 1)).reduce(lambda x,y: x+y) print("Pi is roughly %f" % (4.0 * count / 12)) accum = sc.accumulator(0) sc.parallelize([1, 2, 3, 4]).foreach(lambda x: accum.add(x)) print(accum.value) ``` ### DELETE command Delete a Zeppelin notebook by notebook_id or notebook_name ``` $ zdairi notebook delete --notebook ${notebook_id|$notebook_name} ``` ### SAVE command Save a Zeppelin notebook as xxx.nb ``` $ zdairi notebook save --notebook ${notebook_id|$notebook_name} --filepath $filepath ``` ## Interpreter commands ### LIST command List interpreter ids and names ``` $ zdairi interpreter list ``` Output example ``` id:[2CBC3HCAX], name:[spark] id:[2C9CZRM8P], name:[md] id:[2CBBH2DVN], name:[angular] ``` ### RESTART command Restart a Zeppelin interpreter ``` $ zdairi interpreter restart --interpreter ${interpreter_id|$interpreter_name} ```
zdairi
/zdairi-0.7.3.tar.gz/zdairi-0.7.3/README.md
README.md
zdas ==== .. image:: https://travis-ci.org/appstore-zencore/zdas.svg?branch=master :target: https://travis-ci.org/appstore-zencore/zdas Zencore daemon application start. Install ------- :: pip install zdas Usage ----- :: import time import signal import zdas stopflag = False def main(): def on_exit(*args, **kwargs): with open("background.log", "a", encoding="utf-8") as fobj: print("process got exit signal...", file=fobj) print(args, file=fobj) print(kwargs, file=fobj) global stopflag stopflag = True signal.signal(signal.SIGTERM, on_exit) signal.signal(signal.SIGINT, on_exit) while not stopflag: time.sleep(1) print(time.time()) if __name__ == "__main__": print("start background application...") zdas.daemon_start(main, "background.pid", True)
zdas
/zdas-0.1.1.tar.gz/zdas-0.1.1/README.rst
README.rst
================================ Zalando Data Lake client library ================================ .. image:: https://img.shields.io/pypi/v/zdatalake.svg :target: https://pypi.python.org/pypi/zdatalake .. image:: https://img.shields.io/travis/zdatalake/zdatalake.svg :target: https://travis-ci.org/zdatalake/zdatalake .. image:: https://readthedocs.org/projects/zdatalake/badge/?version=latest :target: https://zdatalake.readthedocs.io/en/latest/?badge=latest :alt: Documentation Status .. image:: https://pyup.io/repos/github/zdatalake/zdatalake/shield.svg :target: https://pyup.io/repos/github/zdatalake/zdatalake/ :alt: Updates Zalando Data Lake client library helpers and CLI * Free software: MIT license * Documentation: https://zdatalake.readthedocs.io. Features -------- * TODO Credits ------- This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template. .. _Cookiecutter: https://github.com/audreyr/cookiecutter .. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
zdatalake
/zdatalake-0.1.1.tar.gz/zdatalake-0.1.1/README.rst
README.rst
.. highlight:: shell ============ Contributing ============ Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. You can contribute in many ways: Types of Contributions ---------------------- Report Bugs ~~~~~~~~~~~ Report bugs at https://github.com/zdatalake/zdatalake/issues. If you are reporting a bug, please include: * Your operating system name and version. * Any details about your local setup that might be helpful in troubleshooting. * Detailed steps to reproduce the bug. Fix Bugs ~~~~~~~~ Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it. Implement Features ~~~~~~~~~~~~~~~~~~ Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it. Write Documentation ~~~~~~~~~~~~~~~~~~~ Zalando Data Lake client library could always use more documentation, whether as part of the official Zalando Data Lake client library docs, in docstrings, or even on the web in blog posts, articles, and such. Submit Feedback ~~~~~~~~~~~~~~~ The best way to send feedback is to file an issue at https://github.com/zdatalake/zdatalake/issues. If you are proposing a feature: * Explain in detail how it would work. * Keep the scope as narrow as possible, to make it easier to implement. * Remember that this is a volunteer-driven project, and that contributions are welcome :) Get Started! ------------ Ready to contribute? Here's how to set up `zdatalake` for local development. 1. Fork the `zdatalake` repo on GitHub. 2. Clone your fork locally:: $ git clone [email protected]:your_name_here/zdatalake.git 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: $ virtualenv zdatalake $ cd zdatalake/ $ python setup.py develop $ pip install -r requirements_dev.txt 4. Create a branch for local development:: $ git checkout -b name-of-your-bugfix-or-feature Now you can make your changes locally. 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: $ flake8 zdatalake tests $ python setup.py test #or py.test $ tox To get flake8 and tox, just pip install them into your virtualenv. 6. Commit your changes and push your branch to GitHub:: $ git add . $ git commit -m "Your detailed description of your changes." $ git push origin name-of-your-bugfix-or-feature 7. Submit a pull request through the GitHub website. Pull Request Guidelines ----------------------- Before you submit a pull request, check that it meets these guidelines: 1. The pull request should include tests. 2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst. 3. The pull request should work for Python 2.7, 3.5 and 3.6, and for PyPy. Check https://travis-ci.org/zdatalake/zdatalake/pull_requests and make sure that the tests pass for all supported Python versions. Tips ---- To run a subset of tests:: $ py.test tests.test_zdatalake Deploying --------- A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:: $ bumpversion patch # possible: major / minor / patch $ git push $ git push --tags Travis will then deploy to PyPI if tests pass.
zdatalake
/zdatalake-0.1.1.tar.gz/zdatalake-0.1.1/CONTRIBUTING.rst
CONTRIBUTING.rst
.. highlight:: shell ============ Installation ============ Stable release -------------- To install Zalando Data Lake client library, run this command in your terminal: .. code-block:: console $ pip install zdatalake This is the preferred method to install Zalando Data Lake client library, as it will always install the most recent stable release. If you don't have `pip`_ installed, this `Python installation guide`_ can guide you through the process. .. _pip: https://pip.pypa.io .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/ From sources ------------ The sources for Zalando Data Lake client library can be downloaded from the `Github repo`_. You can either clone the public repository: .. code-block:: console $ git clone git://github.com/zdatalake/zdatalake Or download the `tarball`_: .. code-block:: console $ curl -OL https://github.com/zdatalake/zdatalake/tarball/master Once you have a copy of the source, you can install it with: .. code-block:: console $ python setup.py install .. _Github repo: https://github.com/zdatalake/zdatalake .. _tarball: https://github.com/zdatalake/zdatalake/tarball/master
zdatalake
/zdatalake-0.1.1.tar.gz/zdatalake-0.1.1/docs/installation.rst
installation.rst
![Tests](https://github.com/zillow/datasets/actions/workflows/test.yml/badge.svg) [![Coverage Status](https://coveralls.io/repos/github/zillow/datasets/badge.svg)](https://coveralls.io/github/zillow/datasets) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/zillow/datasets/main?urlpath=lab/tree/datasets/tutorials) # Welcome to zdatasets TODO ```python import pandas as pd from metaflow import FlowSpec, step from zdatasets import Dataset, Mode from zdatasets.metaflow import DatasetParameter from zdatasets.plugins import BatchOptions # Can also invoke from CLI: # > python zdatasets/tutorials/0_hello_dataset_flow.py run \ # --hello_dataset '{"name": "HelloDataset", "mode": "READ_WRITE", \ # "options": {"type": "BatchOptions", "partition_by": "region"}}' class HelloDatasetFlow(FlowSpec): hello_dataset = DatasetParameter( "hello_dataset", default=Dataset("HelloDataset", mode=Mode.READ_WRITE, options=BatchOptions(partition_by="region")), ) @step def start(self): df = pd.DataFrame({"region": ["A", "A", "A", "B", "B", "B"], "zpid": [1, 2, 3, 4, 5, 6]}) print("saving data_frame: \n", df.to_string(index=False)) # Example of writing to a dataset self.hello_dataset.write(df) # save this as an output dataset self.output_dataset = self.hello_dataset self.next(self.end) @step def end(self): print(f"I have dataset \n{self.output_dataset=}") # output_dataset.to_pandas(partitions=dict(region="A")) returns region "A" only df: pd.DataFrame = self.output_dataset.to_pandas(partitions=dict(region="A")) print('self.output_dataset.to_pandas(partitions=dict(region="A")):') print(df.to_string(index=False)) if __name__ == "__main__": HelloDatasetFlow()
zdatasets
/zdatasets-1.2.5.tar.gz/zdatasets-1.2.5/README.md
README.md
import argparse import numpy as np import pandas as pd import pysge from zdb.modules.df_process import df_merge, df_open_merge def parse_args(): parser = argparse.ArgumentParser() parser.add_argument("path", help="Path to temp dir") parser.add_argument( "-m", "--mode", default="multiprocessing", type=str, help="Parallelisation: 'multiprocessing', 'sge', 'htcondor'", ) parser.add_argument( "-j", "--ncores", default=0, type=int, help="Number of cores for 'multiprocessing' jobs", ) parser.add_argument( "--sge-opts", default="-q hep.q", type=str, help="Options to pass onto qsub", ) parser.add_argument( "-o", "--output", default="output.h5:results", type=str, help="Output in 'file:table' format", ) return parser.parse_args() def main(): options = parse_args() results = pysge.sge_resume( "zdb", options.path, options=options.sge_opts, sleep=5, request_resubmission_options=True, ) njobs = options.ncores if options.mode in ["multiprocessing"] or options.ncores < 0: njobs = len(results) grouped_args = [list(x) for x in np.array_split(results, njobs)] tasks = [ {"task": df_open_merge, "args": (args,), "kwargs": {"quiet": True}} for args in grouped_args ] if options.mode=="multiprocessing" and options.ncores==0: merge_results = pysge.local_submit(tasks) df = pd.DataFrame() for result in merge_results: df = df_merge(df, result) elif options.mode=="multiprocessing": merge_results = pysge.mp_submit(tasks, ncores=options.ncores) df = pd.DataFrame() for result in merge_results: df = df_merge(df, result) elif options.mode=="sge": merge_results = pysge.sge_submit( "zdb-merge", "_ccsp_temp/", tasks=tasks, options=options.sge_opts, sleep=5, request_resubmission_options=True, ) df = df_open_merge(merge_results) else: df = pd.DataFrame() print(df) path, table = options.output.split(":") df.to_hdf( path, table, format='table', append=False, complevel=9, complib='zlib', ) if __name__ == "__main__": main()
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb_analysis-0.1.7.data/scripts/zdb_resume.py
zdb_resume.py
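The grouping pattern the script relies on, shown standalone: `np.array_split` partitions the resumed result paths into `njobs` roughly equal chunks, one merge task per chunk:

```python
import numpy as np

results = ["part_{}.pkl".format(i) for i in range(7)]  # stand-ins for pysge result paths
groups = [list(x) for x in np.array_split(results, 3)]
# -> [['part_0.pkl', 'part_1.pkl', 'part_2.pkl'],
#     ['part_3.pkl', 'part_4.pkl'],
#     ['part_5.pkl', 'part_6.pkl']]
```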
import lz4.frame import pickle import numpy as np import pandas as pd import os import shutil from tqdm.auto import tqdm def df_merge(df1, df2): if df1 is None or df1.empty: return df2 if df2 is None or df2.empty: return df1 reindex = df1.index.union(df2.index) return df1.reindex(reindex).fillna(0.) + df2.reindex(reindex).fillna(0.) def df_open_merge(paths, quiet=False): pbar = tqdm(total=len(paths), desc="Merged", dynamic_ncols=True, disable=quiet) obj_out = pd.DataFrame() for path in paths: with lz4.frame.open(path, 'rb') as f: obj_in = pickle.load(f) obj_out = df_merge(obj_out, obj_in) pbar.update() pbar.close() return obj_out def df_process(paths, cfg, chunksize=500000, quiet=False): out_df = pd.DataFrame() # switch to TMPDIR copy_files = False if "TMPDIR" in os.environ: os.chdir(os.environ["TMPDIR"]) copy_files = True pbar_path = tqdm(paths, disable=quiet, unit="file") for path in pbar_path: # copy files inf = path if copy_files: inf = "tmp.h5" shutil.copyfile(path, inf) pbar_path.set_description(os.path.basename(path)) with pd.HDFStore(inf, 'r') as store: pbar_tab = tqdm(cfg["tables"].items(), disable=quiet, unit="table") for table_label, table_name in pbar_tab: pbar_tab.set_description(table_name) hist_cfg = {"table_name": table_label} for df in store.select( table_name, iterator=True, chunksize=chunksize, ): # pre-eval for evs in cfg["eval"]: for key, val in evs.items(): df[key] = eval("lambda "+val)(df) for cutflow_name, cutflow_cfg in cfg["cutflows"].items(): hist_cfg.update(cutflow_cfg) # apply selection sdf = df.loc[df.eval(cutflow_cfg["selection"])] for hist_label in cutflow_cfg["hists"]: hdf = sdf.copy() # hist evals evals = cfg["hists"][hist_label] for evs in evals: for key, val in evs.items(): hdf[key] = eval( "lambda "+val.format(**hist_cfg) )(hdf) # add hist columns = [list(ev.keys())[0] for ev in evals] out_df = df_merge( out_df, ( hdf.loc[:,columns] .groupby(cfg["groupby"]).sum() ), ) return out_df
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/modules/df_process.py
df_process.py
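`df_merge` in isolation: the two indices are unioned and missing cells treated as zero, so repeatedly merging partial results accumulates the sums:

```python
import pandas as pd
from zdb.modules.df_process import df_merge

a = pd.DataFrame({"sum_w": [1.0, 2.0]}, index=pd.Index(["x", "y"], name="bin"))
b = pd.DataFrame({"sum_w": [10.0, 5.0]}, index=pd.Index(["y", "z"], name="bin"))
print(df_merge(a, b))
# expected:
#      sum_w
# bin
# x      1.0
# y     12.0
# z      5.0
```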
import os import shutil import copy import numpy as np import pandas as pd import oyaml as yaml import pysge from zdb.modules.df_skim import df_skim def job(filename, cfg, outname, chunksize=250000): switched = False if "TMPDIR" in os.environ: os.chdir(os.environ["TMPDIR"]) shutil.copyfile(filename, "tmp.h5") inf = "tmp.h5" outf = "res.h5" switched = True else: inf = filename outf = outname result = df_skim(inf, cfg, outf, chunksize=chunksize) if switched: shutil.copyfile(outf, outname) return result def submit_tasks(tasks, mode, ncores, batch_opts): if mode=="multiprocessing" and ncores==0: results = pysge.local_submit(tasks) elif mode=="multiprocessing": results = pysge.mp_submit(tasks, ncores=ncores) elif mode=="sge": results = pysge.sge_submit( tasks, "zdb", "_ccsp_temp/", options=batch_opts, sleep=5, request_resubmission_options=True, return_files=True, ) elif mode=="condor": import conpy results = conpy.condor_submit( "zdb", "_ccsp_temp/", tasks=tasks, options=batch_opts, sleep=5, request_resubmission_options=True, ) else: results = [] return results def skim( config, mode="multiprocessing", ncores=0, nfiles=-1, batch_opts="", output=None, chunksize=250000, ): outdir = os.path.dirname(output) if not os.path.exists(outdir): os.makedirs(outdir) njobs = ncores #setup jobs with open(config, 'r') as f: cfg = yaml.full_load(f) # group jobs files = cfg["files"] if nfiles > 0: files = files[:nfiles] if mode in ["multiprocessing"] or njobs < 0: njobs = len(files) grouped_files = [list(x) for x in np.array_split(files, njobs)] tasks = [ {"task": job, "args": (fs, cfg, output.format(idx)), "kwargs": {"chunksize": chunksize}} for idx, fs in enumerate(grouped_files) ] submit_tasks(tasks, mode, ncores, batch_opts) print("Finished!") def resume_skim(path, batch_opts="", output=None): results = pysge.sge_resume("zdb", path, options=batch_opts) print("Finished!") def multi_skim( configs, mode='multiprocessing', ncores=0, nfiles=-1, batch_opts="", outputs=None, chunksize=250000, ): all_tasks = [] for config, output in zip(configs, outputs): outdir = os.path.dirname(output) if not os.path.exists(outdir): os.makedirs(outdir) njobs = ncores #setup jobs with open(config, 'r') as f: cfg = yaml.full_load(f) # group jobs files = cfg["files"] if nfiles > 0: files = files[:nfiles] if mode in ["multiprocessing"] or njobs < 0: njobs = len(files) grouped_files = [list(x) for x in np.array_split(files, njobs)] tasks = [{ "task": job, "args": (fs, copy.deepcopy(cfg), output.format(idx)), "kwargs": {"chunksize": chunksize}, } for idx, fs in enumerate(grouped_files)] all_tasks.extend(tasks) submit_tasks(all_tasks, mode, ncores, batch_opts) print("Finished!")
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/modules/skim.py
skim.py
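A hedged sketch of driving the skim entry point above; the YAML layout is inferred from the fact that this module only reads `cfg["files"]` (anything else is consumed by `df_skim`), and the `{}` in the output template is filled per job index via `output.format(idx)`:

```python
from zdb.modules.skim import skim

# skim.yaml (assumed shape):
#   files:
#     - /data/a.h5
#     - /data/b.h5
skim(
    "skim.yaml",
    mode="multiprocessing",
    ncores=4,
    output="skims/skim_{}.h5",  # one output file per job index
)
```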
import os import copy import numpy as np import pandas as pd import pysge import oyaml as yaml import functools from zdb.modules.df_process import df_process, df_merge, df_open_merge def submit_tasks(tasks, mode="multiprocessing", ncores=0, batch_opts=""): if mode=="multiprocessing" and ncores==0: results = pysge.local_submit(tasks) elif mode=="multiprocessing": results = pysge.mp_submit(tasks, ncores=ncores) elif mode=="sge": results = pysge.sge_submit( tasks, "zdb", "_ccsp_temp/", options=batch_opts, sleep=5, request_resubmission_options=True, return_files=True, ) elif mode=="condor": import conpy results = conpy.condor_submit( "zdb", "_ccsp_temp/", tasks=tasks, options=batch_opts, sleep=5, request_resubmission_options=True, ) else: results = [] return results def analyse( config, mode="multiprocessing", ncores=0, nfiles=-1, batch_opts="", output=None, chunksize=500000, merge_opts={}, ): if output is not None and len(output.split(":"))!=2: raise ValueError( "The output kwarg should be None or a string with the format " "'{file_name}:{table_name}' instead of "+"{}".format(output) ) njobs = ncores # setup jobs with open(config, 'r') as f: cfg = yaml.full_load(f) # group jobs files = cfg["files"] if nfiles > 0: files = files[:nfiles] if mode in ["multiprocessing"] or njobs < 0: njobs = len(files) grouped_files = [list(x) for x in np.array_split(files, njobs)] tasks = [{ "task": df_process, "args": (fs, cfg["query"]), "kwargs": {"chunksize": chunksize}, } for fs in grouped_files] results = submit_tasks(tasks, mode=mode, ncores=ncores, batch_opts=batch_opts) if mode=='multiprocessing': df = functools.reduce(lambda x, y: df_merge(x, y), results) else: # grouped multi-merge merge_njobs = merge_opts.get("ncores", 100) grouped_merges = [list(x) for x in np.array_split(results, merge_njobs)] tasks = [{ "task": df_open_merge, "args": (r,), "kwargs": {}, } for r in grouped_merges] merge_mode = merge_opts.get("mode", "multiprocessing") if merge_mode=="multiprocessing" and ncores==0: semimerged_results = pysge.local_submit(tasks) df = functools.reduce(lambda x, y: df_merge(x, y), semimerged_results) elif merge_mode=="multiprocessing": semimerged_results = pysge.mp_submit(tasks, ncores=ncores) df = functools.reduce(lambda x, y: df_merge(x, y), semimerged_results) elif merge_mode=="sge": semimerged_results = pysge.sge_submit( tasks, "zdb-merge", "_ccsp_temp", options=merge_opts.get("batch_opts", "-q hep.q"), sleep=5, request_resubmission_options=True, return_files=True, ) df = df_open_merge(semimerged_results) if output is not None: path, table = output.split(":") df.to_hdf( path, table, format='table', append=False, complevel=9, complib='zlib', ) else: return df def resume_analyse(path, batch_opts="", output=None): results = pysge.sge_resume( "zdb", path, options=batch_opts, sleep=5, request_resubmission_options=True, return_files=True, ) df = df_open_merge(results) if output is not None: path, table = output.split(":") df.to_hdf( path, table, format='table', append=False, complevel=9, complib='zlib', ) else: return df def multi_analyse( configs, mode="multiprocessing", ncores=0, nfiles=-1, batch_opts="", outputs=None, chunksize=500000, merge_opts={}, ): for output in outputs: if output is not None and len(output.split(":"))!=2: raise ValueError( "The output kwarg should be None or a string with the format " "'{file_name}:{table_name}' instead of "+"{}".format(output) ) all_tasks, sizes = [], [] for config in configs: njobs = ncores # setup jobs with open(config, 'r') as f: cfg = yaml.full_load(f) # group jobs files = cfg["files"] if nfiles > 0: files = files[:nfiles] if mode in ["multiprocessing"] or njobs < 0: njobs = len(files) grouped_files = [list(x) for x in np.array_split(files, njobs)] tasks = [{ "task": df_process, "args": (fs, cfg["query"]), "kwargs": {"chunksize": chunksize}, } for fs in grouped_files] all_tasks.extend(tasks) if len(sizes)==0: sizes.append(len(tasks)) else: sizes.append(len(tasks)+sizes[-1]) all_results = submit_tasks(all_tasks, mode=mode, ncores=ncores, batch_opts=batch_opts) if mode=='multiprocessing': dfs = [ functools.reduce(lambda x, y: df_merge(x, y), all_results[start:stop]) for start, stop in zip([0]+sizes[:-1], sizes) ] else: # grouped multi-merge per config merge_tasks, merge_sizes = [], [] for start, stop in zip([0]+sizes[:-1], sizes): results = all_results[start:stop] merge_njobs = merge_opts.get("ncores", 100) grouped_merges = [list(x) for x in np.array_split(results, merge_njobs)] tasks = [{ "task": df_open_merge, "args": (r,), "kwargs": {}, } for r in grouped_merges] merge_tasks.extend(tasks) if len(merge_sizes)==0: merge_sizes.append(len(tasks)) else: merge_sizes.append(len(tasks)+merge_sizes[-1]) all_merge_results = submit_tasks(merge_tasks, **merge_opts) dfs = [ df_open_merge(all_merge_results[start:stop]) for start, stop in zip([0]+merge_sizes[:-1], merge_sizes) ] ret_val = [] for output, df in zip(outputs, dfs): if output is not None: path, table = output.split(":") df.to_hdf( path, table, format='table', append=False, complevel=9, complib='zlib', ) else: ret_val.append(df) return ret_val
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/modules/analyse.py
analyse.py
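A usage sketch for `analyse` above; note the required `'{file_name}:{table_name}'` form of `output`, and that the config is assumed to provide `cfg["files"]` and `cfg["query"]`:

```python
from zdb.modules.analyse import analyse

analyse(
    "query.yaml",                    # assumed to define files: [...] and query: {...}
    mode="multiprocessing",
    ncores=4,
    output="results.h5:aggregated",  # written with DataFrame.to_hdf
)
```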
import os import copy import pysge import oyaml as yaml import numpy as np import pandas as pd from zdb.modules.multirun import multidraw def parallel_draw(drawer, jobs, mode, ncores, batch_opts): if len(jobs)==0: return njobs = ncores if mode in ["multiprocessing"]: njobs = len(jobs) grouped_jobs = [list(x) for x in np.array_split(jobs, njobs)] tasks = [ {"task": multidraw, "args": (drawer, args), "kwargs": {}} for args in grouped_jobs ] if mode=="multiprocessing" and ncores==0: pysge.local_submit(tasks) elif mode=="multiprocessing": pysge.mp_submit(tasks, ncores=ncores) elif mode=="sge": pysge.sge_submit( tasks, "zdb-draw", "_ccsp_temp/", options=batch_opts, sleep=5, request_resubmission_options=True, return_files=True, ) def submit_draw_data_mc( infile, drawer, cfg, outdir, nplots=-1, mode="multiprocessing", ncores=0, batch_opts="-q hep.q", ): with open(cfg, 'r') as f: cfg = yaml.full_load(f) # Read in dataframes df_data = pd.read_hdf(infile, "DataAggEvents") df_data = df_data.loc[("central",), :] df_mc = pd.read_hdf(infile, "MCAggEvents") df_mc = df_mc.loc[("central",), :] # dfs dfs = [] if df_data is not None: dfs.append(df_data) if df_mc is not None: dfs.append(df_mc) # varnames varnames = pd.concat(dfs).index.get_level_values("varname0").unique() # datasets if df_data is not None: datasets = df_data.index.get_level_values("parent").unique() else: datasets = ["None"] # cutflows cutflows = pd.concat(dfs).index.get_level_values("selection").unique() # group into histograms jobs = [] for varname in varnames: for dataset in datasets: for cutflow in cutflows: if varname not in cfg: continue job_cfg = copy.deepcopy(cfg[varname]) job_cfg.update(cfg.get("defaults", {})) job_cfg.update(cfg.get(dataset+"_dataset", {})) job_cfg.update(cfg.get(cutflow, {})) job_cfg.update(cfg.get(dataset+"_dataset", {}).get(cutflow, {})) job_cfg.update(cfg.get(dataset+"_dataset", {}).get(cutflow, {}).get(varname, {})) toutdir = os.path.join(outdir, dataset, cutflow) if not os.path.exists(toutdir): os.makedirs(toutdir) job_cfg["outpath"] = os.path.abspath( os.path.join(toutdir, cfg[varname]["outpath"]) ) # data selection if df_data is None or (varname, cutflow, dataset) not in df_data.index: df_data_loc = None else: df_data_loc = df_data.loc[(varname, cutflow, dataset),:] # mc selection if df_mc is None or (varname, cutflow) not in df_mc.index: df_mc_loc = None else: df_mc_loc = df_mc.loc[(varname, cutflow),:] jobs.append((df_data_loc, df_mc_loc, copy.deepcopy(job_cfg))) if nplots >= 0 and nplots < len(jobs): jobs = jobs[:nplots] parallel_draw(drawer, jobs, mode, ncores, batch_opts)
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/modules/draw.py
draw.py
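A sketch of the plotting driver above; since each job is a `(df_data, df_mc, cfg)` tuple handed to `multidraw`, the drawer is assumed to be a callable of that arity, and all names below are placeholders:

```python
from zdb.modules.draw import submit_draw_data_mc

def my_drawer(df_data, df_mc, cfg):
    """Render one data/MC comparison to cfg['outpath'] (left to the user)."""

submit_draw_data_mc(
    "aggregated.h5",   # expects DataAggEvents and MCAggEvents tables
    my_drawer,
    "plots.yaml",
    "plots/",
    mode="multiprocessing",
    ncores=4,
)
```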
import os import re import argparse import pandas as pd import importlib import yaml import copy from zdb.modules.multirun import multidraw from atsge.build_parallel import build_parallel import logging logging.getLogger(__name__).setLevel(logging.INFO) logging.getLogger("atsge.SGEJobSubmitter").setLevel(logging.INFO) logging.getLogger("atsge.WorkingArea").setLevel(logging.INFO) logging.getLogger(__name__).propagate = False logging.getLogger("atsge.SGEJobSubmitter").propagate = False logging.getLogger("atsge.WorkingArea").propagate = False def parse_args(): parser = argparse.ArgumentParser() parser.add_argument("mc", help="Path to MC pandas pickle") parser.add_argument("drawer", help="Path to drawing function") parser.add_argument("cfg", help="Plotting config file") parser.add_argument( "-m", "--mode", default="multiprocessing", type=str, help="Parallelisation: 'multiprocessing', 'sge', 'htcondor'", ) parser.add_argument( "-j", "--ncores", default=0, type=int, help="Number of cores for 'multiprocessing' jobs", ) parser.add_argument( "-n", "--nplots", default=-1, type=int, help="Number of plots to draw. -1 = all", ) parser.add_argument( "-o", "--outdir", default="temp", type=str, help="Output directory", ) return parser.parse_args() def rename_df_index(df, index, rename_list): if df is None: return df indexes = df.index.names tdf = df.reset_index() for new_val, selection in rename_list: tdf.loc[tdf.eval(selection),index] = new_val return tdf.set_index(indexes) def parallel_draw(draw, jobs, options): if len(jobs)==0: return njobs = options.ncores if options.mode in ["multiprocessing"]: njobs = len(jobs)+1 jobs = [ jobs[i:i+len(jobs)//njobs+1] for i in range(0, len(jobs), len(jobs)//njobs+1) ] parallel = build_parallel( options.mode, processes=options.ncores, quiet=False, dispatcher_options={"vmem": 6, "walltime": 3*60*60}, ) parallel.begin() try: parallel.communicationChannel.put_multiple([{ 'task': multidraw, 'args': (draw, args), 'kwargs': {}, } for args in jobs]) results = parallel.communicationChannel.receive() except KeyboardInterrupt: parallel.terminate() parallel.end() def main(): options = parse_args() # Setup drawer function module_name, function_name = options.drawer.split(":") draw = getattr(importlib.import_module(module_name), function_name) # open cfg with open(options.cfg, 'r') as f: cfg = yaml.safe_load(f) # Read in dataframes df = pd.read_pickle(options.mc) if options.mc is not None else None df = rename_df_index(df, "parent", [ ("WJetsToENu", "(parent=='WJetsToLNu') & (LeptonIsElectron==1)"), ("WJetsToMuNu", "(parent=='WJetsToLNu') & (LeptonIsMuon==1)"), #("WJetsToTauNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1)"), ("WJetsToTauHNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1) & (nGenTauL==0)"), ("WJetsToTauLNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1) & (nGenTauL==1)"), ("DYJetsToEE", "(parent=='DYJetsToLL') & (LeptonIsElectron==1)"), ("DYJetsToMuMu", "(parent=='DYJetsToLL') & (LeptonIsMuon==1)"), #("DYJetsToTauTau", "(parent=='DYJetsToLL') & (LeptonIsTau==1)"), ("DYJetsToTauHTauH", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==0)"), ("DYJetsToTauHTauL", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==1)"), ("DYJetsToTauLTauL", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==2)"), ]).reset_index(["LeptonIsElectron", "LeptonIsMuon", "LeptonIsTau", "nGenTauL"], drop=True) df = df.groupby(df.index.names).sum() # cutflows cutflows = df.index.get_level_values("selection").unique() # variations varnames = df.index.get_level_values("varname").unique()
regex = re.compile("^(?P<varname>[a-zA-Z0-9_]+)_(?P<variation>[a-zA-Z0-9]+)(Up|Down)$") varname_variations = {} for v in varnames: match = regex.search(v) if match: varname = match.group("varname") variation = match.group("variation") if varname not in varname_variations: varname_variations[varname] = [] if variation not in varname_variations[varname]: varname_variations[varname].append(variation) # group into histograms jobs = [] for cutflow in cutflows: for varname, variations in varname_variations.items(): for variation in variations: job_cfg = copy.deepcopy(cfg[variation]) job_cfg.update(cfg.get("defaults", {})) job_cfg.update(cfg.get(cutflow, {})) job_cfg.update(cfg.get(varname, {})) outdir = os.path.join(options.outdir, cutflow, varname) if not os.path.exists(outdir): os.makedirs(outdir) job_cfg["outpath"] = os.path.abspath( os.path.join(outdir, cfg[variation]["outpath"]) ) df_loc = df.loc[ ( (df.index.get_level_values("selection")==cutflow) & (df.index.get_level_values("varname").isin( [varname+"_nominal", varname+"_"+variation+"Up", varname+"_"+variation+"Down"] )) ), : ] jobs.append((df_loc, copy.deepcopy(job_cfg))) if options.nplots >= 0 and options.nplots < len(jobs): jobs = jobs[:options.nplots] parallel_draw(draw, jobs, options) if __name__ == "__main__": main()
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/scripts/draw_variation.py
draw_variation.py
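A quick way to sanity-check the variation grouping above is to run the same regex on a few names in isolation. A minimal sketch (the histogram names here are hypothetical; the pattern is the one from `draw_variation.py`):

```python
import re

# "<varname>_<variation>(Up|Down)", as used in draw_variation.py.
regex = re.compile(r"^(?P<varname>[a-zA-Z0-9_]+)_(?P<variation>[a-zA-Z0-9]+)(Up|Down)$")

# Hypothetical varname index values for illustration.
varnames = [
    "METnoX_pt_jesTotalUp", "METnoX_pt_jesTotalDown",
    "METnoX_pt_jerUp", "METnoX_pt_jerDown",
    "METnoX_pt_nominal",  # no Up/Down suffix, so it is skipped
]

varname_variations = {}
for v in varnames:
    match = regex.search(v)
    if match:
        variations = varname_variations.setdefault(match.group("varname"), [])
        if match.group("variation") not in variations:
            variations.append(match.group("variation"))

print(varname_variations)  # {'METnoX_pt': ['jesTotal', 'jer']}
```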
import os
import argparse
import numpy as np
import pandas as pd
import importlib
import yaml
import copy
import pysge

from zdb.modules.multirun import multidraw

import logging

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("drawer", help="Path to drawing function")
    parser.add_argument("cfg", help="Plotting config file")
    parser.add_argument(
        "--data", default=None, type=str,
        help="Path to data pandas pickle",
    )
    parser.add_argument(
        "--mc", default=None, type=str,
        help="Path to MC pandas pickle",
    )
    parser.add_argument(
        "-m", "--mode", default="multiprocessing", type=str,
        help="Parallelisation: 'multiprocessing', 'sge', 'htcondor'",
    )
    parser.add_argument(
        "-j", "--ncores", default=0, type=int,
        help="Number of cores for 'multiprocessing' jobs",
    )
    parser.add_argument(
        "--sge-opts", default="-q hep.q", type=str,
        help="Options to pass onto qsub",
    )
    parser.add_argument(
        "-n", "--nplots", default=-1, type=int,
        help="Number of plots to draw. -1 = all",
    )
    parser.add_argument(
        "-o", "--outdir", default="temp", type=str,
        help="Output directory",
    )
    return parser.parse_args()

def rename_df_index(df, index, rename_list):
    if df is None:
        return df
    indexes = df.index.names
    tdf = df.reset_index()
    for new_val, selection in rename_list:
        tdf.loc[tdf.eval(selection), index] = new_val
    return tdf.set_index(indexes)

def parallel_draw(draw, jobs, options):
    if len(jobs) == 0:
        return

    mode = options.mode
    njobs = options.ncores
    if options.mode in ["multiprocessing"]:
        njobs = len(jobs)+1

    jobs = [list(x) for x in np.array_split(jobs, njobs)]
    tasks = [
        {"task": multidraw, "args": (draw, args), "kwargs": {}}
        for args in jobs
    ]

    if mode=="multiprocessing" and options.ncores==0:
        results = pysge.local_submit(tasks)
    elif mode=="multiprocessing":
        results = pysge.mp_submit(tasks, ncores=options.ncores)
    elif mode=="sge":
        results = pysge.sge_submit(
            tasks, "zdb", "_ccsp_temp/", options=options.sge_opts,
            request_resubmission_options=True, return_files=True,
        )
    else:
        results = []

def main():
    options = parse_args()

    # Setup drawer function
    module_name, function_name = options.drawer.split(":")
    draw = getattr(importlib.import_module(module_name), function_name)

    # open cfg
    with open(options.cfg, 'r') as f:
        cfg = yaml.safe_load(f)

    # Read in dataframes
    df_data = pd.read_pickle(options.data) if options.data is not None else None
    df_mc = pd.read_pickle(options.mc) if options.mc is not None else None

    # process MC dataframe
    if df_mc is not None:
        df_mc = rename_df_index(df_mc, "parent", [
            ("WJetsToENu", "(parent=='WJetsToLNu') & (LeptonIsElectron==1)"),
            ("WJetsToMuNu", "(parent=='WJetsToLNu') & (LeptonIsMuon==1)"),
            #("WJetsToTauNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1)"),
            ("WJetsToTauHNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1) & (nGenTauL==0)"),
            ("WJetsToTauLNu", "(parent=='WJetsToLNu') & (LeptonIsTau==1) & (nGenTauL==1)"),
            ("DYJetsToEE", "(parent=='DYJetsToLL') & (LeptonIsElectron==1)"),
            ("DYJetsToMuMu", "(parent=='DYJetsToLL') & (LeptonIsMuon==1)"),
            #("DYJetsToTauTau", "(parent=='DYJetsToLL') & (LeptonIsTau==1)"),
            ("DYJetsToTauHTauH", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==0)"),
            ("DYJetsToTauHTauL", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==1)"),
            ("DYJetsToTauLTauL", "(parent=='DYJetsToLL') & (LeptonIsTau==1) & (nGenTauL==2)"),
        ]).reset_index(["LeptonIsElectron", "LeptonIsMuon", "LeptonIsTau", "nGenTauL"], drop=True)
        df_mc = df_mc.groupby(df_mc.index.names).sum()

    # dfs
    dfs = []
    if df_data is not None:
        dfs.append(df_data)
    if df_mc is not None:
        dfs.append(df_mc)

    # varnames
    varnames = pd.concat(dfs).index.get_level_values("varname").unique()

    # datasets
    if df_data is not None:
        datasets = df_data.index.get_level_values("parent").unique()
    else:
        datasets = ["None"]

    # cutflows
    cutflows = pd.concat(dfs).index.get_level_values("selection").unique()

    # group into histograms
    jobs = []
    for varname in varnames:
        for dataset in datasets:
            for cutflow in cutflows:
                job_cfg = copy.deepcopy(cfg[varname])
                job_cfg.update(cfg.get("defaults", {}))
                job_cfg.update(cfg.get(dataset+"_dataset", {}))
                job_cfg.update(cfg.get(cutflow, {}))
                job_cfg.update(cfg.get(dataset+"_dataset", {}).get(cutflow, {}))
                job_cfg.update(cfg.get(dataset+"_dataset", {}).get(cutflow, {}).get(varname, {}))

                outdir = os.path.join(options.outdir, dataset, cutflow)
                if not os.path.exists(outdir):
                    os.makedirs(outdir)
                job_cfg["outpath"] = os.path.abspath(
                    os.path.join(outdir, cfg[varname]["outpath"])
                )

                # data selection
                if df_data is None or (varname, cutflow, dataset) not in df_data.index:
                    df_data_loc = None
                else:
                    df_data_loc = df_data.loc[(varname, cutflow, dataset), :]

                # mc selection
                if df_mc is None or (varname, cutflow) not in df_mc.index:
                    df_mc_loc = None
                else:
                    df_mc_loc = df_mc.loc[(varname, cutflow), :]

                jobs.append((df_data_loc, df_mc_loc, copy.deepcopy(job_cfg)))

    if options.nplots >= 0 and options.nplots < len(jobs):
        jobs = jobs[:options.nplots]

    parallel_draw(draw, jobs, options)

if __name__ == "__main__":
    main()
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/scripts/draw.py
draw.py
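The nested `update` calls in `main` implement a later-wins layering of the YAML config: the per-variable block is the base, then defaults, dataset, cutflow and nested overrides are stacked on top. A small sketch of that merge order with hypothetical keys:

```python
import copy

# Hypothetical plotting config mimicking the layering in draw.py's main().
cfg = {
    "METnoX_pt": {"outpath": "metnox_pt.pdf", "binning": [200., 1000., 50.]},
    "defaults": {"blind": False},
    "MET_dataset": {"blind": True},
    "Monojet": {"label": r"$p_{T}^{miss}$ (GeV)"},
}

job_cfg = copy.deepcopy(cfg["METnoX_pt"])   # per-variable block as the base
job_cfg.update(cfg.get("defaults", {}))     # global defaults
job_cfg.update(cfg.get("MET_dataset", {}))  # per-dataset overrides
job_cfg.update(cfg.get("Monojet", {}))      # per-cutflow overrides

print(job_cfg["blind"])  # True - the dataset block wins over the defaults
```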
import argparse
import numpy as np
import pysge
import oyaml as yaml

from zdb.modules.df_slim import df_slim

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("config", help="Path to yaml file")
    parser.add_argument(
        "-m", "--mode", default="multiprocessing", type=str,
        help="Parallelisation: 'multiprocessing', 'sge', 'htcondor'",
    )
    parser.add_argument(
        "-j", "--ncores", default=0, type=int,
        help="Number of cores for 'multiprocessing' jobs",
    )
    parser.add_argument(
        "--sge-opts", default="-q hep.q", type=str,
        help="Options to pass onto qsub",
    )
    parser.add_argument(
        "-n", "--nfiles", default=-1, type=int,
        help="Number of files to process. -1 = all",
    )
    parser.add_argument(
        "-o", "--output", default="output.h5", type=str,
        help="Output file",
    )
    return parser.parse_args()

def main():
    options = parse_args()
    mode = options.mode
    njobs = options.ncores

    # setup jobs
    with open(options.config, 'r') as f:
        cfg = yaml.full_load(f)

    # group jobs
    files = cfg["files"]
    if options.nfiles > 0:
        files = files[:options.nfiles]
    if mode in ["multiprocessing"] or njobs < 0:
        njobs = len(files)
    grouped_files = [list(x) for x in np.array_split(files, njobs)]
    tasks = [
        {"task": df_slim, "args": (fs, cfg, options.output.format(idx)), "kwargs": {}}
        for idx, fs in enumerate(grouped_files)
    ]

    if mode=="multiprocessing" and options.ncores==0:
        results = pysge.local_submit(tasks)
    elif mode=="multiprocessing":
        results = pysge.mp_submit(tasks, ncores=options.ncores)
    elif mode=="sge":
        results = pysge.sge_submit(
            tasks, "zdb", "_ccsp_temp/", options=options.sge_opts,
            sleep=5, request_resubmission_options=True, return_files=True,
        )
    print("Finished!")

if __name__ == "__main__":
    main()
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/scripts/zdb_slim.py
zdb_slim.py
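Chunking the input files over the workers is delegated to `numpy.array_split`, which produces `njobs` near-equal groups without dropping a remainder. A quick illustration with hypothetical file names:

```python
import numpy as np

files = [f"file_{i}.h5" for i in range(7)]  # hypothetical inputs
grouped = [list(x) for x in np.array_split(files, 3)]
print([len(g) for g in grouped])  # [3, 2, 2] - the remainder is spread evenly
```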
import argparse
import numpy as np
import pandas as pd
import pysge

from zdb.modules.df_process import df_merge, df_open_merge

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("path", help="Path to temp dir")
    parser.add_argument(
        "-m", "--mode", default="multiprocessing", type=str,
        help="Parallelisation: 'multiprocessing', 'sge', 'htcondor'",
    )
    parser.add_argument(
        "-j", "--ncores", default=0, type=int,
        help="Number of cores for 'multiprocessing' jobs",
    )
    parser.add_argument(
        "--sge-opts", default="-q hep.q", type=str,
        help="Options to pass onto qsub",
    )
    parser.add_argument(
        "-o", "--output", default="output.h5:df", type=str,
        help="Output file and HDF key, in the form 'path:table'",
    )
    return parser.parse_args()

def main():
    options = parse_args()
    results = pysge.sge_resume(
        "zdb", options.path, options=options.sge_opts, sleep=5,
        request_resubmission_options=True,
    )

    njobs = options.ncores
    if options.mode in ["multiprocessing"] or options.ncores < 0:
        njobs = len(results)
    grouped_args = [list(x) for x in np.array_split(results, njobs)]
    tasks = [
        {"task": df_open_merge, "args": (args,), "kwargs": {"quiet": True}}
        for args in grouped_args
    ]

    if options.mode=="multiprocessing" and options.ncores==0:
        merge_results = pysge.local_submit(tasks)
        df = pd.DataFrame()
        for result in merge_results:
            df = df_merge(df, result)
    elif options.mode=="multiprocessing":
        merge_results = pysge.mp_submit(tasks, ncores=options.ncores)
        df = pd.DataFrame()
        for result in merge_results:
            df = df_merge(df, result)
    elif options.mode=="sge":
        merge_results = pysge.sge_submit(
            "zdb-merge", "_ccsp_temp/", tasks=tasks, options=options.sge_opts,
            sleep=5, request_resubmission_options=True,
        )
        df = df_open_merge(merge_results)
    else:
        df = pd.DataFrame()
    print(df)

    # options.output must be of the form "path:table" so it can be split
    # into an HDF5 file path and the key within that file.
    path, table = options.output.split(":")
    df.to_hdf(
        path, table, format='table', append=False,
        complevel=9, complib='zlib',
    )

if __name__ == "__main__":
    main()
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/scripts/zdb_resume.py
zdb_resume.py
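The merged result is addressed as a single `path:table` string and written with `pandas.to_hdf`. A minimal round-trip sketch of that convention (hypothetical file and key names; writing HDF5 needs the PyTables package installed):

```python
import pandas as pd

df = pd.DataFrame({"sum_w": [1.0, 2.0], "sum_ww": [1.0, 4.0]})

# "path:table" splits into the HDF5 file path and the key within that file.
path, table = "output.h5:df".split(":")
df.to_hdf(path, key=table, format="table", append=False, complevel=9, complib="zlib")

# Reading it back uses the same two pieces.
print(pd.read_hdf(path, table).equals(df))  # True
```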
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as ptch

def process_mc(df, cfg):
    if df is None:
        return None

    indexes = df.index.names
    df = df.reset_index()
    for old_name, new_name in cfg["rename_samples"].items():
        df.loc[df[cfg["sample"]]==old_name, cfg["sample"]] = new_name
    df = df.groupby(indexes).sum()

    # clip underflow into the first bin and map each value to its bin edge
    binning = np.arange(*cfg["binning"])
    indexes = df.index.names
    df = df.reset_index()
    mask = (df[cfg["binvar"]] < binning[0])
    df.loc[mask, cfg["binvar"]] = 2*binning[0] - binning[1]
    df.loc[~mask, cfg["binvar"]] = binning[binning.searchsorted(df.loc[~mask, cfg["binvar"]], side='right')-1]
    df = df.groupby(indexes).sum()
    return df

def create_mc_counts_variance(df, cfg):
    if df is None:
        return None, None

    # Pivot MC tables on samples
    df_counts = pd.pivot_table(
        df, values=cfg["counts"], index=cfg["binvar"],
        columns=cfg["sample"], aggfunc='sum', fill_value=0.,
    )
    df_variance = pd.pivot_table(
        df, values=cfg["variance"], index=cfg["binvar"],
        columns=cfg["sample"], aggfunc='sum', fill_value=0.,
    )

    # Sort MC by sum across bins
    columns = df_counts.sum().sort_values().index.tolist()
    if cfg["process_at_bottom"] in columns:
        columns.remove(cfg["process_at_bottom"])
        columns = [cfg["process_at_bottom"]]+columns
    df_counts = df_counts.reindex(columns, axis=1)
    return df_counts, df_variance

def process_data(df, cfg):
    if df is None:
        return None

    binning = np.arange(*cfg["binning"])
    indexes = df.index.names
    df = df.reset_index()
    mask = (df[cfg["binvar"]] < binning[0])
    df.loc[mask, cfg["binvar"]] = 2*binning[0] - binning[1]
    df.loc[~mask, cfg["binvar"]] = binning[binning.searchsorted(df.loc[~mask, cfg["binvar"]], side='right')-1]
    df = df.groupby(indexes).sum()
    return df

def draw_data(ax, df, cfg):
    if df is None:
        return

    binning = np.arange(*cfg["binning"])
    binvars = df.index.get_level_values(cfg["binvar"])
    bincents = (binning[1:] + binning[:-1])/2.

    # align the dataframe to the full binning, leaving missing bins empty
    idx_dummy = pd.DataFrame({
        "binvar": binning[:-1],
        cfg["counts"]: [0]*len(binning[:-1]),
        cfg["variance"]: [0]*len(binning[:-1]),
    }).set_index("binvar")
    df = df.reindex_like(idx_dummy)

    if not cfg["blind"]:
        ax.errorbar(
            bincents, df[cfg["counts"]], yerr=np.sqrt(df[cfg["variance"]]),
            fmt='o', ms=4, lw=0.6, capsize=2.5, color='black', label="Data",
        )

def draw_mc_counts(ax, df, cfg):
    if df is None or df.empty or df.sum().sum()==0:
        return

    binning = np.arange(*cfg["binning"])
    binvars = df.index.get_level_values(cfg["binvar"])

    ax.hist(
        binvars, binning, weights=df.sum(axis=1), histtype='step',
        color='black', label=r'',  #label=r'SM Total',
    )

    columns = df.sum().sort_values().index.tolist()
    ax.hist(
        [binvars]*df.shape[1], binning, weights=df.values,
        histtype='stepfilled', stacked=True, log=True,
        color=[cfg["process_colors"][p] for p in columns],
        label=[cfg["process_names"][p] for p in columns],
    )

def draw_legend(axt, axb, df_mc, df_data, cfg):
    handles, labels = axt.get_legend_handles_labels()
    if df_mc is not None:
        fractions = (df_mc.sum(axis=0) / df_mc.sum().sum()).values[::-1]
        fraction_labels = ["{:.3f}".format(f) for f in fractions]

        if df_data is not None and not cfg["blind"]:
            fraction_labels = [
                "{:.3f}".format(df_data[cfg["counts"]].sum().sum()/df_mc.sum().sum())
            ] + fraction_labels
            data_idx = labels.index("Data")
            data_label = labels.pop(data_idx)
            labels = [data_label]+labels
            data_handle = handles.pop(data_idx)
            handles = [data_handle]+handles

        blank_handles = [
            ptch.Rectangle((0,0), 0, 0, fill=False, edgecolor='none', visible=False)
        ]*len(fraction_labels)

        if cfg["legend_off_axes"]:
            box = axt.get_position()
            axt.set_position([box.x0, box.y0, box.width*0.8, box.height])
            axt.legend(
                handles+blank_handles, labels+fraction_labels, ncol=2,
                bbox_to_anchor=(1, 1), handleheight=1.6, labelspacing=0.05,
                columnspacing=-2,
            )
            box = axb.get_position()
            axb.set_position([box.x0, box.y0, box.width*0.8, box.height])
        else:
            axt.legend(
                handles+blank_handles, labels+fraction_labels, ncol=2,
                handleheight=0.8, labelspacing=0.05, columnspacing=-2,
            )
    else:
        handles, labels = axt.get_legend_handles_labels()
        if cfg["legend_off_axes"]:
            box = axt.get_position()
            axt.set_position([box.x0, box.y0, box.width*0.8, box.height])
            axt.legend(handles, labels, bbox_to_anchor=(1, 1))
            box = axb.get_position()
            axb.set_position([box.x0, box.y0, box.width*0.8, box.height])
        else:
            axt.legend(handles, labels)

    handles, labels = axb.get_legend_handles_labels()
    if cfg["legend_off_axes"]:
        axb.legend(handles, labels, bbox_to_anchor=(1, 1))
    else:
        axb.legend(handles, labels)

def draw_cms_header(ax):
    ax.text(
        0, 1, r'$\mathbf{CMS}\ \mathit{Preliminary}$',
        ha='left', va='bottom', transform=ax.transAxes, fontsize=12,
    )
    ax.text(
        1, 1, r'$35.9\ \mathrm{fb}^{-1}(13\ \mathrm{TeV})$',
        ha='right', va='bottom', transform=ax.transAxes, fontsize=12,
    )

def draw_ratio(ax, df, cfg):
    if df is None or cfg["blind"]:
        return

    binning = np.arange(*cfg["binning"])
    binvars = df.index.get_level_values(cfg["binvar"])
    bincents = (binning[1:] + binning[:-1])/2.

    idx_dummy = pd.DataFrame({
        "binvar": binning[:-1],
        "ratio": [0]*len(binning[:-1]),
        "data_err": [0]*len(binning[:-1]),
        "mc_err": [0]*len(binning[:-1]),
    }).set_index("binvar")
    df = df.reindex_like(idx_dummy)

    ax.errorbar(
        bincents, df["ratio"], yerr=df["data_err"], fmt='o', ms=4, lw=0.6,
        capsize=2.5, color='black', label='',
    )
    ax.fill_between(
        binning, list(1.-df["mc_err"])+[1.], list(1.+df["mc_err"])+[1.],
        step='post', color='#aaaaaa', label='MC stat. unc.',
    )

def draw_data_mc(df_data, df_mc, cfg):
    plt.style.use('cms')

    df_mc = process_mc(df_mc, cfg)
    df_mc_counts, df_mc_variance = create_mc_counts_variance(df_mc, cfg)
    df_data = process_data(df_data, cfg)

    fig, (axt, axb) = plt.subplots(
        figsize=(4.8, 6), nrows=2, ncols=1,
        sharex='col', sharey=False,
        gridspec_kw={'height_ratios': [2.5, 1], 'wspace': 0.1, 'hspace': 0.1},
    )

    draw_data(axt, df_data, cfg)
    draw_mc_counts(axt, df_mc_counts, cfg)

    if df_mc is not None and df_data is not None:
        df_ratio = pd.DataFrame({
            "ratio": df_data["sum_w"] / df_mc_counts.sum(axis=1),
            "data_err": np.sqrt(df_data["sum_ww"]) / df_mc_counts.sum(axis=1),
            "mc_err": np.sqrt(df_mc_variance.sum(axis=1)) / df_mc_counts.sum(axis=1),
        }, index=df_mc_counts.index)
    else:
        df_ratio = None
    draw_ratio(axb, df_ratio, cfg)

    draw_cms_header(axt)
    draw_legend(axt, axb, df_mc_counts, df_data, cfg)

    ylims = axt.get_ylim()
    binning = np.arange(*cfg["binning"])
    axt.set_xlim(binning[0], binning[-1])
    axt.set_ylim(max(ylims[0], 0.5), ylims[1])
    axt.set_ylabel("Number of events", fontsize=12)

    ylims = axb.get_ylim()
    axb.set_ylim(max(ylims[0], 0.5), min(ylims[1], 1.5))
    axb.set_xlabel(cfg["label"], fontsize=12)
    axb.set_ylabel("Data/Simulation", fontsize=12)
    axb.axhline(1., lw=0.8, ls='--', color='black')

    #print("Creating {}".format(cfg["outpath"]))
    fig.align_ylabels([axt, axb])
    fig.savefig(cfg["outpath"], format="pdf", bbox_inches="tight")
    plt.close(fig)
    return True
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/drawing/draw_data_mc.py
draw_data_mc.py
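`draw_data_mc` never documents its `cfg` argument in one place. Collecting the keys the function actually reads gives a sketch like the following (all values are illustrative, not package defaults):

```python
# Hypothetical config for draw_data_mc, listing the keys the code reads.
cfg = {
    "sample": "parent",                  # column holding the process name
    "rename_samples": {"ZJetsToNuNu": "Znunu"},
    "binvar": "METnoX_pt",               # column holding the binned variable
    "binning": [200., 1000., 50.],       # np.arange arguments: start, stop, step
    "counts": "sum_w",                   # weighted-count column
    "variance": "sum_ww",                # variance column
    "process_at_bottom": "QCD",          # process drawn first in the stack
    "process_colors": {"Znunu": "#1f77b4", "QCD": "#7f7f7f"},
    "process_names": {"Znunu": r"$Z\to\nu\nu$", "QCD": "QCD multijet"},
    "blind": False,                      # hide the data points if True
    "legend_off_axes": True,             # place the legend outside the axes
    "label": r"$p_{T}^{miss}$ (GeV)",    # x-axis label of the ratio panel
    "outpath": "metnox_pt.pdf",          # where the figure is saved
}
```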
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def process_dataframe(df, cfg):
    indexes = df.index.names
    df = df.reset_index()
    for old_name, new_name in cfg["rename_samples"].items():
        df.loc[df[cfg["sample"]]==old_name, cfg["sample"]] = new_name
    df = df.groupby(indexes).sum()

    binning = np.arange(*cfg["binning"])
    indexes = df.index.names
    df = df.reset_index()
    mask = (df[cfg["binvar"]] < binning[0])
    df.loc[mask, cfg["binvar"]] = 2*binning[0] - binning[1]
    df.loc[~mask, cfg["binvar"]] = binning[binning.searchsorted(df.loc[~mask, cfg["binvar"]], side='right')-1]
    df = df.groupby(indexes).sum()
    return df

def create_mc_counts_variance(df, cfg):
    # Pivot MC tables on samples
    df_counts = pd.pivot_table(
        df, values=cfg["counts"], index=cfg["binvar"],
        columns=[cfg["sample"], cfg["varname"]], aggfunc='sum', fill_value=0.,
    )
    df_variance = pd.pivot_table(
        df, values=cfg["variance"], index=cfg["binvar"],
        columns=[cfg["sample"], cfg["varname"]], aggfunc='sum', fill_value=0.,
    )

    # relabel the variation level as Up/Down/Nominal
    new_cols = []
    for c in df_counts.columns.levels[1]:
        new_cols.append(
            "Up" if c.endswith("Up") else
            "Down" if c.endswith("Down") else
            "Nominal"
        )
    df_counts.columns = df_counts.columns.set_levels([df_counts.columns.levels[0], new_cols])
    df_variance.columns = df_variance.columns.set_levels([df_variance.columns.levels[0], new_cols])
    return df_counts, df_variance

def draw_ratios(ax, df, cfg):
    binning = np.arange(*cfg["binning"])
    binvars = df.index.get_level_values(cfg["binvar"]).tolist()
    labels = [c[0] for c in df.columns.to_flat_index()]
    ax.hist(
        [binvars]*df.shape[1], bins=binning, weights=df.values, histtype='step',
        color=[cfg["process_colors"].get(c, c) for c in labels],
        label=[cfg["process_names"].get(labels[idx], labels[idx]) if idx%2==0 else "" for idx in range(len(labels))],
    )

def draw_legend(ax):
    handles, labels = ax.get_legend_handles_labels()
    return ax.legend(handles, labels)

def draw_cms_header(ax):
    ax.text(
        0, 1, r'$\mathbf{CMS}\ \mathit{Preliminary}$',
        ha='left', va='bottom', transform=ax.transAxes, fontsize=12,
    )
    ax.text(
        1, 1, r'$35.9\ \mathrm{fb}^{-1}(13\ \mathrm{TeV})$',
        ha='right', va='bottom', transform=ax.transAxes, fontsize=12,
    )

def draw_variation(df, cfg):
    plt.style.use('cms')

    df = process_dataframe(df, cfg)
    df_counts, df_variance = create_mc_counts_variance(df, cfg)

    dfs = []
    for c in df_counts.columns.levels[0]:
        dfs.append(
            df_counts.loc[:, (c, ("Up", "Down"))]
            .divide(df_counts.loc[:, (c, "Nominal")], axis='index')
            - 1.
        )
    df_ratio = (
        pd.concat(dfs, axis='columns')
        .replace([np.inf, -np.inf], np.nan)
        .fillna(0.)
    )

    fig, ax = plt.subplots(figsize=(5.4, 4.8))

    draw_ratios(ax, df_ratio, cfg)
    draw_cms_header(ax)
    draw_legend(ax)

    ylim = min(0.5, max(map(abs, ax.get_ylim())))
    ax.set_ylim(-ylim, +ylim)
    # the plotted quantity is (varied/nominal - 1), so the reference line
    # sits at zero, in the middle of the symmetric y-range
    ax.axhline(0., ls='--', lw=0.8, color='black')

    print("Creating {}".format(cfg["outpath"]))
    fig.savefig(cfg["outpath"], format="pdf", bbox_inches="tight")
    plt.close(fig)
    return True
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/drawing/draw_variation.py
draw_variation.py
import os
import shlex
import subprocess as sp

from dominate import document
import dominate.tags as tags
from tqdm.auto import tqdm

style = ("""
#myInput {
  background-image: url('/css/searchicon.png'); /* Add a search icon to input */
  background-position: 10px 12px; /* Position the search icon */
  background-repeat: no-repeat; /* Do not repeat the icon image */
  width: 100%; /* Full-width */
  font-size: 16px; /* Increase font-size */
  padding: 12px 20px 12px 40px; /* Add some padding */
  border: 1px solid #ddd; /* Add a grey border */
  margin-bottom: 12px; /* Add some space below the input */
}

#myUL {
  /* Remove default list styling */
  list-style-type: none;
  padding: 0;
  margin: 0;
}

#myUL li a {
  border: 1px solid #ddd; /* Add a border to all links */
  margin-top: -1px; /* Prevent double borders */
  background-color: #f6f6f6; /* Grey background color */
  padding: 12px; /* Add some padding */
  text-decoration: none; /* Remove default text underline */
  font-size: 18px; /* Increase the font-size */
  color: black; /* Add a black text color */
  display: block; /* Make it into a block element to fill the whole list */
}

#myUL li a:hover:not(.header) {
  background-color: #eee; /* Add a hover effect to all links, except for headers */
}
""")

style2 = ("""
.row {
  display: flex;
}

.column {
  flex: 33.33%;
  padding: 5px;
}
""")

def runcommand(cmd):
    p = sp.run(shlex.split(cmd), stdout=sp.PIPE, stderr=sp.PIPE)
    return p.stdout, p.stderr

def generate_html(dirname, outdir, title="images"):
    if not os.path.exists(outdir):
        os.makedirs(outdir)

    # top-level index listing the categories
    doc = document(title=title)
    with doc.head:
        tags.style(style)
    with doc:
        with tags.ul(id="myUL"):
            for category in os.listdir(dirname):
                tags.li(tags.a(category, href=category))
    with open(os.path.join(outdir, "index.html"), 'w') as f:
        f.write(doc.render())

    pbar1 = tqdm(os.listdir(dirname), dynamic_ncols=False)
    for category in pbar1:
        pbar1.set_description(category)
        if not os.path.exists(os.path.join(outdir, category)):
            os.makedirs(os.path.join(outdir, category))

        # per-category index listing the subcategories
        subdoc = document(title=category)
        with subdoc.head:
            tags.style(style)
        with subdoc:
            tags.a("back", href="..")
            with tags.ul(id="myUL"):
                for subcat in os.listdir(os.path.join(dirname, category)):
                    tags.li(tags.a(subcat, href=subcat))
        with open(os.path.join(outdir, category, "index.html"), 'w') as f:
            f.write(subdoc.render())

        pbar2 = tqdm(os.listdir(os.path.join(dirname, category)), dynamic_ncols=False)
        for subcat in pbar2:
            pbar2.set_description(subcat)
            if not os.path.exists(os.path.join(outdir, category, subcat)):
                os.makedirs(os.path.join(outdir, category, subcat))

            ssubdoc = document(title=subcat)
            with ssubdoc.head:
                tags.style(style2)

            # convert each PDF to PNG with ImageMagick's convert
            imgs = []
            pbar3 = tqdm(os.listdir(os.path.join(dirname, category, subcat)), dynamic_ncols=False)
            for img in pbar3:
                pbar3.set_description(img)
                imgpng = img.replace(".pdf", ".png")
                imgs.append(imgpng)
                runcommand(
                    "convert -density 150 {} -quality 100 {}".format(
                        os.path.join(dirname, category, subcat, img),
                        os.path.join(outdir, category, subcat, imgpng),
                    )
                )

            with ssubdoc:
                tags.a("back", href="..")
                ncols = 3
                for idx in range(0, len(imgs), ncols):
                    with tags.div(_class="row"):
                        # end of this row of images, never past the end of the list
                        final = min(idx+ncols, len(imgs))
                        for sidx in range(idx, final):
                            with tags.div(_class="column"):
                                tags.img(
                                    src=imgs[sidx],
                                    alt=os.path.splitext(imgs[sidx])[0],
                                    style="height:500px",
                                )
            with open(os.path.join(outdir, category, subcat, "index.html"), 'w') as f:
                f.write(ssubdoc.render())
zdb-analysis
/zdb_analysis-0.1.7-py3-none-any.whl/zdb/drawing/generate_html.py
generate_html.py
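`generate_html` walks a two-level directory of PDF plots (`<dirname>/<category>/<subcat>/*.pdf`) and needs ImageMagick's `convert` on the PATH for the PDF-to-PNG step. A minimal invocation, assuming the wheel's layout maps to this import path and using hypothetical directories:

```python
from zdb.drawing.generate_html import generate_html

# Converts every PDF under plots/<category>/<subcat>/ to PNG and writes
# browsable index.html pages into html/, mirroring the same structure.
generate_html("plots", "html", title="analysis plots")
```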
zdbc
----

Python functions:

- ls
- clients
- search

Python classes:

- entry

License
-------

zdbc is a python script collection that manages a Zscheile DataBase.
Copyright (C) 2016 Erik Kai Alain Zscheile

This program is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation, either version 3 of the License, or (at your option)
any later version.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
more details.

You should have received a copy of the GNU General Public License along
with this program. If not, see <http://www.gnu.org/licenses/>.
zdbc
/zdbc-0.2.5.tar.gz/zdbc-0.2.5/README.rst
README.rst
========
zdbpydra
========

``zdbpydra`` is a Python package and command line utility that provides
access to JSON-LD data (with PICA+ data embedded) from the German Union
Catalogue of Serials (ZDB) via its Hydra-based API (beta).

Installation
============

... via PyPI
~~~~~~~~~~~~

.. code-block:: bash

    pip install zdbpydra

Usage Examples
==============

Command Line
~~~~~~~~~~~~

.. code-block:: shell

    # fetch metadata of serial title
    zdbpydra --id "2736054-4"
    # fetch metadata of serial title (pica only)
    zdbpydra --id "2736054-4" --pica
    # query metadata of serial titles (cql-based)
    zdbpydra --query "psg=ZDB-1-CPO"

.. code-block:: shell

    # print help message
    zdbpydra --help

Help Message
------------

::

    usage: zdbpydra [-h] [--id ID] [--query QUERY] [--scroll [SCROLL]]
                    [--stream [STREAM]] [--pica [PICA]] [--pretty [PRETTY]]

    Fetch JSON-LD data (with PICA+ data embedded) from the
    German Union Catalogue of Serials (ZDB)

    optional arguments:
      -h, --help         show this help message and exit
      --id ID            id of serial title (default: None)
      --query QUERY      cql-based search query (default: None)
      --scroll [SCROLL]  scroll result set (default: False)
      --stream [STREAM]  stream result set (default: False)
      --pica [PICA]      fetch pica data only (default: False)
      --pretty [PRETTY]  pretty print output (default: False)

Interpreter
~~~~~~~~~~~

.. code-block:: python

    import zdbpydra

    # fetch metadata of serial title
    serial = zdbpydra.title("2736054-4")
    # fetch metadata of serial title (pica only)
    serial_pica = zdbpydra.title("2736054-4", pica=True)
    # fetch result page for given query
    result_page = zdbpydra.search("psg=ZDB-1-CPO")
    # fetch all result pages for given query
    result_page_set = zdbpydra.scroll("psg=ZDB-1-CPO")
    # iterate serial titles found for given query
    for serial in zdbpydra.stream("psg=ZDB-1-CPO"):
        print(serial.title)

Background
==========

See `Hydra: Hypermedia-Driven Web APIs <https://github.com/lanthaler/Hydra>`_
by `Markus Lanthaler <https://github.com/lanthaler>`_ for more information on
Hydra APIs in general. Have a look at the
`API documentation <https://zeitschriftendatenbank.de/services/schnittstellen/json-api>`_
and `CQL documentation <https://zeitschriftendatenbank.de/services/schnittstellen/hilfe-zur-suche>`_
(both in german) for more information on using the ZDB JSON interface. For
details regarding the LD schema, see the
`local context <https://zeitschriftendatenbank.de/api/context/zdb.jsonld>`_ file.
Information on the PICA-based ZDB-Format can be found in the corresponding
`cataloguing documentation <https://zeitschriftendatenbank.de/erschliessung/zdb-format>`_
or in the `PICA+/PICA3 concordance <https://zeitschriftendatenbank.github.io/pica3plus/>`_
(both in german).

Usage Terms
===========

ZDB metadata
~~~~~~~~~~~~

All metadata in the German Union Catalogue of Serials is available free of
charge for general use under the Creative Commons Zero 1.0 (CC0 1.0) license.
Most of the holding data in the ZDB is also freely available. A corresponding
tag is incorporated into the data record itself.
(`Source <https://www.dnb.de/EN/zdb>`_)
zdbpydra
/zdbpydra-0.3.4.tar.gz/zdbpydra-0.3.4/README.rst
README.rst
# zhy-dash-component zhy-dash-component is a Dash component library. Get started with: 1. Install Dash and its dependencies: https://dash.plot.ly/installation 2. Run `python usage.py` 3. Visit http://localhost:8050 in your web browser ## Contributing See [CONTRIBUTING.md](./CONTRIBUTING.md) ### Install dependencies If you have selected install_dependencies during the prompt, you can skip this part. 1. Install npm packages ``` $ npm install ``` 2. Create a virtual env and activate. ``` $ virtualenv venv $ . venv/bin/activate ``` _Note: venv\Scripts\activate for windows_ 3. Install python packages required to build components. ``` $ pip install -r requirements.txt ``` 4. Install the python packages for testing (optional) ``` $ pip install -r tests/requirements.txt ``` ### Write your component code in `src/lib/components/Zdc.react.js`. - The demo app is in `src/demo` and you will import your example component code into your demo app. - Test your code in a Python environment: 1. Build your code ``` $ npm run build ``` 2. Run and modify the `usage.py` sample dash app: ``` $ python usage.py ``` - Write tests for your component. - A sample test is available in `tests/test_usage.py`, it will load `usage.py` and you can then automate interactions with selenium. - Run the tests with `$ pytest tests`. - The Dash team uses these types of integration tests extensively. Browse the Dash component code on GitHub for more examples of testing (e.g. https://github.com/plotly/dash-core-components) - Add custom styles to your component by putting your custom CSS files into your distribution folder (`zdc`). - Make sure that they are referenced in `MANIFEST.in` so that they get properly included when you're ready to publish your component. - Make sure the stylesheets are added to the `_css_dist` dict in `zdc/__init__.py` so dash will serve them automatically when the component suite is requested. - [Review your code](./review_checklist.md) ### Create a production build and publish: 1. Build your code: ``` $ npm run build ``` 2. Create a Python tarball ``` $ python setup.py sdist ``` This distribution tarball will get generated in the `dist/` folder 3. Test your tarball by copying it into a new environment and installing it locally: ``` $ pip install zdc-0.0.1.tar.gz ``` 4. If it works, then you can publish the component to NPM and PyPI: 1. Publish on PyPI ``` $ twine upload dist/* ``` 2. Cleanup the dist folder (optional) ``` $ rm -rf dist ``` 3. Publish on NPM (Optional if chosen False in `publish_on_npm`) ``` $ npm publish ``` _Publishing your component to NPM will make the JavaScript bundles available on the unpkg CDN. By default, Dash serves the component library's CSS and JS locally, but if you choose to publish the package to NPM you can set `serve_locally` to `False` and you may see faster load times._ 5. Share your component with the community! https://community.plot.ly/c/dash 1. Publish this repository to GitHub 2. Tag your GitHub repository with the plotly-dash tag so that it appears here: https://github.com/topics/plotly-dash 3. Create a post in the Dash community forum: https://community.plot.ly/c/dash
zdc
/zdc-0.0.2.tar.gz/zdc-0.0.2/README.md
README.md
MIT License Copyright (c) 2017 Gustavo Ramos Rehermann Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
zdcode
/zdcode-2.13.7.tar.gz/zdcode-2.13.7/LICENSE.md
LICENSE.md
# ZDCode 2.0 "The language that compiles to ye olde DECORATE!" ZDCode is a project that aims to make writing DECORATE _better_; that is, to expand the possibilities not only of what the code itself can do, but also of how it can be written, or concerted with other ZDCode projects and authors, or distributed to modders and players alike. ZDCode is an attempt to make modding for ZDoom-based platforms, like Zandronum, much more similar to the ecosystem of an actual language, like a C linker, or a JavaScript web bundler. Take this example: ```c #UNDEFINE ANNOYING class RunZombie inherits ZombieMan replaces ZombieMan #2055 { set Gravity to 0.4; // high up... set Speed to 0; is NOBLOCKMONST; set Speed to 0; label See { inject SeeCheck; POSS AB 5 A_Recoil(-0.7); inject SeeCheck; POSS AB 4 A_Recoil(-0.7); inject SeeCheck; POSS ABCD 3 A_Recoil(-0.7); inject SeeCheck; goto RunLoop; }; macro SeeCheck { TNT1 A 0 A_Chase; POSS A 0 A_FaceTarget(); }; macro ZombieJump { if ( health > 5 ) return; while ( z == floorz ) { POSS A 5 [Bright]; POSS A 11 ThrustThingZ(0, 30, 0, 1); }; #ifndef ANNOYING while ( z > floorz ) { POSS AB 4; }; POSS G 9; POSS B 22; #endif POSS AB 2 A_Chase; }; label RunLoop { x 2 { POSS ABCD 2 A_Recoil(-0.7); inject SeeCheck; }; inject ZombieJump; loop; }; } ``` This is what happens when that beauty goes through **ZDCode 2.0**: ``` Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_0 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_1 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_2 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_3 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_4 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_5 : Inventory {Inventory.MaxAmount 1} Actor _Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_6 : Inventory {Inventory.MaxAmount 1} Actor RunZombie : ZombieMan replaces ZombieMan 2055 { Gravity 0.4 Speed 0 Speed 0 +NOBLOCKMONST States { F_SeeCheck: TNT1 A 0 A_Chase POSS A 0 A_FaceTarget TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_0", 1, "_CLabel0") TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_1", 1, "_CLabel1") TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_2", 1, "_CLabel2") TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_3", 1, "_CLabel3") TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_4", 1, "_CLabel4") TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_5", 1, "_CLabel5") TNT1 A -1 F_ZombieJump: TNT1 A 0 A_JumpIf(!(health > 5), 2) TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_6", 1, "_CLabel6") Stop TNT1 A 0 _WhileBlock0: TNT1 A 0 A_JumpIf(!(z == floorz), 3) POSS A 5 Bright POSS A 11 ThrustThingZ(0, 30, 0, 1) Goto _WhileBlock0 TNT1 A 0 _WhileBlock1: TNT1 A 0 A_JumpIf(!(z > floorz), 3) POSS A 4 POSS B 4 Goto _WhileBlock1 TNT1 A 0 POSS G 9 POSS B 22 POSS A 2 A_Chase POSS B 2 A_Chase TNT1 A 0 A_JumpIfInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_6", 1, "_CLabel6") TNT1 A -1 See: TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_0") Goto F_SeeCheck _CLabel0: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_0") POSS A 5 A_Recoil(-0.7) POSS B 5 A_Recoil(-0.7) TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_1") Goto F_SeeCheck _CLabel1: TNT1 A 0 
A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_1") POSS A 4 A_Recoil(-0.7) POSS B 4 A_Recoil(-0.7) TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_2") Goto F_SeeCheck _CLabel2: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_2") POSS A 3 A_Recoil(-0.7) POSS B 3 A_Recoil(-0.7) POSS C 3 A_Recoil(-0.7) POSS D 3 A_Recoil(-0.7) TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_3") Goto F_SeeCheck _CLabel3: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_3") goto RunLoop RunLoop: POSS A 2 A_Recoil(-0.7) POSS B 2 A_Recoil(-0.7) POSS C 2 A_Recoil(-0.7) POSS D 2 A_Recoil(-0.7) TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_4") Goto F_SeeCheck _CLabel4: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_4") POSS A 2 A_Recoil(-0.7) POSS B 2 A_Recoil(-0.7) POSS C 2 A_Recoil(-0.7) POSS D 2 A_Recoil(-0.7) TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_5") Goto F_SeeCheck _CLabel5: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_5") TNT1 A 0 A_GiveInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_6") Goto F_ZombieJump _CLabel6: TNT1 A 0 A_TakeInventory("_Call_NPaLK2i4Etrk1DERaszVFVbnG6JiT6KwJHX_6") Goto RunLoop } } ``` While the compile output does look unreadable, expecting readability from it is akin to expecting readability from a binary executable file. This is the principle of using ZDCode instead of DECORATE, after all – a more readable, concise, and organized, way to write DECORATE. Just slap the output in your WAD and... [look at what happens!](https://i.imgur.com/mr5wJ85.gifv) # Design Concepts ### Bundling Similar to web bundling technologies, such as Browserify, ZDCode tackles the issue of incompatibilities among different generated DECORATE files by instead focusing on _bundling_ the ZDCode input into a single DECORATE file. The compiled output of ZDCode, much like a compiled C program after being linked, is treated as an indivisible, integral, and immutable chunk of instructions, to be read and interpreted exclusively by ZDoom. Instead of attempting to merge separate DECORATE outputs, it is much easier, and in fact more plausible, to link other ZDCode projects, libraries, and source code in general, akin to libraries in interpreted languages, like Python. This is also how web bundling technologies operate. ### Non-[Imperativity](https://en.wikipedia.org/wiki/Imperative_programming) and other lower-level limitations Unlike other programming languages, ZDCode has a very specific purpose. C is a portable way of writing a relatively abstract representation of machine instructions (e.g. `myNum += 1` instead of `ADD EAX, 1`), and interpreted languages are instructions for an interpreter's virtual machine, whose computations do still, in the end, reflect machine code. ZDCode does not have the same capabilities of using any arbitrary computer resources, because DECORATE itself doesn't. Rather, DECORATE is a mere form of expressing "classes" (which are only templates for objects in ZDoom-based source ports, rather than actual object-oriented programming constructs), which in turn guide "actors" (fancy name for game objects, instead of being strictly actor model constructs), including their actions, properties, and states (frames). For this reason, ZDCode is not concerned with concepts of actual imperative programming, like actual `for` loops that actually increment a discrete integer variable towards an arbitrary limit. 
Rather, it tries to make it easier, simpler, more programming-friendly and more systematic to write DECORATE behaviour for ZDoom, without relying on ZScript and requiring modern GZDoom versions for mere language support reasons. Zandronum does not support ZScript.

DECORATE itself does not support sharing behaviour between separate actor classes; rather, it supports using actor classes from other classes, but only in indirect ways, such as spawning other actors. It has very limited interaction among different actors, which are more centered around basic concepts that every physical actor has (such as collision, movement, and to an extent AI behaviour), without allowing actual actor-model-esque messages to be passed directly between those game objects.

At this point, writing DECORATE in any way that attempts to concert multiple objects becomes contrived and relies on mostly unrelated behaviour intended for very different things; trying to send a message between two actors is similar to using a bottle, usually a small liquid container, as a means of propagating a message across a river, or a lake. It does work, but it relies on the buoyancy and imperviousness of the bottle. It's much better to use something designed for this; in this example, it could be radio waves, or an actual wire.

Unfortunately, ZDCode is not able to overcome these limitations at run-time. It still cannot have imp Bob tell imp Joe that the marine is coming, and that they should ambush from different hiding spots. What it _can_ do, however, is make the _writing_ part of the code a lot simpler, by providing tools to exploit behaviours.

### Distribution

The concept of _distribution_ in ZDCode is intimately related to that of bundling, specifically because it concerns the ease of availability of libraries to the programmer, and also the ease of distribution and dependency management.

At one point, it was planned to create a rather standardized format for fetching ZDCode packages using indexes stored in Git. However, this has been wholly deemed unnecessary. Instead, the current roadmap is to add simple support via transports like HTTP, Git and FTP, and to allow other transport implementations as well, using a terminal-friendly URI-like format. This is very reminiscent of the syntax Go package management uses, e.g. `go get github.com/someone/something`.

This deals with two issues at the same time: it ensures both that players can easily retrieve mods (and updates thereof) directly from the Internet (automatically bundling them if necessary), whilst at the same time enabling ZDCode mod authors to both obtain and share code more efficiently, both for libraries and finished mods.

# Programming Constructs

Welcome to the main attractions!

## Code Blocking

This may seem like a primitive part of a programming language, but DECORATE uses states and labels, instead of code blocking. It's more akin to Assembly (or oldschool C), with jumps rather than groups of statements.

ZDCode allows the programmer to group their state code into blocks, useful for statements like repetition (`x 5 { A_Print ("I'm said five times!") }`), control flow, or even mere readability.

## Macros

Macros are a way to inject common behaviour into different locations, as states that are used multiple times.

```
// Global macros!
macro ThrustUpward(amount) {
    TNT1 A 0 ThrustThingZ(0, amount, 0, 1); // My, DECORATE has some convoluted things sometimes.
}

class YipYipHurrahMan extends ZombieMan {
    // Class-local macros!
    macro Yip {
        inject ThrustUpward(20);
    };

    macro Hurrah {
        inject ThrustUpward(90);
    };

    label Spawn {
        POSS A 12; // '12' means short yips
        inject Yip;
        POSS B 12;
        inject Yip;
        POSS D 50; // '50' means long hurrahs
        inject Hurrah;
        loop;
    };
}
```

They are simple, because the states in them are simply copied at compile time, instead of called at runtime. (Functions are legacy, unreliable, and now deprecated.)

They support static parameters, as well. They can't change at runtime, but they do make life easier too.

## Conditions

In contrast to DECORATE's highly manual and tediously finicky (and almost Assembly-like) state jumps, ZDCode boasts a much nicer format, one that does not require offset maintenance in the source code, nor separate state labels, and that is easier to integrate with existing code, extend with new code, or nest with more conditions and other constructs.

```
class RedBlue {
    label Spawn {
        if (z > floorz + 64) {
            // Red sprite, normal gravity (except half, but you know).
            RDBL R 2 A_SetGravity(0.5);
        };
        else {
            // Blue sprite, reverse gravity.
            RDBL B 2 A_SetGravity(-0.5);
        };
        loop;
    };
}
```

## Preprocessor

Yes, there is a C-like preprocessor in ZDCode! It has the usual `#DEFINE`, `#IFDEF`, `#IFNDEF`, and the fundamental part of using any library - `#INCLUDE`. Among other things, too.

```
class LocalizedZombie extends Zombieman replaces Zombieman {
    // This is a merely illustrative example.
    // For actual localization, please just use ZDoom's LANGUAGE lump instead.

    // Apart from that, it demonstrates the effectiveness of the otherwise
    // simple and rudimentary preprocessor ZDCode uses.

    macro SeeMessage(message) {
        invisi A_Print(message); // invisi == TNT1, also duration defaults to 0
        // Any better ideas for a message printing macro? I'm all ears!
    };

    label See {
        #ifdef LANG
            #ifeq LANG EN_US
                inject SeeMessage("Hey, you!");
            #else
                #ifeq LANG PT_BR
                    inject SeeMessage("Ei, você!");
                #else
                    // I know, not very pretty. Python gets away with it, though!
                    #ifeq LANG DE_DE
                        inject SeeMessage("Achtung! Halt!");
                    #else
                        inject SeeMessage("/!\"); // Attention?
                    #endif
                #endif
            #endif
        #endif

        goto Super.See;
    };
}
```

## Templates

It was already possible to have a class inherit another. It is very simple DECORATE behaviour that ZDCode of course permits as well, although with a bit cleaner syntax. In addition to that, ZDCode allows instantiating multiple classes that differ slightly from a base _template_.

The difference between this and inheritance is that, rather than happening at load time (where ZDoom reads the DECORATE), it happens at compile-time, which means many cool tricks can be done by using this alongside other ZDCode features.

```
class<size> ZombieSizes extends Zombieman {
    set Scale to size;
};

derive SmallZombie as ZombieSizes::(0.5);
derive BiggerZombie as ZombieSizes::(1.3);
derive SuperZombie as ZombieSizes::(2.2);
```

Derivations can optionally include extra definitions, including the ability to 'implement' **abstract macros**, **abstract labels** and define the values of **abstract arrays** that the template may specify.

```
class<> FiveNumbers {
    abstract array numbers;
    abstract macro PrintNumber(n);

    label Spawn {
        // for ranges are not supported yet
        inject PrintNumber(numbers[0]);
        inject PrintNumber(numbers[1]);
        inject PrintNumber(numbers[2]);
        inject PrintNumber(numbers[3]);
        inject PrintNumber(numbers[4]);
        stop;
    };
}

derive PlainFibonacci as FiveNumbers::() {
    macro PrintNumber(n) {
        TNT1 A 0 A_Log(n);
    };

    array numbers { 1, 1, 2, 3, 5 };
}
```

It is recommended that classes that derive a template also inherit a base class, even if for purely symbolic reasons, since it helps organize the code a bit, and so classes that derive from such templates can be listed properly by ZDoom's `dumpclasses` command (using the inheritance base class as an argument), among other things.

## Groups and Group Iteration

A _group_ is a compile-time sequence of literals that can be iterated using a `for in` loop. Class syntax allows adding the class' name to a group. More importantly, templates can also specify a group, but rather than the template itself, the names of all derived classes are added to the group when the code is parsed by the ZDCode compiler.

```
group fruits;

class<name> Fruit group fruits {
    macro PrintMe() {
        TNT1 A 0 A_Print(name);
    };
}

derive Orange as Fruit::("Orange");
derive Banana as Fruit::("Banana");
derive Lemon as Fruit::("Lemon");
derive Grapes as Fruit::("Grapes");

class FruitPrinter {
    label Spawn {
        // 'index ___' is optional, but retrieves
        // the # of the iteration, like the i in a
        // C 'for (i = 0; i < n; i++)' loop, and can be
        // quite useful
        for fruitname index ind in fruits {
            // Log the index and fruit classname, like some sort of debug example
            TNT1 A 0 A_Log("Fruit #, Fruit Name");
            TNT1 A 0 A_Log(ind);
            TNT1 A 0 A_Log(fruitname);

            // Call the derived class macro from FruitPrinter. Yes.
            // The 'from' keyword means that the macro is from a
            // different class. The at sign means that the class
            // name is taken from the parameter 'fruitname', rather
            // than it being the name of the class itself. It can
            // also be used in the macro name part for interesting
            // tricks, similar to C's function pointer syntax.
            from @fruitname inject PrintMe();
        };
        stop;
    };
}
```

## State Modifiers

Another powerful feature of ZDCode is the ability to modify states at runtime, where each modifier applies certain effects based on certain selectors. Each modifier is actually a list of clauses, where a clause pairs a selector with one or more effects.

```
class DiscoZombie extends Zombieman {
    mod DiscoLight {
        (sprite(TNT1)) suffix TNT1 A 0 A_PlaySound("disco/yeah"); // now we can use TNT1 frames to play some weird sound!
        (all) +flag Bright; // Always bright, all the time!
    };

    label Spawn {
        apply DiscoLight {
            POSS AB 4;
        };
        loop;
    };

    // You can also apply the modifier to other state labels (like See or Missile)
    // using the apply syntax, but you get the gist.
}
```
zdcode
/zdcode-2.13.7.tar.gz/zdcode-2.13.7/README.md
README.md
# Zdd Algorithms

Zdd algorithms is a Python library that implements the zdd algorithms that are described on the [wikipedia page](https://en.wikipedia.org/wiki/Zero-suppressed_decision_diagram), with some additional functions for creating a zdd from a set, a set from a zdd, and a function to create an image of the zdd.

## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install zdd_algorithms.

```bash
pip install zdd-algorithms
```

## Zero-suppressed decision diagram

Zdds are a special kind of binary decision diagram that represents a set of sets.

<p align="center">
  <img src="https://raw.githubusercontent.com/Thilo-J/zdd_algorithms/e4185fbbc28a4c59e93c847044b9b9964523dd19/13_23_12.png" alt="zdd"/>
</p>

This zdd represents the set {{1,3},{2,3},{1,2}} \
Every node has exactly 2 outgoing edges, LO and HI. The LO edge is usually represented by a dotted line and the HI edge with a solid line. The easiest way to get the set from a visual zdd by hand is to take every path from the root node to the {ø} node (⊤ is also often used as a label for this node). \
Every path represents a set, and all paths combined are the set of sets that the zdd represents. In this example there are 3 paths from the root node to {ø}. \
If a path leaves a node via its LO edge, that node's value is ignored. All the other values together represent a set. \
3 → 2 → {ø} This path represents the set {3,2} \
3 ⇢ 2 → 1 → {ø} This path represents the set {1,2} \
3 → 2 ⇢ 1 → {ø} This path represents the set {1,3} \
Therefore this zdd represents the set {{1,3},{2,3},{1,2}}

## Usage

Since we cannot have a set of sets in python, we use sets of frozensets when converting a zdd to the set representation and vice versa.

```python
import zdd_algorithms as zdd

# Creates a set of frozensets
set1 = { frozenset({1,3}), frozenset({2,3}) }

# Create zdd from the set. This zdd now represents the set {{1,3},{2,3}}
zdd1 = zdd.to_zdd(set1)

set2 = { frozenset({1,2}) }
zdd2 = zdd.to_zdd(set2)

# Create a union of two zdds
union = zdd.union(zdd1, zdd2)

# Create .PNG image of the zdd. This needs graphviz to be installed!
zdd.create_image(union)
```

<p align="center">
  <img src="https://raw.githubusercontent.com/Thilo-J/zdd_algorithms/e4185fbbc28a4c59e93c847044b9b9964523dd19/13_23_12.png" alt="zdd"/>
</p>

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

## License

[MIT](https://choosealicense.com/licenses/mit/)
zdd-algorithms
/zdd_algorithms-0.1.4.tar.gz/zdd_algorithms-0.1.4/README.md
README.md
import graphviz

class ZddNode:
    def __init__(self, top, left, right):
        self.top = top
        self.left = left
        self.right = right

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.top == other.top and self.left == other.left and self.right == other.right
        else:
            return False

class ZddManager:
    # Keys are (top, id(left), id(right)) triples, so each node is unique
    # per value and per identity of its children.
    UNIQUE_NODES: dict[tuple[int, int, int], ZddNode] = {}
    TOP = ZddNode(1, None, None)
    BOTTOM = ZddNode(-1, None, None)

def to_zdd(set_of_sets: set[frozenset[int]]) -> ZddNode:
    """Creates an ordered zdd tree that represents the set of sets

    Args:
        set_of_sets (set[frozenset[int]]): A set of frozensets of ints that the function will make a zdd of

    Returns:
        Zdd: The ordered zdd tree that represents the set of sets
    """
    if(len(set_of_sets) == 0):
        return empty()
    if(len(set_of_sets) == 1 and frozenset() in set_of_sets):
        return base()
    universe = set().union(*set_of_sets)
    biggest_element = max(universe)
    contains_biggest_element = set()
    for s in set_of_sets:
        if biggest_element in s:
            new = set(s)
            new.remove(biggest_element)
            contains_biggest_element.add(frozenset(new))
    contains_not_biggest_element = set([s for s in set_of_sets if biggest_element not in s])
    return get_node(
        biggest_element,
        to_zdd(contains_not_biggest_element),
        to_zdd(contains_biggest_element)
    )

def to_set_of_sets(P: ZddNode) -> set[frozenset[int]]:
    """Creates a set of frozensets that P represents

    Args:
        P (Zdd): Zdd node that represents a set of sets

    Returns:
        set[frozenset[int]]: The set of frozensets that P represents
    """
    def preorder_traversal(node: ZddNode, parent: ZddNode, add: bool, current_set: set[int], set_of_sets: set[frozenset[int]]) -> None:
        if(add):
            current_set.add(parent.top)
        if(node == empty()):
            return
        if(node == base()):
            set_of_sets.add(frozenset(current_set))
            return
        preorder_traversal(node.left, node, False, current_set, set_of_sets)
        preorder_traversal(node.right, node, True, current_set, set_of_sets)
        current_set.remove(node.top)

    if (P == empty()):
        return set()
    if (P == base()):
        return {frozenset({})}
    set_of_sets = set()
    current_set = set()
    preorder_traversal(P.left, P, False, current_set, set_of_sets)
    preorder_traversal(P.right, P, True, current_set, set_of_sets)
    return set_of_sets

def create_image(P: ZddNode, file_name: str = "ZDD") -> None:
    """Creates a .PNG image that visualizes the tree P

    Args:
        P (Zdd): Root node
        file_name (str, optional): Name of the file the image will be stored
    """
    visited = set()
    dot = graphviz.Digraph()
    dot.node(str(id(P)), shape="circle", label=str(P.top))
    dot.node(str(id(empty())), shape="square", label='ø')
    dot.node(str(id(base())), shape="square", label='{ø}')

    def add_nodes_edges(node: ZddNode):
        if node.left and (id(node) not in visited):
            dot.node(str(id(node)), shape="circle", label=str(node.top))
            dot.edge(str(id(node)), str(id(node.left)), style="dashed")
            add_nodes_edges(node.left)
        if node.right and (id(node) not in visited):
            dot.node(str(id(node)), shape="circle", label=str(node.top))
            dot.edge(str(id(node)), str(id(node.right)))
            add_nodes_edges(node.right)
        visited.add(id(node))

    add_nodes_edges(P)
    dot.render(file_name, view=True, format='png')

def empty() -> ZddNode:
    """Returns the zdd node that represents the empty family ∅

    Returns:
        Zdd: ∅
    """
    return ZddManager.BOTTOM

def base() -> ZddNode:
    """Returns the zdd node that represents the unit family {∅}

    Returns:
        Zdd: {∅}
    """
    return ZddManager.TOP

def get_node(top: int, left: ZddNode, right: ZddNode) -> ZddNode:
    """Returns the node Zdd(top, left, right) if it already exists or creates
    that node and returns it if it does not exist

    Args:
        top (int): Value of the node
        left (Zdd): Left child
        right (Zdd): Right child

    Returns:
        Zdd: Zdd(top, left, right)
    """
    # zero-suppression rule: a node whose HI child is ∅ is replaced by its LO child
    if right == empty():
        return left
    if (top, id(left), id(right)) in ZddManager.UNIQUE_NODES:
        return ZddManager.UNIQUE_NODES[(top, id(left), id(right))]
    else:
        new_node = ZddNode(top, left, right)
        ZddManager.UNIQUE_NODES[(top, id(left), id(right))] = new_node
        return ZddManager.UNIQUE_NODES[(top, id(left), id(right))]

def subset1(P: ZddNode, var: int) -> ZddNode:
    """Returns the set of subsets of P containing the element var

    Args:
        P (Zdd): Set of sets of ints in the form of an zdd node
        var (int): Element that may or may not be in the sets of P

    Returns:
        Zdd: The subset of P containing the element var
    """
    if (P == empty()):
        return empty()
    if (P == base()):
        return empty()
    if (P.top < var):
        return empty()
    if (P.top == var):
        return get_node(var, empty(), P.right)
    if (P.top > var):
        return get_node(P.top, subset1(P.left, var), subset1(P.right, var))

def subset0(P: ZddNode, var: int) -> ZddNode:
    """Returns the set of subsets of P not containing the element var

    Args:
        P (Zdd): Set of sets of ints in the form of an zdd node
        var (int): Element that may or may not be in the sets of P

    Returns:
        Zdd: The subset of P not containing the element var
    """
    if (P == empty()):
        return empty()
    if (P == base()):
        return base()
    if (P.top < var):
        return P
    if (P.top == var):
        return P.left
    if (P.top > var):
        return get_node(P.top, subset0(P.left, var), subset0(P.right, var))

def change(P: ZddNode, var: int) -> ZddNode:
    """Returns the set of subsets derived from P by adding element var to
    those subsets that did not contain it and removing element var from those
    subsets that contain it.

    Args:
        P (Zdd): Set of sets of ints in the form of an zdd node
        var (int): Element that may or may not be in the sets of P

    Returns:
        Zdd: The set of subsets derived from P by adding element var to those
        subsets that did not contain it and removing element var from those
        subsets that contain it.
    """
    if (P == empty()):
        return empty()
    if (P == base()):
        return get_node(var, empty(), base())
    if (P.top < var):
        return get_node(var, empty(), P)
    if (P.top == var):
        return get_node(var, P.right, P.left)
    if (P.top > var):
        return get_node(P.top, change(P.left, var), change(P.right, var))

def union(P: ZddNode, Q: ZddNode) -> ZddNode:
    """Union between two Zdd nodes that each represent a set of sets

    Args:
        P (Zdd): First zdd node that represents a set of sets
        Q (Zdd): Second zdd node that represents a set of sets

    Returns:
        Zdd: P U Q
    """
    if (P == empty()):
        return Q
    if (Q == empty()):
        return P
    if (P == Q):
        return P
    if (P == base()):
        return get_node(Q.top, union(P, Q.left), Q.right)
    if (Q == base()):
        return get_node(P.top, union(P.left, Q), P.right)
    if (P.top > Q.top):
        return get_node(P.top, union(P.left, Q), P.right)
    if (P.top < Q.top):
        return get_node(Q.top, union(P, Q.left), Q.right)
    if (P.top == Q.top):
        return get_node(P.top, union(P.left, Q.left), union(P.right, Q.right))

def intersection(P: ZddNode, Q: ZddNode) -> ZddNode:
    """Intersection between two Zdd nodes that each represent a set of sets

    Args:
        P (Zdd): First zdd node that represents a set of sets
        Q (Zdd): Second zdd node that represents a set of sets

    Returns:
        Zdd: P ⋂ Q
    """
    if (P == empty()):
        return empty()
    if (Q == empty()):
        return empty()
    if (P == Q):
        return P
    if (P == base()):
        return intersection(P, Q.left)
    if (Q == base()):
        return intersection(P.left, Q)
    if (P.top > Q.top):
        return intersection(P.left, Q)
    if (P.top < Q.top):
        return intersection(P, Q.left)
    if (P.top == Q.top):
        return get_node(P.top, intersection(P.left, Q.left), intersection(P.right, Q.right))

def difference(P: ZddNode, Q: ZddNode) -> ZddNode:
    """Difference between the first zdd node and the second zdd node

    Args:
        P (Zdd): First zdd node that represents a set of sets
        Q (Zdd): Second zdd node that represents a set of sets

    Returns:
        Zdd: P - Q
    """
    if (P == empty()):
        return empty()
    if (Q == empty()):
        return P
    if (P == Q):
        return empty()
    if (P == base()):
        return difference(P, Q.left)
    if (Q == base()):
        return get_node(P.top, difference(P.left, Q), P.right)
    if (P.top > Q.top):
        return get_node(P.top, difference(P.left, Q), P.right)
    if (P.top < Q.top):
        return difference(P, Q.left)
    if (P.top == Q.top):
        return get_node(P.top, difference(P.left, Q.left), difference(P.right, Q.right))

def count(P: ZddNode) -> int:
    """Counts the number of sets in the set P

    Args:
        P (Zdd): Set of sets

    Returns:
        int: Number of sets in the set P
    """
    if (P == empty()):
        return 0
    if (P == base()):
        return 1
    return count(P.left) + count(P.right)
zdd-algorithms
/zdd_algorithms-0.1.4.tar.gz/zdd_algorithms-0.1.4/zdd_algorithms/zdd.py
zdd.py
Marathon-ZDD
============

Standalone version of the excellent ZDD script written by the guys over at
https://github.com/mesosphere/marathon-lb

Zero-downtime Deployments
-------------------------

Marathon-lb is able to perform canary style blue/green deployment with zero
downtime. To execute such deployments, you must follow certain patterns when
using Marathon.

The deployment method is described `in this Marathon document`_. Marathon-lb
provides an implementation of the aforementioned deployment method with the
script `zdd`_. To perform a zero downtime deploy using ``zdd``, you must:

- Specify the ``HAPROXY_DEPLOYMENT_GROUP`` and ``HAPROXY_DEPLOYMENT_ALT_PORT``
  labels in your app template

  - ``HAPROXY_DEPLOYMENT_GROUP``: This label uniquely identifies a pair of
    apps belonging to a blue/green deployment, and will be used as the app
    name in the HAProxy configuration
  - ``HAPROXY_DEPLOYMENT_ALT_PORT``: An alternate service port is required
    because Marathon requires service ports to be unique across all apps

- Only use 1 service port: multiple ports are not yet implemented
- Use the provided ``zdd.py`` script to orchestrate the deploy: the script
  will make API calls to Marathon, and use the HAProxy stats endpoint to
  gracefully terminate instances
- The marathon-lb container must be run in privileged mode (to execute
  ``iptables`` commands) due to the issues outlined in the excellent blog
  post by the `Yelp engineering team found here`_
- If you have long-lived TCP connections using the same HAProxy instances,
  it may cause the deploy to take longer than necessary. The script will wait
  up to 5 minutes (by default) for connections to drain from HAProxy between
  steps, but any long-lived TCP connections will cause old instances of
  HAProxy to stick around.

An example minimal configuration for a `test instance of nginx is included
here`_. You might execute a deployment from a CI tool like Jenkins with:

::

    zdd -j 1-nginx.json -m http://master.mesos:8080 -f -l http://marathon-lb.marathon.mesos:9090 --syslog-socket /dev/null

Zero downtime deployments are accomplished through the use of a Lua module,
which reports the number of HAProxy processes which are currently running by
hitting the stats endpoint at ``/_haproxy_getpids``. After a restart, there
will be multiple HAProxy PIDs until all remaining connections have gracefully
terminated. By waiting for all connections to complete, you may safely and
deterministically drain tasks. A caveat of this, however, is that if you have
any long-lived connections on the same LB, HAProxy will continue to run and
serve those connections until they complete, thereby breaking this technique.

The ZDD script includes the ability to specify a pre-kill hook, which is
executed before draining tasks are terminated. This allows you to run your
own automated checks against the old and new app before the deploy continues.

.. _in this Marathon document: https://mesosphere.github.io/marathon/docs/blue-green-deploy.html
.. _zdd: zdd
.. _Yelp engineering team found here: http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
.. _test instance of nginx is included here: tests/1-nginx.json
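
If you orchestrate deploys from Python-based tooling rather than a shell
step, a thin wrapper around the same invocation is enough. A minimal sketch
(the arguments simply mirror the Jenkins example above; adjust the JSON file
and URLs to your own cluster):

::

    import subprocess

    # Run a blue/green deploy with zdd; check=True raises if the deploy fails.
    subprocess.run(
        ['zdd', '-j', '1-nginx.json',
         '-m', 'http://master.mesos:8080',
         '-f',
         '-l', 'http://marathon-lb.marathon.mesos:9090'],
        check=True,
    )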
zdd
/zdd-0.1.0.tar.gz/zdd-0.1.0/README.rst
README.rst
from sys import exit
import os
import argparse
import gitlab

GITLAB_API = os.getenv("CI_SERVER_URL")


def check_arg():
    parser = argparse.ArgumentParser(
        description='A Python package for uploading packages to and '
                    'downloading packages from the Gitlab package registry.')
    parser.add_argument('-a', '--action', type=str, default="download",
                        help="upload/download [default: download]")
    parser.add_argument('-k', '--key', required=True,
                        help='Gitlab Private Token')
    parser.add_argument('-r', '--registry', required=True,
                        help="Project name with namespace. [example: it-admin/zdeb-utils]")
    parser.add_argument('-p', '--package', required=True,
                        help="package name [example: mypackage]")
    parser.add_argument('-v', '--version', required=True,
                        help='Package Version')
    parser.add_argument('-f', '--file', required=True,
                        help='File path to be uploaded or to download into. '
                             '[example: /path/to/filename.deb]')
    return parser


# Find the package registry project by its namespace path
def find_package_registry_project_id(token, registry):
    """Returns the project object whose namespace path matches the given
    registry (its numeric ID is printed along the way)."""
    gl = gitlab.Gitlab(GITLAB_API, token)
    all_projects = gl.projects.list(get_all=True)
    for project in all_projects:
        if project.path_with_namespace == registry:
            print(f"Project ID: {project.id}")
            return project
    raise Exception(f"No project found matching '{registry}'")


def download_package(key, registry, package, version, filename):
    project = find_package_registry_project_id(key, registry)
    # Packages are stored in the registry under the fixed file name
    # "package.deb"; the -f/--file argument only controls the local path.
    data = project.generic_packages.download(
        package_name=package,
        package_version=version,
        file_name="package.deb",)
    with open(filename, 'wb') as f:
        f.write(data)
    print("Download Success")


def upload_package(key, registry, package, version, filename):
    project = find_package_registry_project_id(key, registry)
    filename = os.path.expanduser(filename)
    ret = project.generic_packages.upload(
        package_name=package,
        package_version=version,
        file_name="package.deb",
        path=f"{filename}")
    print(ret.to_json())
    print("Upload Success")


def main():
    parser = check_arg()
    args = parser.parse_args()
    if args.action == "upload":
        upload_package(args.key, args.registry, args.package, args.version,
                       args.file)
    elif args.action == "download":
        download_package(args.key, args.registry, args.package, args.version,
                         args.file)
    else:
        print("Error: action not defined")
        parser.print_usage()
        exit(2)


if __name__ == "__main__":
    main()
    exit(0)
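
# The CLI above is the primary interface, but the helpers can also be called
# directly from Python. A hedged sketch (token and paths are illustrative;
# CI_SERVER_URL must be set before import so GITLAB_API resolves, and the
# import path assumes the package layout zdeb_utils/main.py):
#
#   os.environ['CI_SERVER_URL'] = 'https://gitlab.example.com'  # before import
#   from zdeb_utils.main import upload_package, download_package
#
#   upload_package('glpat-XXXX', 'it-admin/zdeb-utils', 'mypackage',
#                  '1.0.0', '~/build/mypackage.deb')
#   download_package('glpat-XXXX', 'it-admin/zdeb-utils', 'mypackage',
#                    '1.0.0', '/tmp/mypackage.deb')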
zdeb-utils
/zdeb-utils-0.2.0.tar.gz/zdeb-utils-0.2.0/zdeb_utils/main.py
main.py
# Contributors ## zdesk 2 * Brent Woodruff (Fork author and maintainer) * Matthew Jaskula (Generic parameter passing) * Tim Allard (HTTP return code checking) * Daniel Inniss (Porting work to `requests`) * Dominik Miedziński (Booksy International Sp. z o. o.) * Sarfaraz Soomro (Incremental ticket pagination) * Craig Davis (Major `api_gen` updates for zdesk 2.6.0) ## zendesk and zdesk 1.x * Stefan Tjarks * Jay Chan * Max Gutman * J.B. Langston * Q * KP * Fred Muya * Antoine Reversat * Vikram Oberoi * Joe Heck * nathanharper * Hany Fahim * MATSUMOTO Akihiro * Brian Zambrano * DemonBob * Jeroen F.J. Laros * Joaquin Casares * Alex Chan * Muya * Nick Day * Paul Pieralde * Sandeep Sidhu * ebpmp * meowcoder
zdesk
/zdesk-2.8.1.tar.gz/zdesk-2.8.1/CONTRIBUTORS.md
CONTRIBUTORS.md
# Zdesk is seeking a contributing team or maintainer Active development on zdesk has been pretty slow for some time now. The zdesk port is not official, and has been authored by just myself (Brent). While I have received contributions, I am just one person. Add in a very busy personal life and no direct professional obligation to maintain this library and you have a recipe for stagnation. I am seeking to connect with users who would be interested in contributing to this project directly (commit access), or otherwise a suitable maintainer to pass this library on to. I think there is a lot of promise in the generator approach to do some more interesting things, but I simply cannot find the time to code it up myself. If you are interested, please email me directly: [email protected] # Special thanks to HashiCorp A big 'thank you' to [HashiCorp](https://www.hashicorp.com) for finding enough value in this library and the utilities I've written that use it to allow me to spend some more time on it. # Note about documentation on github Please refer to the documentation for the specific release you are running. Releases are listed [here](https://github.com/fprimex/zdesk/releases). # Zendesk API Wrapper for Python Zdesk is a Python wrapper for the Zendesk API. This library provides an easy and flexible way for developers to communicate with their Zendesk account in their application. See the [Zendesk developer site](https://developer.zendesk.com/) for API documentation. The underlying `zdesk_api` module has been [automatically generated](https://github.com/fprimex/zdgen) from this documentation. ## Requirements Zdesk works with both Python 2 and Python 3. Tested on Python 2.7.15 and 3.7.0. The requests package is used for authentication and requests pip install requests Note that if you are on an earlier version of Python on particular platforms, you can receive [an error](https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning) from `urllib3`, which is packaged in with `requests`. The simplest solution is to install or update the packages specified in the [solution](https://urllib3.readthedocs.org/en/latest/security.html#pyopenssl). pip install pyopenssl ndg-httpsclient pyasn1 This should be all that is required. If additional steps are required this may be a `zdesk` bug, so please [report it](https://github.com/fprimex/zdesk/issues). ## Installation Zdesk is available on pypi, so installation should be fairly simple: pip install zdesk ## Related projects * [zdeskcfg](https://github.com/fprimex/zdeskcfg): Automatically configure your zdesk scripts from a configuration file and command line arguments. * [zdgrab](https://github.com/fprimex/zdgrab): Download and decompress ticket attachments. # Notes on module usage ## Authentication Zdesk supports three methods of authorizing to Zendesk instances: basic authentication with a password, basic authentication with an API token, and OAuth authentication with an OAuth bearer token. All three are supported by `zdeskcfg` as well. The options are as follows, by precedence: * `zdesk_oauth` - OAuth bearer token. An implicit grant token that works with this option can be generated at the [Zendesk developer site](https://developer.zendesk.com/requests/new). * `zdesk_email` + `zdesk_api` - Basic authentication with a Zendesk account email and an API token as generated from `https://your-company.zendesk.com/agent/admin/api/settings`. 
* `zdesk_email` + `zdesk_password` - Basic authentication with a Zendesk account email and the password for that user. * `zdesk_email` + `zdesk_password` + `zdesk_token = True` - Basic authentication with a Zendesk account email and an API token, indicating that the password supplied is actually an API token. This option is deprecated in favor of `zdesk_email` + `zdesk_api` and will be removed in a future release. ## API Keyword args Zdesk attempts to identify query string parameters from the online documentation. All query string parameters are optional (default to `None`), and are provided for convenience and reference. However, it is very difficult, if not impossible, to accurately capture all valid query parameters for a particular endpoint from the documentation. So, zdesk passes all provided kwargs on to the Zendesk API as query string parameters without validation, except those that it has reserved for its own use. The current reserved kwargs (described in more detail below) are: * `complete_response` * `get_all_pages` * `mime_type` * `retry_on` * `max_retries` * `raw_query` * `retval` There are a few common query string parameters that the Zendesk API accepts for many calls. The current list at the time of this writing is: * `include` * `page` * `per_page` * `sort_by` * `sort_order` ## Results returned and getting all HTTP response info Under normal circumstances, when a call is made and the response indicates success, the value returned will be formatted to simplify usage. So if a JSON response is returned with the expected return code, then instead of getting back all of the HTTP response information, headers and all, the only thing that is returned is the JSON, which will already be deserialized. In some cases, only a single string in a particular header (location) is returned, and so that will be the return value. Passing `complete_response=True` will cause all response information to be returned, which is the result of a `requests.request`. ## Getting a specific part of a result The Zendesk service sometimes changes what exactly is returned and the automatic return value determination may not be desired. Additionally, it can be tedious to always request `complete_response=True` and working with all of that information. So, now it is possible to pass `retval` in order to request a specific part of the request. Valid values are `'content'`, `'code'`, `'location'`, and `'headers'`. For example, you may not care to retrieve the `location` from a ticket creation, but you do want to check the HTTP return code to ensure success. You can now pass `retval='code'` and then simply check to ensure that the code is equal to (the integer) `201`. ## Getting all pages There is a common pattern where a request will return one page of data along with a `next_page` location. In order to retrieve all results, it is necessary to continue retrieving every `next_page` location. The results then all need to be processed together. A loop to get all pages ends up stamped throughout Zendesk code, since many API methods return paged lists of objects. As a convenience, passing `get_all_pages` to any API method will do this for you, and will also merge all responses. The result is a single, large object that appears to be the result of one single call. 
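
As a quick illustration, here is a minimal sketch assuming a connected
instance `zd` and a `new_ticket` payload like the one in the example section
below:

    # Fetch every page of tickets merged into a single result.
    all_tickets = zd.tickets_list(get_all_pages=True)

    # Ask only for the HTTP status code of a ticket creation.
    code = zd.ticket_create(data=new_ticket, retval='code')
    assert code == 201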
The logic for this combination and reduction is well documented in the
[source](https://github.com/fprimex/zdesk/blob/master/zdesk/zdesk.py#L534)
(look for the line reading `Now we need to try to combine or reduce the
results`, if the line number has shifted since this writing).

## MIME types for data

By default, all `data` passed to requests is assumed to be of MIME type
`application/json`. The value of `data` in this default case should be a JSON
object, and it will automatically be converted using `json.dumps` for the
request.

Some endpoints such as those that allow file uploads expect `data` to be of a
different MIME type, and so this can be specified using the `mime_type`
keyword argument.

If working with files of an unknown MIME type, a module such as
[python-magic](https://pypi.python.org/pypi/python-magic/) can be useful. The
following code has worked well with zdesk scripts:

    # import, configure, and connect to Zendesk as shown in the example code.
    # zd = Zendesk(...

    import magic

    fname = 'my_file'
    mime_type = magic.from_file(fname, mime=True)
    if type(mime_type) is bytes:
        mime_type = mime_type.decode()

    with open(fname, 'rb') as fp:
        fdata = fp.read()

    response = zd.upload_create(filename=fname, data=fdata,
                                mime_type=mime_type, complete_response=True)
    upload_token = response['content']['upload']['token']

## Multipart file uploads (Help Center attachments)

In addition to the `data` argument, zdesk methods can also take a `files`
argument. This is a tuple which is passed directly to the `requests` module,
so you may wish to reference [their documentation](http://requests.readthedocs.org/en/latest/user/quickstart/#post-a-multipart-encoded-file).

Here is an example of using the `help_center_article_attachment_create`
method.

    zd.help_center_article_attachment_create(article_id='205654433', data={},
        files={'file': ('attach.zip', open('attach.zip', 'rb'))})

The `data` parameter should always be supplied, containing any desired
optional parameters such as `data={'inline':'true'}`, or `{}` otherwise. The
file data can be provided directly in the tuple, and the MIME type can be
explicitly specified.

    with open('attach.zip', 'rb') as f:
        fdata = f.read()

    zd.help_center_article_attachment_create(article_id='205654433', data={},
        files={'file': ('attach.zip', fdata, 'application/zip')})

## Raw query strings

In some cases it is necessary to pass query parameters that are the same
parameter but differ by value, such as multiple `start_time` or `end_time`
values. This makes it impossible to use a simple dictionary of query
parameters and values. To enable this use case it is now possible to pass a
string, starting with `?`, using `raw_query`. This value overrides all query
parameters that are named or passed with `kwargs`, and is appended to the
URL. The string will be appropriately encoded by `requests`, so there is no
need to pre-encode it before passing.

## Rate limits and retrying

It is possible to retry all requests made by an instance of `Zendesk` by
providing `retry_on` and `max_retries` to `__init__`. In addition, it is also
possible to retry one `Zendesk.call` without modifying its attributes -
simply by supplying those kwargs to `Zendesk.call`.

`retry_on` specifies the raised exceptions on which you want to retry your
request. It is also possible to retry on specific non-200 HTTP codes, by
specifying them in `retry_on` as well. `ZendeskError` and
`requests.RequestException` combined are catch-alls.

`max_retries` controls how many attempts are made if the first request fails.
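
Putting the last two sections together, a minimal sketch (assuming `retry_on`
accepts an iterable of exception classes and integer HTTP status codes, per
the description above):

    from zdesk import Zendesk, ZendeskError

    zd = Zendesk('https://example.zendesk.com', '[email protected]',
                 'api_token_here', True,
                 retry_on=(ZendeskError, 429, 503), max_retries=3)

    # Retry a single call without changing the instance defaults:
    zd.tickets_list(retry_on=(503,), max_retries=5)

    # raw_query overrides all other query parameters for this call:
    zd.tickets_list(raw_query='?page=2&per_page=50')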
Note that with `get_all_pages` this can make up to `(max_retries + 1) * pages` requests. Currently there is no support for exponential backoff. # Example Use ```python from zdesk import Zendesk ################################################################ ## NEW CONNECTION CLIENT ################################################################ # Manually creating a new connection object zendesk = Zendesk('https://yourcompany.zendesk.com', '[email protected]', 'passwd') # If using an API token, you can create connection object using # zendesk = Zendesk('https://yourcompany.zendesk.com', '[email protected]', 'token', True) # True specifies that the token is being used instead of the password # See the zdeskcfg module for more sophisticated configuration at # the command line and via a configuration file. # https://github.com/fprimex/zdeskcfg ################################################################ ## TICKETS ################################################################ # List zendesk.tickets_list() # Create new_ticket = { 'ticket': { 'requester': { 'name': 'Howard Schultz', 'email': '[email protected]', }, 'subject':'My Starbucks coffee is cold!', 'description': 'please reheat my coffee', 'tags': ['coffee', 'drinks'], 'ticket_field_entries': [ { 'ticket_field_id': 1, 'value': 'venti' }, { 'ticket_field_id': 2, 'value': '$10' } ] } } # Create the ticket and get its URL result = zendesk.ticket_create(data=new_ticket) # Need ticket ID? from zdesk import get_id_from_url ticket_id = get_id_from_url(result) # Show zendesk.ticket_show(id=ticket_id) # Delete zendesk.ticket_delete(id=ticket_id) ``` See the [full example file](https://github.com/fprimex/zdesk/blob/master/examples/example.py) on github, however this is not anywhere close to covering all of the over 400 REST API methods.
zdesk
/zdesk-2.8.1.tar.gz/zdesk-2.8.1/README.md
README.md
## 2.8.1

- Patch `users_search` manually to fix #66.

## 2.8.0

- Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/4982b3dad9581fbb49d71307abc229dc4169ab74).
  Most notably, Zendesk has replaced many, many instances of using `id` with,
  e.g., `ticket_id`, `article_id`, etc. Most of these are positional
  arguments, so if you were using `foo(id=1234)` I recommend changing to
  `foo(1234)` to (hopefully) future proof a bit for when they decide to
  change it again.
- Update iterable code for Python 3.10 compatibility. Tests pass with 3.10.3
  and 3.9.5.
- This is not a change, but I want to note here that several of `zdesk`'s
  dependencies have deprecations. Most notably, if you're using `zdeskcfg`,
  `plac_ini` relies on functionality that is deprecated and will be removed
  in 3.12. None of these seem particularly difficult to overcome, but just
  make sure you test before upgrading.

## 2.7.1

- Immediately noticed an OAuth bug. Reference private variables for some
  logic.

## 2.7.0

- Support for Python 3.5+
- OAuth token support, and a more clear way of choosing between password, API
  token, and OAuth token authentication.
- Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/1cf01a3b730c84b531261bba98b2ab5aa6dd0d18)

## 2.6.0

- Fix incremental pagination by making an exception to status 422, removing
  the existing query `kwargs`, and looking for incremental and certain
  conditions to mark the end of `get_all_pages` (by Sarfaraz Soomro).
- API generator updates corresponding to the end of web portal and forums
  support, as well as the replacement of zopim with chat (by Craig Davis).
- Add `raw_query` parameter for explicitly setting and overriding the query
  string. This enables use cases where, for example, query parameters need to
  be repeated and therefore cannot go into a dictionary.
- Add `retval` parameter to allow for explicitly requesting a certain
  component of a response. Valid values are 'content', 'code', 'location',
  and 'headers'.
- Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/6e22dea7af6b129a88f9ce30082660eff2eea621)

## 2.5.0

- Use Pytest and implement some basic tests
- Implement retry (major contribution by Dominik Miedziński)
- Merge the `batch` support method (by Dominik Miedziński)
- Merge 2.6 support (by Ryan Shipp)
- Check for json in content-type before attempting to deserialize (by Craig
  Davis)
- Improve API generator handling of duplicates and ambiguous parameters
- Add support for optional `locale` help center argument on many methods
- Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/bb455aeac4ffb9c7a6f5cabb9653cf46cdcb8531)

## 2.4.0

- Support non-JSON endpoint (removed check for .json, for recordings.mp3)
- Improve generator formatting of duplicates
- Add doc-anchor-links, so docstrings link more closely to the method in
  question
- Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/7240295278fd596189643ae30fbcbb16a4b8c3d9)

## 2.3.0

- Switch from `httplib2` to `requests`
- Add `files` parameter to support multipart uploads for Help Center
  attachment style requests
- Enhance `api_gen.py` to handle downloading and patching of
  developer.zendesk.com
- Add Zopim and numerous other API endpoints
- Regenerate API from updated mirror.
See [full commit](https://github.com/fprimex/zdesk/commit/d679a734292de5ade82cb4d4533e79368510769d) ## 2.2.1 - Remove `common_params`, allowing all kwargs to be passed as queries ## 2.2.0 - Add exception classes to top level. e.g. `from zdesk import ZendeskError` works now - Modify `api_gen.py` so that `update_many_update` becomes just `update_many` - Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/8a6bac52a912ce45c3a47911331b381cf963abc1) ## 2.1.1 - Remove explicit HTTP status code checking. Success is always in the 200 range, with some specific exceptions raised for only a few specific 400 range codes. ## 2.1.0 - Support non-JSON data for, e.g., creating uploads - Add `sort_by` common parameter - Regenerate API from updated mirror. See [full commit](https://github.com/fprimex/zdesk/commit/cbeb1ecd0ae4580caa3ad434c74e7e49d4378c19) - Update `examples/__init__.py` with fixes and ticket updates and uploads - Reorder CHANGELOG.md with most recent releases at top ## 2.0.3 - Add `get_all_pages` option to call to exhaustively follow `next_page` - Combine and reduce multiple requests when using `get_all_pages` ## 2.0.2 - Always inject auth credentials into request when they are supplied ## 2.0.1 - Immediately fix import bug in 2.0.0 ## 2.0.0 - Drop APIv1 support code completely - Drop endpoints dicts for new API generator approach - Support Python 2 and Python 3 in codebase without 2to3 translation ## 1.2.0 - Fork zendesk from eventbrite - Merge PRs and apply fixes - Python 3 compatibility
zdesk
/zdesk-2.8.1.tar.gz/zdesk-2.8.1/CHANGELOG.md
CHANGELOG.md
from __future__ import print_function import sys from zdesk import Zendesk ################################################################ # NEW CONNECTION CLIENT ################################################################ # README: To configure this example so that it will actually execute, you need # to provide a valid Zendesk URL and login information. You have two options: # # 1. Install the zdeskcfg module and use it to configure your Zendesk object. # # To do this, first `pip install zdeskcfg`, then create a file `~/.zdeskcfg` # with contents similar to the following: # # [zdesk] # email = [email protected] # password = t2EVLKMUtt2EVLKMUtt2EVLKMUtt2EVLKMUt # url = https://example.zendesk.com # token = 1 # # [sandbox] # url = https://example-sandbox22012201.zendesk.com # # 2. Provide a manual configuration below by editing the testconfig variable. # # In either case, you can use your actual password and set `token = 0` or # `zdesk_token = False`, but it is a very good idea to configure an API token # by visiting this URL at your own Zendesk instance: # https://example.zendesk.com/settings/api/ try: import zdeskcfg # Create an object using the [zdesk] section of # ~/.zdeskcfg and the zdeskcfg module # zendesk = Zendesk(**zdeskcfg.get_ini_config()) # Create an object using the [zdesk] and [sandbox] sections of # ~/.zdeskcfg and the zdeskcfg module zendesk = Zendesk(**zdeskcfg.get_ini_config(section='sandbox')) except ImportError: testconfig = { 'zdesk_email': '[email protected]', 'zdesk_password': 't2EVLKMUtt2EVLKMUtt2EVLKMUtt2EVLKMUt', 'zdesk_url': 'https://example-sandbox22012201.zendesk.com', 'zdesk_token': True } if testconfig['zdesk_url'] == \ 'https://example-sandbox22012201.zendesk.com': print( 'Could not import zdeskcfg and no manual configuration provided.') print( 'Please `pip install zdeskcfg` or edit example with ' 'manual configuration.') sys.exit() else: zendesk = Zendesk(**testconfig) # Are you getting an error such as... # "SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"? # zendesk = Zendesk( # 'https://yourcompany.zendesk.com', '[email protected]', 'passwd', # client_args={ # "disable_ssl_certificate_validation": True # } # ) ################################################################ # TICKETS ################################################################ # List zendesk.tickets_list() # Create new_ticket = { 'ticket': { 'requester_name': 'Howard Schultz', 'requester_email': '[email protected]', 'subject': 'My Starbucks coffee is cold!', 'description': 'please reheat my coffee', 'tags': ['coffee', 'drinks'], 'ticket_field_entries': [ { 'ticket_field_id': 1, 'value': 'venti' }, { 'ticket_field_id': 2, 'value': '$10' } ] } } # If a response results in returning a [location] header, then that # will be what is returned. # Create a ticket and get its URL. result = zendesk.ticket_create(data=new_ticket) # Alternatively, you can get the complete response and get the location # yourself. This can be useful for getting other response items that are # not normally returned, such as result['content']['upload']['token'] # when using zendesk.upload_create() # # result = zendesk.ticket_create(data=new_ticket, complete_response=True) # ticket_url = result['response']['location'] # ticket_id = get_id_from_url(ticket_url) # Need ticket ID? 
from zdesk import get_id_from_url ticket_id = get_id_from_url(result) # Show zendesk.ticket_show(id=ticket_id) # Ticket comments and uploads / attachments commentbody = "Attaching example Python file" # must be in the examples directory when executing so this file can be found fname = 'example.py' with open(fname, 'rb') as fp: fdata = fp.read() # MIME types can be detected with the magic module: # import magic # mime_type = magic.from_file(fname, mime=True) # if type(mime_type) is bytes: # mime_type = mime_type.decode() # But this file is known mime_type = 'text/plain' upload_result = zendesk.upload_create( fdata, filename=fname, mime_type=mime_type, complete_response=True) # for making additional uploads upload_token = upload_result['content']['upload']['token'] data = { "ticket": { "id": ticket_id, "comment": { "public": False, "body": commentbody } } } # I like to add this separately, because it's not an uncommon use case # to have an automated ticket update that may or may not have uploads. if upload_token != "": data['ticket']['comment']['uploads'] = [upload_token] # Post the comment to the ticket, which should reference the upload response = zendesk.ticket_update(ticket_id, data) # Delete zendesk.ticket_delete(id=ticket_id) ################################################################ # ORGANIZATIONS ################################################################ # List zendesk.organizations_list() # Create new_org = { 'organization': { 'name': 'Starbucks Corp' } } result = zendesk.organization_create(data=new_org) org_id = get_id_from_url(result) # Show zendesk.organization_show(id=org_id) # Delete zendesk.organization_delete(id=org_id) ################################################################ # USERS (AGENTS) ################################################################ # List zendesk.users_list() # Create new_user = { 'user': { 'name': 'Howard Schultz', 'email': '[email protected]', 'roles': 4, } } result = zendesk.user_create(data=new_user) user_id = get_id_from_url(result) # Show zendesk.user_show(id=user_id) # Delete zendesk.user_delete(id=user_id) ################################################################ # GROUPS ################################################################ # List zendesk.groups_list() # Create new_group = { 'group': { 'name': 'Starbucks Group', 'agents': [ { 'agent': 123 }, ] } } result = zendesk.group_create(data=new_group) group_id = get_id_from_url(result) # Show zendesk.group_show(id=group_id) # Delete zendesk.group_delete(id=group_id) ################################################################ # TAGS ################################################################ # List zendesk.tags_list() ################################################################ # TICKET TYPES ################################################################ zendesk.ticket_fields_list() ################################################################ # SEARCH ################################################################ results = zendesk.search(query='type:ticket sort:desc', page=1)
zdesk
/zdesk-2.8.1.tar.gz/zdesk-2.8.1/examples/example.py
example.py
import sys import os import inspect import plac import plac_ini # Based on a decorator that modified the call signature for a function # http://www.pythoneye.com/184_18642398/ # http://stackoverflow.com/questions/18625510/how-can-i-programmatically-change-the-argspec-of-a-function-not-in-a-python-de class configure(object): """ zdeskcfg.configure is a decorator that will add to the given object a call signature that includes the standard Zendesk configuration items. This effectively converts the given, decorated function, tgt_func, into a callable object. It then also provides a getconfig method that the decorated function can use to retrieve the Zendesk specific configuration items. The arguments to zdeskcfg.configure should be plac-style annotations for the function being decorated. As a result of employing the decorator and using zdeskcfg.call, the decorated function will first apply function defaults, then override those defaults with the contents of ~/.zdeskcfg, then override that using options specified on the command line. This is all done with plac_ini and plac. See the example script in the zdeskcfg source distribution. """ def __init__(self, **ann): self.ann = ann self.wrapped = None self.__config = {} def __call__(self, tgt_func): tgt_argspec = inspect.getargspec(tgt_func) need_self = False if tgt_argspec[0][0] == 'self': need_self = True name = tgt_func.__name__ argspec = inspect.getargspec(tgt_func) if argspec[0][0] == 'self': need_self = False if need_self: newargspec = (['self'] + argspec[0],) + argspec[1:] else: newargspec = argspec # This gets the original function argument names for actually # calling the tgt_func inside the wrapper. So, the defaults need # to be removed. signature = inspect.formatargspec( formatvalue=lambda val: "", *newargspec )[1:-1] # Defaults for our four new arguments that will go in the wrapper. 
newdefaults = argspec[3] + (None, None, None, None, None, False) newargspec = argspec[0:3] + (newdefaults,) # Add the new arguments to the argspec newargspec = (newargspec[0] + ['zdesk_email', 'zdesk_oauth', 'zdesk_api', 'zdesk_password', 'zdesk_url', 'zdesk_token'],) + newargspec[1:] # Text version of the arguments with their defaults newsignature = inspect.formatargspec(*newargspec)[1:-1] # Add the annotations for the new arguments to the annotations that were passed in self.ann.update(dict( zdesk_email=("zendesk login email", "option", None, str, None, "EMAIL"), zdesk_oauth=("zendesk OAuth token", "option", None, str, None, "OAUTH"), zdesk_api=("zendesk API token", "option", None, str, None, "API"), zdesk_password=("zendesk password or token", "option", None, str, None, "PW"), zdesk_url=("zendesk instance URL", "option", None, str, None, "URL"), zdesk_token=("specify if password is a zendesk token (deprecated)", "flag", None, bool), )) # Define and exec the wrapping function that will be returned new_func = ( 'def _wrapper_(%(newsignature)s):\n' ' config["zdesk_email"] = zdesk_email\n' ' config["zdesk_oauth"] = zdesk_oauth\n' ' config["zdesk_api"] = zdesk_api\n' ' config["zdesk_password"] = zdesk_password\n' ' config["zdesk_url"] = zdesk_url\n' ' config["zdesk_token"] = zdesk_token\n' ' return %(tgt_func)s(%(signature)s)\n' % {'signature':signature, 'newsignature':newsignature, 'tgt_func':'tgt_func'} ) evaldict = {'tgt_func' : tgt_func, 'plac' : plac, 'config' : self.__config} exec(new_func, evaldict) wrapped = evaldict['_wrapper_'] # Update the wrapper with all of the information from the wrapped function wrapped.__name__ = name wrapped.__doc__ = tgt_func.__doc__ wrapped.func_defaults = newdefaults wrapped.__module__ = tgt_func.__module__ wrapped.__dict__ = tgt_func.__dict__ # Add the complete annotations to the wrapper function, and also add the getconfig method # so that the new arguments can be retrieved inside the wrapped function. # This must come after the __dict__ assignment wrapped.__annotations__ = self.ann wrapped.getconfig = self.getconfig self.wrapped = wrapped return wrapped def getconfig(self, section=None): """ This method provides a way for decorated functions to get the four new configuration parameters *after* it has been called. If no section is specified, then the fully resolved zdesk config will be returned. That is defaults, zdesk ini section, command line options. If a section is specified, then the same rules apply, but also any missing items are filled in by the zdesk section. So the resolution is defaults, zdesk ini section, specified section, command line options. """ if not section: return self.__config.copy() cmd_line = {} for k in self.__config: v = self.wrapped.plac_cfg.get(k, 'PLAC__NOT_FOUND') if v != self.__config[k]: # This config item is different when fully resolved # compared to the ini value. It was specified on the # command line. 
                cmd_line[k] = self.__config[k]

        # Get the email, oauth, api, password, url, and token config from the
        # indicated section, falling back to the zdesk config for convenience
        cfg = {
            "zdesk_email": self.wrapped.plac_cfg.get(section + '_email',
                                                     self.__config['zdesk_email']),
            "zdesk_oauth": self.wrapped.plac_cfg.get(section + '_oauth',
                                                     self.__config['zdesk_oauth']),
            "zdesk_api": self.wrapped.plac_cfg.get(section + '_api',
                                                   self.__config['zdesk_api']),
            "zdesk_password": self.wrapped.plac_cfg.get(section + '_password',
                                                        self.__config['zdesk_password']),
            "zdesk_url": self.wrapped.plac_cfg.get(section + '_url',
                                                   self.__config['zdesk_url']),
            "zdesk_token": self.wrapped.plac_cfg.get(section + '_token',
                                                     self.__config['zdesk_token']),
        }

        # The command line trumps all
        cfg.update(cmd_line)

        return cfg


def call(obj, config=os.path.join(os.path.expanduser('~'), '.zdeskcfg'),
         section=None, eager=True):
    return plac_ini.call(obj, config=config, default_section=section,
                         eager=eager)


@configure()
def __placeholder__(section=None):
    pass


def get_ini_config(config=os.path.join(os.path.expanduser('~'), '.zdeskcfg'),
                   default_section=None, section=None):
    """This is a convenience function for getting the zdesk configuration
    from an ini file without the need to decorate and call your own function.
    Handy when using zdesk and zdeskcfg from the interactive prompt."""
    plac_ini.call(__placeholder__, config=config,
                  default_section=default_section)
    return __placeholder__.getconfig(section)
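
# A minimal end-to-end sketch of the intended usage (the annotation style and
# ini layout follow the docstrings above; the Zendesk import assumes the
# companion zdesk package is installed):
#
#   import zdeskcfg
#   from zdesk import Zendesk
#
#   @zdeskcfg.configure(
#       verbose=('print extra output', 'flag', 'v'),
#   )
#   def main(verbose=False):
#       zd = Zendesk(**main.getconfig())
#       if verbose:
#           print('connected to', main.getconfig()['zdesk_url'])
#
#   if __name__ == '__main__':
#       zdeskcfg.call(main)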
zdeskcfg
/zdeskcfg-1.2.0.tar.gz/zdeskcfg-1.2.0/zdeskcfg.py
zdeskcfg.py
import argparse
import urllib.request
import urllib.error
import urllib.parse
import os.path
import sys
import re
from xml.etree.ElementTree import fromstring

XML_API = "http://www.zdf.de/ZDFmediathek/xmlservice/web/beitragsDetails?id=%i"
CHUNK_SIZE = 1024*128  # 128 KB


def video_key(video):
    return (
        int(video.findtext("videoBitrate", "0")),
        any(f.text == "progressive" for f in video.iter("facet")),
    )


def video_valid(video):
    return (video.findtext("url").startswith("http")
            and video.findtext("url").endswith(".mp4"))


def get_id(url):
    return int(re.search(r"[^0-9]*([0-9]+)[^0-9]*", url).group(1))


def format_mb(bytes):
    return "%.2f" % (bytes/(1024*1024))


def video_dl(url, dir):
    xml = fromstring(urllib.request.urlopen(XML_API % get_id(url)).read())
    status = xml.find("./status")
    statuscode = status.findtext("statuscode")
    if statuscode != "ok":
        # findtext() returns plain strings, so report them directly rather
        # than dereferencing element attributes on a string.
        print("Error retrieving manifest:")
        print("  %s" % statuscode)
        print("  %s" % status.findtext("debuginfo"))
        return False
    video = xml.find("video")
    title = video.findtext("information/title")
    print(title)
    print("  %s" % video.findtext("details/vcmsUrl"))
    videos = sorted((v for v in video.iter("formitaet") if video_valid(v)),
                    key=video_key, reverse=True)
    for v in videos:
        url = v.findtext("url")
        try:
            # Use 'resp' instead of reusing 'video', which still holds the
            # XML element above.
            resp = urllib.request.urlopen(url)
        except urllib.error.HTTPError as e:
            if e.code in [403, 404]:
                print("HTTP status %i on %s" % (e.code, url))
                continue
            raise e
        basename, ext = os.path.splitext(
            os.path.basename(urllib.parse.urlparse(url).path))
        filename = "{dir}/{title} ({basename}){ext}".format(
            dir=dir, title=title, basename=basename, ext=ext)
        print("Downloading %s" % filename)
        print("  from %s" % url)
        size = 0
        target_size = int(resp.info()["Content-Length"].strip())
        with open(filename, "wb") as f:
            data = resp.read(CHUNK_SIZE)
            while data:
                size += len(data)
                f.write(data)
                data = resp.read(CHUNK_SIZE)
                print("%s/%s MB – %0.2f %%" % (format_mb(size),
                                               format_mb(target_size),
                                               size/target_size*100),
                      " "*10, end="\r")
            print()
        return True
    return False


def main():
    parser = argparse.ArgumentParser(description=
        "Download movies from the ZDF Mediathek. "
        "If no URLs are passed on the command line, zdfm reads URLs from "
        "standard input until EOF (^D or Ctrl-D). "
        "File names are automatically chosen. zdfm always downloads the "
        "highest quality available.")
    parser.add_argument("urls", metavar="URL", type=str, nargs="*",
                        help="URL of video in the ZDF Mediathek")
    parser.add_argument("--dir", default=".",
                        help="Target directory for downloaded files")
    args = parser.parse_args()
    if args.urls:
        urls = args.urls
    else:
        urls = sys.stdin.readlines()
    return 0 if all(video_dl(url, dir=args.dir) for url in urls) else 1


if __name__ == "__main__":
    sys.exit(main())
zdfm
/zdfm-0.8.tar.gz/zdfm-0.8/zdfm.py
zdfm.py
# Download attachments from Zendesk tickets Zdgrab is a utility for downloading attachments to tickets from [Zendesk](http://www.zendesk.com) and extracting and arranging them. There is integration with [SendSafelyGrab](https://github.com/fprimex/SendSafelyGrab) for downloading SendSafely package links included in comments. ## Note Zdgrab was originally written while I was at Basho, and my repository used to be a fork of their original one. On June 29, 2021 I deleted the forked repo and re-pushed it as its own standalone repository. This is the repo that is the source of the Pypi package. This version is diverging from that old, effectively unmaintained version pretty significantly, and I don't have any control over that repo anymore. Report issues here to have them fixed in the Pypi releases. ## Installing Tested with Python 3.9. Zdgrab requires Python 3.x, [zdeskcfg](http://github.com/fprimex/zdeskcfg), [zdesk](http://github.com/fprimex/zdesk), [asplode](http://github.com/fprimex/asplode), and Python modules, which have their own requirements. ``` pip install zdgrab ``` ## Zendesk Authentication Use one of the [authentication mechanisms](https://github.com/fprimex/zdesk#authentication) supported by `zdesk`. Configure `zdgrab` in `~/.zdeskcfg` similar to the following: # ~/.zdeskcfg [zdesk] url = https://example.zendesk.com email = [email protected] oauth = dneib393fwEF3ifbsEXAMPLEdhb93dw343 # or # api = nde3ibb93fEwwwFXEAPMLEdb93d3www43 [zdgrab] agent = [email protected] ### Usage The script can be invoked with the following synopsis: usage: zdgrab [-h] [-v] [-t TICKETS] [-w WORK_DIR] [-a AGENT] [--ss-host SS_HOST] [--ss-id SS_ID] [--ss-secret SS_SECRET] [--zdesk-email EMAIL] [--zdesk-oauth OAUTH] [--zdesk-api API] [--zdesk-password PW] [--zdesk-url URL] [--zdesk-token] Download attachments from Zendesk tickets. optional arguments: -h, --help show this help message and exit -v, --verbose verbose output -t TICKETS, --tickets TICKETS Ticket(s) to grab attachments (default: all of your open tickets) -w WORK_DIR, --work-dir WORK_DIR Working directory in which to store attachments. (default: ~/zdgrab/) -a AGENT, --agent AGENT Agent whose open tickets to search (default: me) --ss-host SS_HOST SendSafely host to connect to, including protocol --ss-id SS_ID SendSafely API key --ss-secret SS_SECRET SendSafely API secret --zdesk-email EMAIL zendesk login email --zdesk-oauth OAUTH zendesk OAuth token --zdesk-api API zendesk API token --zdesk-password PW zendesk password or token --zdesk-url URL zendesk instance URL --zdesk-token specify if password is a zendesk token (deprecated) Note that command line arguments such as `-agent` and `-work_dir` can also be specified (in lowercase form) within the appropriate section of `.zdeskcfg` as well as, e.g., `agent` and `work_dir`. Here are some basic zdgrab usage examples to get started: ### SendSafely support Zdgrab supports downloading [SendSafely](https://www.sendsafely.com/) packages with [ssgrab](https://github.com/fprimex/ssgrab). To set this up, obtain API credentials from SendSafely for the account to be used. Set the credentials and other configuration items in `~/.zdeskcfg` or provide them as command line parameters: `ss_host`, `ss_id`, `ss_secret`. With `ssgrab` installed, `zdgrab` will search all ticket comments for SendSafely links to packages (for example, those added by the SendSafely Zendesk app). When it finds a link, it will run `ssgrab` with the arguments necessary to retrieve the packaged files. 
As with attachments, the files will be extracted automatically.

#### Help

    zdgrab -h

#### Get/update all attachments for your open tickets

    zdgrab
    zdgrab -v

#### Get/update all attachments from a specific ticket

    zdgrab -t 2940

#### Get/update all attachments from a number of specific tickets

    zdgrab -t 2940,3405,3418

## Notes

zdgrab uses Zendesk API version 2 with JSON

zdgrab depends on the following Python modules:

* `zdesk`
    - `requests`
* `zdeskcfg`
    - `plac_ini`
    - `plac`

### Resources

* Python zdesk module: https://github.com/fprimex/zdesk
* Python zdeskcfg module: https://github.com/fprimex/zdeskcfg
* Zendesk Developer Site (For API information): http://developer.zendesk.com

### Using zdgrab as a module

It can be useful to script zdgrab using Python. Configuration is performed,
the grab is run, and the return value of the grab can then be used to operate
on the attachments and directories that were grabbed. For example:

```
#!/usr/bin/env python

from __future__ import print_function

import os
import zdeskcfg
from zdgrab import zdgrab

if __name__ == '__main__':
    # Using zdeskcfg will cause this script to have all of the ini
    # and command line parsing capabilities of zdgrab.
    # Passing eager=False is required in this case, otherwise plac and
    # plac_ini will wrap the function value with list() and destroy the
    # grabs dict.
    grabs = zdeskcfg.call(zdgrab, section='zdgrab', eager=False)

    start_dir = os.path.abspath(os.getcwd())

    for ticket_dir in grabs.keys():
        attach_path = grabs[ticket_dir]

        # Path to the ticket dir containing the attachment
        # os.chdir(ticket_dir)

        # Path to the attachment that was grabbed
        # os.path.join(ticket_dir, attach_path)

        # Path to the comments dir in this ticket dir
        ticket_com_dir = os.path.join(ticket_dir, 'comments')

        # Handy way to get a list of the comment dirs in numerical order:
        comment_dirs = [dir for dir in os.listdir(ticket_com_dir) if dir.isdigit()]
        comment_dirs = map(int, comment_dirs)  # convert to ints
        comment_dirs = map(str, sorted(comment_dirs))  # sort and convert back to strings

        # Iterate through the dirs and over every file
        os.chdir(ticket_com_dir)
        for comment_dir in comment_dirs:
            for dirpath, dirnames, filenames in os.walk(comment_dir):
                for filename in filenames:
                    print(os.path.join(ticket_com_dir, dirpath, filename))

    os.chdir(start_dir)
```

### Asplode

Archives that are downloaded are automatically extracted using `asplode`.
zdgrab
/zdgrab-4.1.0.tar.gz/zdgrab-4.1.0/README.md
README.md
import os import sys import re import textwrap import base64 import json from datetime import datetime, timedelta from zdesk.zdesk import get_id_from_url import zdeskcfg from zdesk import Zendesk from asplode import asplode try: from ssgrab import ssgrab ss_present = True except ModuleNotFoundError: ss_present = False class verbose_printer: def __init__(self, v): if v: self.print = self.verbose_print else: self.print = self.null_print def verbose_print(self, msg, end='\n'): print(msg, file=sys.stderr, end=end) def null_print(self, msg, end='\n'): pass @zdeskcfg.configure( verbose=('verbose output', 'flag', 'v'), tickets=('Comma separated ticket numbers to grab (default: all of your open tickets)', 'option', 't', str, None, 'TICKETS'), orgs=('Grab from one or more Organizations (default: none)', 'option', 'o', str, None, 'ORGS'), items=('Comma separated items to grab: attachments,comments,audits (default: attachments)', 'option', 'i', str, None, 'ITEMS'), status=('Query expression for ticket status (default: <solved)', 'option', 's', str, None, 'STATUS'), query=('Additional query when searching tickets (default: "")', 'option', 'q', str, None, 'QUERY'), days=('Retrieve tickets opened since a number of days (default: 0, all)', 'option', 'd', int, None, 'DATETIME'), js=('Save response information in JSON format (default: false)', 'flag', 'j'), count=('Retrieve up to this many total specified items (default: 0, all)', 'option', 'c', int, None, 'COUNT'), work_dir=('Working directory to store items in (default: ~/zdgrab)', 'option', 'w', str, None, 'WORK_DIR'), agent=('Agent whose open tickets to search (default: me)', 'option', 'a', str, None, 'AGENT'), ss_host=('SendSafely host to connect to, including protocol', 'option', None, str, None, 'SS_HOST'), ss_id=('SendSafely API key', 'option', None, str, None, 'SS_ID'), ss_secret=('SendSafely API secret', 'option', None, str, None, 'SS_SECRET'), ) def _zdgrab(verbose=False, tickets=None, orgs=None, items="attachments", status="<solved", query="", days=0, js=False, count=0, work_dir=os.path.join(os.path.expanduser('~'), 'zdgrab'), agent='me', ss_host=None, ss_id=None, ss_secret=None): "Download attachments from Zendesk tickets." cfg = _zdgrab.getconfig() zdgrab(verbose=verbose, tickets=tickets, orgs=orgs, items=items, status=status, query=query, days=days, js=js, count=count, work_dir=work_dir, agent=agent, ss_host=ss_host, ss_id=ss_id, ss_secret=ss_secret, zdesk_cfg=cfg) def zdgrab(verbose, tickets, orgs, status, query, items, days, js, count, work_dir, agent, ss_host, ss_id, ss_secret, zdesk_cfg): # ssgrab will only be invoked if the comment body contains a link. # See the corresponding REGEX used by them, which has been ported to Python: # https://github.com/SendSafely/Windows-Client-API/blob/master/SendsafelyAPI/Utilities/ParseLinksUtility.cs ss_link_re = r'https://[-a-zA-Z\.]+/receive/\?[-A-Za-z0-9&=]+packageCode=[-A-Za-z0-9_]+#keyCode=[-A-Za-z0-9_]+' ss_link_pat = re.compile(ss_link_re) vp = verbose_printer(verbose) if zdesk_cfg.get('zdesk_url') and ( zdesk_cfg.get('zdesk_oauth') or (zdesk_cfg.get('zdesk_email') and zdesk_cfg.get('zdesk_password')) or (zdesk_cfg.get('zdesk_email') and zdesk_cfg.get('zdesk_api')) ): vp.print(f'Configuring Zendesk with:\n' f' url: {zdesk_cfg.get("zdesk_url")}\n' f' email: {zdesk_cfg.get("zdesk_email")}\n' f' token: {repr(zdesk_cfg.get("zdesk_token"))}\n' f' password/oauth/api: (hidden)\n') zd = Zendesk(**zdesk_cfg) else: msg = textwrap.dedent("""\ Error: Need Zendesk config to continue. 
Config file (~/.zdeskcfg) should be something like: [zdesk] url = https://example.zendesk.com email = [email protected] api = dneib393fwEF3ifbsEXAMPLEdhb93dw343 # or # oauth = ndei393bEwF3ifbEsssX [zdgrab] agent = [email protected] """) print(msg) return 1 # Log the cfg vp.print(f'Running with zdgrab config:\n' f' verbose: {verbose}\n' f' tickets: {tickets}\n' f' orgs: {orgs}\n' f' items: {items}\n' f' status: {status}\n' f' query: {query}\n' f' days: {days}\n' f' js: {js}\n' f' count: {count}\n' f' work_dir: {work_dir}\n' f' agent: {agent}\n') if not items: print('Error: No items given to grab') return 1 if days > 0: start_time = datetime.utcnow() - timedelta(days=days) else: # UNIX epoch, 1969 start_time = datetime.fromtimestamp(0) possible_items = {"attachments", "comments", "audits"} items = set(items.split(',')) grab_items = possible_items.intersection(items) invalid_items = items - possible_items if len(invalid_items) > 0: print(f'Error: Invalid item(s) specified: {invalid_items} ') return 1 # tickets=None means default to getting all of the attachments for this # user's open tickets. If tickets is given, try to split it into ints if tickets: # User gave a list of tickets try: tickets = [int(i) for i in tickets.split(',')] except ValueError: print(f'Error: Could not convert to integers: {tickets}') return 1 # dict of paths to attachments retrieved to return. format is: # { 'path/to/ticket/1': [ 'path/to/attachment1', 'path/to/attachment2' ], # 'path/to/ticket/2': [ 'path/to/attachment1', 'path/to/attachment2' ] } grabs = {} # list to hold all of the ticket objects retrieved results = [] # list to hold all of the ticket numbers being retrieved ticket_nums = [] if orgs: orgs = orgs.split(',') for o in orgs: resp = zd.search(query=f'type:organization "{o}"') if resp["count"] == 0: print(f'Error: Could not find org {o}') continue elif resp["count"] > 1: print(f'Error: multiple results for org {o}') for result in resp["results"]: print(f' {result["name"]}') continue q = f'type:ticket organization:"{o}" status{status} created>{start_time.isoformat()[:10]} {query}' resp = zd.search(query=q, get_all_pages=True) results.extend(resp['results']) # Save the current directory so we can go back once done start_dir = os.getcwd() # Normalize all of the given paths to absolute paths work_dir = os.path.abspath(work_dir) # Check for and create working directory if not os.path.isdir(work_dir): os.makedirs(work_dir) # Change to working directory to begin file output os.chdir(work_dir) vp.print('Retrieving tickets') if tickets: # tickets given, query for those. tickets that are explicitly requested # are retrieved regardless as to other options such as since. response = zd.tickets_show_many(ids=','.join([s for s in map(str, tickets)]), get_all_pages=True) # tickets_show_many is not a search, so manually insert 'result_type' for t in response['tickets']: t['result_type'] = 'ticket' results.extend(response['tickets']) if not tickets and not orgs: # No ticket or org given. Get all of the attachments for all of this # user's tickets. q = f'status{status} assignee:{agent} created>{start_time.isoformat()[:10]} {query}' response = zd.search(query=q, get_all_pages=True) results.extend(response['results']) if response['count'] == 0: # No tickets from which to get attachments print("No tickets found for retrieval.") return {} vp.print(f'Located {len(results)} tickets') # Fix up some headers to use for downloading the attachments. # We're going to borrow the zdesk object's httplib client. 
headers = {} if zd.zdesk_email is not None and zd.zdesk_password is not None: basic = base64.b64encode(zd.zdesk_email.encode('ascii') + b':' + zd.zdesk_password.encode('ascii')) headers["Authorization"] = f"Basic {basic}" for i, ticket in enumerate(results): if ticket['result_type'] != 'ticket': del results[i] ticket_nums.append(ticket['id']) # Get the items from the given zendesk tickets for i, ticket in enumerate(results): vp.print(f'Ticket {ticket["id"]}') ticket_dir = os.path.join(work_dir, str(ticket['id'])) ticket_com_dir = os.path.join(ticket_dir, 'comments') comment_num = 0 attach_num = 0 if not os.path.isdir(ticket_dir): os.makedirs(ticket_dir) if js: os.chdir(ticket_dir) with open('ticket.json', 'w') as f: json.dump(ticket, f, indent=2) response = zd.ticket_audits(ticket_id=ticket['id'], get_all_pages=True) audits = response['audits'][::-1] audit_num = len(audits) + 1 results[i]['audits'] = audits for audit in audits: audit_time = audit.get('created_at') audit_num -= 1 for event in audit['events']: comment_num = audit_num comment_dir = os.path.join(ticket_com_dir, str(comment_num)) if js: if not os.path.isdir(comment_dir): os.makedirs(comment_dir) os.chdir(comment_dir) with open('comment.json', 'w') as f: json.dump(event, f, indent=2) if event['type'] == 'Comment' and 'comments' in grab_items: comment_time = event.get('created_at', audit_time) if not comment_time: comment_time = 'unknown time' if os.path.isfile(os.path.join(comment_dir, 'comment.txt')): vp.print(f' Comment {comment_num} already present') else: # Check for and create the download directory if not os.path.isdir(comment_dir): os.makedirs(comment_dir) os.chdir(comment_dir) with open('comment.txt', 'w') as f: if event['public']: visibility = 'Public' else: visibility = 'Private' vp.print(f' Writing comment {comment_num}') f.write(f'{visibility} comment by {event["author_id"]} at {comment_time}') f.write(event['body']) if count > 0 and attach_num >= count: break if 'attachments' not in grab_items or event['type'] != 'Comment': continue for attachment in event['attachments']: attach_num += 1 if count > 0: attach_msg = f' ({attach_num}/{count})' else: attach_msg = f' ({attach_num})' name = attachment['file_name'] if os.path.isfile(os.path.join(comment_dir, name)): vp.print( f' Attachment {name} already present{attach_msg}') continue # Get this attachment vp.print(f' Downloading attachment {name}{attach_msg}') # Check for and create the download directory if not os.path.isdir(comment_dir): os.makedirs(comment_dir) os.chdir(comment_dir) response = zd.client.request('GET', attachment['content_url'], headers=headers) if response.status_code != 200: print(f'Error downloading {attachment["content_url"]}') continue with open(name, 'wb') as f: f.write(response.content) # Check for and create the grabs entry to return if ticket_dir not in grabs: grabs[ticket_dir] = [] grabs[ticket_dir].append( os.path.join('comments', str(comment_num), name)) # Let's try to extract this if it's compressed asplode(name, verbose=verbose) if not ss_present: continue for link in ss_link_pat.findall(event['body']): attach_num += 1 if count > 0: attach_msg = f' ({attach_num}/{count})' else: attach_msg = f' ({attach_num})' ss_files = ssgrab(verbose=verbose, key=ss_id, secret=ss_secret, host=ss_host, link=link, work_dir=comment_dir, postmsg=attach_msg) # Check for and create the grabs entry to return if ss_files and (ticket_dir not in grabs): grabs[ticket_dir] = [] for name in ss_files: grabs[ticket_dir].append( os.path.join('comments', str(comment_num), 
                            name))
                    # Let's try to extract this if it's compressed
                    os.chdir(comment_dir)
                    asplode(name, verbose=verbose)

    if js:
        os.chdir(work_dir)
        with open('tickets.js', "a+") as f:
            # "a+" positions the stream at the end of the file, so seek to
            # the start before reading any previously saved tickets.
            f.seek(0)
            tickets_data = f.read()
            if tickets_data:
                ticketsjs = json.loads(tickets_data)
                if ticketsjs:
                    # Drop stale copies of tickets grabbed in this run;
                    # iterate over a copy since we remove items as we go.
                    for ticket in list(ticketsjs):
                        if ticket['id'] in ticket_nums:
                            ticketsjs.remove(ticket)
                    results.extend(ticketsjs)
            f.seek(0)
            f.write(json.dumps(results, indent=2))
            f.truncate()

    os.chdir(start_dir)

    return grabs


def main(argv=None):
    zdeskcfg.call(_zdgrab, section='zdgrab')
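
# main() is normally invoked via the installed zdgrab command; for ad-hoc
# runs of this file, a hedged sketch:
#
#   if __name__ == '__main__':
#       main()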
zdgrab
/zdgrab-4.1.0.tar.gz/zdgrab-4.1.0/zdgrab.py
zdgrab.py
This package includes version 3.5 of dhtmlxscheduler.

This software is allowed to be used under the GPL. You need to obtain a
Commercial or Enterprise License to use it in non-GPL projects. Please
contact [email protected] for details.

CHANGES
-------

*1.0.2*

- Updating to dhtmlxscheduler 3.5
- Factoring out themes into includable resources

*1.0.1*

- Adding missing dhtmlxscheduler resource bundles - PDF, Week Agenda,
  Offline and Mobile CSS.
- Correcting various resource conditions in debug/no-debug modes
- Fixes issue where source (uncompressed) resources would be loaded in
  debug mode (and vice versa)
- Adding symbolic link in sources folder to point to images directory in
  codebase. Fixes issue with inaccessible image resources in debug mode

PATCHES
-------

Applied patch described here
http://forum.dhtmlx.com/viewtopic.php?f=6&t=13809&start=50
so that dataprocessor works properly in debug mode.

Added locale_recurring.js

Added Kiswahili translations
zdhtmlxscheduler
/zdhtmlxscheduler-1.0.2.tar.gz/zdhtmlxscheduler-1.0.2/README.txt
README.txt
# zdiab-tools

zdiab-tools is a Python library for preprocessing and automating functions for Rasa NLU.

## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install zdiab-tools.

```bash
pip install zdiab-tools
```

## Usage

```python
from zdiab_tools import Automate

### returns a slot definition string
Automate.add_slot(name_form, list_slots)

### updates the config file
Automate.conf_file(path, language='fr', policies=False)

### writes a pickle file
Automate.pickle_action(path)
```

## Functions

```python
#######################################
## Functions for adding via rasa YAML
#######################################
def add_rasa_file(yaml_string, path):
def add_intent(name, list_action):
def add_forms(name, list_slots):
def add_slot(name_form, list_slots):
def add_responses(list_name, list_action):
def add_file(path, text):

#####################
## ALTER functions
#####################
def alter_intent(name, list_action):
def alter_slot(name_form, list_slots):
def alter_forms(name, list_slots):
def alter_responses(list_name, list_action):
def alter_file(path, li, required=False):

############################################
## Function for ADDING NLU DATA ############
############################################
def add_nluData(name_intent, list_exemple, head=False):

############################################
## Function for altering the config FILE ###
############################################
def conf_file(path, language='fr', policies=False):

###############################################
## Functions to create and save the Action FILE
###############################################
def pickle_action(path):
def Action_file(path, filename="test.pkl"):
```

## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

## License
[MIT](https://choosealicense.com/licenses/mit/)
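
## Example

A minimal end-to-end sketch of how these helpers compose. The intent names, example phrases, and file paths below are illustrative placeholders, and the snippet assumes a writable Rasa project in the current directory.

```python
from zdiab_tools import Automate

# Declare two hypothetical intents in the domain file
intent_block = Automate.add_intent('intents', ['greet', 'goodbye'])
Automate.add_file('domain.yml', intent_block)

# Append NLU examples for the hypothetical 'greet' intent, with the file header
nlu_block = Automate.add_nluData('greet', ['bonjour', 'salut'], head=True)
Automate.add_file('data/nlu.yml', nlu_block)
```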
zdiab-tools
/zdiab_tools-0.3.tar.gz/zdiab_tools-0.3/README.md
README.md
from rasa.shared.nlu.training_data.formats.rasa_yaml import RasaYAMLReader, RasaYAMLWriter
import pickle


# Static helpers for generating and editing Rasa project files (domain,
# NLU data, config and action files).
class Automate():
    STR_Policy = """\npolicies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
  - name: MemoizationPolicy
  - name: RulePolicy
  - name: UnexpecTEDIntentPolicy
    max_history: 5
    epochs: 140
"""

    def __init__(self):
        # __init__ must return None; greet instead of returning a value
        print("Great !!!")

    #######################################
    ## Functions for adding via rasa YAML
    #######################################
    @staticmethod
    def add_rasa_file(yaml_string, path):
        """
        Takes a string in YAML format and a path to a file, and writes the
        YAML string to the file.

        :param yaml_string: the YAML-formatted string
        :param path: the path of the file to write to
        """
        reader = RasaYAMLReader()
        training_data = reader.reads(yaml_string)
        writer = RasaYAMLWriter()
        print(writer.dump(path, training_data))

    ## Domain
    ######################################################
    ## Functions for adding **actions, intents, entities**
    ######################################################
    @staticmethod
    def add_intent(name, list_action):
        """
        :param name: the name of the intent
        :param list_action: a list of data
        :return: a string
        """
        text = "\n"
        text += name + ":\n"
        for n in list_action:
            text += "- " + n + "\n"
        return text

    ## Domain
    ###############################
    ## Function for adding forms
    ###############################
    @staticmethod
    def add_forms(name, list_slots):
        """
        Takes a form name and a list of slots and returns a string that can
        be used in the domain.yml file.

        :param name: the name of the form (e.g. "product_form")
        :param list_slots: a list of slots
        :return: a string that can be used in the domain.yml file
        """
        text = "\nforms:\n"
        text += "  " + name + ":\n"
        text += "    required_slots:\n"
        for n in list_slots:
            text += "    - " + n + "\n"
        return text

    ## Domain
    ###############################
    ## Function for adding slots
    ###############################
    @staticmethod
    def add_slot(name_form, list_slots):
        """
        Takes a list of slot names and returns a string that contains the
        YAML code for the slots.

        :param name_form: the name of the form (e.g. "product_form")
        :param list_slots: a list of slots
        :return: a string that contains the YAML code for the slots
        """
        text = "\nslots:\n"
        for n in list_slots:
            text += "  " + n + ":\n"
            text += "    type: text\n"
            text += "    influence_conversation: true\n"
            text += "    mappings:\n"
            text += "    - type: from_text\n"
            text += "      conditions:\n"
            text += "      - active_loop: " + name_form + "\n"
            text += "        requested_slot: " + n + "\n"
        return text

    ## Domain
    ####################################################
    ## Function for **responses** in the domain file
    ####################################################
    @staticmethod
    def add_responses(list_name, list_action):
        """
        Takes two lists of equal length and returns a string that pairs the
        first element of the first list with the first element of the second
        list, the second with the second, and so on.

        :param list_name: a list of utter_... names (e.g. "utter_greet")
        :param list_action: a list of response texts (e.g. "Hello, how can I help you?")
        :return: a string pairing each name with its response text
        """
        text = "\nresponses:\n"
        if len(list_action) == len(list_name):
            # Pair names with responses instead of taking the cross product
            for name, action in zip(list_name, list_action):
                text += "  " + name + ":\n"
                text += "  - text: " + action + "\n"
        else:
            print("the two lists have different lengths")
        return text

    #########################################
    ## Function for writing to a file
    #########################################
    @staticmethod
    def add_file(path, text):
        """
        Appends the given string to the file at the given path.

        :param path: the location of the file, e.g. "C:/Users/user_name/Desktop/test/config.yml"
        :param text: the string to append to the file
        """
        with open(path, "a") as f:
            f.write(text)
        print("file has been modified!!")

    #####################
    ## ALTER functions
    #####################

    ## Domain
    ########################################################
    ## Functions for ALTERING **actions, intents, entities**
    ########################################################
    @staticmethod
    def alter_intent(name, list_action):
        """
        :param name: the name of the intent, action, or entity
        :param list_action: a list of data
        :return: a [name, text] pair for alter_file
        """
        text = ""
        for n in list_action:
            text += "- " + n + "\n"
        return [name, text]

    ## Domain
    ###################################
    ## Function for ALTERING **slots**
    ###################################
    @staticmethod
    def alter_slot(name_form, list_slots):
        """
        Takes a list of slot names and returns a two-element list: the string
        "slots" and a string that contains the YAML code for the slots.

        :param name_form: the name of the form (e.g. "product_form")
        :param list_slots: a list of slots
        :return: a ["slots", text] pair for alter_file
        """
        text = ""
        for n in list_slots:
            text += "  " + n + ":\n"
            text += "    type: text\n"
            text += "    influence_conversation: true\n"
            text += "    mappings:\n"
            text += "    - type: from_text\n"
            text += "      conditions:\n"
            text += "      - active_loop: " + name_form + "\n"
            text += "        requested_slot: " + n + "\n"
        return ["slots", text]

    ## Domain
    ###################################
    ## Function for ALTERING **forms**
    ###################################
    @staticmethod
    def alter_forms(name, list_slots):
        """
        :param name: the name of the form (e.g. "product_form")
        :param list_slots: a list of slots
        :return: a [name, text] pair for alter_file
        """
        text = ""
        for n in list_slots:
            text += "    - " + n + "\n"
        return [name, text]

    ## Domain
    ##########################################################
    ## Function for ALTERING **responses** in the domain file
    ##########################################################
    @staticmethod
    def alter_responses(list_name, list_action):
        """
        Takes two lists as input and returns a two-element list: the string
        "responses" and a string that pairs the contents of the two lists.

        :param list_name: a list of utter_... names (e.g. "utter_greet")
        :param list_action: a list of response texts (e.g. "Bonjour comment je peux vous aider?")
        :return: a ["responses", text] pair for alter_file
        """
        text = ""
        if len(list_action) == len(list_name):
            for i in range(len(list_name)):
                text += "  " + list_name[i] + ":\n"
                text += "  - text: " + list_action[i] + "\n"
        else:
            print("the two lists have different lengths")
        return ["responses", text]

    ## Domain
    ##### Alter FILE
    #############################################################################
    ## Function for MODIFYING intents, actions, entities and responses in a file
    #############################################################################
    @staticmethod
    def alter_file(path, li, required=False):
        """
        Takes a file path, a [name, text] pair, and a boolean. Opens the
        file, reads the lines, and inserts the text at the appropriate place.

        :param path: the path to the file you want to modify
        :param li: a [name, text] pair as returned by the alter_* helpers
        :param required: insert below the section's required_slots line, defaults to False (optional)
        """
        name, text = li
        index = None
        found = False
        with open(path, "r+") as f:
            lines = f.readlines()
            for i, line in enumerate(lines):
                if line.strip() == (name + ":"):
                    found = True
                    index = i + 1  # insert right below the section header
                if found and required and line.strip().startswith("required_slots:"):
                    index = i + 1  # insert below this section's required_slots
                    found = False
            if index is not None:
                lines.insert(index, text)
                f.seek(0)
                f.writelines(lines)
        print("file modified: " + name + " !!")

    ## NLU DATA
    ###############################################
    ## Function for ADDING to /data/nlu.yml
    ###############################################
    @staticmethod
    def add_nluData(name_intent, list_exemple, head=False):
        """
        Builds an NLU data block, optionally preceded by the file header.

        :param name_intent: the name of the intent
        :param list_exemple: a list of examples
        :param head: if True, prepend the nlu data file header, defaults to False (optional)
        :return: a string
        """
        if head:
            text = 'version: "3.0"\nnlu:'
        else:
            text = ""
        text += "\n- intent: " + name_intent + "\n" + "  examples: |\n"
        for n in range(len(list_exemple)):
            text += "    - " + list_exemple[n] + "\n"
        return text

    ## Config
    ##### Alter Config FILE
    #############################################
    ## Function for MODIFYING the config file
    #############################################
    @staticmethod
    def conf_file(path, language='fr', policies=False):
        """
        Opens the config file and replaces the line that starts with
        "language" with the given language.

        :param path: the location of the file, e.g. "C:/Users/nom_user/Desktop/test/config.yml"
        :param language: the language you want to use, defaults to fr (optional)
        :param policies: if True, append the predefined default policies, defaults to False (optional)
        """
        with open(path, "r+") as f:
            data = f.read()
            for line in data.splitlines(keepends=True):
                if line.split(':')[0] == "language":
                    data = data.replace(line, 'language: ' + language + "\n")
            f.seek(0)
            f.write(data)
            f.truncate()  # the new content may be shorter than the old one
        print("file modified: " + language + " !!")
        if policies:
            Automate.add_file(path, Automate.STR_Policy)

    ## Action
    ##### Create and Save Action FILE
    #############################################
    ## Functions for creating the Action file
    #############################################
    @staticmethod
    def pickle_action(path):
        """
        Reads the contents of a file, then writes those contents to a pickle
        file.

        :param path: the path to the file you want to pickle
        """
        with open(path, 'r') as f:
            data = f.read()
        with open('test.pkl', 'wb') as out:
            pickle.dump(data, out)

    @staticmethod
    def Action_file(path, filename="test.pkl"):
        """
        Takes a path and a pickle filename, loads the pickled data, and
        writes the data to the path.

        :param path: the location of the file, e.g. "C:/Users/nom_user/Desktop/test/config.yml"
        :param filename: the name of the pickle file to load, defaults to test.pkl (optional)
        """
        with open(filename, 'rb') as infile:
            data = pickle.load(infile, encoding='latin1')
        with open(path, 'w') as f:
            f.write(data)
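
# --- Usage sketch (illustrative, not part of the original module) ---
# Assumes a local domain.yml that already contains a 'product_form' section;
# the slot names below are placeholders.
if __name__ == "__main__":
    patch = Automate.alter_forms('product_form', ['email', 'phone'])
    Automate.alter_file('domain.yml', patch, required=True)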
zdiab-tools
/zdiab_tools-0.3.tar.gz/zdiab_tools-0.3/zdiab_tools/Automate_func.py
Automate_func.py
========================================
zdict
========================================

|license| |version| |python version| |month download| |stars| |forks| |contributors| |pull requests| |issues| |github_actions| |circleci| |coveralls| |docker build status| |gitter| |pyup status|


**[ ~ Dependencies scanned by PyUp.io ~ ]**

----

zdict is a CLI dictionary framework that mainly focuses on online dictionaries.

This project was originally forked from https://github.com/chenpc/ydict, which is a CLI tool for the Yahoo! online dictionary.
After heavily refactoring the original project, including:

1. Changing from Python 2 to Python 3
2. Focusing on being a flexible framework for any kind of online dictionary, not just a CLI tool for querying the Yahoo! online dictionary.
3. Basing it on an open source project skeleton.

we decided to create a new project.

----

.. contents:: Table of Contents

----


Installation
------------------------------

from `PyPI <https://pypi.org/project/zdict/>`_ :

.. code-block:: sh

    pip install zdict

from `GitHub <https://github.com/zdict/zdict.git>`_ :

.. code-block:: sh

    pip install git+https://github.com/zdict/zdict.git

from `Docker Hub <https://hub.docker.com/r/zdict/zdict/>`_ :

.. code-block:: sh

    # Pull the image of the latest commit on the master branch from Docker Hub
    docker pull zdict/zdict

    # Pull the image of the latest release from Docker Hub
    docker pull zdict/zdict:release

    # Pull the image of a specific release version from Docker Hub
    docker pull zdict/zdict:${version}
    docker pull zdict/zdict:v0.10.0

How to run the zdict docker image:

.. code-block:: sh

    # Run interactive mode
    docker run -it --rm zdict/zdict          # latest commit
    docker run -it --rm zdict/zdict:release  # latest release
    docker run -it --rm zdict/zdict:v0.10.0  # use zdict v0.10.0
    docker run -it --rm zdict/zdict:$tag     # with specific tag

    # Run normal mode
    docker run -it --rm zdict/zdict apple bird          # latest commit
    docker run -it --rm zdict/zdict:release apple bird  # latest release
    docker run -it --rm zdict/zdict:v0.10.0 apple bird  # use zdict v0.10.0
    docker run -it --rm zdict/zdict:$tag apple bird     # with specific tag

    # You can also add the options while using docker run in either interactive mode or normal mode
    docker run -it --rm zdict/zdict:v0.10.0 -dt moe     # use moe dict in interactive mode
    docker run -it --rm zdict/zdict:v0.10.0 -dt moe 哈  # use moe dict in normal mode


Usage
------------------------------

::

    usage: zdict [-h] [-v] [-d] [-t QUERY_TIMEOUT] [-j [JOBS]] [-sp] [-su]
                 [-dt itaigi,moe,moe-taiwanese,spanish,oxford,jisho,yahoo,naer,wiktionary,urban,yandex,all]
                 [-ld] [-V] [-c] [--dump [PATTERN]] [-D]
                 [word [word ...]]

    positional arguments:
      word                  Words for searching its translation

    optional arguments:
      -h, --help            show this help message and exit
      -v, --version         show program's version number and exit
      -d, --disable-db-cache
                            Temporarily not using the result from db cache.
                            (still save the result into db)
      -t QUERY_TIMEOUT, --query-timeout QUERY_TIMEOUT
                            Set timeout for every query. default is 5 seconds.
      -j [JOBS], --jobs [JOBS]
                            Allow N jobs at once. Do not pass any argument to
                            use the number of CPUs in the system.
      -sp, --show-provider  Show the dictionary provider of the queried word
      -su, --show-url       Show the url of the queried word
      -dt itaigi,moe,moe-taiwanese,spanish,oxford,jisho,yahoo,naer,wiktionary,urban,yandex,all, --dict itaigi,moe,moe-taiwanese,spanish,oxford,jisho,yahoo,naer,wiktionary,urban,yandex,all
                            Must be separated by comma and no spaces after each
                            comma. Choose the dictionary you want.
                            (default: yahoo) Use 'all' for querying all
                            dictionaries. If 'all' or more than one dictionary
                            is chosen, --show-provider will be set to True in
                            order to provide more understandable output.
      -ld, --list-dicts     Show currently supported dictionaries.
      -V, --verbose         Show more information for the queried word. (If the
                            chosen dictionary has implemented verbose related
                            functions)
      -c, --force-color     Force color printing (zdict automatically disables
                            color printing when output is not a tty, use this
                            option to force color printing)
      --dump [PATTERN]      Dump the querying history, can be filtered with regex
      -D, --debug           Print raw html prettified by BeautifulSoup for
                            debugging.


Screenshots
------------------------------

`Yahoo Dictionary <http://tw.dictionary.search.yahoo.com/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Normal Mode

``zdict hello``

.. image:: http://i.imgur.com/iFTysUz.png

* Interactive Mode

``zdict``

.. image:: http://i.imgur.com/NtbWXKH.png

`Moe Dictionary 萌典 <https://www.moedict.tw>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: http://i.imgur.com/FZD4HBS.png

.. image:: http://i.imgur.com/tF2S98h.png

`Urban Dictionary <http://www.urbandictionary.com/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: http://i.imgur.com/KndSJqz.png

.. image:: http://i.imgur.com/nh62wi1.png

`SpanishDict <http://www.spanishdict.com/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: http://i.imgur.com/Ld2QVvP.png

.. image:: http://i.imgur.com/HJ9h5JO.png

`Jisho Japanese Dictionary <http://jisho.org/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: http://i.imgur.com/63n3qmH.png

.. image:: http://i.imgur.com/UMP8k4e.png

`Yandex Translate <https://translate.yandex.com/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: https://user-images.githubusercontent.com/2716047/29741879-ca1a3826-8a3a-11e7-9701-4a7e9a15971a.png

`Oxford Dictionary <https://en.oxforddictionaries.com/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: http://i.imgur.com/VkPEfKh.png

To use this source, you should first `apply for <https://developer.oxforddictionaries.com/>`_ an API key and place it under ``~/.zdict/oxford.key`` in the format::

    app_id, app_key

`Wiktionary <https://en.wiktionary.org/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: https://i.imgur.com/5OzIFU3.png

.. image:: https://i.imgur.com/UO5nQjU.png

`iTaigi-愛台語 <https://itaigi.tw/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: https://user-images.githubusercontent.com/1645228/55309799-656acd00-5491-11e9-9d79-4ae578c83f8b.jpg

.. image:: https://user-images.githubusercontent.com/1645228/55309820-7582ac80-5491-11e9-998d-51ebfb183375.jpg

`國家教育研究院 - 雙語詞彙、學術名詞暨辭書資訊網 <https://terms.naer.edu.tw/>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. image:: https://user-images.githubusercontent.com/1645228/86770837-e6951480-c083-11ea-95f2-51b1e6f7e04f.jpg

.. image:: https://user-images.githubusercontent.com/1645228/86770828-e432ba80-c083-11ea-813a-e357f213826a.jpg


Development & Contributing
---------------------------

Testing
^^^^^^^^

During development, you can install our project as *editable*.
If you use ``virtualenv``, you may want to create a new environment for ``zdict``::

    $ git clone https://github.com/zdict/zdict.git
    $ cd zdict
    $ pip install -e .

Once you have installed it with the command above, just execute ``zdict`` after modification. There is no need to install it again.
Install the packages for testing::

    $ pip install -r requirements-test.txt

or::

    $ make install-test-deps

Use the command below to execute the tests::

    $ py.test

or::

    $ make test

After running the tests, we get a coverage report in HTML. We can browse it::

    $ cd htmlcov
    $ python -m http.server

Also, there are some configs for ``py.test`` in ``setup.cfg``. Change them if you need to.

Debugging
^^^^^^^^^^

``py.test`` can drop into a ``pdb`` shell when a test case fails::

    $ py.test --pdb

or::

    $ make test-with-pdb

Bug Report
^^^^^^^^^^^

Feel free to send a bug report to https://github.com/zdict/zdict/issues. Please attach the error message and describe how to reproduce the bug. PRs are also welcome.

Please use the ``-d/--disable-db-cache`` option to query before sending the bug report. Sometimes we modify the data schema in the database for a dictionary, but the default dictionary query of zdict uses the cache in the database, which may be stored with an old schema. This might cause an error while showing the result. Just use ``-d/--disable-db-cache`` to update the cache in the database.

Related Projects
------------------------------

* `zdict.vim <https://github.com/zdict/zdict.vim>`_
    * A vim plugin that integrates with zdict.
* `zdict.sh <https://github.com/zdict/zdict.sh>`_
    * A collection of shell completion scripts for zdict.
* `zdict_jupyter <https://github.com/zdict/zdict_jupyter>`_
    * Use zdict in Jupyter Notebook.

.. |version| image:: https://img.shields.io/pypi/v/zdict.svg
   :target: https://pypi.org/project/zdict

.. |python version| image:: https://img.shields.io/pypi/pyversions/zdict.svg
   :target: https://pypi.org/project/zdict

.. |month download| image:: https://img.shields.io/pypi/dm/zdict.svg
   :target: https://pypi.org/project/zdict

.. |stars| image:: https://img.shields.io/github/stars/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/

.. |forks| image:: https://img.shields.io/github/forks/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/

.. |contributors| image:: https://img.shields.io/github/contributors/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/graphs/contributors

.. |pull requests| image:: https://img.shields.io/github/issues-pr/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/pulls

.. |issues| image:: https://img.shields.io/github/issues/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/issues

.. |github_actions| image:: https://github.com/zdict/zdict/workflows/macOS%20testings/badge.svg
   :target: https://github.com/zdict/zdict/actions

.. |circleci| image:: https://circleci.com/gh/zdict/zdict.svg?style=svg
   :target: https://circleci.com/gh/zdict/zdict

.. |license| image:: https://img.shields.io/github/license/zdict/zdict.svg
   :target: https://github.com/zdict/zdict/blob/master/LICENSE.md

.. |gitter| image:: https://badges.gitter.im/Join%20Chat.svg
   :alt: Join the chat at https://gitter.im/zdict/zdict
   :target: https://gitter.im/zdict/zdict

.. |coveralls| image:: https://coveralls.io/repos/zdict/zdict/badge.svg
   :target: https://coveralls.io/github/zdict/zdict

.. |docker build status| image:: https://img.shields.io/docker/cloud/build/zdict/zdict
   :target: https://hub.docker.com/r/zdict/zdict

.. |pyup status| image:: https://pyup.io/repos/github/zdict/zdict/shield.svg
   :target: https://pyup.io/repos/github/zdict/zdict/
   :alt: pyup.io badge
zdict
/zdict-5.0.1.tar.gz/zdict-5.0.1/README.rst
README.rst
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. 
"This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. 
For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. 
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. 
Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". 
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. 
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. {one line to give the program's name and a brief idea of what it does.} Copyright (C) {year} {name of author} This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: {project} Copyright (C) {year} {fullname} This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
zdict
/zdict-5.0.1.tar.gz/zdict-5.0.1/LICENSE.md
LICENSE.md
import hashlib import binascii import os import io import datetime import zipfile import shutil def mkdir(path): try: os.makedirs(path) except: pass def generate_key(name, content, shard=2): try: sha1 = hashlib.sha1(content).digest() except: sha1 = hashlib.sha1( to_bytes(content) ).digest() key = binascii.hexlify(sha1) if shard: return "%s/%s" % (key[:shard], key) return key class Processor(object): def __init__(self): self.steps = [] def step(self, fun): self.steps.append(fun) def process(self, value): x = value for f in self.steps: x = f(x) return x def to_bytes(content): if type(content) == type(""): return content if type(content) == type(u""): return content.encode("utf-8") return str(content) def to_unicode(value): if type(value) == type(u""): return value if type(value) == type(""): return value.decode("utf-8") return unicode(value) class Bin(object): def __init__(self, path): self.path = path mkdir(path) self.unicode = Processor() self.unicode.step(to_unicode) self.unicode.step(lambda x: x.replace("\t", "\\t")) self.unicode.step(lambda x: x.replace("\n", "\\n")) self.bytes = Processor() self.bytes.step(to_bytes) def log(self, logname, *args): logpath = os.path.join(self.path, logname) args = [self.unicode.process(x) for x in args] r = u"%s\t%s\n" % ( datetime.datetime.utcnow().isoformat(), "\t".join(args)) with io.open(logpath, "a", encoding="utf-8") as f: f.write(r) def put(self, name, mime, content): key = generate_key(name, content) p = os.path.join(self.path, key) mkdir("/".join(p.split("/")[0:-1])) self.log("logfile.txt", key, name, mime, len(content)) if not os.path.exists(p): with io.open(p, "wb") as f: f.write(self.bytes.process(content)) def comment(self, content): self.log("comments.txt", content) @property def items(self): with io.open(os.path.join(self.path, "logfile.txt")) as f: t = [parse_logline(self.path, x) for x in f if x.strip()] return t def parse_logline(bin, line): d, k, n, m, l = line.split("\t") return ( os.path.join(bin, k), datetime.datetime.strptime(d, "%Y-%m-%dT%H:%M:%S.%f"), n, m, int(l.strip()), ) def tree(p): for root, dirs, files in os.walk(p): for fn in files: yield os.path.join(root, fn) class Z(object): def __init__(self, base): self.base = base mkdir(base) self.bin = self.find_bin() self.items_cache = {} def find_bin(self): for x in os.listdir(self.base): if x.endswith(".zdir"): return Bin(os.path.join(self.base, x)) return None def create_bin(self): k = "%s-%s.zdir" % ( datetime.datetime.utcnow().strftime("%Y%m%d"), binascii.hexlify(os.urandom(2)) ) self.bin = Bin(os.path.join(self.base, k)) return self.bin def finalize(self): if self.bin is None: return n = self.bin.path + ".zip" tempname = os.path.join(self.base, binascii.hexlify(os.urandom(6))) os.rename(self.bin.path, tempname) self.bin = None with zipfile.ZipFile(n, 'w') as f: for g in tree(tempname): name = g[len(tempname):] f.write(g, name) shutil.rmtree(tempname) with io.open(n, 'rb') as f: zip_sha = hashlib.sha1( f.read()).digest() nn = n.split(".")[0].split("/")[-1] zip_sha_hex = binascii.hexlify(zip_sha) name = n.replace(nn, zip_sha_hex) os.rename(n,name) def put(self, name, mime, content): if self.bin is None: self.bin = self.create_bin() self.bin.put(name, mime, content) def comment(self, content): if self.bin is None: self.bin = self.create_bin() self.bin.comment(content) @property def items(self): results = [] items = [os.path.join(self.base, x) for x in os.listdir(self.base) if ".zdir" in x] for z in [x for x in items if x.endswith(".zip")]: with zipfile.ZipFile(z) as f: t = 
[parse_logline(z, x) for x in f.read( "logfile.txt").split("\n") if x.strip()] results.extend(t) if self.bin: results.extend(self.bin.items) return results def filter(self, fun): for x in self.items: if fun(x): yield x, self.read(x[0]) def read(self, key): ext = ".zip" if ext in key: i = key.index(ext) fn, kn = key[:i+len(ext)], key[i+len(ext)+1:] with zipfile.ZipFile(fn) as f: data = f.read(kn).decode("utf-8") else: with io.open(key, encoding="utf-8") as f: data = f.read() return data
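For orientation, a minimal sketch (not part of the library) of the content addressing that `generate_key` and `Bin.put` implement: keys are the SHA-1 of the content, sharded by the first two hex characters, which produces the `aa/aabb…` paths visible in the README further down. The file names and content below are made up.

```python
import zdir

# sha1("hello world") = 2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
key = zdir.generate_key("example.txt", "hello world")
print(key)  # -> 2a/2aae6c35c94fcfb415dbe95f408b9ce91ee846ed

# The key depends only on the content, not the name, so identical
# payloads are stored only once per bin.
assert key == zdir.generate_key("another-name.txt", "hello world")
```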
zdir
/zdir-0.0.5.tar.gz/zdir-0.0.5/zdir.py
zdir.py
# zdir

Library for handling many small files. Packs files into a directory, naming them by the hash of their content.

## Installation

    pip install zdir

## Usage

```python
# coding=utf8
import zdir

# for testing
import datetime
import os

# test directory
pwd = "/home/oskar/test/zdir"

# create a new zdir
z = zdir.Z(pwd)

# store some files
for i in range(500):
    z.put(
        name="file-%s" % i,
        mime="test-%s/mime" % i,
        content="Data: %d\n%s" % (i, datetime.datetime.utcnow().isoformat()))

print os.listdir(pwd)
#>>> ['20151015-989c.zdir']

# create zip
z.finalize()
print os.listdir(pwd)
#>>> ['20151015-989c.zdir.zip']

# Get list of stored files (or at least 3 of them)
# Path, created date, name, mime, length
for x in z.items[0:3]:
    print x
# >>> ('/home/oskar/test/zdir/20151015-989c.zdir.zip/a3/a397f24e44d677c1a79ea02e07ffe75bb8b1bf8d', datetime.datetime(2015, 10, 15, 19, 28, 55, 710758), 'file-0', 'test-0/mime', 34)
# >>> ('/home/oskar/test/zdir/20151015-989c.zdir.zip/79/79625e9b86892b74745d9ad458dd90e18f509d04', datetime.datetime(2015, 10, 15, 19, 28, 55, 711143), 'file-1', 'test-1/mime', 34)
# >>> ('/home/oskar/test/zdir/20151015-989c.zdir.zip/77/7776bd6fe6891a1bbbda7e26bea3bb8527f079b1', datetime.datetime(2015, 10, 15, 19, 28, 55, 711446), 'file-2', 'test-2/mime', 34)

# Read specific file
name = z.items[0][0]
print "Name:", name
# >>> Name: /home/oskar/test/zdir/20151015-989c.zdir.zip/a3/a397f24e44d677c1a79ea02e07ffe75bb8b1bf8d
print z.read(name)
# >>> Data: 0
# >>> 2015-10-15T19:28:55.710473

# Read a bunch of files
for name, blob in z.filter(lambda x: "40" in x[2]):
    print name
    print blob
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/2c/2c8190ad51ed4b429aeeca995df588b288f48e1c', datetime.datetime(2015, 10, 15, 19, 32, 41, 272739), 'file-40', 'test-40/mime', 35)
# Data: 40
# 2015-10-15T19:32:41.272644
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/85/8515d2680d148a83ea0c6178cd50acc8ad280496', datetime.datetime(2015, 10, 15, 19, 32, 41, 294231), 'file-140', 'test-140/mime', 36)
# Data: 140
# 2015-10-15T19:32:41.294089
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/7a/7aca8dbe5fd22f01db7fc92b9b00cd5749263baa', datetime.datetime(2015, 10, 15, 19, 32, 41, 312538), 'file-240', 'test-240/mime', 36)
# Data: 240
# 2015-10-15T19:32:41.312478
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/a6/a647d77c2bbe3f25c0ef564c0041e99e2342aee6', datetime.datetime(2015, 10, 15, 19, 32, 41, 329076), 'file-340', 'test-340/mime', 36)
# Data: 340
# 2015-10-15T19:32:41.329020
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/04/0465a228df1f606aefd071dc6df3d6f58c36a7a7', datetime.datetime(2015, 10, 15, 19, 32, 41, 338529), 'file-400', 'test-400/mime', 36)
# Data: 400
# 2015-10-15T19:32:41.338474
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/b4/b4654281829673bb0644af2c83329328d979e629', datetime.datetime(2015, 10, 15, 19, 32, 41, 338679), 'file-401', 'test-401/mime', 36)
# Data: 401
# 2015-10-15T19:32:41.338625
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/92/92beadaed1275b17079ddf5e39e510fabfeb818a', datetime.datetime(2015, 10, 15, 19, 32, 41, 338825), 'file-402', 'test-402/mime', 36)
# Data: 402
# 2015-10-15T19:32:41.338772
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/d3/d362be42dce169331b8fc2d90ebcd43607df00de', datetime.datetime(2015, 10, 15, 19, 32, 41, 338972), 'file-403', 'test-403/mime', 36)
# Data: 403
# 2015-10-15T19:32:41.338918
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/49/492316c4925100193a8a0bc4e2a17e8fbf00add7', datetime.datetime(2015, 10, 15, 19, 32, 41, 339119), 'file-404', 'test-404/mime', 36)
# Data: 404
# 2015-10-15T19:32:41.339065
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/0c/0cea8dbc4dd1f37610c0ebf10be2ebc2795410b0', datetime.datetime(2015, 10, 15, 19, 32, 41, 339267), 'file-405', 'test-405/mime', 36)
# Data: 405
# 2015-10-15T19:32:41.339213
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/9d/9d227feb2850bf0b6e116407303bf8118dfce1ae', datetime.datetime(2015, 10, 15, 19, 32, 41, 339415), 'file-406', 'test-406/mime', 36)
# Data: 406
# 2015-10-15T19:32:41.339361
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/8f/8f5a321971eaa27c4a9c0d3c1990fcb8790f9926', datetime.datetime(2015, 10, 15, 19, 32, 41, 339562), 'file-407', 'test-407/mime', 36)
# Data: 407
# 2015-10-15T19:32:41.339509
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/96/96a32b2e3988a8e2119cf9a68d529be8a531bc76', datetime.datetime(2015, 10, 15, 19, 32, 41, 339710), 'file-408', 'test-408/mime', 36)
# Data: 408
# 2015-10-15T19:32:41.339656
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/1e/1ee0d9e154fe7c48e8824e20cd5b027632a2b9ba', datetime.datetime(2015, 10, 15, 19, 32, 41, 339857), 'file-409', 'test-409/mime', 36)
# Data: 409
# 2015-10-15T19:32:41.339803
# ('/home/oskar/test/zdir/20151015-df9b.zdir.zip/cf/cfd7f535085933b20b29af65e004163e929b3830', datetime.datetime(2015, 10, 15, 19, 32, 41, 344571), 'file-440', 'test-440/mime', 36)
# Data: 440
# 2015-10-15T19:32:41.344498
```
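The bin also keeps a free-form, timestamped comment log next to the stored files. A small addition to the session above, using only the `comment` method from the source (the comment text is invented):

```python
# Each call appends one timestamped line to comments.txt in the active bin.
z.comment("batch import finished")
z.comment(u"unicode is fine too")
```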
zdir
/zdir-0.0.5.tar.gz/zdir-0.0.5/README.md
README.md
import datetime import json import logging import time from .constants import MQTT_PREFIX_REQ_DEV, MQTT_PREFIX_JOB, MQTT_PREFIX_STRONG_PRIVATE_STATUS, MQTT_KEY_FOTA from .mqtt import MQTTClient from ..logging import ZdmLogger import os logger = ZdmLogger().get_logger() from .credentials import Config from .credentials import Credentials class ZDMClient: """ ================ The ZDMClient class ================ .. class:: ZDMClient(cred=None, cfg=None, jobs_dict={}, condition_tags=[], on_timestamp=None, on_open_conditions=None, verbose=False) Creates a ZDM client instance. * :samp:`cred` is the object that contains the credentials of the device. If None the configurations are read from zdevice.json file. * :samp:`cfg` is the object that contains the mqtt configurations. If None set the default configurations. * :samp:`jobs_dict` is the dictionary that defines the device's available jobs (default None). * :samp:`condition_tags` is the list of condition tags that the device can open and close (default []). * :samp:`verbose` boolean flag for verbose output (default False). * :samp:`on_timestamp` callback called when the ZDM responds to the timestamp request. on_timestamp(client, timestamp) * :samp:`on_open_conditions` callback called when the ZDM responds to the open conditions request. on_open_conditions(client, conditions) """ def __init__(self, cred=None, cfg=None, jobs_dict={}, condition_tags=[], on_timestamp=None, on_open_conditions=None, verbose=False): if verbose: logger.setLevel(logging.DEBUG) # get configuration self._cfg = Config() if cfg is None else cfg self._creds = Credentials(os.getcwd()) if cred is None else cred self.mqttClient = MQTTClient(mqtt_id=self._creds.device_id, clean_session=self._cfg.clean_session) self._set_mqtt_credentials(self.mqttClient) self.jobs = jobs_dict self.condition_tags = condition_tags self._on_timestamp = on_timestamp self._on_open_conditions = on_open_conditions self.data_topic = '/'.join(['j', 'data', self._creds.device_id]) self.up_topic = '/'.join(['j', 'up', self._creds.device_id]) self.dn_topic = '/'.join(['j', 'dn', self._creds.device_id]) def id(self): """ .. method:: id() Return the device id. """ return self._creds.device_id def connect(self): """ .. method:: connect() Connect your device to the ZDM. """ for i in range(5): try: logger.info("ZDMClient.connect attempt {} ".format(i)) self.mqttClient.connect(host=self._creds.endpoint, port=self._creds.port, keepalive=self._cfg.keepalive) break except Exception as e: logger.error("ZDMClient.connect", e) pass time.sleep(2) if not self.mqttClient.connected: raise Exception("Failed to connect") self._subscribe_down() self.request_status() self._send_manifest() def publish(self, payload, tag): """ .. method:: publish(payload, tag) Publish a message to the ZDM. * :samp:`payload` is a dictionary containing the payload. * :samp:`tag` is the tag associated to the published payload. """ topic = self._build_ingestion_topic(tag) self.mqttClient.publish(topic, payload) def request_status(self): self._send_up_msg(MQTT_PREFIX_REQ_DEV, "status") logger.debug("Status requested") def request_timestamp(self): """ .. method:: request_timestamp() Request the timestamp to the ZDM. When the timestamp is received, the callback :samp:`on_timestamp` is called. """ self._send_up_msg(MQTT_PREFIX_REQ_DEV, "now") logger.debug("Timestamps requested") def request_open_conditions(self): """ .. method:: request_open_conditions() Request all the open conditions of the device not yet closed. 
When the open conditions are received, the callback :samp:`on_open_conditions` is called. """ self._send_up_msg(MQTT_PREFIX_REQ_DEV, "conditions") def new_condition(self, condition_tag): """ .. method:: new_condition(condition_tag) Create and return a new condition. * :samp:`condition_tag` the tag as string of the new condition. """ if condition_tag in self.condition_tags: return Condition(self, condition_tag) else: raise Exception( "Condition tag '{}' not found. Please initialize condition tag in the constructor.".format( condition_tag)) def _set_mqtt_credentials(self, client): self._creds.configure_mqtt_client(client) def _handle_dn_msg(self, client, data, msg): try: payload = json.loads(msg.payload.decode("utf-8")) logger.debug("ZdmClient._handle_dn_msg receive message: {}".format(payload)) if "key" not in payload: raise Exception( "The key is not present into the message {}".format(payload)) if "value" not in payload: raise Exception( "The value is not present into the message {}".format(payload)) method = payload["key"] value = payload["value"] if method.startswith(MQTT_PREFIX_JOB): delta_method = method[1:] self._handle_job_request(delta_method, value) elif method.startswith(MQTT_PREFIX_REQ_DEV): delta_method = method[1:] self._handle_delta_request(delta_method, value) else: print("zlib_zdm.Device.handle_dn_msg received custom key") # TODO: mange the custom key, with callback ?? except Exception as e: logger.error("Error", e) def _handle_job_request(self, job, args): if "args" in args: args = args["args"] else: logger.warning("ZdmClient.handle_dn_msg args key not present.") if job == 'fota': logger.error("FOTA is not supported on ZdmClient") self._reply_job(job, {"error": "FOTA is not supported in the zdm client py."}) elif job == 'reset': logger.error("Reset is not supported on ZdmClient") self._reply_job(job, {"error": "Reset is not supported in the zdm client py."}) elif job in self.jobs: try: res = self.jobs[job](self, args) logger.info("Job {} executed with result res: {}".format(job, res)) self._reply_job(job, res) except Exception as e: print("zlib_zdm.Device.handle_job_request", e) res = 'exception' self._reply_job(job, {"error": str(e)}) else: print("zlib_zdm.Device.handle_job_request invalid job request") self._reply_job(job, {"error": "Job {} not supported".format(job)}) def _handle_delta_request(self, delta_key, args): if delta_key == 'status': self._handle_delta_status(args) elif delta_key == 'now': self._handle_delta_timestamp(args) elif delta_key == 'conditions': self._handle_delta_conditions(args) else: print("zlib_zdm.Device.handle_delta_request received user-defined delta") # TODO pass custom delta_key and arg to user callback? def _handle_delta_timestamp(self, arg): if self._on_timestamp is None: logger.error("to ask timestamp, you must initialize [on_timestamp] function first") raise Exception("No timestamp callback initialized") else: self._on_timestamp(self, arg) def _handle_delta_status(self, arg): logger.debug("Received a delta status. 
Msg:{}".format(arg)) if ('expected' in arg) and (arg['expected'] is not None): if MQTT_KEY_FOTA in arg['expected']: logger.warning("FOTA is not supported on ZdmClient") else: # handle other keys for expected_key in arg['expected']: value = arg['expected'][expected_key]['v'] if expected_key.startswith(MQTT_PREFIX_JOB): delta_method = expected_key[1:] self._handle_job_request(delta_method, value) else: logger.warning( "ZdmClient._handle_delta_status expected key '{}' not recognized ".format(expected_key)) # TODO: what to do if the expected key if not a job ? whey the zdm lib save it into the expected ? # self.expected.update({expected_key: value}) # TODO check if ('current' in arg) and (arg['current'] is not None): where the current status contains something to do... def _handle_delta_conditions(self, open_conditions): op_conditions = [] # {'1593073070356.4473': {'tag': 'epspplzzjz', 'start': '2020-06-25T08:17:50Z'}, for uuid, value in open_conditions.items(): if "tag" in value: c = Condition(self, value['tag']) c.uuid = uuid if 'start' in value: c.start = value['start'] else: logger.warning("Start time not set in condition {}".format(uuid)) op_conditions.append(c) else: raise Exception("Bad open condition received. No tag present") if self._on_open_conditions is None: raise Exception("Open Conditions callback is not defined.") else: self._on_open_conditions(self, op_conditions) def _reply_job(self, key, value): self._send_up_msg(MQTT_PREFIX_JOB, key, value) def _subscribe_down(self): logger.debug("ZdmClient._subscribe_down subscribed to topic: {}".format(self.dn_topic)) self.mqttClient.subscribe(self.dn_topic, callback=self._handle_dn_msg) def _send_manifest(self): value = { 'jobs': [k for k in self.jobs], 'conditions': self.condition_tags } self._send_up_msg(MQTT_PREFIX_STRONG_PRIVATE_STATUS, "manifest", value) def _send_up_msg(self, prefix, key, value={}): msg = { 'key': prefix + key, 'value': value } logger.info(msg) self.mqttClient.publish(self.up_topic, msg) # @deprecated method. Use the send_up_msg def _publish_up(self, payload): topic = self.up_topic self.mqttClient.publish(topic, payload) logger.debug("Msg published on UP topic correctly. Msg: {}, topic:{}".format(payload, topic)) def _build_ingestion_topic(self, tag): # build the topic for the ingestion topic # ex. data/<deviceid>/<TAG>/ return '/'.join([self.data_topic, tag]) class Condition: """ ==================== The Conditions class ===================== .. class:: Condition(client, tag) Creates a Condition on a tag. * :samp:`client` is the object ZDMClient object used to open and close the condition. * :samp:`tag` is the tag associated with the condition. """ def __init__(self, client, tag): self.uuid = self._gen_uuid() self.tag = tag self.client = client self.start = None self.finish = None def get_id(self): return str(self.uuid) def get_tag(self): return self.tag def get_start(self): return str(self.start) def get_finish(self): return str(self.finish) def open(self, payload=None, start=None): """ .. method:: open(payload=None, start=None) Open the condition. * :samp:`payload`, a dictionary for associating additional data to the opened condition (default None). * :samp:`start`, a date time (rfc3339) used to set the start time, If None the current timestamp is used (default None). 
""" if start is None: d = datetime.datetime.utcnow() self.start = d.isoformat("T") + "Z" else: self.start = start value = { 'uuid': self.get_id(), 'tag': self.get_tag(), 'payload': payload, 'start': self.get_start(), } self.client._send_up_msg('', 'condition', value) def close(self, payload=None, finish=None): """ .. method:: close(payload=None, finish=None) Close the condition. * :samp:`payload`, a dictionary for associating additional data to the closed condition. Default None. * :samp:`finish`, a date time (rfc3339) used to set the finish time of the condition. If None the current timestamp is used. Default None. """ if finish is None: d = datetime.datetime.utcnow() self.finish = d.isoformat("T") + "Z" else: self.finish = finish value = { 'uuid': self.get_id(), 'payloadf': payload, 'finish': self.get_finish() } self.client._send_up_msg('', 'condition', value) def reset(self): """ .. method:: reset() Reset the condition by generatung a new id and resetting the start and end time. """ self.uuid = self._gen_uuid() self.start = None self.finish = None def _gen_uuid(self): return str(time.time() * 1000.0) def is_open(self): """ .. method:: is_open() Return True if the condition is open. False otherwise. """ return self.start is not None and self.finish is None def __str__(self): return "Condition (id={}, tag={}, start={}, finish={})".format(self.uuid, self.tag, self.start, self.finish)
zdm-client-py
/zdm_client_py-1.0.1-py3-none-any.whl/zdm/device/zdmclient.py
zdmclient.py
import base64 import json import os import time import tempfile import ssl import jwt from ..logging import ZdmLogger logger = ZdmLogger().get_logger() # Load zdevice.json file def load_zdevice(root_zdevice, file="zdevice.json"): path = os.path.join(root_zdevice, file) logger.info("Reading credential file: '{}'".format(path)) with open(path) as ff: content = json.load(ff) return content def default_time_func(): return int(time.time()) class Credentials(): def __init__(self, root_zdevice): if isinstance(root_zdevice, dict): logger.debug("Reading zdevice.json from dict: {}".format(root_zdevice)) nfo = root_zdevice else: try: nfo = load_zdevice(root_zdevice) except Exception as e: logger.error("Can't load device provisioning info. {}".format( e)) raise e logger.debug("Credential file: {}'".format(nfo)) self.device_id = nfo["devinfo"]["device_id"] self.mode = nfo["devinfo"]["mode"] self.secret = nfo["prvkey"] self.endpoint = nfo["endpoint"]["host"] self.port = nfo["endpoint"]["port"] self.key_id = nfo["devinfo"].get("key_id", 0) self.key_type = nfo["devinfo"].get("key_type", "sym") self.endpoint_mode = nfo["endpoint"].get("mode", "secure") logger.info("Credential type: '{}' Endpoint mode: '{}'".format(self.mode, self.endpoint_mode)) if self.endpoint_mode == "secure": if self.mode == "device_token" or self.mode == "cloud_token": # Save ca_cert into temporary file cacert = nfo.get("cacert", "") if cacert != "": self.caCertPath = self._save_to_tempfile(cacert) else: raise Exception("Empty ca cert file") else: # Save cli_cert and prvkey into temporary files clicert = nfo.get("clicert", "") if clicert != "": self.cliCertPath = self._save_to_tempfile(clicert) if self.secret != "": self.prvKeyClientPath = self._save_to_tempfile(self.secret) def configure_mqtt_client(self, client): if self.endpoint_mode == "secure": if self.mode=="device_token" or self.mode=="cloud_token": logger.debug("Reading Ca cert path: {}".format(self.caCertPath)) client.client.tls_set( ca_certs=self.caCertPath, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2 ) else: logger.debug("Reading Ca Cert path: {}".format(self.caCertPath)) logger.debug("Reading Client cert from path: {} ".format(self.cliCertPath)) logger.debug("Reading Private Key cert from path: {} ".format(self.prvKeyClientPath)) # options=ssl.CERT_REQUIRED | ssl.SERVER_AUT client.client.tls_set( ca_certs=self.caCertPath, cert_reqs=ssl.CERT_REQUIRED, certfile=self.cliCertPath, keyfile=self.prvKeyClientPath, tls_version=ssl.PROTOCOL_TLSv1_2, ) if self.mode == "cloud_token": client.set_username_pw(self.device_id, self.secret) elif self.mode == "device_token": token = self.generate_token() client.set_username_pw(self.device_id, token) else: logger.debug("Unsupported mode") raise Exception("Unsupported mode") def generate_token(self): # get current timestamp ts = default_time_func() payload = {"sub": self.device_id, "key": self.key_id, "exp": ts + 3600} # encode token secret = base64.b64decode(self.secret) token = jwt.encode(payload, secret, 'HS256' if self.key_type == "sym" else "ES256") logger.debug("Generated Jwt Token: '{}'".format(token)) return token def _save_to_tempfile(self, content): path = os.path.join(tempfile.gettempdir(), os.urandom(24).hex()) logger.debug("Saving '{}' to file path '{}'".format(content, path)) with open(path, 'w') as fp: fp.write(content) fp.flush() return path class Config(): def __init__(self, keepalive=60): self.keepalive = keepalive # self.cycle_timeout=cycle_timeout # self.command_timeout=command_timeout self.sock_keepalive 
= [1, 10, 5] self.clean_session = True self.qos_publish = 0 self.qos_subscribe = 1
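`Credentials` also accepts a plain dict with the same shape as `zdevice.json`. A sketch with placeholder values, mirroring the keys the constructor reads:

```python
# All values are placeholders; the layout matches what __init__ expects.
info = {
    "devinfo": {
        "device_id": "dev-123",   # placeholder device id
        "mode": "device_token",   # or "cloud_token"
        "key_id": 0,
        "key_type": "sym",        # "sym" -> HS256 JWT, anything else -> ES256
    },
    "prvkey": "bXktYmFzZTY0LXNlY3JldA==",  # base64-encoded secret (placeholder)
    "endpoint": {
        "host": "mqtt.example.com",  # placeholder host
        "port": 8883,
        "mode": "secure",            # secure mode with token auth requires "cacert"
    },
    "cacert": "-----BEGIN CERTIFICATE-----\n...",  # placeholder PEM
}

creds = Credentials(info)
token = creds.generate_token()  # JWT signed with the secret, valid for 3600 s
```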
zdm-client-py
/zdm_client_py-1.0.1-py3-none-any.whl/zdm/device/credentials.py
credentials.py
import json import paho.mqtt.client as mqtt from ..logging import ZdmLogger from .constants import MQTT_PREFIX_JOB, MQTT_PREFIX_REQ_DEV, MQTT_PREFIX_PRIVATE_STATUS, MQTT_PREFIX_STRONG_PRIVATE_STATUS import sys logger = ZdmLogger().get_logger() class MQTTClient: def __init__(self, mqtt_id, clean_session=False, ssl_ctx=None): self.client = mqtt.Client(mqtt_id, clean_session=clean_session) self.ssl_ctx = ssl_ctx self.client.on_connect = self.on_connect self.client.on_disconnect = self.on_disconnect self.client.on_message = self.on_message self.client.on_publish = self.on_publish self._ready_msg = {} # used only for caching the messages to be sent, and print the when they are effectively sent to the broker self.connected = False def set_username_pw(self, username, password): self.client.username_pw_set(username=username, password=password) def connect(self, host, port=1883, keepalive=60): self.client.connect(host, port=port, keepalive=keepalive) self.client.loop_start() logger.info("Connecting to: {}:{}".format(host, port)) def on_connect(self, client, userdata, flags, rc): # self.connected = True # 0: Connection successful # 1: Connection refused - incorrect protocol version # 2: Connection refused - invalid client identifier # 3: Connection refused - server unavailable # 4: Connection refused - bad username or password # 5: Connection refused - not authorised 6-255: Currently unused. logger.debug("On connect flags:{}, rc:{}".format(flags, mqtt.error_string(rc))) if rc == 0: self.connected = True logger.info("Successfully connected.") else: self.connected = False logger.error("Error in connection. Returned code={}".format(rc)) def on_disconnect(self, client, userdata, rc): logger.info("On disconnect rc:{}".format(rc)) if rc != 0: logger.error("Unexpected disconnection. Return code={}".format(rc)) else: logger.warning("Client disconnected after disconnect() is called. Return code={}".format(rc)) # TODO; call loop_stop() ?? # loop_stop def publish(self, topic, payload=None, qos=1): if isinstance(payload, dict): payload_str = json.dumps(payload) else: payload_str = payload try: ret = self.client.publish(topic, payload_str, qos=qos) self._ready_msg[ret.mid] = payload except Exception as e: logger.error("Error" + e) def on_publish(self, client, userdata, mid): payload = self._ready_msg[mid] if "key" in payload: k = payload["key"] if k.startswith((MQTT_PREFIX_JOB, MQTT_PREFIX_REQ_DEV, MQTT_PREFIX_PRIVATE_STATUS, MQTT_PREFIX_STRONG_PRIVATE_STATUS)): logger.debug("Publish message: {}".format(payload)) else: logger.info("Publish message: {}".format(payload)) else: logger.info("Publish message: {}".format(payload)) self._ready_msg.pop(mid, None) def on_message(self, client, userdata, msg): logger.info("#################### Message received: {}".format(msg)) def subscribe(self, topic, callback=None, qos=1): self.client.subscribe(topic=topic, qos=qos) logger.debug("Subscribed to topic: {}".format(topic)) if callback: self.client.message_callback_add(topic, callback)
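A standalone sketch of the wrapper; the broker address, credentials, and topics are placeholders:

```python
client = MQTTClient(mqtt_id="device-1", clean_session=True)
client.set_username_pw("device-1", "secret")     # placeholder credentials
client.connect("broker.example.com", port=1883)  # placeholder broker

def on_msg(cl, userdata, msg):
    # Per-topic callback registered through subscribe()
    print(msg.topic, msg.payload)

client.subscribe("j/dn/device-1", callback=on_msg)
client.publish("j/up/device-1", {"greeting": "hello"})  # dicts are JSON-encoded
```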
zdm-client-py
/zdm_client_py-1.0.1-py3-none-any.whl/zdm/device/mqtt.py
mqtt.py
import numpy as np import MDAnalysis from numba import jit, float32 import cStringIO import drsip_common as common import types import errno @jit(float32[:,::1](float32), nopython=True) def z_rot_func(rad): trans_rot = np.zeros((3,3), dtype=np.float32) trans_rot[0,:] = [np.cos(rad), -np.sin(rad), 0.0] trans_rot[1,:] = [np.sin(rad), np.cos(rad), 0.0] trans_rot[2,:] = [0.0, 0.0, 1.0] return trans_rot @jit(float32[:,::1](float32), nopython=True) def x_rot_func(rad): trans_rot = np.zeros((3,3), dtype=np.float32) trans_rot[0,:] = [1.0, 0.0, 0.0] trans_rot[1,:] = [0.0, np.cos(rad), -np.sin(rad)] trans_rot[2,:] = [0.0, np.sin(rad), np.cos(rad)] return trans_rot @jit(float32[:,::1](float32[::1]), nopython=True) def euler_to_rot_mat(rot_angles): return np.dot(np.dot(z_rot_func(rot_angles[2]), x_rot_func(rot_angles[1])), z_rot_func(rot_angles[0])) @jit(float32[:,::1](float32[::1],float32[:,::1]), nopython=True) def rot_with_euler_rot(rot_angles, coord): return np.dot(coord, euler_to_rot_mat(rot_angles).T) def _read_next_timestep(self, ts=None): """copy next frame into timestep""" if self.ts.frame >= self.n_frames-1: raise IOError(errno.EIO, 'trying to go over trajectory limit') if ts is None: ts = self.ts ts.frame += 1 self.zdock_inst._set_pose_num(ts.frame+1) ts._pos = self.zdock_inst.static_mobile_copy_uni.trajectory.ts._pos return ts class ZDOCK(object): """Parse ZDOCK output file and return coordinates of poses or writes to PDB files Takes the ZDOCK output file, static and mobile PDB files as input Able to return the poses and write out PDB files """ def __init__(self, zdock_output_file, zdock_static_file_path='', zdock_mobile_file_path=''): self.zdock_output_file = zdock_output_file self.zdock_static_file_path = zdock_static_file_path self.zdock_mobile_file_path = zdock_mobile_file_path self.grid_size = None self.grid_spacing = None self.switch = None self.recep_init_rot = None self.lig_init_rot = None self.recep_init_trans = None self.lig_init_trans = None self.num_poses = None self.zdock_output_data = None self.temp_static_selection_str = '' self.temp_mobile_selection_str = '' self.parse_zdock_output(self.zdock_output_file) self.static_uni = self.load_pdb_structures(self.process_ZDOCK_marked_file(self.zdock_static_file_path)) self.mobile_uni = self.load_pdb_structures(self.process_ZDOCK_marked_file(self.zdock_mobile_file_path)) if self.switch: self.reverse_init_lig_rot_mat = euler_to_rot_mat(-self.lig_init_rot[::-1]) self.init_trans = self.lig_init_trans else: self.reverse_init_recep_rot_mat = euler_to_rot_mat(-self.recep_init_rot[::-1]) self.init_trans = self.recep_init_trans self.static_mobile_uni = MDAnalysis.Merge(self.static_uni.atoms, self.mobile_uni.atoms) self.static_mobile_copy_uni = MDAnalysis.Merge(self.static_uni.atoms, self.mobile_uni.atoms) self.static_mobile_copy_uni.trajectory.zdock_inst = self self.static_mobile_copy_uni.trajectory.n_frames = self.num_poses self.static_mobile_copy_uni.trajectory._read_next_timestep = types.MethodType(_read_next_timestep, self.static_mobile_copy_uni.trajectory) self.initial_mobile_coord = self.mobile_uni.atoms.positions self.initial_static_coord = self.static_uni.atoms.positions self.mobile_origin_coord = self.get_mobile_origin_coord(self.initial_mobile_coord) self.static_num_atoms = self.static_uni.atoms.n_atoms def get_mobile_origin_coord(self, coord): if self.switch: return self.apply_initial_rot_n_trans(self.recep_init_rot, self.recep_init_trans, coord) else: return self.apply_initial_rot_n_trans(self.lig_init_rot, self.lig_init_trans, 
coord) def process_ZDOCK_marked_file(self, marked_filename): new_pdb_str = '' pdb_file_lines = [] if isinstance(marked_filename, cStringIO.OutputType): pdb_file_lines = marked_filename.readlines() else: with open(marked_filename, 'r') as marked_file: pdb_file_lines = marked_file.readlines() for line in pdb_file_lines: if line[0:6] in ['ATOM ', 'HETATM']: new_pdb_str += line[0:54] + '\n' return common.convert_str_to_StrIO(new_pdb_str) def load_pdb_structures(self, pdb_stringio): return MDAnalysis.Universe(MDAnalysis.lib.util.NamedStream(pdb_stringio, 'marked.pdb')) def get_trans_vect(self, trans_vect, grid_size, grid_spacing): half_grid_size = grid_size/2.0 gte_half_grid_size = trans_vect >= half_grid_size trans_vect[gte_half_grid_size] = grid_size - trans_vect[gte_half_grid_size] trans_vect[~gte_half_grid_size] *= -1 trans_vect = trans_vect * grid_spacing return trans_vect def zdock_trans_rot(self, grid_size, grid_spacing, init_trans, mobile_rot, mobile_trans, init_coord, switch=False): if switch: dock_coord = init_coord - self.get_trans_vect(mobile_trans, grid_size, grid_spacing) dock_coord = rot_with_euler_rot(-mobile_rot[::-1], dock_coord) + init_trans dock_coord = dock_coord.dot(self.reverse_init_lig_rot_mat.T) else: dock_coord = rot_with_euler_rot(mobile_rot, init_coord) dock_coord += self.get_trans_vect(mobile_trans, grid_size, grid_spacing) + init_trans dock_coord = dock_coord.dot(self.reverse_init_recep_rot_mat.T) return dock_coord def parse_zdock_output(self, zdock_output_file): if isinstance(zdock_output_file, cStringIO.OutputType): zdock_output_lines = zdock_output_file.readlines() else: with open(zdock_output_file, 'r') as zdock_output_file_obj: zdock_output_lines = zdock_output_file_obj.readlines() self.grid_size = np.float32(zdock_output_lines[0].split()[0]) self.grid_spacing = np.float32(zdock_output_lines[0].split()[1]) self.switch = np.bool(np.int32(zdock_output_lines[0].split()[2])) if self.zdock_static_file_path == '': if self.switch: self.zdock_static_file_path = zdock_output_lines[4].split()[0] else: self.zdock_static_file_path = zdock_output_lines[3].split()[0] if self.zdock_mobile_file_path == '': if self.switch: self.zdock_mobile_file_path = zdock_output_lines[3].split()[0] else: self.zdock_mobile_file_path = zdock_output_lines[4].split()[0] # Euler rotation angles in ZDOCK output file are in: Z2, X1, Z1. # We will reverse the order to: Z1, X1, Z2. 
self.recep_init_rot = np.array(zdock_output_lines[1].split()[::-1], dtype='float32') self.lig_init_rot = np.array(zdock_output_lines[2].split()[::-1], dtype='float32') self.recep_init_trans = np.array(zdock_output_lines[3].split()[1:], dtype='float32') self.lig_init_trans = np.array(zdock_output_lines[4].split()[1:], dtype='float32') self.num_poses = len(zdock_output_lines[5:]) self.zdock_output_data = np.zeros((self.num_poses,7), dtype='float32') for idx, trans_rot_data in enumerate(zdock_output_lines[5:]): self.zdock_output_data[idx,:] = np.array(trans_rot_data.split(), dtype='float32') self.zdock_output_data[idx,:3] = self.zdock_output_data[idx,2::-1] # Reverse the order of the Euler angles def get_num_poses(self): return self.num_poses def set_mobile_selection(self, selection_str): if (selection_str != self.temp_mobile_selection_str): self.temp_mobile_selection = self.mobile_uni.select_atoms(selection_str) self.temp_mobile_selection_str = selection_str def set_static_selection(self, selection_str): if (selection_str != self.temp_static_selection_str): self.temp_static_selection = self.static_uni.select_atoms(selection_str) self.temp_static_selection_str = selection_str def apply_initial_rot_n_trans(self, initial_rot, initial_trans, coord): return rot_with_euler_rot(initial_rot, coord) - initial_trans def _set_pose_num(self, pose_num): """WARNING: Applies only to the MDAnalysis_Wrapper """ current_mobile_coord = self.mobile_origin_coord self.static_mobile_copy_uni.trajectory.ts._pos[self.static_num_atoms:,:] = self.zdock_trans_rot(self.grid_size, self.grid_spacing, self.init_trans, self.zdock_output_data[pose_num-1,0:3].copy(), self.zdock_output_data[pose_num-1,3:6].copy(), current_mobile_coord, self.switch) def get_MDAnalysis_Wrapper(self): return self.static_mobile_copy_uni def get_pose(self, pose_num, mobile_only=False, static_sel_str=None, mobile_sel_str=None): if pose_num > self.num_poses: raise Exception('Pose number: %d is larger than total number of of poses: %d' % (pose_num, self.num_poses)) current_mobile_coord = self.mobile_origin_coord current_static_coord = self.initial_static_coord if static_sel_str != None: self.set_static_selection(static_sel_str) current_static_coord = self.temp_static_selection.positions if mobile_sel_str != None: self.set_mobile_selection(mobile_sel_str) current_mobile_coord = self.get_mobile_origin_coord(self.temp_mobile_selection.positions) if mobile_only: return self.zdock_trans_rot(self.grid_size, self.grid_spacing, self.init_trans, self.zdock_output_data[pose_num-1,0:3].copy(), self.zdock_output_data[pose_num-1,3:6].copy(), current_mobile_coord, self.switch) else: return np.append(current_static_coord, self.zdock_trans_rot(self.grid_size, self.grid_spacing, self.init_trans, self.zdock_output_data[pose_num-1,0:3].copy(), self.zdock_output_data[pose_num-1,3:6].copy(), current_mobile_coord, self.switch), axis=0) def write_pose(self, pose_num, output_file_path, mobile_only=False): temp_coord = self.get_pose(pose_num, mobile_only=mobile_only) common.makedir(output_file_path) if mobile_only: self.mobile_uni.atoms.positions = temp_coord self.mobile_uni.atoms.write(output_file_path) self.mobile_uni.atoms.positions = self.initial_mobile_coord else: self.static_mobile_uni.atoms.positions = temp_coord self.static_mobile_uni.atoms.write(output_file_path)
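A brief usage sketch of the parser (file names are placeholders; note the module targets Python 2, given its use of `cStringIO`):

```python
from zdock_parser import ZDOCK

# The receptor/ligand PDB paths can be read from the ZDOCK output itself.
zd = ZDOCK("zdock.out")
print(zd.get_num_poses())

coords = zd.get_pose(1)         # static + docked mobile coordinates
zd.write_pose(1, "pose_1.pdb")  # write a pose straight to a PDB file

# Iterate over all poses as an MDAnalysis universe/trajectory.
uni = zd.get_MDAnalysis_Wrapper()
```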
zdock-parser
/zdock-parser-0.13.tar.gz/zdock-parser-0.13/zdock_parser/__init__.py
__init__.py
__version__ = "1.0" __author__ = "GianptDev" __date__ = '14-2-2022' # Revisioned. # will not include default properties in PatchData and TextureData, output will become smaller. compact_mode = True # ---------------------------------------- class PatchData(): """ Patch information for a patch element. Is used inside TextureData, but it can work indipendently. """ # ---------------------------------------- # List of style types. STYLE_TYPE = [ "add", "copy", "copyalpha", "copynewalpha", "modulate", "overlay", "reversesubtract", "subtract", "translucent", ] # all possible rotates, i really whish the engine could use an actual rotation. ROTATE_TYPE = [ 0, 90, 180, 270, ] # blend mode definition in Textures is bullshit, just use one of these in blend_mode BLEND_NONE = 0 BLEND_COLOR = 1 BLEND_TINT = 2 BLEND_TRANSLATION = 3 # ---------------------------------------- # all the properties of a single patch. # to change the blend to use, set blend_mode to one of the BLEND_ values. def __init__(self, path = "", positionX = 0, positionY = 0, flipX = False, flipY = False, use_offsets = False, style = "copy", rotate = 0, alpha = 1.0, blend_mode = BLEND_NONE, blend = (255,255,255), tint = 255, translation = "" ) -> None: self.path = str(path) self.positionX = int(positionX) self.positionY = int(positionY) self.flipX = bool(flipX) self.flipY = bool(flipY) self.use_offsets = bool(use_offsets) self.style = str(style) self.rotate = int(rotate) self.alpha = float(alpha) self.blend_mode = int(blend_mode) self.blend = blend # r,g,b self.tint = int(tint) self.translation = str(translation) def __repr__(self) -> str: return "PatchData[ \"" + str(self.path) + "\" ]" # ---------------------------------------- # write the patch block and return it as a string, on problems it will print some messages but will continue whit execution. # newline -> specifcy a string to use for new lines. # tab -> specify a string to use as tabulation. def write(self, newline = "\n", tab = "\t") -> str: result = "" props = "" # ---------------------------------------- if (not self.style.lower() in self.STYLE_TYPE): print( "Inside the patch \" " + str(self.path) + " \":\n" + " - The style \" " + str(self.style) + " \" is unknow.\n" + " Possible values are: " + str(self.STYLE_TYPE) ) return "" if (not int(self.rotate) in self.ROTATE_TYPE): print( "Inside the patch \" " + str(self.path) + " \":\n" + " - The rotate \" " + str(self.rotate) + " \" is unknow.\n" + " Possible values are: " + str(self.ROTATE_TYPE) ) return "" if ((self.blend_mode < self.BLEND_NONE) or (self.blend_mode > self.BLEND_TRANSLATION)): print( "Inside the patch \" " + str(self.path) + " \":\n" + " - The blend mode \" " + str(self.blend_mode) + " \" is unknow, please see BLEND_ values." 
) # ---------------------------------------- # start of patch definition result += "patch \"" + str(self.path) + "\", " + str(int(self.positionX)) + ", " + str(int(self.positionY)) # flags if (self.use_offsets == True): props += tab + "UseOffsets" + newline if (self.flipX == True): props += tab + "flipX" + newline if (self.flipY == True): props += tab + "flipY" + newline # properties if ((compact_mode == False) or (compact_mode == True) and (self.style != "copy")): props += tab + "style " + str(self.style) + newline if ((compact_mode == False) or (compact_mode == True) and (self.rotate != 0)): props += tab + "rotate " + str(self.rotate) + newline if ((compact_mode == False) or (compact_mode == True) and (self.alpha != 1.0)): props += tab + "alpha " + str(self.alpha) + newline # color blend and tint work the same way. if ((self.blend_mode == self.BLEND_COLOR) or (self.blend_mode == self.BLEND_TINT)): props += tab + "blend " # check if is a iterable type if ((type(self.blend) is tuple) or (type(self.blend) is list)): if (len(self.blend) < 3): print( "Inside the patch \" " + str(self.path) + " \":\n" + " - The blend property require at least 3 (r,g,b) values." ) # if is a iterable type add all his value (even if only 3 are required...) for b in self.blend: props += str(b) + ", " props = props[:-2] # remove last ", " # if is a string it can be used as a hex color, nothing will check if is valid. elif (type(self.blend) is str): # add the quotes and the # if missing (slade automatically add it but gzdoom does not required it, so i'm not sure....) props += "\"" + ("#" if (self.blend[0] != "#") else "") + str(self.blend).upper() + "\"" # add the tint argoument if (self.blend_mode == self.BLEND_TINT): props += ", " + str(self.tint) props += newline # color translation is just a string tk add elif (self.blend_mode == self.BLEND_TRANSLATION): props += tab + "blend \"" + str(self.translation) + "\"" + newline # add property shit only if property do actually exist. if (props != ""): result += newline + "{" + newline + props + "}" + newline # ---------------------------------------- return result # ---------------------------------------- # to do #def read(self,data) -> bool: #return False # ---------------------------------------- # ---------------------------------------- class TextureData(): """ This class contain all the information about a texture definition. The result of write can be directly used as valid textures data. """ # ---------------------------------------- # list of know textures types. 
TEXTURE_TYPE = [ "sprite", "texture", "flat", "graphic", "walltexture", ] # ---------------------------------------- def __init__(self, name = "", type = "texture", sizeX = 64, sizeY = 128, optional = False, world_panning = False, no_decals = False, null_texture = False, offsetX = 0, offsetY = 0, scaleX = 1.0, scaleY = 1.0 ) -> None: self.name = str(name) self.type = str(type) self.sizeX = int(sizeX) self.sizeY = int(sizeY) self.offsetX = int(offsetX) self.offsetY = int(offsetY) self.scaleX = float(scaleX) self.scaleY = float(scaleY) self.optional = bool(optional) self.world_panning = bool(world_panning) self.no_decals = bool(no_decals) self.null_texture = bool(null_texture) self.patches = [] # This is the list of all patches inside this texture block def __repr__(self) -> str: return "<TextureData[ \"" + str(self.name) + "\" ]>" # ---------------------------------------- # add a patch in the list of patches, but only if is a valid PatchData def add_patch(self, patch) -> None: if (not type(patch) is PatchData): print( "Inside the texture \" " + str(self.name) + " \":\n" + " - Non-PatchData cannot be added, it may result in errors" ) return self.patches.append(patch) # return all patches that uses the specific path name. def get_patches(self, path) -> list: patches = self.patches result = [] for p in patches: if (p.path == path): result.append(p) return result # ---------------------------------------- # write the texture block and return it as a string, the result can be directly used for a textures file. # newline -> specify a string to use for new lines. # tab -> specify a string to use as tabulation. def write(self, newline = "\n", tab = "\t") -> str: result = "" # ---------------------------------------- if (not self.type.lower() in self.TEXTURE_TYPE): print( "Inside the texture \" " + str(self.name) + " \":\n" + " - The type \" " + str(type) + " \" is unknow.\n" + " Possible values are: " + str(self.TEXTURE_TYPE) ) return "" if (len(self.patches) <= 0): print( "Inside the texture \" " + str(self.name) + " \":\n" + " - No patch are used, the texture will be empty." ) # ---------------------------------------- # set the texture type result += self.type # add the optional flag first if (self.optional == True): result += " optional" # start of texture definition result += " \"" + str(self.name) + "\", " + str(int(self.sizeX)) + ", " + str(int(self.sizeY)) + newline + "{" + newline # flags if (self.world_panning == True): result += tab + "WorldPanning" + newline if (self.no_decals == True): result += tab + "NoDecals" + newline if (self.null_texture == True): result += tab + "NullTexture" + newline # properties if ((compact_mode == False) or (compact_mode == True) and ((self.offsetX != 0) or (self.offsetY != 0))): result += tab + "offset " + str(int(self.offsetX)) + ", " + str(int(self.offsetY)) + newline if ((compact_mode == False) or (compact_mode == True) and (self.scaleX != 1.0)): result += tab + "Xscale " + str(float(self.scaleX)) + newline if ((compact_mode == False) or (compact_mode == True) and (self.scaleY != 1.0)): result += tab + "Yscale " + str(float(self.scaleY)) + newline # add each patch to the result and make sure to tabulate. for p in self.patches: b = p.write(newline,tab) # fix extra newline if (b[-1] == newline): b = b[:-1] # do not execute work if the string is empty. 
if (b == ""): continue else: result += tab + b.replace(newline, newline + tab) + newline # end of patch definition result += "}" + newline return result # ---------------------------------------- # ---------------------------------------- # write a list of TextureData into a single string as a valid textures lump, does not write any file. # invalid data is ignored and will show a message. def write_textures(blocks, newline = "\n", tab = "\t") -> str: result = "" invalid_count = 0 # count invalid data clone_found = False # true if a texture is defined twince or more clone_count = {} # count every cloned definition # ---------------------------------------- # loop to every data in the input for b in blocks: # check if data is valid if (not type(b) is TextureData): invalid_count += 1 continue # check if a clone exist if (b.name in clone_count): clone_found = True clone_count[b.name] += 1 else: clone_count[b.name] = 1 # just write the block and merge whit the result result += b.write(newline,tab) + newline # ---------------------------------------- # display the amount of invalid data if (invalid_count > 0): print( "While writing the lump of size " + str(len(blocks)) + ":\n" + " - The input contain " + str(invalid_count) + " invalid data,\n" + " maybe non-TextureData or None are inside." ) # display the amount of clones if (clone_found == True): print( "While writing the lump of size " + str(len(blocks)) + ":\n" + " - Some textures are defined more than once:" ) # display each clone by the name and amount of clones for c in clone_count: if (clone_count[c] <= 1): continue print( " - - \"" + str(c) + "\" is defined " + str(clone_count[c]) + " times." ) # ---------------------------------------- return result # parse an actual textures definition into TextureData and PatchData instances, will not load a file. # the function work, but does not handle all errors yet, will receive changes in future versions. # load_textures, does nothing. # load_patches, if enabled will load patches data, if disabled patches are not loaded (resulting in empty textures). def read_textures(parse, endline = "\n", tab = "\t", load_textures = True, load_patches = True) -> list: result = [] # ---------------------------------------- # parse from string become an array. parse = parse.split(endline) # remove garbage for d in range(len(parse)): parse[d] = parse[d].replace(tab,"") parse[d] = parse[d].replace(",","") # clear useless stuff for d in range(len(parse)): if (d >= len(parse)): break if (parse[d] == ""): del parse[d] elif (parse[d] == "}"): parse[d] = None elif (parse[d] == "{"): del parse[d] # start to instance stuff current_patch = None current_texture = None for d in range(len(parse)): info = parse[d] if (info == None): if (current_patch != None): current_patch = None continue if (current_texture != None): current_texture = None continue # error to add print("what the? } used twince?") return [] # this is all the info when need to read the textures lump! info = info.split(" ") # stuff to load a texture if (info[0] in TextureData.TEXTURE_TYPE): if (current_texture != None): print("what the? texture defined twince?") return [] if (len(info) < 4): print("what the? not enough texture informations?") return [] is_optional = False if (info[1].lower() == "optional"): is_optional = True del info[1] # remove quotes if they exist. 
if (info[1][0] == "\""): info[1] = info[1][1:] if (info[1][-1] == "\""): info[1] = info[1][:-1] current_texture = TextureData() current_texture.type = info[0] current_texture.name = info[1] current_texture.sizeX = float(info[2]) current_texture.sizeY = float(info[3]) current_texture.optional = is_optional result.append(current_texture) # stuff to load a patch if ((load_patches == True) and (info[0].lower() == "patch")): if (current_texture == None): print("what the? patch connected to nothing?") return [] if (current_patch != None): print("what the? patch defined twince?") return [] if (len(info) < 4): print("what the? not enough patch informations?") return [] # remove quotes if they exist. if (info[1][0] == "\""): info[1] = info[1][1:] if (info[1][-1] == "\""): info[1] = info[1][:-1] current_patch = PatchData() current_patch.type = info[0] current_patch.path = info[1] current_patch.positionX = float(info[2]) current_patch.positionY = float(info[3]) current_texture.add_patch(current_patch) if (current_patch != None): p = info[0].lower() # properties if (len(info) >= 2): if (p == "style"): current_patch.style = info[1] elif (p == "rotate"): current_patch.rotate = int(info[1]) elif (p == "alpha"): current_patch.alpha = float(info[1]) elif (p == "blend"): # todo: blend mode is detected like shit if (len(info) >= 4): current_patch.blend = (int(info[1]),int(info[2]),int(info[3])) if (len(info) >= 5): current_patch.tint = int(info[4]) current_patch.blend_mode = current_patch.BLEND_TINT else: current_patch.blend_mode = current_patch.BLEND_COLOR elif (len(info) >= 2): current_patch.blend = info[1] current_patch.translation = info[1] # yeah... if (len(info) >= 3): current_patch.tint = int(info[2]) current_patch.blend_mode = current_patch.BLEND_TINT else: current_patch.blend_mode = current_patch.BLEND_COLOR else: print("what the? wrong blend data?") # flags else: if (p == "flipx"): current_patch.flipX = True elif (p == "flipy"): current_patch.flipY = True elif (p == "useoffsets"): current_patch.use_offsets = True if (current_texture != None): p = info[0].lower() # properties if (len(info) >= 2): if (p == "offset"): current_texture.offsetX = int(info[1]) current_texture.offsetY = int(info[2]) elif (p == "xscale"): current_texture.scaleX = float(info[1]) elif (p == "yscale"): current_texture.scaleY = float(info[1]) # flags else: if (p == "worldpanning"): current_texture.world_panning = True elif (p == "nodecals"): current_texture.no_decals = True elif (p == "nulltexture"): current_texture.null_texture = True # ---------------------------------------- # return a beautiful amount of classes! return result # ---------------------------------------- # Will convert a string into a valid sprite name, will add the frame character and angle by using a simple number. # index is the range between A and Z, a greater number will wrap around and override the name. # angle is the rotate of the sprite, 0 is no rotate and 1 to 8 are all rotate keys. def to_sprite_name(name, index = 0, angle = 0) -> None: result = "" # get only 4 characters for the name, it will be used to wrap around. wrap = [ord(name[0]) - 65,ord(name[1]) - 65,ord(name[2]) - 65,ord(name[3]) - 65] base = 25 # from A to Z # convert to base 26 while(True): # if the index is already under the limit, then no more shit is required. if (index >= base): index -= base # increase the next character every time the number is greater than the limit. 
for i in range(len(wrap)): i = len(wrap) - (i + 1) if (wrap[i] >= base): wrap[i] = 0 else: wrap[i] += 1 break else: break # build the new name. name = "" for i in wrap: name += chr(65 + i) frame = chr(65 + index) # add the frame string to the name. result += name + frame # add the rotate index. if (angle == 0): result += "0" elif (angle == 1): result += "1" elif (angle == 2): result += frame + "8" elif (angle == 3): result += frame + "7" elif (angle == 4): result += frame + "6" elif (angle == 5): result += "5" elif (angle == 6): result += frame + "4" elif (angle == 7): result += frame + "3" elif (angle == 8): result += frame + "2" return result # ---------------------------------------- # Exampes if __name__ == '__main__': # load test #ims = read_textures(open("test.txt","r").read()) #print(write_textures(ims)) #input() print("Zdoom Textures Parser examples:\n") empty = TextureData(type = "sprite",sizeX = 32, sizeY = 16) empty.name = to_sprite_name("PIST",0) wall = TextureData("WALLBRICK",type = "walltexture", optional = True, scaleY = 1.2) p = PatchData("textures/brick.png") wall.add_patch(p) more_patches = TextureData("WALLSTONE","walltexture",sizeX = 64, sizeY = 64) for i in [ PatchData("textures/stone1.png", flipX = True, rotate = 90), PatchData("textures/stone2.png",positionX = 32, blend_mode = PatchData.BLEND_TINT, blend = "ff0000"), ]: more_patches.add_patch(i) print("Empty texture example:") print(empty.write()) print("Texture whit a single patch:") print(wall.write()) print("Texture whit more patches:") print(more_patches.write()) # spam test #for i in range(26 ** 4): # c = to_sprite_name("AAAA",i) # print(c) # write test #print(write_textures([empty])) #open("test.txt","w").write(write_textures([more_patches,wall]))
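As a complement to the examples in `__main__`, a small round-trip sketch with `write_textures` and `read_textures` from this module; the texture and patch names are invented:

```python
# Round-trip sketch; names are placeholders.
wall = TextureData("MYWALL", type="walltexture", sizeX=64, sizeY=64)
wall.add_patch(PatchData("textures/mywall.png"))

lump = write_textures([wall])  # serialize to a TEXTURES lump string
parsed = read_textures(lump)   # parse it back into TextureData objects

print(parsed[0].name)             # MYWALL
print(parsed[0].patches[0].path)  # textures/mywall.png
```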
zdoom-textures-tool
/zdoom_textures_tool-1.0-py3-none-any.whl/zdtwriter/__init__.py
__init__.py
from elasticsearch import Elasticsearch
from typing import Dict, Any, List


class EsClient:

    def __init__(self, ip: str = "127.0.0.1", port: int = 9200, hosts: Dict[str, Any] = None, timeout=3600) -> None:
        """Initialize the connection settings."""
        self.ip = ip
        self.port = port
        # Use the provided hosts if given, otherwise build one from ip/port
        self.hosts = hosts
        if hosts is None:
            self.hosts = [{'host': ip, 'port': port}]
        self.conn = Elasticsearch(self.hosts, timeout=timeout)

    def find(self, index: str = None, body: Dict = None, doc_type: str = None, id: int = None, filter_path: List = None):
        """Query documents."""
        # If an ID is given, use the get API
        if id is not None:
            return self.conn.get(index=index, filter_path=filter_path, doc_type=doc_type, id=id)
        # Query all documents: index is set, body and id are None
        if index is not None and body is None and id is None:
            return self.conn.search(index=index, filter_path=filter_path, body={"query": {"match_all": {}}})
        # No ID: use the search API
        result = self.conn.search(
            index=index, body=body, filter_path=filter_path, doc_type=doc_type)
        return result

    def find_count(self, index: str = None, doc_type: str = None) -> Dict:
        """Count the total number of documents."""
        return self.conn.count(index=index, doc_type=doc_type)

    def find_all_index(self) -> Dict:
        """List all indices."""
        return self.conn.indices.get_alias("*")

    def add(self, index: str = None, doc_type=None, id: int = None, body: Dict = None) -> Dict:
        """Add a document."""
        if id is None:
            self.conn.index(index=index, doc_type=doc_type, body=body)
        else:
            self.conn.index(index=index, id=id, doc_type=doc_type, body=body)

    def add_index(self, index: str) -> None:
        """Create a new index."""
        return self.conn.indices.create(index=index)

    def update(self, index: str = None, doc_type=None, id: int = None, body: Dict = None, query: Dict = None) -> Dict:
        """Update documents."""
        # Update by query
        if query is not None:
            return self.conn.update_by_query(index=index, doc_type=doc_type, body=query)
        # Update a single document
        return self.conn.update(index=index, doc_type=doc_type, id=id, body=body)

    def delete(self, index: str = None, doc_type: str = None, id: int = None, query: Dict = None) -> Dict:
        """Delete documents."""
        # Delete by ID
        if id is not None:
            return self.conn.delete(index=index, doc_type=doc_type, id=id)
        # Delete by query
        if query is not None:
            return self.conn.delete_by_query(index=index, body=query)
        # Otherwise delete the whole index
        return self.conn.indices.delete(index)

    def health(self) -> bool:
        """Get the health status of the Elasticsearch cluster."""
        return self.conn.ping()

    def info(self) -> Dict:
        """Get basic cluster information."""
        return self.conn.info()

    def detail(self) -> Dict:
        """Get detailed cluster information."""
        return self.conn.cluster.health()

    def client_info(self) -> Dict:
        """Inspect the current client information."""
        return self.conn.cluster.client.info()

    def indexs(self) -> str:
        """List all indices via the cat API."""
        return self.conn.cat.indices()

    def stats(self) -> Dict:
        """Get extended cluster statistics."""
        return self.conn.cluster.stats()

    def tasks_get(self):
        return self.conn.tasks.get()

    def tasks_list(self):
        return self.conn.tasks.list()
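A short usage sketch against a local node, using the class above; the index name and document are placeholders:

```python
es = EsClient(ip="127.0.0.1", port=9200)  # assumes a local Elasticsearch node
print(es.health())  # True if the cluster answers the ping

es.add(index="demo", id=1, body={"title": "hello"})
doc = es.find(index="demo", id=1)  # fetch by id
hits = es.find(index="demo", body={"query": {"match": {"title": "hello"}}})
es.delete(index="demo", id=1)      # delete by id
```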
zdpapi-elastic-search
/zdpapi_elastic_search-1.0.0-py3-none-any.whl/zdpapi_elastic_search/object.py
object.py
# zdpapi_modbus python版modbus协议快速开发工具库 安装方式 ```shell pip install zdpapi_modbus ``` ## 一、快速入门 ### 1.1 实例1:读写数据 #### 1.1.1 slave读写master数据 ```python from zdpapi_modbus import cst, modbus_tcp import time import random # 创建一个TCP服务 server = modbus_tcp.TcpServer() # 启动server server.start() # 添加一个slave slave_id = 1 slave_1 = server.add_slave(slave_id) # 添加一个block block_name = "0" slave_1.add_block(block_name, cst.HOLDING_REGISTERS, 0, 100) # 不断的写入数据 while True: # 写入数据 slave = server.get_slave(slave_id) address = 0 values = [random.randint(0, 100) for _ in range(6)] slave.set_values(block_name, address, values) values = slave.get_values(block_name, address, len(values)) print("slave上的values是:", values) # 读取数据 slave = server.get_slave(slave_id) address = 10 values = slave.get_values(block_name, address, len(values)) print("slave接收到server传过来的数据:", values) time.sleep(1) ``` #### 1.1.2 master从slave读数据 ```python from zdpapi_modbus import cst, modbus_tcp import time import random master = modbus_tcp.TcpMaster() master.set_timeout(5.0) slave_id = 1 while True: # 读取数据 values = master.execute(slave_id, cst.READ_HOLDING_REGISTERS, 0, 6) print("values:", values) # 写入数据 address = 10 values = [random.randint(10, 20) for _ in range(6)] master.execute(slave_id, cst.WRITE_MULTIPLE_REGISTERS, address, output_value=values) # 1s执行一次 time.sleep(1) ``` ### 1.2 使用钩子 #### 1.2.1 slave ```python import sys from zdpapi_modbus import cst, modbus_tcp, utils import logging def main(): logger = utils.create_logger(name="console", record_format="%(message)s") try: # 创建一个TCP服务 server = modbus_tcp.TcpServer() logger.info("running...") logger.info("enter 'quit' for closing the server") # 启动server server.start() # 添加一个slave slave_1 = server.add_slave(1) # 添加一个block slave_1.add_block('0', cst.HOLDING_REGISTERS, 0, 100) while True: cmd = sys.stdin.readline() args = cmd.split(' ') # 退出 if cmd.find('quit') == 0: sys.stdout.write('bye-bye\r\n') break # 添加slave elif args[0] == 'add_slave': slave_id = int(args[1]) server.add_slave(slave_id) sys.stdout.write('done: slave %d added\r\n' % slave_id) # 添加block elif args[0] == 'add_block': slave_id = int(args[1]) name = args[2] block_type = int(args[3]) starting_address = int(args[4]) length = int(args[5]) slave = server.get_slave(slave_id) slave.add_block(name, block_type, starting_address, length) sys.stdout.write('done: block %s added\r\n' % name) # 写入数据 elif args[0] == 'set_values': slave_id = int(args[1]) name = args[2] address = int(args[3]) values = [] for val in args[4:]: values.append(int(val)) slave = server.get_slave(slave_id) slave.set_values(name, address, values) values = slave.get_values(name, address, len(values)) sys.stdout.write('done: values written: %s\r\n' % str(values)) # 读取数据 elif args[0] == 'get_values': slave_id = int(args[1]) name = args[2] address = int(args[3]) length = int(args[4]) slave = server.get_slave(slave_id) values = slave.get_values(name, address, length) sys.stdout.write('done: values read: %s\r\n' % str(values)) else: sys.stdout.write("unknown command %s\r\n" % args[0]) finally: server.stop() if __name__ == "__main__": main() ``` #### 1.2.2 master ```python from __future__ import print_function from zdpapi_modbus import cst, modbus_tcp, hooks, utils, modbus import logging def main(): """main""" logger = utils.create_logger("console", level=logging.DEBUG) # 读取数据之后的回调 def on_after_recv(data): master, bytes_data = data logger.info(bytes_data) # 注册回调 hooks.install_hook('modbus.Master.after_recv', on_after_recv) try: # 连接之前的回调 def on_before_connect(args): 
### 1.2 Using hooks

#### 1.2.1 slave

```python
import sys
from zdpapi_modbus import cst, modbus_tcp, utils
import logging


def main():
    logger = utils.create_logger(name="console", record_format="%(message)s")

    try:
        # create a TCP server
        server = modbus_tcp.TcpServer()
        logger.info("running...")
        logger.info("enter 'quit' for closing the server")

        # start the server
        server.start()

        # add a slave
        slave_1 = server.add_slave(1)

        # add a block
        slave_1.add_block('0', cst.HOLDING_REGISTERS, 0, 100)

        while True:
            cmd = sys.stdin.readline()
            args = cmd.split(' ')

            # quit
            if cmd.find('quit') == 0:
                sys.stdout.write('bye-bye\r\n')
                break

            # add a slave
            elif args[0] == 'add_slave':
                slave_id = int(args[1])
                server.add_slave(slave_id)
                sys.stdout.write('done: slave %d added\r\n' % slave_id)

            # add a block
            elif args[0] == 'add_block':
                slave_id = int(args[1])
                name = args[2]
                block_type = int(args[3])
                starting_address = int(args[4])
                length = int(args[5])
                slave = server.get_slave(slave_id)
                slave.add_block(name, block_type, starting_address, length)
                sys.stdout.write('done: block %s added\r\n' % name)

            # write values
            elif args[0] == 'set_values':
                slave_id = int(args[1])
                name = args[2]
                address = int(args[3])
                values = []
                for val in args[4:]:
                    values.append(int(val))
                slave = server.get_slave(slave_id)
                slave.set_values(name, address, values)
                values = slave.get_values(name, address, len(values))
                sys.stdout.write('done: values written: %s\r\n' % str(values))

            # read values
            elif args[0] == 'get_values':
                slave_id = int(args[1])
                name = args[2]
                address = int(args[3])
                length = int(args[4])
                slave = server.get_slave(slave_id)
                values = slave.get_values(name, address, length)
                sys.stdout.write('done: values read: %s\r\n' % str(values))

            else:
                sys.stdout.write("unknown command %s\r\n" % args[0])
    finally:
        server.stop()


if __name__ == "__main__":
    main()
```

#### 1.2.2 master

```python
from __future__ import print_function
from zdpapi_modbus import cst, modbus_tcp, hooks, utils, modbus
import logging


def main():
    """main"""
    logger = utils.create_logger("console", level=logging.DEBUG)

    # callback fired after data is received
    def on_after_recv(data):
        master, bytes_data = data
        logger.info(bytes_data)

    # register the callback
    hooks.install_hook('modbus.Master.after_recv', on_after_recv)

    try:
        # callback fired before connecting
        def on_before_connect(args):
            master = args[0]
            logger.debug("on_before_connect {0} {1}".format(master._host, master._port))

        # register the callback
        hooks.install_hook("modbus_tcp.TcpMaster.before_connect", on_before_connect)

        # callback fired after data is received
        def on_after_recv(args):
            response = args[1]
            logger.debug("on_after_recv {0} bytes received".format(len(response)))

        hooks.install_hook("modbus_tcp.TcpMaster.after_recv", on_after_recv)

        # connect to the slave
        master = modbus_tcp.TcpMaster()
        master.set_timeout(5.0)
        logger.info("connected")

        # read data
        logger.info(master.execute(1, cst.READ_HOLDING_REGISTERS, 0, 3))
        # logger.info(master.execute(1, cst.READ_HOLDING_REGISTERS, 0, 2, data_format='f'))

        # read and write floats
        # master.execute(1, cst.WRITE_MULTIPLE_REGISTERS, starting_address=0, output_value=[3.14], data_format='>f')
        # logger.info(master.execute(1, cst.READ_HOLDING_REGISTERS, 0, 2, data_format='>f'))

        # send some other queries
        # logger.info(master.execute(1, cst.READ_COILS, 0, 10))
        # logger.info(master.execute(1, cst.READ_DISCRETE_INPUTS, 0, 8))
        # logger.info(master.execute(1, cst.READ_INPUT_REGISTERS, 100, 3))
        # logger.info(master.execute(1, cst.READ_HOLDING_REGISTERS, 100, 12))
        # logger.info(master.execute(1, cst.WRITE_SINGLE_COIL, 7, output_value=1))
        # logger.info(master.execute(1, cst.WRITE_SINGLE_REGISTER, 100, output_value=54))
        # logger.info(master.execute(1, cst.WRITE_MULTIPLE_COILS, 0, output_value=[1, 1, 0, 1, 1, 0, 1, 1]))
        # logger.info(master.execute(1, cst.WRITE_MULTIPLE_REGISTERS, 100, output_value=range(12)))

    except modbus.ModbusError as exc:
        logger.error("%s- Code=%d", exc, exc.get_exception_code())


if __name__ == "__main__":
    main()
```
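Judging from how `call_hooks` is used in this package's source, a hook receives a tuple of arguments and, if it returns a non-`None` value, that return value replaces the hooked data (for example, the outgoing request in `modbus.Master.before_send`). A minimal sketch, assuming that convention:

```python
from zdpapi_modbus import hooks

def log_outgoing(args):
    master, request = args
    print("sending %d bytes" % len(request))
    # returning None leaves the original request untouched;
    # returning bytes here would replace the request on the wire

hooks.install_hook("modbus.Master.before_send", log_outgoing)
```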
"测试(高频数据)4", "测试(高频数据)5", "测试(高频数据)6", "测试(高频数据)7", "测试(高频数据)8", "测试(高频数据)9", "测试(高频数据)10", ] # 分两个slave传,一个传50台风机 # 生成数据 data = [] address = 0 for i in range(1, 51): for j in range(1, len(variables)+1): control = {} control["device_id"] = i control["cname"] = variables[j-1] control["name"] = f"v{i}_{j}" address += 2 control["address"] = address control["length"] = 2 control["func"] = 3 control["type"] = "F" data.append(control) while True: # 读取数据 data = [] data_length = 2500 # 要取出2500个数 index = 0 while True: # 每次取出100个数 length = 100 values = master.execute( slave_id, cst.READ_HOLDING_REGISTERS, index, length) data.extend(values) # 最后一次取 data_length -= 100 if data_length <= 100: values = master.execute( slave_id, cst.READ_HOLDING_REGISTERS, index, data_length) data.extend(values) break index += 100 # print("data:", data, len(data)) # 解析为真实的数组 result = trans_int_to_float(data, keep_num=2) print("最终结果:", result, len(result)) # 1s执行一次 time.sleep(1) ``` ### 1.4 使用Slave和Master类 #### 1.4.1 slave ```python """ 按照1秒钟传递100台机组的数据,1台机组25个变量 """ from zdpapi_modbus import Slave, rand_float slave = Slave() slave.add_slave(1) slave.add_block(1, "0", 3) # 单台风机的数据 variables = [ "机舱X方向振动", "机舱Y方向振动", "限功率运行状态", "电网有功功率", "有功功率", "风轮转速", "环境温度", "瞬时风向", "瞬时风速", "工作模式", "测试写入主控变量1", "1#风向仪瞬时风向", "2#风向仪瞬时风向", "机舱外风向", "偏航方位角", "测试(高频数据)1", "测试(高频数据)2", "测试(高频数据)3", "测试(高频数据)4", "测试(高频数据)5", "测试(高频数据)6", "测试(高频数据)7", "测试(高频数据)8", "测试(高频数据)9", "测试(高频数据)10", ] # 分两个slave传,一个传50台风机 # 生成数据 data = [] address = 0 for i in range(1, 51): for j in range(1, len(variables)+1): control = {} control["device_id"] = i control["cname"] = variables[j-1] control["name"] = f"v{i}_{j}" address += 2 control["address"] = address control["length"] = 2 control["func"] = 3 control["type"] = "F" data.append(control) # 生成随机数 data_float = [rand_float(0, 100) for _ in data] slave.run(1, "0", data_float, random_data=True) ``` #### 1.4.2 master ```python """ 从服务端获取100台机组的数据,每台机组有25个变量 """ from zdpapi_modbus import Master master = Master() slave_id = 1 # 单台风机的数据 variables = [ "机舱X方向振动", "机舱Y方向振动", "限功率运行状态", "电网有功功率", "有功功率", "风轮转速", "环境温度", "瞬时风向", "瞬时风速", "工作模式", "测试写入主控变量1", "1#风向仪瞬时风向", "2#风向仪瞬时风向", "机舱外风向", "偏航方位角", "测试(高频数据)1", "测试(高频数据)2", "测试(高频数据)3", "测试(高频数据)4", "测试(高频数据)5", "测试(高频数据)6", "测试(高频数据)7", "测试(高频数据)8", "测试(高频数据)9", "测试(高频数据)10", ] # 分两个slave传,一个传50台风机 # 生成数据 data = [] address = 0 for i in range(1, 51): for j in range(1, len(variables)+1): control = {} control["device_id"] = i control["cname"] = variables[j-1] control["name"] = f"v{i}_{j}" address += 2 control["address"] = address control["length"] = 2 control["func"] = 3 control["type"] = "F" data.append(control) master.run_read_many_float(slave_id, 2500, console=True) ``` ## 二、数据的打包和解包 ### 2.1 基本使用 ```python from zdpapi_modbus import * data = [11, 22, 33] # 测试打包 print("============================================测试打包=====================================================") print(pack_byte(data)) print(pack_int(data)) print(pack_long(data)) print(pack_float(data)) print(pack_double(data)) print("============================================测试完毕=====================================================\n\n") # 测试解包 print("============================================测试解包=====================================================") print(unpack_byte(len(data), pack_byte(data))) print(unpack_int(len(data), pack_int(data))) print(unpack_long(len(data), pack_long(data))) print(unpack_float(len(data), pack_float(data))) print(unpack_double(len(data), pack_double(data))) 
print("============================================测试完毕=====================================================\n\n") ``` ### 2.2 数据类型转换 ```python from zdpapi_modbus import * data = [11.11, 22.22, 33.33] # 将浮点数转换为整数,再将整数还原为浮点数 print(trans_float_to_int(data)) print(trans_int_to_float(trans_float_to_int(data))) ``` ## 三、生成随机数 ### 3.1 生成随机浮点数 ```python from zdpapi_modbus import * data = [rand_float(0, 100) for _ in range(50)] print(data) # 将浮点数转换为整数,再将整数还原为浮点数 print(trans_float_to_int(data)) print(trans_int_to_float(trans_float_to_int(data), keep_num=6)) ```
zdpapi-modbus
/zdpapi_modbus-1.7.1.tar.gz/zdpapi_modbus-1.7.1/README.md
README.md
from typing import Tuple

from .libs.modbus_tk import modbus_tcp
from .libs.modbus_tk import defines as cst
from .zstruct import trans_int_to_float


class Master:
    def __init__(self, host: str = "127.0.0.1", port: int = 502, timeout_in_sec: float = 5.0) -> None:
        self.master = modbus_tcp.TcpMaster(
            host=host, port=port, timeout_in_sec=timeout_in_sec)

    def read_float(self, slave_id: int, data_length: int, keep_num: int = 2):
        """Read float data in bulk.

        data_length is the number of 16-bit holding registers to read;
        every two registers decode to one float.
        """
        data = []
        index = 0
        # read at most 100 registers per request
        while data_length > 0:
            count = min(100, data_length)
            values = self.master.execute(
                slave_id, cst.READ_HOLDING_REGISTERS, index, count)
            data.extend(values)
            index += count
            data_length -= count
        # decode the registers into floats
        result = trans_int_to_float(data, keep_num=keep_num)
        return result

    def read(self, slave_id, func_code, address, length):
        """Read data from Modbus, splitting requests larger than 124 registers."""
        if length > 124:
            data = []
            while length > 124:
                data.extend(self.master.execute(slave_id, func_code, address, 124))
                address += 124
                length -= 124
            # read whatever is left
            data.extend(self.master.execute(slave_id, func_code, address, length))
            return data
        # small enough for a single request
        return self.master.execute(slave_id, func_code, address, length)

    def to_float(self, data: Tuple[int], keep_num: int = 2):
        """Convert a sequence of register ints into a list of floats."""
        return trans_int_to_float(data, keep_num=keep_num)
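# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# module). Assumes a slave is serving holding registers on 127.0.0.1:502,
# e.g. the bulk-write example from the README.
if __name__ == "__main__":
    m = Master(host="127.0.0.1", port=502, timeout_in_sec=5.0)
    # 2500 registers decode to 1250 floats (two registers per float)
    floats = m.read_float(slave_id=1, data_length=2500)
    print(len(floats), floats[:5])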
zdpapi-modbus
/zdpapi_modbus-1.7.1.tar.gz/zdpapi_modbus-1.7.1/zdpapi_modbus/master.py
master.py
import asyncio
import struct
from typing import Tuple

from .libs.modbus_tk import defines
from .libs.modbus_tk import LOGGER
from .libs.modbus_tk.hooks import call_hooks
from .libs.modbus_tk.utils import to_data, get_log_buffer
from .libs.modbus_tk.exceptions import (
    ModbusError,
    ModbusFunctionNotSupportedError,
    ModbusInvalidResponseError
)
from .libs.modbus_tk.modbus import Master
from .libs.modbus_tk import modbus_tcp
from .zstruct import trans_int_to_float


class MasterAsync(Master):
    """Asynchronous Master."""

    def __init__(self, host="127.0.0.1", port=502, timeout_in_sec=5.0):
        """
        host: host address of the slave
        port: port number
        timeout_in_sec: timeout in seconds
        """
        super(MasterAsync, self).__init__(timeout_in_sec)
        self._host = host
        self._port = port
        self._reader = None
        self._writer = None

    async def open(self):
        """Open communication with the slave."""
        if not self._is_opened:
            await self._do_open()
            self._is_opened = True

    def close(self):
        """Close communication with the slave."""
        if self._is_opened:
            ret = self._do_close()
            if ret:
                self._is_opened = False

    async def _do_open(self):
        """Connect to the slave."""
        if self._writer:
            self._writer.close()
        call_hooks("modbus_tcp.TcpMaster.before_connect", (self, ))
        try:
            self._reader, self._writer = await asyncio.open_connection(self._host, self._port)
        except Exception:
            return False
        call_hooks("modbus_tcp.TcpMaster.after_connect", (self, ))
        return True

    def _do_close(self):
        """Close the connection to the slave."""
        if self._writer:
            call_hooks("modbus_tcp.TcpMaster.before_close", (self, ))
            self._writer.close()
            call_hooks("modbus_tcp.TcpMaster.after_close", (self, ))
            self._reader = None
            self._writer = None
            return True

    async def _send(self, request):
        """Send a request to the slave."""
        retval = call_hooks("modbus_tcp.TcpMaster.before_send", (self, request))
        if retval is not None:
            request = retval
        try:
            self._writer.write(request)
        except Exception as e:
            await asyncio.sleep(1)
            if self._verbose:
                LOGGER.debug(f"send data timeout {e}")

    async def _recv(self, expected_length=-1):
        """Receive a response from the slave."""
        try:
            response = to_data('')
            length = 255
            while len(response) < length:
                rcv_byte = await self._reader.read(1)
                if rcv_byte:
                    response += rcv_byte
                    if len(response) == 6:
                        # the MBAP header's third field is the remaining byte count
                        to_be_recv_length = struct.unpack(">HHH", response)[2]
                        length = to_be_recv_length + 6
                else:
                    break
            retval = call_hooks("modbus_tcp.TcpMaster.after_recv", (self, response))
            if retval is not None:
                return retval
        except Exception as e:
            self._is_opened = False
            await asyncio.sleep(1)
            if self._verbose:
                LOGGER.debug(f"recv data timeout {e}")
        return response

    def _make_query(self):
        """Build a TCP query."""
        return modbus_tcp.TcpQuery()

    async def execute(
            self, slave, function_code, starting_address, quantity_of_x=0,
            output_value=0, data_format="", expected_length=-1,
            write_starting_address_FC23=0):
        """Execute a request on the Modbus link and return the decoded data."""
        pdu = ""
        is_read_function = False
        nb_of_digits = 0

        # open the connection if it is not already done
        await self.open()

        # Build the modbus pdu and the format of the expected data.
        # It depends on the function code; see the Modbus specification for details.
        if function_code == defines.READ_COILS or function_code == defines.READ_DISCRETE_INPUTS:
            is_read_function = True
            pdu = struct.pack(">BHH", function_code, starting_address, quantity_of_x)
            byte_count = quantity_of_x // 8
            if (quantity_of_x % 8) > 0:
                byte_count += 1
            nb_of_digits = quantity_of_x
            if not data_format:
                data_format = ">" + (byte_count * "B")
            if expected_length < 0:
                # No length was specified; the calculated length can be used:
                # slave + func + byteCountLen + bytecode + crc1 + crc2
                expected_length = byte_count + 5

        elif function_code == defines.READ_INPUT_REGISTERS or function_code == defines.READ_HOLDING_REGISTERS:
            is_read_function = True
            pdu = struct.pack(">BHH", function_code, starting_address, quantity_of_x)
            if not data_format:
                data_format = ">" + (quantity_of_x * "H")
            if expected_length < 0:
                # slave + func + byteCountLen + bytecode x 2 + crc1 + crc2
                expected_length = 2 * quantity_of_x + 5

        elif (function_code == defines.WRITE_SINGLE_COIL) or (function_code == defines.WRITE_SINGLE_REGISTER):
            if function_code == defines.WRITE_SINGLE_COIL:
                if output_value != 0:
                    output_value = 0xff00
                fmt = ">BHH"
            else:
                fmt = ">BH" + ("H" if output_value >= 0 else "h")
            pdu = struct.pack(fmt, function_code, starting_address, output_value)
            if not data_format:
                data_format = ">HH"
            if expected_length < 0:
                # slave + func + address1 + address2 + value1 + value2 + crc1 + crc2
                expected_length = 8

        elif function_code == defines.WRITE_MULTIPLE_COILS:
            byte_count = len(output_value) // 8
            if (len(output_value) % 8) > 0:
                byte_count += 1
            pdu = struct.pack(">BHHB", function_code, starting_address, len(output_value), byte_count)
            i, byte_value = 0, 0
            for j in output_value:
                if j > 0:
                    byte_value += pow(2, i)
                if i == 7:
                    pdu += struct.pack(">B", byte_value)
                    i, byte_value = 0, 0
                else:
                    i += 1
            if i > 0:
                pdu += struct.pack(">B", byte_value)
            if not data_format:
                data_format = ">HH"
            if expected_length < 0:
                # slave + func + address1 + address2 + outputQuant1 + outputQuant2 + crc1 + crc2
                expected_length = 8

        elif function_code == defines.WRITE_MULTIPLE_REGISTERS:
            if output_value and data_format:
                byte_count = struct.calcsize(data_format)
            else:
                byte_count = 2 * len(output_value)
            pdu = struct.pack(">BHHB", function_code, starting_address, byte_count // 2, byte_count)
            if output_value and data_format:
                pdu += struct.pack(data_format, *output_value)
            else:
                for j in output_value:
                    fmt = "H" if j >= 0 else "h"
                    pdu += struct.pack(">" + fmt, j)
            # data_format is now used to process the response, which is always 2 registers:
            # 1) data address of the first register, 2) number of registers written
            data_format = ">HH"
            if expected_length < 0:
                # slave + func + address1 + address2 + outputQuant1 + outputQuant2 + crc1 + crc2
                expected_length = 8

        elif function_code == defines.READ_EXCEPTION_STATUS:
            pdu = struct.pack(">B", function_code)
            data_format = ">B"
            if expected_length < 0:
                expected_length = 5

        elif function_code == defines.DIAGNOSTIC:
            # the sub-function code is passed in starting_address
            pdu = struct.pack(">BH", function_code, starting_address)
            if len(output_value) > 0:
                for j in output_value:
                    # copy data into the pdu
                    pdu += struct.pack(">B", j)
            if not data_format:
                data_format = ">" + (len(output_value) * "B")
            if expected_length < 0:
                # slave + func + SubFunc1 + SubFunc2 + Data + crc1 + crc2
                expected_length = len(output_value) + 6

        elif function_code == defines.READ_WRITE_MULTIPLE_REGISTERS:
            is_read_function = True
            byte_count = 2 * len(output_value)
            pdu = struct.pack(
                ">BHHHHB",
                function_code, starting_address, quantity_of_x,
                write_starting_address_FC23, len(output_value), byte_count
            )
            for j in output_value:
                fmt = "H" if j >= 0 else "h"
                # copy data into the pdu
                pdu += struct.pack(">" + fmt, j)
            if not data_format:
                data_format = ">" + (quantity_of_x * "H")
            if expected_length < 0:
                # slave + func + byteCountLen + bytecode x 2 + crc1 + crc2
                expected_length = 2 * quantity_of_x + 5

        else:
            raise ModbusFunctionNotSupportedError(
                "The {0} function code is not supported. ".format(function_code))

        # instantiate a query which implements the MAC (TCP or RTU) part of the protocol
        query = self._make_query()

        # add the MAC part of the protocol to the request
        request = query.build_request(pdu, slave)

        # send the request to the slave
        retval = call_hooks("modbus.Master.before_send", (self, request))
        if retval is not None:
            request = retval
        if self._verbose:
            LOGGER.debug(get_log_buffer("-> ", request))
        await self._send(request)

        call_hooks("modbus.Master.after_send", (self, ))

        if slave != 0:
            # receive the response from the slave
            response = await self._recv(expected_length)
            if len(response) == 0:
                LOGGER.exception("recv data timeout")
                return
            retval = call_hooks("modbus.Master.after_recv", (self, response))
            if retval is not None:
                response = retval
            if self._verbose:
                LOGGER.debug(get_log_buffer("<- ", response))

            # extract the pdu part of the response
            response_pdu = query.parse_response(response)

            # analyze the received data
            (return_code, byte_2) = struct.unpack(">BB", response_pdu[0:2])
            if return_code > 0x80:
                # the slave has returned an error
                exception_code = byte_2
                raise ModbusError(exception_code)
            else:
                if is_read_function:
                    # get the values returned by the reading function
                    byte_count = byte_2
                    data = response_pdu[2:]
                    if byte_count != len(data):
                        # the byte count in the pdu is invalid
                        raise ModbusInvalidResponseError(
                            "Byte count is {0} while actual number of bytes is {1}. ".format(
                                byte_count, len(data))
                        )
                else:
                    # return what the slave sent back after a writing function
                    data = response_pdu[1:]
                # return the data as a tuple according to data_format
                # (calculated from the function code or user-defined)
                result = struct.unpack(data_format, data)
                if nb_of_digits > 0:
                    digits = []
                    for byte_val in result:
                        for i in range(8):
                            if len(digits) >= nb_of_digits:
                                break
                            digits.append(byte_val % 2)
                            byte_val = byte_val >> 1
                    result = tuple(digits)
                return result

    async def read(self, slave_id, func_code, address, length):
        """Read data from Modbus, splitting requests larger than 124 registers."""
        if length > 124:
            data = []
            while length > 124:
                temp = await self.execute(slave_id, func_code, address, 124)
                data.extend(temp)
                address += 124
                length -= 124
            # read whatever is left
            temp = await self.execute(slave_id, func_code, address, length)
            data.extend(temp)
            return data
        # small enough for a single request
        return await self.execute(slave_id, func_code, address, length)

    def to_float(self, data: Tuple[int], keep_num: int = 2):
        """Convert a sequence of register ints into a list of floats."""
        return trans_int_to_float(data, keep_num=keep_num)
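# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# module). Assumes a slave on 127.0.0.1:502 with at least 200 holding
# registers populated.
if __name__ == "__main__":
    async def demo():
        master = MasterAsync("127.0.0.1", 502)
        # read() splits requests above 124 registers into several frames
        registers = await master.read(1, defines.READ_HOLDING_REGISTERS, 0, 200)
        print(master.to_float(registers)[:5])

    asyncio.run(demo())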
zdpapi-modbus
/zdpapi_modbus-1.7.1.tar.gz/zdpapi_modbus-1.7.1/zdpapi_modbus/master_async.py
master_async.py
import struct
import random
from typing import List, Tuple

# Type table for Modbus: maps a struct format character to its C type,
# Python type, byte count and the number of Modbus register addresses it
# occupies (one register address holds 2 bytes).
TYPE_DICT = {
    "c": {
        "C": "char",                    # type in C
        "python": "string of length 1", # type in Python
        "byte": 1,                      # byte count (also the length on Modbus)
    },
    "b": {"C": "signed char", "python": "integer", "byte": 1, "address": 1},
    "B": {"C": "unsigned char", "python": "integer", "byte": 1, "address": 1},
    "?": {"C": "_Bool", "python": "bool", "byte": 1, "address": 1},
    "h": {"C": "short", "python": "integer", "byte": 2, "address": 1},
    "H": {"C": "unsigned short", "python": "integer", "byte": 2, "address": 1},
    "i": {"C": "int", "python": "integer", "byte": 4, "address": 2},
    "I": {"C": "unsigned int", "python": "integer", "byte": 4, "address": 2},
    "l": {"C": "long", "python": "integer", "byte": 4, "address": 2},
    "L": {"C": "unsigned long", "python": "long", "byte": 4, "address": 2},
    "q": {"C": "long long", "python": "long", "byte": 8, "address": 4},
    "Q": {"C": "unsigned long long", "python": "long", "byte": 8, "address": 4},
    "f": {"C": "float", "python": "float", "byte": 4, "address": 2},
    "d": {"C": "double", "python": "float", "byte": 8, "address": 4},
    "s": {"C": "char[]", "python": "string"},
    "p": {"C": "char[]", "python": "string"},
    "P": {"C": "void *", "python": "long"},
}


def get_length(type_str):
    """Return the byte count for a data type.

    @param type_str: type format character
    """
    type_ = TYPE_DICT.get(type_str)
    # exact match first
    if type_ is not None:
        return type_.get("byte")
    # fall back to the lowercase variant
    type_ = TYPE_DICT.get(type_str.lower())
    if type_ is not None:
        return type_.get("byte")
    raise Exception(f"unknown data type: {type_str}")


def get_address_length(type_str):
    """Return how many Modbus register addresses a data type occupies."""
    type_ = TYPE_DICT.get(type_str)
    # exact match first
    if type_ is not None:
        return type_.get("address")
    # fall back to the lowercase variant
    type_ = TYPE_DICT.get(type_str.lower())
    if type_ is not None:
        return type_.get("address")
    raise Exception(f"unknown data type: {type_str}")


def get_data_real_length(type_str, data_length):
    """Return the number of real values carried by data_length registers.

    @param type_str: type format character
    @param data_length: number of register addresses
    """
    # register addresses needed per value
    address_length = get_address_length(type_str)
    return data_length // address_length


def trans_float_to_int(num_arr: List[float]) -> Tuple[int]:
    """Convert floats to register ints.

    @param num_arr: list of floats
    """
    msg = struct.pack(f"{len(num_arr)}f", *num_arr)
    return struct.unpack(f"{len(num_arr) * 2}H", msg)


def trans_int_to_float(num_arr: List[int], keep_num: int = 2) -> List[float]:
    """Convert register ints back to floats.

    @param num_arr: list of register ints
    @param keep_num: number of decimal places to keep
    """
    if num_arr is None:
        return []
    r = struct.unpack(f"{len(num_arr) // 2}f",
                      struct.pack(f"{len(num_arr)}H", *num_arr))
    return [round(i, keep_num) for i in r]


def transform_type_arr(type_str, num_arr, keep_num=2, reverse=False):
    """Convert an array according to its type format character."""
    result = []
    if type_str.lower() == "f":
        # float type
        if reverse:
            # registers -> floats
            result = trans_int_to_float(num_arr, keep_num=keep_num)
        else:
            # floats -> registers
            result = trans_float_to_int(num_arr)
    elif type_str.lower() == "b":
        # boolean type: no conversion needed in either direction
        result = num_arr
    return result


def pack_byte(data: List[int]):
    """Pack short integers."""
    return struct.pack(f"{len(data)}h", *data)


def pack_int(data: List[int]):
    """Pack integers."""
    return struct.pack(f"{len(data)}i", *data)


def pack_long(data: List[int]):
    """Pack long integers."""
    return struct.pack(f"{len(data)}l", *data)


def pack_float(data: List[float]):
    """Pack single-precision floats."""
    return struct.pack(f"{len(data)}f", *data)


def pack_double(data: List[float]):
    """Pack double-precision floats."""
    return struct.pack(f"{len(data)}d", *data)


def unpack_byte(data_length: int, msg) -> List[int]:
    """Unpack short integers.

    :param data_length: number of elements in the original data
    :param msg: the packed message bytes
    :return: list of unpacked elements
    """
    return list(struct.unpack(f"{data_length}h", msg))


def unpack_int(data_length: int, msg) -> List[int]:
    """Unpack integers.

    :param data_length: number of elements in the original data
    :param msg: the packed message bytes
    :return: list of unpacked elements
    """
    return list(struct.unpack(f"{data_length}i", msg))


def unpack_long(data_length: int, msg) -> List[int]:
    """Unpack long integers.

    :param data_length: number of elements in the original data
    :param msg: the packed message bytes
    :return: list of unpacked elements
    """
    return list(struct.unpack(f"{data_length}l", msg))


def unpack_float(data_length: int, msg, float_keep_num: int = 2) -> List[float]:
    """Unpack single-precision floats.

    :param data_length: number of elements in the original data
    :param msg: the packed message bytes
    :param float_keep_num: number of decimal places to keep
    :return: list of unpacked elements
    """
    origin_format = f"{data_length}f"
    register_format = f"{data_length * 2}h"
    b = struct.unpack(register_format, msg)
    r = struct.unpack(origin_format, struct.pack(register_format, *b))
    return [round(x, float_keep_num) for x in r]


def unpack_double(data_length: int, msg, float_keep_num: int = 2) -> List[float]:
    """Unpack double-precision floats.

    :param data_length: number of elements in the original data
    :param msg: the packed message bytes
    :param float_keep_num: number of decimal places to keep
    :return: list of unpacked elements
    """
    origin_format = f"{data_length}d"
    register_format = f"{data_length * 4}h"
    b = struct.unpack(register_format, msg)
    r = struct.unpack(origin_format, struct.pack(register_format, *b))
    return [round(x, float_keep_num) for x in r]


def random_type_arr(type_str, min: int = 0, max: int = 100, length: int = 20, keep_num: int = 2):
    """Generate a random array according to its type format character."""
    result = []
    if type_str.lower() == "f":
        # floats
        result = [round(random.random() * max + min, keep_num) for _ in range(length)]
    elif type_str.lower() == "b":
        # booleans
        result = [random.randint(0, 1) for _ in range(length)]
    return result


if __name__ == '__main__':
    data = [11, 22, 33]

    # packing test
    print("================ packing test ================")
    print(pack_byte(data))
    print(pack_int(data))
    print(pack_long(data))
    print(pack_float(data))
    print(pack_double(data))
    print("================ done ================\n\n")

    # unpacking test
    print("================ unpacking test ================")
    print(unpack_byte(len(data), pack_byte(data)))
    print(unpack_int(len(data), pack_int(data)))
    print(unpack_long(len(data), pack_long(data)))
    print(unpack_float(len(data), pack_float(data)))
    print(unpack_double(len(data), pack_double(data)))
    print("================ done ================\n\n")
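# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# module): the type helpers tie byte counts to register addresses.
# A float ("f") takes 4 bytes, i.e. 2 register addresses, so 2500 registers
# carry 1250 float values.
if __name__ == '__main__':
    assert get_length("f") == 4
    assert get_address_length("f") == 2
    assert get_data_real_length("f", 2500) == 1250
    regs = transform_type_arr("f", [1.5, 2.5])                # floats -> register ints
    print(regs, transform_type_arr("f", regs, reverse=True))  # and back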
zdpapi-modbus
/zdpapi_modbus-1.7.1.tar.gz/zdpapi_modbus-1.7.1/zdpapi_modbus/zstruct.py
zstruct.py
import time
import json

import redis

from .master import trans_int_to_float
from .libs.modbus_tk import modbus_tcp


class Device:
    def __init__(self, *, modbus_ip: str = "127.0.0.1", modbus_port: int = 502,
                 device_id, address, length) -> None:
        self.device_id = device_id
        self.master = modbus_tcp.TcpMaster(modbus_ip, modbus_port)
        self.address = address
        self.length = length
        self.redis = None  # redis connection

    def connect_redis(self, redis_ip: str = "127.0.0.1", redis_port: int = 6379, redis_db: int = 1):
        """Connect to Redis (lazily, on first use)."""
        if self.redis is None:
            self.redis = redis.Redis(host=redis_ip, port=redis_port, db=redis_db)

    def write_to_redis(self, *, redis_ip: str = "127.0.0.1", redis_port: int = 6379,
                       redis_db: int = 1, controls, freeq_seconds: float = 0.04,
                       debug: bool = False):
        """Read data from Modbus and cache it in Redis.

        controls maps a control-variable name to the number of float values
        that variable occupies on the Modbus block.
        (freeq_seconds is currently unused.)
        """
        self.connect_redis(redis_ip=redis_ip, redis_port=redis_port, redis_db=redis_db)
        start = time.time()

        # read the registers from Modbus (slave 1, function code 3)
        data = self.master.execute(1, 3, self.address, self.length)
        values = trans_int_to_float(data)

        self.raw = {}    # raw (high-frequency) samples per variable
        self.data_ = {}  # aggregated (averaged) value per variable

        def mean(value_list):
            """Arithmetic mean of a list."""
            total = 0
            for i in value_list:
                total += i
            return total / len(value_list)

        count = 0
        for k, v in controls.items():
            # slice this variable's values out of the decoded floats
            self.raw[k] = values[count: count + v]
            self.data_[k] = mean(values[count: count + v])
            count += v  # advance by the number of floats this variable occupies

        # cache both views in Redis
        self.redis.set(f"{self.device_id}_control_raw", json.dumps(self.raw))
        self.redis.set(f"{self.device_id}_control_data", json.dumps(self.data_))

        if debug:
            end = time.time()
            print("single read took:", end - start)
            print(f"device {self.device_id} aggregated data:", self.data_)
            print(f"device {self.device_id} raw high-frequency data:", self.raw)
        return self.raw, self.data_
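# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# module). Assumes a Modbus slave on 127.0.0.1:502 and a Redis server on
# 127.0.0.1:6379; the control name "wind_speed" is a made-up example.
# length=100 registers decode to 50 floats, so the counts in `controls`
# must sum to 50.
if __name__ == "__main__":
    dev = Device(modbus_ip="127.0.0.1", modbus_port=502,
                 device_id=1, address=0, length=100)
    raw, means = dev.write_to_redis(controls={"wind_speed": 50}, debug=True)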
zdpapi-modbus
/zdpapi_modbus-1.7.1.tar.gz/zdpapi_modbus-1.7.1/zdpapi_modbus/device.py
device.py