ZooKeeper
=========
Overview
--------
Zuul has a microservices architecture designed with the goal of having no
single point of failure.
Zuul is an event-driven system with several event loops that interact
with each other:
* Driver event loop: Drivers like GitHub or Gerrit have their own event loops.
They perform preprocessing of the received events and add events into the
scheduler event loop.
* Scheduler event loop: This event loop processes the pipelines and
reconfigurations.
Each of these event loops persists data in ZooKeeper so that other
components can share or resume processing.
A key aspect of scalability is maintaining an event queue per
pipeline. This makes it easy to process several pipelines in
parallel. A new driver event is first processed in the driver event
queue, which adds a corresponding event to the scheduler event queue.
When the scheduler processes its event queue, it determines which
pipelines are interested in the event according to the tenant
configuration and layout, and dispatches the event to all matching
pipeline queues.
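The following is a simplified sketch of that dispatch step (not Zuul's
actual code; the attribute and queue names are illustrative assumptions):

.. code-block:: python

   def dispatch_trigger_event(event, tenant, pipeline_queues):
       # Forward the event to every pipeline in the tenant whose
       # configured triggers are interested in it.
       for pipeline in tenant.layout.pipelines.values():
           if any(trigger.matches(event) for trigger in pipeline.triggers):
               pipeline_queues[(tenant.name, pipeline.name)].put(event)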
In order to make reconfigurations efficient we store the parsed branch
config in Zookeeper. This makes it possible to create the current
layout without the need to ask the mergers multiple times for the
configuration. This is used by zuul-web to keep an up-to-date layout
for API requests.
We store the pipeline state in Zookeeper. This contains the complete
information about queue items, jobs and builds, as well as a separate
abbreviated state for quick access by zuul-web for the status page.
Driver Event Ingestion
----------------------
There are three types of event receiving mechanisms in Zuul:
* Active event gathering: The connection actively listens to events (Gerrit)
or generates them itself (git, timer, zuul)
* Passive event gathering: The events are sent to Zuul from outside (GitHub
webhooks)
* Internal event generation: The events are generated within Zuul itself and
typically get injected directly into the scheduler event loop.
The active event gathering needs to be handled differently from
passive event gathering.
Active Event Gathering
~~~~~~~~~~~~~~~~~~~~~~
This is mainly done by the Gerrit driver. We actively maintain a
connection to the target and receive events. We utilize a leader
election to make sure there is exactly one instance receiving the
events.
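As a rough illustration (the znode path and callback below are assumptions,
not taken from Zuul's source), such a leader election can be implemented
with kazoo's ``Election`` recipe:

.. code-block:: python

   from kazoo.client import KazooClient
   from kazoo.recipe.election import Election

   def run_event_watcher(zk: KazooClient, watch_events):
       # run() blocks until this process wins the election, then invokes
       # the callback; leadership is released when the callback returns.
       election = Election(zk, "/zuul/events/connection/gerrit/events/election")
       election.run(watch_events)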
Passive Event Gathering
~~~~~~~~~~~~~~~~~~~~~~~
In the case of passive event gathering, events are typically sent to
Zuul via webhooks. These events are received by zuul-web, which then
stores them in ZooKeeper. This type of event gathering is used by
GitHub and other drivers. In this case we can run multiple zuul-web
instances, and each event is still received only once, so no special
event deduplication or leader election is needed. Multiple instances
behind a load balancer are safe to use and recommended for such
passive event gathering.
Configuration Storage
---------------------
ZooKeeper is not designed as a database for large amounts of data, so
we should store as little as possible in it. Thus we only store the
per-project-branch unparsed config in ZooKeeper. From this, every part
of Zuul, like the scheduler or zuul-web, can quickly recalculate the
layout of each tenant and keep it up to date by watching for changes
in the unparsed project-branch config.
We store the actual config sharded across multiple nodes, and those
nodes are stored under per-project and per-branch znodes. This is
needed because of the 1MB limit per znode in ZooKeeper. It further
makes it less expensive to cache the global config in each component,
as this cache is updated incrementally.
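A minimal sketch of this kind of sharding with kazoo (this is not Zuul's
actual sharding implementation; the shard size and naming scheme are
assumptions) might look like:

.. code-block:: python

   from kazoo.client import KazooClient

   SHARD_SIZE = 1024 * 1023  # stay safely below ZooKeeper's ~1MB znode limit

   def write_sharded(zk: KazooClient, path: str, data: bytes) -> None:
       zk.ensure_path(path)
       for child in zk.get_children(path):
           zk.delete(f"{path}/{child}")  # drop any previous shards
       for i in range(0, max(len(data), 1), SHARD_SIZE):
           zk.create(f"{path}/{i // SHARD_SIZE:010d}", data[i:i + SHARD_SIZE])

   def read_sharded(zk: KazooClient, path: str) -> bytes:
       children = sorted(zk.get_children(path))
       return b"".join(zk.get(f"{path}/{child}")[0] for child in children)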
Executor and Merger Queues
--------------------------
The executors and mergers each have an execution queue (and in the
case of executors, optionally per-zone queues). This makes it easy
for executors and mergers to simply pick the next job to run without
needing to inspect the entire pipeline state. The scheduler is
responsible for submitting job requests as the state changes.
Zookeeper Map
-------------
This is a reference for object layout in Zookeeper.
.. path:: zuul
All ephemeral data stored here. Remove the entire tree to "reset"
the system.
.. path:: zuul/cache/connection/<connection>
The connection cache root. Each connection has a dedicated space
for its caches. Two types of caches are currently implemented:
change and branch.
.. path:: zuul/cache/connection/<connection>/branches
The connection branch cache root. Contains the cache itself and a
lock.
.. path:: zuul/cache/connection/<connection>/branches/data
:type: BranchCacheZKObject (sharded)
The connection branch cache data. This is a single sharded JSON blob.
.. path:: zuul/cache/connection/<connection>/branches/lock
:type: RWLock
The connection branch cache read/write lock.
.. path:: zuul/cache/connection/<connection>/cache
The connection change cache. Each node under this node is an entry
in the change cache. The node ID is a sha256 of the cache key, the
contents are the JSON serialization of the cache entry metadata.
One of the included items is the `data_uuid` which is used to
retrieve the actual change data.
When a cache entry is updated, a new data node is created without
deleting the old data node. They are eventually garbage collected.
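For illustration only (the metadata fields other than ``data_uuid`` are
assumptions), the node naming could be derived like this:

.. code-block:: python

   import hashlib
   import json
   import uuid

   def cache_entry_path(connection: str, cache_key: str) -> str:
       # The cache entry znode name is the sha256 of the cache key.
       digest = hashlib.sha256(cache_key.encode("utf-8")).hexdigest()
       return f"/zuul/cache/connection/{connection}/cache/{digest}"

   def new_cache_entry_metadata() -> bytes:
       # The metadata references the change data node by UUID.
       return json.dumps({"data_uuid": uuid.uuid4().hex}).encode("utf-8")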
.. path:: zuul/cache/connection/<connection>/data
Data for the change cache. These nodes are identified by a UUID
referenced from the cache entries.
These are sharded JSON blobs of the change data.
.. path:: zuul/cache/blob/data
Data for the blob store. These nodes are identified by a
sha256sum of the secret content.
These are sharded blobs of data.
.. path:: zuul/cache/blob/lock
Side-channel lock directory for the blob store. The store locks
by key id under this znode when writing.
.. path:: zuul/cleanup
This node holds locks for the cleanup routines to make sure that
only one scheduler runs them at a time.
.. path:: build_requests
.. path:: connection
.. path:: general
.. path:: merge_requests
.. path:: node_request
.. path:: sempahores
.. path:: zuul/components
The component registry. Each Zuul process registers itself under
the appropriate node in this hierarchy so the system has a holistic
view of what's running. The name of the node is based on the
hostname but is a sequence node in order to handle multiple
processes. The nodes are ephemeral so an outage is automatically
detected.
The contents of each node contain information about the running
process and may be updated periodically.
.. path:: executor
.. path:: fingergw
.. path:: merger
.. path:: scheduler
.. path:: web
.. path:: zuul/config/cache
The unparsed config cache. This contains the contents of every
Zuul config file returned by the mergers for use in configuration.
Organized by repo canonical name, branch, and filename. The files
themselves are sharded.
.. path:: zuul/config/lock
Locks for the unparsed config cache.
.. path:: zuul/events/connection/<connection>/events
:type: ConnectionEventQueue
The connection event queue root. Each connection has an event
queue where incoming events are recorded before being moved to the
tenant event queue.
.. path:: zuul/events/connection/<connection>/events/queue
The actual event queue. Entries in the queue reference separate
data nodes. These are sequence nodes to maintain the event order.
.. path:: zuul/events/connection/<connection>/events/data
Event data nodes referenced by queue items. These are sharded.
.. path:: zuul/events/connection/<connection>/events/election
An election to determine which scheduler processes the event queue
and moves events to the tenant event queues.
Drivers may have additional elections as well. For example, Gerrit
has an election for the watcher and poller.
.. path:: zuul/events/tenant/<tenant>
Tenant-specific event queues. Each queue described below has a
data and queue subnode.
.. path:: zuul/events/tenant/<tenant>/management
The tenant-specific management event queue.
.. path:: zuul/events/tenant/<tenant>/trigger
The tenant-specific trigger event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines
Holds a set of queues for each pipeline.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/management
The pipeline management event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/result
The pipeline result event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/trigger
The pipeline trigger event queue.
.. path:: zuul/executor/unzoned
:type: JobRequestQueue
The unzoned executor build request queue. The generic description
of a job request queue follows:
.. path:: requests/<request uuid>
Requests are added by UUID. Consumers watch the entire tree and
order the requests by znode creation time.
.. path:: locks/<request uuid>
:type: Lock
A consumer will create a lock under this node before processing
a request. The znode containing the lock and the request znode
have the same UUID. This is a side-channel lock so that the
lock can be held while the request itself is deleted.
.. path:: params/<request uuid>
Parameters can be quite large, so they are kept in a separate
znode and only read when needed, and may be removed during
request processing to save space in ZooKeeper. The data may be
sharded.
.. path:: result-data/<request uuid>
When a job is complete, the results of the merge are written
here. The results may be quite large, so they are sharded.
.. path:: results/<request uuid>
Since writing sharded data is not atomic, once the results are
written to ``result-data``, a small znode is written here to
indicate the results are ready to read. The submitter can watch
this znode to be notified that it is ready.
.. path:: waiters/<request uuid>
:ephemeral:
A submitter who requires the results of the job creates an
ephemeral node here to indicate their interest in the results.
This is used by the cleanup routines to ensure that they don't
prematurely delete the result data. Used for merge jobs.
.. path:: zuul/executor/zones/<zone>
A zone-specific executor build request queue. The contents are the
same as above.
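The following hedged sketch (not the real Zuul implementation; the handler
and ordering details are assumptions) shows how a consumer might use the
``requests/`` and ``locks/`` layout described above:

.. code-block:: python

   from kazoo.client import KazooClient
   from kazoo.recipe.lock import Lock

   def process_next_request(zk: KazooClient, root: str, handler) -> str | None:
       requests = zk.get_children(f"{root}/requests")
       # Order by znode creation time, oldest first.
       requests.sort(key=lambda r: zk.get(f"{root}/requests/{r}")[1].created)
       for request_uuid in requests:
           lock = Lock(zk, f"{root}/locks/{request_uuid}")
           if not lock.acquire(blocking=False):
               continue  # another consumer already has this request
           try:
               handler(request_uuid)
               # The side-channel lock outlives the request znode itself.
               zk.delete(f"{root}/requests/{request_uuid}")
           finally:
               lock.release()
           return request_uuid
       return None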
.. path:: zuul/layout/<tenant>
The layout state for the tenant. Contains the cache and time data
needed for a component to determine if its in-memory layout is out
of date and update it if so.
.. path:: zuul/layout-data/<layout uuid>
Additional information about the layout. This is sharded data for
each layout UUID.
.. path:: zuul/locks
Holds various types of locks so that multiple components can coordinate.
.. path:: zuul/locks/connection
Locks related to connections.
.. path:: zuul/locks/connection/<connection>
Locks related to a single connection.
.. path:: zuul/locks/connection/database/migration
:type: Lock
Only one component should run a database migration; this lock
ensures that.
.. path:: zuul/locks/events
Locks related to tenant event queues.
.. path:: zuul/locks/events/trigger/<tenant>
:type: Lock
The scheduler locks the trigger event queue for each tenant before
processing it. This lock is only needed when processing and
removing items from the queue; no lock is required to add items.
.. path:: zuul/locks/events/management/<tenant>
:type: Lock
The scheduler locks the management event queue for each tenant
before processing it. This lock is only needed when processing and
removing items from the queue; no lock is required to add items.
.. path:: zuul/locks/pipeline
Locks related to pipelines.
.. path:: zuul/locks/pipeline/<tenant>/<pipeline>
:type: Lock
The scheduler obtains a lock before processing each pipeline.
.. path:: zuul/locks/tenant
Tenant configuration locks.
.. path:: zuul/locks/tenant/<tenant>
:type: RWLock
A write lock is obtained at this location before creating a new
tenant layout and storing its metadata in ZooKeeper. Components
which later determine that they need to update their tenant
configuration to match the state in ZooKeeper will obtain a read
lock at this location to ensure the state isn't mutated again while
the components are updating their layout to match.
.. path:: zuul/ltime
An empty node which serves to coordinate logical timestamps across
the cluster. Components may update this znode which will cause the
latest ZooKeeper transaction ID to appear in the zstat for this
znode. This is known as the `ltime` and can be used to communicate
that any subsequent transactions have occurred after this `ltime`.
This is frequently used for cache validation. Any cache which was
updated after a specified `ltime` may be determined to be
sufficiently up-to-date for use without invalidation.
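A minimal sketch of obtaining an ``ltime`` with kazoo (assuming the znode
already exists):

.. code-block:: python

   from kazoo.client import KazooClient

   def get_current_ltime(zk: KazooClient) -> int:
       # Writing the (empty) znode bumps its modification transaction id;
       # that id is the logical timestamp described above.
       stat = zk.set("/zuul/ltime", b"")
       return stat.last_modified_transaction_id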
.. path:: zuul/merger
:type: JobRequestQueue
A JobRequestQueue for mergers. See :path:`zuul/executor/unzoned`.
.. path:: zuul/nodepool
:type: NodepoolEventElection
An election to decide which scheduler will monitor nodepool
requests and generate node completion events as they are completed.
.. path:: zuul/results/management
Stores results from management events (such as an enqueue event).
.. path:: zuul/scheduler/timer-election
:type: SessionAwareElection
An election to decide which scheduler will generate events for
timer pipeline triggers.
.. path:: zuul/scheduler/stats-election
:type: SchedulerStatsElection
An election to decide which scheduler will report system-wide stats
(such as total node requests).
.. path:: zuul/global-semaphores/<semaphore>
:type: SemaphoreHandler
Represents a global semaphore (shared by multiple tenants).
Information about which builds hold the semaphore is stored in the
znode data.
.. path:: zuul/semaphores/<tenant>/<semaphore>
:type: SemaphoreHandler
Represents a semaphore. Information about which builds hold the
semaphore is stored in the znode data.
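As an illustration of this convention (a simplified sketch, not Zuul's
actual ``SemaphoreHandler``), the holder list can be updated with a
compare-and-set on the znode version:

.. code-block:: python

   import json

   from kazoo.client import KazooClient
   from kazoo.exceptions import BadVersionError

   def try_acquire(zk: KazooClient, path: str, holder: str, maximum: int) -> bool:
       zk.ensure_path(path)
       data, stat = zk.get(path)
       holders = json.loads(data) if data else []
       if holder in holders:
           return True
       if len(holders) >= maximum:
           return False
       try:
           # Compare-and-set on the znode version detects concurrent updates.
           zk.set(path, json.dumps(holders + [holder]).encode("utf-8"),
                  version=stat.version)
           return True
       except BadVersionError:
           return False  # lost the race; the caller may retry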
.. path:: zuul/system
:type: SystemConfigCache
System-wide configuration data.
.. path:: conf
The serialized version of the unparsed abide configuration as
well as system attributes (such as the tenant list).
.. path:: conf-lock
:type: WriteLock
A lock to be acquired before updating :path:`zuul/system/conf`
.. path:: zuul/tenant/<tenant>
Tenant-specific information here.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>
Pipeline state.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/dirty
A flag indicating that the pipeline state is "dirty"; i.e., it
needs to have the pipeline processor run.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/queue
Holds queue objects.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>
Items belong to queues, but are held in their own hierarchy since
they may shift to different queues during reconfiguration.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>
There will only be one buildset under the buildset/ node. If we
reset it, we will get a new uuid and delete the old one. Any
external references to it will be automatically invalidated.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/repo_state
The global repo state for the buildset is kept in its own node
since it can be large, and is also common for all jobs in this
buildset.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>
The frozen job.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>/build/<build uuid>
Information about this build of the job. Similar to buildset,
there should only be one entry, and using the UUID automatically
invalidates any references.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>/build/<build uuid>/parameters
Parameters for the build; these can be large so they're in their
own znode and will be read only if needed.
Release Notes
=============
Zuul uses `reno`_ for release note management. When adding a noteworthy
feature, fixing a noteworthy bug or introducing a behavior change that a
user or operator should know about, it is a good idea to add a release note
to the same patch.
Installing reno
---------------
reno has a command, ``reno``, that is expected to be run by developers
to create a new release note. The simplest thing to do is to install it locally
with pip:
.. code-block:: bash
pip install --user reno
Adding a new release note
-------------------------
Adding a new release note is easy:
.. code-block:: bash
reno new releasenote-slug
Where ``releasenote-slug`` is a short identifier for the release note.
reno will then create a file in ``releasenotes/notes`` that contains an
initial template with the available sections.
The file it creates is a yaml file. All of the sections except for ``prelude``
contain lists, which will be combined with the lists from similar sections in
other note files to create a bulleted list that will then be processed by
Sphinx.
The ``prelude`` section is a single block of text that will also be
combined with any other prelude sections into a single chunk.
.. _reno: https://docs.openstack.org/reno/latest/
Data Model Changelog
====================
Record changes to the ZooKeeper data model which require API version
increases here.
When making a model change:
* Increment the value of ``MODEL_API`` in ``model_api.py``.
* Update code to use the new API by default and add
backwards-compatibility handling for older versions. This makes it
easier to clean up backwards-compatibility handling in the future.
* Make sure code that special cases model versions either references a
``model_api`` variable or has a comment like `MODEL_API: >
{version}` so that we can grep for that and clean up compatibility
code that is no longer needed (a sketch of this convention appears
after this list).
* Add a test to ``test_model_upgrade.py``.
* Add an entry to this log so we can decide when to remove
backwards-compatibility handlers.
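A hedged illustration of that convention (the serializer and field names
below are drawn from this changelog for example purposes only, not from
Zuul's source):

.. code-block:: python

   MODEL_API = 15  # incremented in model_api.py with each change below

   def serialize_build(build, model_api: int) -> dict:
       data = {"uuid": build.uuid}
       if model_api >= 14:
           # MODEL_API: > 13
           # Version 14 added the pre_fail attribute to builds.
           data["pre_fail"] = build.pre_fail
       return data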
Version 0
---------
:Prior Zuul version: 4.11.0
:Description: This is an implied version as of Zuul 4.12.0 to
initialize the series.
Version 1
---------
:Prior Zuul version: 4.11.0
:Description: No change since Version 0. This explicitly records the
component versions in ZooKeeper.
Version 2
---------
:Prior Zuul version: 5.0.0
:Description: Changes the semaphore handle format from `<item_uuid>-<job_name>`
to a dictionary with buildset path and job name.
Version 3
---------
:Prior Zuul version: 5.0.0
:Description: Add a new `SupercedeEvent` and use that for dequeuing of
superseded items from other pipelines. This only affects the
schedulers.
Version 4
---------
:Prior Zuul version: 5.1.0
:Description: Adds QueueItem.dequeued_missing_requirements and sets it to True
if a change no longer meets merge requirements in dependent
pipelines. This only affects schedulers.
Version 5
---------
:Prior Zuul version: 5.1.0
:Description: Changes the result data attributes on Build from
ResultData to JobData instances and uses the
inline/offloading paradigm from FrozenJob. This affects
schedulers and executors.
Version 6
---------
:Prior Zuul version: 5.2.0
:Description: Stores the complete layout min_ltimes in /zuul/layout-data.
This only affects schedulers.
Version 7
---------
:Prior Zuul version: 5.2.2
:Description: Adds the blob store and stores large secrets in it.
Playbook secret references are now either an integer
index into the job secret list, or a dict with a blob
store key. This affects schedulers and executors.
Version 8
---------
:Prior Zuul version: 6.0.0
:Description: Deduplicates jobs in dependency cycles. Affects
schedulers only.
Version 9
---------
:Prior Zuul version: 6.3.0
:Description: Adds nodeset_alternatives and nodeset_index to frozen job.
Removes nodeset from frozen job. Affects schedulers and executors.
Version 10
----------
:Prior Zuul version: 6.4.0
:Description: Renames admin_rules to authz_rules in unparsed abide.
Affects schedulers and web.
Version 11
----------
:Prior Zuul version: 8.0.1
:Description: Adds merge_modes to branch cache. Affects schedulers and web.
Version 12
----------
:Prior Zuul version: 8.0.1
:Description: Adds job_versions and build_versions to BuildSet.
Affects schedulers.
Version 13
----------
:Prior Zuul version: 8.2.0
:Description: Stores only the necessary event info as part of a queue item
instead of the full trigger event.
Affects schedulers.
Version 14
----------
:Prior Zuul version: 8.2.0
:Description: Adds the pre_fail attribute to builds.
Affects schedulers.
Version 15
----------
:Prior Zuul version: 9.0.0
:Description: Adds ansible_split_streams to FrozenJob.
Affects schedulers and executors.
Documentation
=============
This is a brief style guide for Zuul documentation.
ReStructuredText Conventions
----------------------------
Code Blocks
~~~~~~~~~~~
When showing a YAML example, use the ``.. code-block:: yaml``
directive so that the sample appears as a code block with the correct
syntax highlighting.
Literal Values
~~~~~~~~~~~~~~
Filenames and literal values (such as when we instruct a user to type
a specific string into a configuration file) should use the RST
````literal```` syntax.
YAML supports boolean values expressed with or without an initial
capital letter. In examples and documentation, use ``true`` and
``false`` in lowercase type because the resulting YAML is easier for
users to type and read.
Terminology
~~~~~~~~~~~
Zuul employs some specialized terminology. To help users become
acquainted with it, we maintain a glossary. Observe the following:
* Specialized terms should have entries in the glossary.
* If the term is being defined in the text, don't link to the glossary
(that would be redundant), but do emphasize it with ``*italics*``
the first time it appears in that definition. Subsequent uses
within the same subsection should be in regular type.
* If it's being used (but not defined) in the text, link the first
usage within a subsection to the glossary using the ``:term:`` role,
but subsequent uses should be in regular type.
* Be cognizant of how readers may jump to link targets within the
text, so be liberal in considering that once you cross a link
target, you may be in a new "subsection" for the above guideline.
Zuul Sphinx Directives
----------------------
The following extra Sphinx directives are available in the ``zuul``
domain. The ``zuul`` domain is configured as the default domain, so the
``zuul:`` prefix may be omitted.
zuul:attr::
~~~~~~~~~~~
This should be used when documenting Zuul configuration attributes.
Zuul configuration is heavily hierarchical, and this directive
facilitates documenting these by emphasising the hierarchy as
appropriate. It will annotate each configuration attribute with a
nice header with its own unique hyperlink target. It displays the
entire hierarchy of the attribute, but emphasises the last portion
(i.e., the field being documented).
To use the hierarchical features, simply nest with indentation in the
normal RST manner.
It supports the ``required`` and ``default`` options and will annotate
the header appropriately. Example:
.. code-block:: rst
.. attr:: foo
Some text about ``foo``.
.. attr:: bar
:required:
:default: 42
Text about ``foo.bar``.
.. attr:: foo
:noindex:
Some text about ``foo``.
.. attr:: bar
:noindex:
:required:
:default: 42
Text about ``foo.bar``.
zuul:value::
~~~~~~~~~~~~
Similar to zuul:attr, but used when documenting a literal value of an
attribute.
.. code-block:: rst
.. attr:: foo
Some text about foo. It supports the following values:
.. value:: bar
One of the supported values for ``foo`` is ``bar``.
.. value:: baz
Another supported value for ``foo`` is ``baz``.
.. attr:: foo
:noindex:
Some text about foo. It supports the following values:
.. value:: bar
:noindex:
One of the supported values for ``foo`` is ``bar``.
.. value:: baz
:noindex:
Another supported value for ``foo`` is ``baz``.
zuul:var::
~~~~~~~~~~
Also similar to zuul:attr, but used when documenting an Ansible
variable which is available to a job's playbook. In these cases, it's
often necessary to indicate the variable may be an element of a list
or dictionary, so this directive supports a ``type`` option. It also
supports the ``hidden`` option so that complex data structure
definitions may continue across sections. To use this, set the hidden
option on a ``zuul:var::`` directive with the root of the data
structure as the name. Example:
.. code-block:: rst
.. var:: foo
Foo is a dictionary with the following keys:
.. var:: items
:type: list
Items is a list of dictionaries with the following keys:
.. var:: bar
Text about bar
Section Boundary
.. var:: foo
:hidden:
.. var:: baz
Text about baz
.. End of code block; start example
.. var:: foo
:noindex:
Foo is a dictionary with the following keys:
.. var:: items
:noindex:
:type: list
Items is a list of dictionaries with the following keys:
.. var:: bar
:noindex:
Text about bar
Section Boundary
.. var:: foo
:noindex:
:hidden:
.. var:: baz
:noindex:
Text about baz
.. End of example
Zuul Sphinx Roles
-----------------
The following extra Sphinx roles are available. Use these within the
text when referring to attributes, values, and variables defined with
the directives above. Use these roles for the first appearance of an
object within a subsection, but use the ````literal```` role in
subsequent uses.
\:zuul:attr:
~~~~~~~~~~~~
This creates a reference to the named attribute. Provide the fully
qualified name (e.g., ``:attr:`pipeline.manager```)
\:zuul:value:
~~~~~~~~~~~~~
This creates a reference to the named value. Provide the fully
qualified name (e.g., ``:attr:`pipeline.manager.dependent```)
\:zuul:var:
~~~~~~~~~~~
This creates a reference to the named variable. Provide the fully
qualified name (e.g., ``:var:`zuul.executor.name```)
:title: Metrics
Metrics
=======
Event Overview
--------------
The following table illustrates the event and pipeline processing
sequence as it relates to some of the metrics described in
:ref:`statsd`. This is intended as general guidance only and is not
an exhaustive list.
+----------------------------------------+------+------+------+--------------------------------------+
| Event | Metrics | Attribute |
+========================================+======+======+======+======================================+
| Event generated by source | | | | event.timestamp |
+----------------------------------------+------+ + +--------------------------------------+
| Enqueued into driver queue | | | | |
+----------------------------------------+------+ + +--------------------------------------+
| Enqueued into tenant trigger queue | | | | event.arrived_at_scheduler_timestamp |
+----------------------------------------+ + [8] + +--------------------------------------+
| Forwarded to matching pipelines | [1] | | | |
+----------------------------------------+ + + +--------------------------------------+
| Changes enqueued ahead | | | | |
+----------------------------------------+ + + +--------------------------------------+
| Change enqueued | | | | item.enqueue_time |
+----------------------------------------+------+------+ +--------------------------------------+
| Changes enqueued behind | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Set item configuration | | | | build_set.configured_time |
+----------------------------------------+------+------+ +--------------------------------------+
| Request files changed (if needed) | | | | |
+----------------------------------------+ +------+ +--------------------------------------+
| Request merge | [2] | | | |
+----------------------------------------+ +------+ +--------------------------------------+
| Wait for merge (and files if needed) | | | [9] | |
+----------------------------------------+------+------+ +--------------------------------------+
| Generate dynamic layout (if needed) | [3] | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Freeze job graph | [4] | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Request global repo state (if needed) | | | | build_set.repo_state_request_time |
+----------------------------------------+ [5] +------+ +--------------------------------------+
| Wait for global repo state (if needed) | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Deduplicate jobs | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Acquire semaphore (non-resources-first)| | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Request nodes | | | | request.created_time |
+----------------------------------------+ [6] +------+ +--------------------------------------+
| Wait for nodes | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Acquire semaphore (resources-first) | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Enqueue build request | | | | build.execute_time |
+----------------------------------------+ [7] +------+ +--------------------------------------+
| Executor starts job | | | | build.start_time |
+----------------------------------------+------+------+------+--------------------------------------+
====== =============================
Metric Name
====== =============================
1 event_enqueue_processing_time
2 merge_request_time
3 layout_generation_time
4 job_freeze_time
5 repo_state_time
6 node_request_time
7 job_wait_time
8 event_enqueue_time
9 event_job_time
====== =============================
================================================
Enhanced regional distribution of zuul-executors
================================================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
Problem description
===================
When running large distributed deployments it can be desirable to keep traffic
as local as possible. To facilitate this, Zuul supports zoning of zuul-executors.
When zones are used, executors only process jobs on nodes that are running in the
same zone. This works well in many cases. However, there is currently a limitation
around live log streaming that makes it impossible to use this feature in
certain environments.
Live log streaming via zuul-web or zuul-fingergw requires that each executor
is directly addressable from zuul-web or zuul-fingergw. This is not the case
if
* zuul-executors are behind a NAT. In this case one would need to create a NAT
rule per executor on different ports which can become a maintenance nightmare.
* zuul-executors run in a different Kubernetes or OpenShift cluster. In this
case one would need an Ingress/Route or NodePort per executor, which also
makes maintenance really hard.
Proposed change
---------------
In both use cases it would be desirable to have one service in each zone that
can further dispatch log streams within its own zone. Addressing a single
service is much more feasible, e.g. via a single NAT rule or a Route or
NodePort service in Kubernetes.
.. graphviz::
:align: center
graph {
graph [fontsize=10 fontname="Verdana"];
node [fontsize=10 fontname="Verdana"];
user [ label="User" ];
subgraph cluster_1 {
node [style=filled];
label = "Zone 1";
web [ label="Web" ];
executor_1 [ label="Executor 1" ];
}
subgraph cluster_2 {
node [style=filled];
label = "Zone 2";
route [ label="Route/Ingress/NAT" ]
fingergw_zone2 [ label="Fingergw Zone 2"];
executor_2 [ label="Executor 2" ];
executor_3 [ label="Executor 3" ];
}
user -- web [ constraint=false ];
web -- executor_1
web -- route [ constraint=false ]
route -- fingergw_zone2
fingergw_zone2 -- executor_2
fingergw_zone2 -- executor_3
}
Current log streaming is essentially the same for zuul-web and zuul-fingergw and
works like this:
* Fingergw receives a stream request from a user
* Fingergw resolves stream address by calling get_job_log_stream_address and
supplying a build uuid
* Scheduler responds with the executor hostname and port on which the build
is running.
* Fingergw connects to the stream address, supplies the build uuid and connects
the streams.
The proposed process is almost the same:
* Fingergw receives a stream request from a user
* Fingergw resolves stream address by calling get_job_log_stream_address and
supplying the build uuid *and the zone of the fingergw (optional)*
* Scheduler responds:
* Address of executor if the zone provided with the request matches the zone
of the executor running the build, or the executor is un-zoned.
* Address of fingergw in the target zone otherwise.
* Fingergw connects to the stream address, supplies the build uuid and connects
the streams.
In case the build runs in a different zone, the fingergw in the target zone
will follow the exact same process and reach the executor stream directly, as
that executor is in its own zone.
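A hypothetical sketch of the zone-aware resolution (names and data
structures are assumptions, not Zuul's actual implementation):

.. code-block:: python

   def resolve_stream_address(build, requestor_zone, fingergw_by_zone):
       executor = build.executor
       if executor.zone is None or executor.zone == requestor_zone:
           # Un-zoned executors, or executors in the caller's own zone,
           # are addressed directly.
           return executor.hostname, executor.finger_port
       # Otherwise hand off to the fingergw registered for that zone.
       gateway = fingergw_by_zone[executor.zone]
       return gateway.hostname, gateway.port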
In order to facilitate this the following changes need to be made:
* The fingergw registers itself in the zk component registry and offers its
hostname, port and optionally zone. The hostname further needs to be
configurable like it is for the executors.
* Zuul-web and fingergw need a new optional config parameter containing their
zone.
While zuul-web and zuul-fingergw will be aware of what zone they are running in,
end-users will not need this information; the user-facing instances of those
services will continue to serve the entirety of the Zuul system regardless of
which zone they reside in, all from a single public URL or address.
Gearman
-------
The easiest and most standard way of getting non-HTTP traffic into a
Kubernetes/OpenShift cluster is using Ingress/Routes in combination with TLS and
SNI (Server Name Indication). SNI is used in this case for dispatching the
connection to the correct service. Gearman currently doesn't support SNI, which
makes it harder to route it into a Kubernetes/OpenShift cluster from outside.
Security considerations
-----------------------
Live log streams can potentially contain sensitive data. Encryption would be
especially useful when transferring them between different datacenters.
We should therefore support optionally encrypting the finger streams using TLS
with optional client auth, as we do with Gearman. The mechanism should also
support SNI (Server Name Indication).
Circular Dependencies
=====================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
The current assumption in Zuul is that dependencies form a Directed Acyclic
Graph (DAG). This is also what should be considered best practice. However,
there can be cases where we have circular dependencies, in which case the
dependency graph is no longer a DAG.
The current implementation to detect and prevent cycles will visit all vertices
of the dependency graph and bail out if an item is encountered twice. This
method is no longer feasible when we want to allow circular dependencies
between changes.
Instead, we need to find the `strongly connected components`_ (changes) of a
given dependency graph. The individual changes in those subgraphs need to know
about each other.
Circular dependency handling needs to be configurable on a per tenant and
project basis.
.. _strongly connected components: https://en.wikipedia.org/wiki/Strongly_connected_component
Proposed change
---------------
By default, Zuul will retain the current behavior of preventing dependency
cycles. The circular dependency handling must be explicitly enabled in the
tenant configuration.
.. code-block:: yaml
allow-circular-dependencies: true
In addition, the tenant default may be overridden on a per-project basis:
.. code-block:: yaml
[...]
untrusted-projects:
- org/project:
allow-circular-dependencies: true
[...]
Changes with cross-repo circular dependencies are required to share the same
change queue. We would still enqueue one queue item per change but hold back
reporting of the cycle until all items have finished. All the items in a cycle
would reference a shared bundle item.
A different approach would be to allow the enqueuing of changes across change
queues. This, however, would be a very substantial change with a lot of edge
cases and will therefore not be considered.
Dependencies are currently expressed with a ``Depends-On`` in the footer of a
commit message or pull-request body. This information is already used for
detecting cycles in the dependency graph.
A cycle is created by having a mutual ``Depends-On`` for the changes that
depend on each other.
We might need a way to prevent changes from being enqueued before all changes
that are part of a cycle are prepared. For this, we could introduce a special
value (e.g. ``null``) for the ``Depends-On`` to indicate that the cycle is not
complete yet. This is since we don't know the change URLs ahead of time.
From a user's perspective this would look as follows:
1. Set ``Depends-On: null`` on the first change that is uploaded.
2. Reference the change URL of the previous change in the ``Depends-On``.
Repeat this for all changes that are part of the cycle.
3. Set the ``Depends-On`` (e.g. pointing to the last uploaded change) to
complete the cycle.
Implementation
--------------
1. Detect strongly connected changes using e.g. `Tarjan's algorithm`_, when
enqueuing a change and its dependencies (a sketch of this appears after
this list).
.. _Tarjan's algorithm: https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
2. Introduce a new class (e.g. ``Bundle``) that will hold a list of strongly
connected components (changes) in the order in which they need to be merged.
In case a circular dependency is detected all instances of ``QueueItem``
that are strongly connected will hold a reference to the same ``Bundle``
instance. In case there is no cycle, this reference will be ``None``.
3. The merger call for a queue item that has an associated bundle item will
always include all changes in the bundle.
However each ``QueueItem`` will only have and execute the job graph for a
particular change.
4. Hold back reporting of a ``QueueItem`` in case it has an associated
``Bundle`` until all related ``QueueItem`` have finished.
Report the individual job results for a ``QueueItem`` as usual. The last
reported item will also report a summary of the overall bundle result to
each related change.
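As referenced in step 1 above, the cycle detection can be based on Tarjan's
algorithm; the following is a generic sketch (not Zuul's code) operating on
a mapping of each change to its dependencies:

.. code-block:: python

   def strongly_connected_components(graph):
       """Tarjan's algorithm; graph maps a node to its dependency nodes."""
       index_counter = [0]
       index, lowlink, on_stack = {}, {}, set()
       stack, components = [], []

       def strongconnect(node):
           index[node] = lowlink[node] = index_counter[0]
           index_counter[0] += 1
           stack.append(node)
           on_stack.add(node)
           for successor in graph.get(node, ()):
               if successor not in index:
                   strongconnect(successor)
                   lowlink[node] = min(lowlink[node], lowlink[successor])
               elif successor in on_stack:
                   lowlink[node] = min(lowlink[node], index[successor])
           if lowlink[node] == index[node]:
               # This node is the root of a strongly connected component.
               component = []
               while True:
                   member = stack.pop()
                   on_stack.discard(member)
                   component.append(member)
                   if member == node:
                       break
               components.append(component)

       for node in graph:
           if node not in index:
               strongconnect(node)
       return components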
Challenges
----------
Ordering of changes
Usually, the order of changes in a strongly connected component doesn't
matter. However for sources that have the concept of a parent-child
relationship (e.g. Gerrit changes) we need to keep the order and report a
parent change before the child.
This information is available in ``Change.git_needs_changes``.
To not change the reporting logic too much (currently only the first item in
the queue can report), the changes need to be enqueued in the correct order.
Due to the recursive implementation of ``PipelineManager.addChange()``, this
could mean that we need to allow enqueuing changes ahead of others.
Window size in the dependent pipeline manager
Since we need to postpone reporting until all items of a bundle have
finished those items will be kept in the queue. This will prevent new
changes from entering the active window. It might even lead to a deadlock in
case the number of changes within the strongly connected component is larger
than the current window size.
One solution would be to increase the size of the window by one every time
we hold an item that has finished but is still waiting for other items in a
bundle.
Reporting of bundle items
The current logic will try to report an item as soon as all jobs have
finished. In case this item is part of a bundle we have to hold back the
reporting until all items that are part of the bundle have succeeded or we
know that the whole bundle will fail.
In case the first item of a bundle has already succeeded but a subsequent item
fails, we must not reset the builds of queue items that are part of this
bundle, as would currently happen when the jobs are canceled. Instead, we
need to keep the existing results for all items in a bundle.
When reporting a queue item that is part of a bundle, we need to make sure
to also report information related to the bundle as a whole. Otherwise, the
user might not be able to identify why a failure is reported even though all
jobs succeeded.
The reporting of the bundle summary needs to be done in the last item of a
bundle because only then we know if the complete bundle was submitted
successfully or not.
Recovering from errors
Allowing circular dependencies introduces the risk to end up with a broken
state when something goes wrong during the merge of the bundled changes.
Currently, there is no way to more or less atomically submit multiple
changes at once. Gerrit offers an option to submit a complete topic. This,
however, also doesn't offer any guarantees for being atomic across
repositories [#atomic]_. When considering changes with a circular
dependency, spanning multiple sources (e.g. Gerrit + Github) this seems no
longer possible at all.
Given those constraints, Zuul can only work on a best effort basis by
trying hard to make sure to not start merging the chain of dependent
changes unless it is safe to assume that the merges will succeed.
Even in those cases, there is a chance that e.g. due to a network issue,
Zuul fails to submit all changes of a bundle.
In those cases, the best way would be to automatically recover from the
situation. However, this might mean pushing a revert or force-pushing to
the target branch and reopening changes, which will introduce a new set of
problems on its own. In addition, the recovery might be affected by e.g.
network issues as well and can potentially fail.
All things considered, it's probably best to perform a gate reset as with a
normal failing item and require human intervention to bring the
repositories back into a consistent state. Zuul can assist in that by
logging detailed information about the performed steps and encountered
errors to the affected change pages.
Execution overhead
Without any de-duplication logic, every change that is part of a bundle
will have its jobs executed. For circular dependent changes with the same
jobs configured this could mean executing the same jobs twice.
.. rubric:: Footnotes
.. [#atomic] https://groups.google.com/forum/#!topic/repo-discuss/OuCXboAfEZQ
Kubernetes Operator
===================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
While Zuul can be happily deployed in a Kubernetes environment, it is
a complex enough system that a Kubernetes Operator could provide value
to deployers. A Zuul Operator would allow a deployer to create, manage
and operate "A Zuul" in their Kubernetes and leave the details of how
that works to the Operator.
To that end, the Zuul Project should create and maintain a Kubernetes
Operator for running Zuul. Given the close ties between Zuul and Ansible,
we should use `Ansible Operator`_ to implement the Operator. Our existing
community is already running Zuul in both Kubernetes and OpenShift, so
we should ensure our Operator works in both. When we're happy with it,
we should publish it to `OperatorHub`_.
That's the easy part. The remainder of the document is for hammering out
some of the finer details.
.. _Ansible Operator: https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md
.. _OperatorHub: https://www.operatorhub.io/
Custom Resource Definitions
---------------------------
One of the key parts of making an Operator is to define one or more
Custom Resource Definitions (CRDs). These allow a user to say "hey k8s,
please give me a Thing". It is then the Operator's job to take the
appropriate actions to make sure the Thing exists.
For Zuul, there should definitely be a Zuul CRD. It should be namespaced
with ``zuul-ci.org``. There should be a section for each service for
managing service config as well as capacity:
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
.. note:: Until the distributed scheduler exists in the underlying Zuul
implementation, the ``count`` parameter for the scheduler service
cannot be set to anything greater than 1.
Zuul requires Nodepool to operate. While there are friendly people
using Nodepool without Zuul, from the context of the Operator, the Nodepool
services should just be considered part of Zuul.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
# Because of nodepool config sharding, count is not valid for launcher.
launcher:
builder:
count: 2
Images
------
The Operator should, by default, use the ``docker.io/zuul`` images that
are published. To support locally built or overridden images, the Operator
should have optional config settings for each image.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
image: docker.io/example/zuul-merger
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
launcher:
builder:
count: 2
External Dependencies
---------------------
Zuul needs some services, such as a RDBMS and a Zookeeper, that themselves
are resources that should or could be managed by an Operator. It is out of
scope (and inappropriate) for Zuul to provide these itself. Instead, the Zuul
Operator should use CRDs provided by other Operators.
On Kubernetes installs that support the Operator Lifecycle Manager, external
dependencies can be declared in the Zuul Operator's OLM metadata. However,
not all Kubernetes installs can handle this, so it should also be possible
for a deployer to manually install a list of documented operators and CRD
definitions before installing the Zuul Operator.
For each external service dependency where the Zuul Operator would be relying
on another Operator to create and manage the given service, there should be
a config override setting to allow a deployer to say "I already have one of
these that's located at Location, please don't create one." The config setting
should be the location and connection information for the externally managed
version of the service, and not providing that information should be taken
to mean the Zuul Operator should create and manage the resource.
::
---
apiVersion: v1
kind: Secret
metadata:
name: externalDatabase
type: Opaque
stringData:
dburi: mysql+pymysql://zuul:[email protected]/zuul
---
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
# If the database section is omitted, the Zuul Operator will create
# and manage the database.
database:
secretName: externalDatabase
key: dburi
While Zuul supports multiple backends for RDBMS, the Zuul Operator should not
attempt to support managing both. If the user chooses to let the Zuul Operator
create and manage RDBMS, the `Percona XtraDB Cluster Operator`_ should be
used. Deployers who wish to use a different one should use the config override
setting pointing to the DB location.
.. _Percona XtraDB Cluster Operator: https://operatorhub.io/operator/percona-xtradb-cluster-operator
Zuul Config
-----------
Zuul config files that do not contain information that the Operator needs to
do its job, or that do not contain information into which the Operator might
need to add data, should be handled by ConfigMap resources and not as
parts of the CRD. The CRD should take references to the ConfigMap objects.
Completely external files like ``clouds.yaml`` and ``kube/config``
should be in Secrets referenced in the config. Zuul files like
``nodepool.yaml`` and ``main.yaml`` that contain no information the Operator
needs should be in ConfigMaps and referenced.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
config: zuulYamlConfig
launcher:
config: nodepoolYamlConfig
builder:
config: nodepoolYamlConfig
externalConfig:
openstack:
secretName: cloudsYaml
kubernetes:
secretName: kubeConfig
amazon:
secretName: botoConfig
Zuul files like ``/etc/nodepool/secure.conf`` and ``/etc/zuul/zuul.conf``
should be managed by the Operator and their options should be represented in
the CRD.
The Operator will shard the Nodepool config by provider-region using a utility
pod and create a new ConfigMap for each provider-region with only the subset of
config needed for that provider-region. It will then create a pod for each
provider-region.
Because the Operator needs to make decisions based on what's going on with
the ``zuul.conf``, or needs to directly manage some of it on behalf of the
deployer (such as RDBMS and Zookeeper connection info), the ``zuul.conf``
file should be managed by and expressed in the CRD.
Connections should each have a stanza that is mostly a passthrough
representation of what would go in the corresponding section of ``zuul.conf``.
Due to the nature of secrets in kubernetes, fields that would normally contain
either a secret string or a path to a file containing secret information
should instead take the name of a kubernetes secret and the key name of the
data in that secret that the deployer will have previously defined. The
Operator will use this information to mount the appropriate secrets into a
utility container, construct appropriate config files for each service,
reupload those into kubernetes as additional secrets, and then mount the
config secrets and the needed secrets containing file content only in the
pods that need them.
::
---
apiVersion: v1
kind: Secret
metadata:
name: gerritSecrets
type: Opaque
data:
sshkey: YWRtaW4=
http_password: c2VjcmV0Cg==
---
apiVersion: v1
kind: Secret
metadata:
name: githubSecrets
type: Opaque
data:
app_key: aRnwpen=
webhook_token: an5PnoMrlw==
---
apiVersion: v1
kind: Secret
metadata:
name: pagureSecrets
type: Opaque
data:
api_token: Tmf9fic=
---
apiVersion: v1
kind: Secret
metadata:
name: smtpSecrets
type: Opaque
data:
password: orRn3V0Gwm==
---
apiVersion: v1
kind: Secret
metadata:
name: mqttSecrets
type: Opaque
data:
password: YWQ4QTlPO2FpCg==
ca_certs: PVdweTgzT3l5Cg==
certfile: M21hWF95eTRXCg==
keyfile: JnhlMElpNFVsCg==
---
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
git_user_email: [email protected]
git_user_name: Example Zuul
executor:
count: 5
manage_ansible: false
web:
count: 1
status_url: https://zuul.example.org
fingergw:
count: 1
scheduler:
count: 1
connections:
gerrit:
driver: gerrit
server: gerrit.example.com
sshkey:
# If the key name in the secret matches the connection key name,
# it can be omitted.
secretName: gerritSecrets
password:
secretName: gerritSecrets
# If they do not match, the key must be specified.
key: http_password
user: zuul
baseurl: http://gerrit.example.com:8080
auth_type: basic
github:
driver: github
app_key:
secretName: githubSecrets
key: app_key
webhook_token:
secretName: githubSecrets
key: webhook_token
rate_limit_logging: false
app_id: 1234
pagure:
driver: pagure
api_token:
secretName: pagureSecrets
key: api_token
smtp:
driver: smtp
server: smtp.example.com
port: 25
default_from: [email protected]
default_to: [email protected]
user: zuul
password:
secretName: smtpSecrets
mqtt:
driver: mqtt
server: mqtt.example.com
user: zuul
password:
secretName: mqttSecrets
ca_certs:
secretName: mqttSecrets
certfile:
secretName: mqttSecrets
keyfile:
secretName: mqttSecrets
Executor job volume
-------------------
To manage the executor job volumes, the CR also accepts a list of volumes
to be bind mounted in the job bubblewrap contexts:
::
name: Text
context: <trusted | untrusted>
access: <ro | rw>
path: /path
volume: Kubernetes.Volume
For example, to expose a GCP authdaemon token, the Zuul CR can be defined as
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
...
jobVolumes:
- context: trusted
access: ro
path: /authdaemon/token
volume:
name: gcp-auth
hostPath:
path: /var/authdaemon/executor
type: DirectoryOrCreate
Which would result in a new executor mountpath along with this zuul.conf change:
::
trusted_ro_paths=/authdaemon/token
Logging
-------
By default, the Zuul Operator should perform no logging config which should
result in Zuul using its default of logging to ``INFO``. There should be a
simple config option to switch that to enable ``DEBUG`` logging. There should
also be an option to allow specifying a named ``ConfigMap`` with a logging
config. If a logging config ``ConfigMap`` is given, it should override the
``DEBUG`` flag.
Specifications
==============
This section contains specifications for future Zuul development. As
we work on implementing significant changes, these document our plans
for those changes and help us work on them collaboratively. Once a
specification is implemented, it should be removed. All relevant
details for implemented work must be reflected correctly in Zuul's
documentation instead.
.. warning:: These are not authoritative documentation. These
features are not currently available in Zuul. They may change
significantly before final implementation, or may never be fully
completed.
.. toctree::
:maxdepth: 1
circular-dependencies
community-matrix
enhanced-regional-executors
kubernetes-operator
nodepool-in-zuul
tenant-resource-quota
tenant-scoped-admin-web-API
tracing
zuul-runner
Use Matrix for Chat
===================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
We just switched IRC networks from Freenode to OFTC. This was
done quickly because remaining on Freenode was untenable due to recent
changes, and the OpenDev community had an existing plan prepared to
move to OFTC should such a situation arise.
Now that the immediate issue is addressed, we can take a considered
approach to evaluating whether an alternative to IRC such as Matrix
would be more suited.
Requirements
------------
Here are some concerns that affect us as a community:
* Some users like to stay connected all the time so they can read
messages from when they are away.
* Others are only interested in connecting when they have something to
say.
* On Freenode, nick registration was required to join #zuul in order
to mitigate spam. It is unclear whether the same will be true for
OFTC.
* Some users prefer simple text-based clients.
* Others prefer rich messaging and browser or mobile clients.
* We rely heavily on gerritbot.
* We use the logs recorded by eavesdrop from time to time.
* We benefit from the OpenDev statusbot.
* We collaborate with a large number of people in the OpenDev
community in various OFTC channels. We also collaborate with folks
in Ansible and other communities in libera.chat channels.
* Users must be able to access our chat using Free and Open-Source
Software.
* The software running the chat system itself should be Free and
Open-Source as well if possible. Both of these are natural
extensions of the Open Infrastructure community's Four Opens, as
well as OpenDev's mantra that Free Software needs Free Tools.
Benefits Offered by Matrix
--------------------------
* The Matrix architecture associates a user with a "homeserver", and
that homeserver is responsible for storing messages in all of the
rooms the user is present. This means that every Matrix user has
the ability to access messages received while their client is
disconnected. Users don't need to set up separate "bouncers".
* Authentication happens with the Matrix client and homeserver, rather
than through a separate nickserv registration system. This process
is familiar to all users of web services, so should reduce barriers
to access for new users.
* Matrix has a wide variety of clients available, including the
Element web/desktop/mobile clients, as well as the weechat-matrix
plugin. This addresses users of simple text clients and rich media.
* Bots are relatively simple to implement with Matrix.
* The Matrix community is dedicated to interoperability. That drives
their commitment to open standards, open source software, federation
using Matrix itself, and bridging to other communities which
themselves operate under open standards. That aligns very well with
our four-opens philosophy, and leads directly to the next point:
* Bridges exist to OFTC, libera.chat, and, at least for the moment,
Freenode. That means that any of our users who have invested in
establishing a presence in Matrix can relatively easily interact
with communities who call those other networks home.
* End-to-end encrypted channels for private chats. While clearly the
#zuul channel is our main concern, and it will be public and
unencrypted, the ability for our community members to have ad-hoc
chats about sensitive matters (such as questions which may relate to
security) is a benefit. If Matrix becomes more widely used such
that employees of companies feel secure having private chats in the
same platform as our public community interactions, we all benefit
from the increased availability and accessibility of people who no
longer need to split their attention between multiple platforms.
Reasons to Move
---------------
We could continue to call the #zuul channel on OFTC home, and
individual users could still use Matrix on their own to obtain most of
those benefits by joining the portal room on the OFTC matrix.org
bridge. The reasons to move to a native Matrix room are:
* Eliminate a potential failure point. If many/most of us are
connected via Matrix and the bridge, then either a Matrix or an OFTC
outage would affect us.
* Eliminate a source of spam. Spammers find IRC networks very easy to
attack. Matrix is not immune to this, but it is more difficult.
* Isolate ourselves from OFTC-related technology or policy changes.
For example, if we find we need to require registration to speak in
channel, that would take us back to the state where we have to teach
new users about nick registration.
* Elevating the baseline level of functionality expected from our chat
platform. By saying that our home is Matrix, we communicate to
users that the additional functionality offered by the platform is
an expected norm. Rather than tailoring our interactions to the
lowest-common-denominator of IRC, we indicate that the additional
features available in Matrix are welcomed.
* Provide a consistent and unconfusing message for new users. Rather
than saying "we're on OFTC, use Matrix to talk to us for a better
experience", we can say simply "use Matrix".
* Lead by example. Because of the recent fragmentation in the Free
and Open-Source software communities, Matrix is a natural way to
frictionlessly participate in a multitude of communities. Let's
show people how that can work.
Reasons to Stay
---------------
All of the work to move to OFTC has been done, and for the moment at
least, the OFTC matrix.org bridge is functioning well. Moving to a
native room will require some work.
Implementation Plan
-------------------
To move to a native Matrix room, we would do the following:
* Create a homeserver to host our room and bots. Technically, this is
not necessary, but having a homeserver allows us more control over
the branding, policy, and technology of our room. It means we are
isolated from policy decisions by the admins of matrix.org, and it
fully utilizes the federated nature of the technology.
We should ask the OpenDev collaboratory to host a homeserver for
this purpose. That could either be accomplished by running a
synapse server on a VM in OpenDev's infrastructure, or the
Foundation could subscribe to a hosted server run by Element.
At this stage, we would not necessarily host any user accounts on
the homeserver; it would only be used for hosting rooms and bot
accounts.
The homeserver would likely be for opendev.org; so our room would be
#zuul:opendev.org, and we might expect bot accounts like
@gerrit:opendev.org.
The specifics of this step are out of scope for this document. To
accomplish this, we will start an OpenDev spec to come to agreement
on the homeserver.
* Ensure that the OpenDev service bots upon which we rely (gerrit, and
status) support matrix. This is also under the domain of OpenDev;
but it is a pre-requisite for us to move.
We also rely somewhat on eavesdrop. Matrix does support searching,
but that doesn't cause it to be indexed by search engines, and
searching a decade worth of history may not work as well, so we
should also include eavesdrop in that list.
OpenDev also runs a meeting bot, but we haven't used it in years.
* Create the #zuul room.
* Create instructions to tell users how to join it. We will recommend
that if they do not already have a Matrix homeserver, they register
with matrix.org.
* Announce the move, and retire the OFTC channel.
Potential Future Enhancements
-----------------------------
Most of this is out of scope for the Zuul community, and instead
relates to OpenDev, but we should consider these possibilities when
weighing our decision.
It would be possible for OpenDev and/or the Foundation to host user
accounts on the homeserver. This might be more comfortable for new
users who are joining Matrix at the behest of our community.
If that happens, user accounts on the homeserver could be tied to a
future OpenDev single-sign-on system, meaning that registration could
become much simpler and be shared with all OpenDev services.
It's also possible for OpenDev and/or the Foundation to run multiple
homeservers in multiple locations in order to aid users who may live
in jurisdictions with policy or technical requirements that prohibit
their accessing the matrix.org homeserver.
All of these, if they come to pass, would be very far down the road,
but they do illustrate some of the additional flexibility our
communities could obtain by using Matrix.
Nodepool in Zuul
================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
The following specification describes a plan to move Nodepool's
functionality into Zuul and end development of Nodepool as a separate
application. This will allow for more node and image related features
as well as simpler maintenance and deployment.
Introduction
------------
Nodepool exists as a distinct application from Zuul largely due to
historical circumstances: it was originally a process for launching
nodes, attaching them to Jenkins, detaching them from Jenkins and
deleting them. Once Zuul grew its own execution engine, Nodepool
could have been adopted into Zuul at that point, but the existing
loose API meant it was easy to maintain them separately and combining
them wasn't particularly advantageous.
However, now we find ourselves with a very robust framework in Zuul
for dealing with ZooKeeper, multiple components, web services and REST
APIs. All of these are lagging behind in Nodepool, and it is time to
address that one way or another. We could of course upgrade
Nodepool's infrastructure to match Zuul's, or even separate out these
frameworks into third-party libraries. However, there are other
reasons to consider tighter coupling between Zuul and Nodepool, and
these tilt the scales in favor of moving Nodepool functionality into
Zuul.
Designing Nodepool as part of Zuul would allow for more features
related to Zuul's multi-tenancy. Zuul is quite good at
fault-tolerance as well as scaling, so designing Nodepool around that
could allow for better cooperation between node launchers. Finally,
as part of Zuul, Nodepool's image lifecycle can be more easily
integrated with Zuul-based workflow.
There are two Nodepool components: nodepool-builder and
nodepool-launcher. We will address the functionality of each in the
following sections on Image Management and Node Management.
This spec contemplates a new Zuul component to handle image and node
management: zuul-launcher. Much of the Nodepool configuration will
become Zuul configuration as well. That is detailed in its own
section, but for now, it's enough to know that the Zuul system as a
whole will know what images and node labels are present in the
configuration.
Image Management
----------------
Part of nodepool-builder's functionality is important to have as a
long-running daemon, and part of what it does would make more sense as
a Zuul job. By moving the actual image build into a Zuul job, we can
make the activity more visible to users of the system. It will be
easier for users to test changes to image builds (inasmuch as they can
propose a change and a check job can run on that change to see if the
image builds successfully). Build history and logs will be visible in
the usual way in the Zuul web interface.
A frequently requested feature is the ability to verify images before
putting them into service. This is not practical with the current
implementation of Nodepool because of the loose coupling with Zuul.
However, once we are able to include Zuul jobs in the workflow of
image builds, it is easier to incorporate Zuul jobs to validate those
images as well. This spec includes a mechanism for that.
The parts of nodepool-builder that make sense as a long-running
daemon are the parts dealing with image lifecycles. Uploading builds
to cloud providers, keeping track of image builds and uploads,
deciding when those images should enter or leave service, and deleting
them are all better done with state management and long-running
processes (we should know -- early versions of Nodepool attempted to
do all of that with Jenkins jobs with limited success).
The sections below describe how we will implement image management in
Zuul.
First, a reminder that using custom images is optional with Zuul.
Many Zuul systems will be able to operate using only stock cloud
provider images. One of the strengths of nodepool-builder is that it
can build an image for Zuul without relying on any particular cloud
provider images. A Zuul system whose operator wants to use custom
images will need to bootstrap that process, and under the proposed
system where images are built in Zuul jobs, that would need to be done
using a stock cloud image. In other words, to bootstrap a system such
as OpenDev from scratch, the operators would need to use a stock cloud
image to run the job to build the custom image. Once a custom image
is available, further image builds could be run on either the stock
cloud image or the custom image. That decision is left to the
operator and involves consideration of fault tolerance and disaster
recovery scenarios.
To build a custom image, an operator will define a fairly typical Zuul
job for each image they would like to produce. For example, a system
may have one job to build a debian-stable image, a second job for
debian-unstable, a third job for ubuntu-focal, a fourth job for
ubuntu-jammy. Zuul's job inheritance system could be very useful here
to deal with many variations of a similar process.
Currently nodepool-builder will build an image under three
circumstances: 1) the image (or the image in a particular format) is
missing; 2) a user has directly requested a build; 3) on an automatic
interval (typically daily). To map this into Zuul, we will use Zuul's
existing pipeline functionality, but we will add a new trigger for
case #1. Case #2 can be handled by a manual Zuul enqueue command, and
case #3 by a periodic pipeline trigger.
Since Zuul knows what images are configured and what their current
states are, it will be able to emit trigger events when it detects
that a new image (or image format) has been added to its
configuration. In these cases, the `zuul` driver in Zuul will enqueue
an `image-build` trigger event on startup or reconfiguration for every
missing image. The event will include the image name. Pipelines will
be configured to trigger on `image-build` events as well as on a timer
trigger.
Jobs will include an extra attribute to indicate they build a
particular image. This serves two purposes: first, in the case of an
`image-build` trigger event, it will act as a matcher so that only
jobs matching the image that needs building are run. Second, it will
allow Zuul to determine which formats are needed for that image (based
on which providers are configured to use it) and include that
information as job data.
The job will be responsible for building the image and uploading the
result to some storage system. The URLs for each image format built
should be returned to Zuul as artifacts.
Finally, the `zuul` driver reporter will accept parameters which will
tell it to search the result data for these artifact URLs and update
the internal image state accordingly.
An example configuration for a simple single-stage image build:
.. code-block:: yaml
- pipeline:
name: image
trigger:
zuul:
events:
- image-build
timer:
time: 0 0 * * *
success:
zuul:
image-built: true
image-validated: true
- job:
name: build-debian-unstable-image
image-build-name: debian-unstable
This job would run whenever Zuul determines it needs a new
debian-unstable image or daily at midnight. Once the job completes,
because of the ``image-built: true`` report, it will look for artifact
data like this:
.. code-block:: yaml
artifacts:
- name: raw image
url: https://storage.example.com/new_image.raw
metadata:
type: zuul_image
image_name: debian-unstable
format: raw
- name: qcow2 image
url: https://storage.example.com/new_image.qcow2
metadata:
type: zuul_image
image_name: debian-unstable
format: qcow2
Zuul will update internal records in ZooKeeper for the image to record
the storage URLs. The zuul-launcher process will then start
background processes to download the images from the storage system
and upload them to the configured providers (much as nodepool-builder
does now with files on disk). As a special case, it may detect that
the image files are stored in a location that a provider can access
directly for import and may be able to import directly from the
storage location rather than downloading locally first.
To handle image validation, a flag will be stored for each image
upload indicating whether it has been validated. The example above
specifies ``image-validated: true`` and therefore Zuul will put the
image into service as soon as all image uploads are complete.
However, if it were false, then Zuul would emit an `image-validate`
event after each upload is complete. A second pipeline can be
configured to perform image validation. It can run any number of
jobs, and since Zuul has complete knowledge of image states, it will
supply nodes using the new image upload (which is not yet in service
for normal jobs). An example of this might look like:
.. code-block:: yaml
- pipeline:
name: image-validate
trigger:
zuul:
events:
- image-validate
success:
zuul:
image-validated: true
- job:
name: validate-debian-unstable-image
image-build-name: debian-unstable
nodeset:
nodes:
- name: node
label: debian
The label should specify the same image that is being validated. Its
node request will be made with extra specifications so that it is
fulfilled with a node built from the image under test. This process
may repeat for each of the providers using that image (normal pipeline
queue deduplication rules may need a special case to allow this).
Once the validation jobs pass, the entry in ZooKeeper will be updated
and the image will go into regular service.
A more specific process definition follows:
After a buildset reports with ``image-built: true``, Zuul will scan
result data and for each artifact it finds, it will create an entry in
ZooKeeper at `/zuul/images/<image_name>/<sequence>`. Zuul will know
not to emit any more `image-build` events for that image at this
point.
For every provider using that image, Zuul will create an entry in
ZooKeeper at
`/zuul/image-uploads/<image_name>/<image_number>/provider/<provider_name>`.
It will set the remote image ID to null and the `image-validated` flag
to whatever was specified in the reporter.
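As a rough illustration of these records (not the actual implementation), the
scheduler-side bookkeeping could be sketched with the kazoo client directly;
the helper name, the URL-quoting of the canonical image name, and the JSON
payload layout are assumptions made for the example:

.. code-block:: python

   import json
   from urllib.parse import quote

   from kazoo.client import KazooClient


   def record_image_build(zk, image_cname, artifacts, providers, validated):
       # /zuul/images/<image_name>/<sequence>: artifact data for one build.
       # The URL-quoting of the canonical image name is an assumption here.
       build_path = zk.create(
           "/zuul/images/%s/build-" % quote(image_cname, safe=""),
           json.dumps({"artifacts": artifacts}).encode("utf8"),
           makepath=True, sequence=True)
       build_id = build_path.rsplit("/", 1)[-1]
       # One upload record per provider using the image, with no remote
       # image ID until a launcher has uploaded the artifact there.
       for provider in providers:
           zk.create(
               "/zuul/image-uploads/%s/%s/provider/%s" % (
                   quote(image_cname, safe=""), build_id, provider),
               json.dumps({"external_id": None,
                           "image-validated": validated}).encode("utf8"),
               makepath=True)


   if __name__ == "__main__":
       zk = KazooClient(hosts="localhost:2181")
       zk.start()
       record_image_build(
           zk, "opendev.org/opendev/images/debian-unstable",
           artifacts=[{"url": "https://storage.example.com/new_image.qcow2",
                       "format": "qcow2"}],
           providers=["rax-dfw-main"], validated=True)
       zk.stop()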
Whenever zuul-launcher observes a new `image-upload` record without an
ID, it will:
* Lock the whole image
* Lock each upload it can handle
* Unlock the image while retaining the upload locks
* Download the artifact (if needed) and upload images to the provider
* If the upload requires validation, enqueue an `image-validate` zuul driver trigger event
* Unlock the upload
The locking sequence is so that a single launcher can perform multiple
uploads from a single artifact download if it has the opportunity.
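A minimal sketch of that locking pattern, assuming kazoo locks stored under
the image and upload znodes; the lock sub-paths and the two callables standing
in for the driver-specific download and upload work are illustrative only:

.. code-block:: python

   def process_image_uploads(zk, image_path, upload_paths,
                             download_artifact, upload_to_provider):
       # `zk` is a kazoo.client.KazooClient; the two callables stand in
       # for the provider-specific work, which is out of scope here.
       image_lock = zk.Lock(image_path + "/lock")
       if not image_lock.acquire(blocking=False):
           return  # another launcher is already handling this image
       held = []
       try:
           # Grab as many of the pending uploads as possible.
           for path in upload_paths:
               upload_lock = zk.Lock(path + "/lock")
               if upload_lock.acquire(blocking=False):
                   held.append((path, upload_lock))
       finally:
           # Unlock the image while retaining the upload locks so other
           # launchers can pick up any uploads we did not grab.
           image_lock.release()
       if not held:
           return
       # One artifact download can now serve several provider uploads.
       artifact = download_artifact(image_path)
       for path, upload_lock in held:
           try:
               upload_to_provider(path, artifact)
           finally:
               upload_lock.release()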
Once more than two builds of an image are in service, the oldest is
deleted. The image ZooKeeper record is set to the `deleting` state.
Zuul-launcher will delete the uploads from the providers. The `zuul`
driver emits an `image-delete` event with item data for the image
artifact. This will trigger an image-delete job that can delete the
artifact from the cloud storage.
All of these pipeline definitions should typically be in a single
tenant (but need not be), but the images they build are potentially
available to each tenant that includes the image definition
configuration object (see the Configuration section below). Any repo
in a tenant with an image build pipeline will be able to cause images
to be built and uploaded to providers.
Snapshot Images
~~~~~~~~~~~~~~~
Nodepool does not currently support snapshot images, but the spec for
the current version of Nodepool does contemplate the possibility of a
snapshot based nodepool-builder process. Likewise, this spec does not
require us to support snapshot image builds, but in case we want to
add support in the future, we should have a plan for it.
The image build job in Zuul could, instead of running
diskimage-builder, act on the remote node to prepare it for a
snapshot. A special job attribute could indicate that it is a
snapshot image job, and instead of having the zuul-launcher component
delete the node at the end of the job, it could snapshot the node and
record that information in ZooKeeper. Unlike an image-build job, an
image-snapshot job would need to run in each provider (similar to how
it is proposed that an image-validate job will run in each provider).
An image-delete job would not be required.
Node Management
---------------
The techniques we have developed for cooperative processing in Zuul
can be applied to the node lifecycle. This is a good time to make a
significant change to the nodepool protocol. We can achieve several
long-standing goals:
* Scaling and fault-tolerance: rather than having a 1:N relationship
of provider:nodepool-launcher, we can have multiple zuul-launcher
processes, each of which is capable of handling any number of
providers.
* More intentional request fulfillment: almost no intelligence goes
into selecting which provider will fulfill a given node request; by
assigning providers intentionally, we can more efficiently utilize
providers.
* Fulfilling node requests from multiple providers: by designing
zuul-launcher for cooperative work, we can have nodesets that
request nodes which are fulfilled by different providers. Generally
we should favor the same provider for a set of nodes (since they may
need to communicate over a LAN), but if that is not feasible,
allowing multiple providers to fulfill a request will permit
nodesets with diverse node types (e.g., VM + static, or VM +
container).
Each zuul-launcher process will execute a number of processing loops
in series; first a global request processing loop, and then a
processing loop for each provider. Each one will involve obtaining a
ZooKeeper lock so that only one zuul-launcher process will perform
each function at a time.
Zuul-launcher will need to know about every connection in the system
so that it may have a full copy of the configuration, but operators
may wish to localize launchers to specific clouds. To support this,
zuul-launcher will take an optional command-line argument to indicate
on which connections it should operate.
Currently a node request as a whole may be declined by providers. We
will make that more granular and store information about each node in
the request (in other words, individual nodes may be declined by
providers).
All drivers for providers should implement the state machine
interface. Any state machine information currently stored in memory
in nodepool-launcher will need to move to ZooKeeper so that other
launchers can resume state machine processing.
The individual provider loop will:
* Lock a provider in ZooKeeper (`/zuul/provider/<name>`)
* Iterate over every node assigned to that provider in a `building` state
* Drive the state machine
* If success, update request
* If failure, determine if it's a temporary or permanent failure
and update the request accordingly
* If quota available, unpause provider (if paused)
The global queue process will:
* Lock the global queue
* Iterate over every pending node request, and every node within that request
* If all providers have failed the request, clear all temp failures
* If all providers have permanently failed the request, return error
* Identify providers capable of fulfilling the request
* Assign nodes to any provider with sufficient quota
* If no providers with sufficient quota, assign it to first (highest
priority) provider that can fulfill it later and pause that
provider
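In rough pseudocode, the assignment step of that loop might look like the
sketch below; the request and provider objects and their method names are
simplified stand-ins for the ZooKeeper-backed objects the launcher would
really use, and the failure bookkeeping from the list above is omitted:

.. code-block:: python

   def assign_pending_requests(requests, providers):
       # Simplified stand-in for the global queue processing described
       # above; temporary/permanent failure handling is left out.
       for request in requests:
           for node in request.nodes:
               if node.assigned_provider:
                   continue
               capable = [p for p in providers if p.can_supply(node.label)]
               if not capable:
                   request.fail(node)
                   continue
               with_room = [p for p in capable if p.has_quota(node.label)]
               if with_room:
                   node.assigned_provider = with_room[0].name
               else:
                   # No provider currently has quota: hand the node to the
                   # highest priority capable provider and pause it so it
                   # picks the node up as soon as quota frees.
                   capable[0].pause()
                   node.assigned_provider = capable[0].name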
Configuration
-------------
The configuration currently handled by Nodepool will be refactored and
added to Zuul's configuration syntax. It will be loaded directly from
git repos like most Zuul configuration; however, it will be
non-speculative (like pipelines and semaphores -- changes must merge
before they take effect).
Information about connecting to a cloud will be added to ``zuul.conf``
as a ``connection`` entry. The rate limit setting will be moved to
the connection configuration. Providers will then reference these
connections by name.
Because providers and images reference global (i.e., outside tenant
scope) concepts, ZooKeeper paths for data related to those should
include the canonical name of the repo where these objects are
defined. For example, a `debian-unstable` image in the
`opendev/images` repo should be stored at
``/zuul/zuul-images/opendev.org%2fopendev%2fimages/``. This avoids
collisions if different tenants contain different image objects with
the same name.
The actual Zuul config objects will be tenant scoped. Image
definitions which should be available to a tenant should be included
in that tenant's config. Again using the OpenDev example, the
hypothetical `opendev/images` repository should be included in every
OpenDev tenant so all of those images are available.
Within a tenant, image names must be unique (otherwise it is a tenant
configuration error, similar to a job name collision).
The diskimage-builder related configuration items will no longer be
necessary since they will be encoded in Zuul jobs. This will reduce
the complexity of the configuration significantly.
The provider configuration will change as we take the opportunity to
make it more "Zuul-like". Instead of a top-level dictionary, we will
use lists. We will standardize on attributes used across drivers
where possible, as well as attributes which may be located at
different levels of the configuration.
The goals of this reorganization are:
* Allow projects to manage their own image lifecycle (if permitted by
site administrators).
* Manage access control to labels, images and flavors via standard
Zuul mechanisms (whether an item appears within a tenant).
* Reduce repetition and boilerplate for systems with many clouds,
labels, or images.
The new configuration objects are:
Image
This represents any kind of image (A Zuul image built by a job
described above, or a cloud image). By using one object to
represent both, we open the possibility of having a label in one
provider use a cloud image and in another provider use a Zuul image
(because the label will reference the image by short-name which may
resolve to a different image object in different tenants). A given
image object will specify what type it is, and any relevant
information about it (such as the username to use, etc).
Flavor
This is a new abstraction layer to reference instance types across
different cloud providers. Much like labels today, these probably
won't have much information associated with them other than to
reserve a name for other objects to reference. For example, a site
could define a `small` and a `large` flavor. These would later be
mapped to specific instance types on clouds.
Label
Unlike the current Nodepool ``label`` definitions, these labels will
also specify the image and flavor to use. These reference the two
objects above, which means that labels themselves contain the
high-level definition of what will be provided (e.g., a `large
ubuntu` node) while the specific mapping of what `large` and
`ubuntu` mean are left to the more specific configuration levels.
Section
This looks a lot like the current ``provider`` configuration in
Nodepool (but also a little bit like a ``pool``). Several parts of
the Nodepool configuration (such as separating out availability
zones from providers into pools) were added as an afterthought, and
we can take the opportunity to address that here.
A ``section`` is part of a cloud. It might be a region (if a cloud
has regions). It might be one or more availability zones within a
region. A lot of the specifics about images, flavors, subnets,
etc., will be specified here. Because a cloud may have many
sections, we will implement inheritance among sections.
Provider
This is mostly a mapping of labels to sections and is similar to a
provider pool in the current Nodepool configuration. It exists as a
separate object so that site administrators can restrict ``section``
definitions to central repos and allow tenant administrators to
control their own image and labels by allowing certain projects to
define providers.
It mostly consists of a list of labels, but may also include images.
When launching a node, relevant attributes may come from several
sources (the image, flavor, label, section, or provider). Not all attributes
make sense in all locations, but where we can support them in multiple
locations, the order of application (later items override earlier
ones) will be:
* ``image`` stanza
* ``flavor`` stanza
* ``label`` stanza
* ``section`` stanza (top level)
* ``image`` within ``section``
* ``flavor`` within ``section``
* ``provider`` stanza (top level)
* ``label`` within ``provider``
This reflects that the configuration is built upwards from general and
simple objects toward more specific objects: image, flavor, label,
section, provider. Generally speaking, inherited scalar values will
override, dicts will merge, lists will concatenate.
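As a sketch of those merge rules only (not the actual configuration loader),
combining two adjacent levels could look like this:

.. code-block:: python

   def merge_attributes(base, override):
       # Scalars from the later (more specific) level override, dicts
       # merge recursively, and lists concatenate.
       result = dict(base)
       for key, value in override.items():
           if key not in result:
               result[key] = value
           elif isinstance(value, dict) and isinstance(result[key], dict):
               result[key] = merge_attributes(result[key], value)
           elif isinstance(value, list) and isinstance(result[key], list):
               result[key] = result[key] + value
           else:
               result[key] = value
       return result


   # A section-level tag dict and a provider-level tag dict merge, while
   # a scalar such as key-name is overridden by the later level:
   section = {"key-name": "infra-root-keys-2020-05-13",
              "tags": {"section-info": "foo"}}
   provider = {"key-name": "other-key", "tags": {"provider-info": "bar"}}
   assert merge_attributes(section, provider) == {
       "key-name": "other-key",
       "tags": {"section-info": "foo", "provider-info": "bar"}}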
An example configuration follows. First, some configuration which may
appear in a central project and shared among multiple tenants:
.. code-block:: yaml
# Images, flavors, and labels are the building blocks of the
# configuration.
- image:
name: centos-7
type: zuul
# Any other image-related info such as:
# username: ...
# python-path: ...
# shell-type: ...
# A default that can be overridden by a provider:
# config-drive: true
- image:
name: ubuntu
type: cloud
- flavor:
name: large
- label:
name: centos-7
min-ready: 1
flavor: large
image: centos-7
- label:
name: ubuntu
flavor: small
image: ubuntu
# A section for each cloud+region+az
- section:
name: rax-base
abstract: true
connection: rackspace
boot-timeout: 120
launch-timeout: 600
key-name: infra-root-keys-2020-05-13
# The launcher will apply the minimum of the quota reported by the
# driver (if available) or the values here.
quota:
instances: 2000
subnet: some-subnet
tags:
section-info: foo
# We attach both kinds of images to providers in order to provide
# image-specific info (like config-drive) or username.
images:
- name: centos-7
config-drive: true
# This is a Zuul image
- name: ubuntu
# This is a cloud image, so the specific cloud image name is required
image-name: ibm-ubuntu-20-04-3-minimal-amd64-1
# Other information may be provided
# username ...
# python-path: ...
# shell-type: ...
flavors:
- name: small
cloud-flavor: "Performance 8G"
- name: large
cloud-flavor: "Performance 16G"
- section:
name: rax-dfw
parent: rax-base
region: 'DFW'
availability-zones: ["a", "b"]
# A provider to indicate what labels are available to a tenant from
# a section.
- provider:
name: rax-dfw-main
section: rax-dfw
labels:
- name: centos-7
- name: ubuntu
key-name: infra-root-keys-2020-05-13
tags:
provider-info: bar
The following configuration might appear in a repo that is only used
in a single tenant:
.. code-block:: yaml
- image:
name: devstack
type: zuul
- label:
name: devstack
- provider:
name: rax-dfw-devstack
section: rax-dfw
# The images can be attached to the provider just as a section.
image:
- name: devstack
config-drive: true
labels:
- name: devstack
Here is a potential static node configuration:
.. code-block:: yaml
- label:
name: big-static-node
- section:
name: static-nodes
connection: null
nodes:
- name: static.example.com
labels:
- big-static-node
host-key: ...
username: zuul
- provider:
name: static-provider
section: static-nodes
labels:
- big-static-node
Each of the above stanzas may only appear once in a tenant for a
given name (like pipelines or semaphores, they are singleton objects).
If they appear in more than one branch of a project, the definitions
must be identical; otherwise, or if they appear in more than one repo,
the second definition is an error. These are meant to be used in
unbranched repos. Whatever tenants they appear in will be permitted
to access those respective resources.
The purpose of the ``provider`` stanza is to associate labels, images,
and sections. Much of the configuration related to launching an
instance (including the availability of zuul or cloud images) may be
supplied in the ``provider`` stanza and will apply to any labels
within. The ``section`` stanza also allows configuration of the same
information except for the labels themselves. The ``section``
supplies default values and the ``provider`` can override them or add
any missing values. Images are additive -- any images that appear in
a ``provider`` will augment those that appear in a ``section``.
The result is a modular scheme for configuration, where a single
``section`` instance can be used to set as much information as
possible that applies globally to a provider. A simple configuration
may then have a single ``provider`` instance to attach labels to that
section. A more complex installation may define a "standard" pool
that is present in every tenant, and then tenant-specific pools as
well. These pools will all attach to the same section.
References to sections, images and labels will be internally converted
to canonical repo names to avoid ambiguity. Under the current
Nodepool system, labels are truly a global object, but under this
proposal, a label short name in one tenant may be different than one
in another. Therefore the node request will internally specify the
canonical label name instead of the short name. Users will never use
canonical names, only short names.
For static nodes, there is some repetition with labels: first, labels
must be associated with the individual nodes defined on the section,
then the labels must appear again on a provider. This allows an
operator to define a collection of static nodes centrally on a
section, then include tenant-specific sets of labels in a provider.
For the simple case where all static node labels in a section should
be available in a provider, we could consider adding a flag to the
provider to allow that (e.g., ``include-all-node-labels: true``).
Static nodes themselves are configured on a section with a ``null``
connection (since there is no cloud provider associated with static
nodes). In this case, the additional ``nodes`` section attribute
becomes available.
Upgrade Process
---------------
Most users of diskimages will need to create new jobs to build these
images. This proposal also includes significant changes to the node
allocation system which come with operational risks.
To make the transition as minimally disruptive as possible, we will
support both systems in Zuul, and allow for selection of one system or
the other on a per-label and per-tenant basis.
By default, if a nodeset specifies a label that is not defined by a
``label`` object in the tenant, Zuul will use the old system and place
a ZooKeeper request in ``/nodepool``. If a matching ``label`` is
available in the tenant, the request will use the new system and be
sent to ``/zuul/node-requests``. Once a tenant has completely
converted, a configuration flag may be set in the tenant configuration
and that will allow Zuul to treat nodesets that reference unknown
labels as configuration errors. A later version of Zuul will remove
the backwards compatibility and make this the standard behavior.
Because each of the systems will have unique metadata, they will not
recognize each other's nodes, and it will appear to each that another
system is using part of their quota. Nodepool is already designed to
handle this case (at least, handle it as well as possible).
Library Requirements
--------------------
The new zuul-launcher component will need most of Nodepool's current
dependencies, which will entail adding many third-party cloud provider
interfaces. As of writing, this uses another 420M of disk space.
Since our primary method of distribution at this point is container
images, if the additional space is a concern, we could restrict the
installation of these dependencies to only the zuul-launcher image.
Diskimage-Builder Testing
-------------------------
The diskimage-builder project team has come to rely on Nodepool in its
testing process. It uses Nodepool to upload images to a devstack
cloud, launch nodes from those images, and verify that they
function. To aid in continuity of testing in the diskimage-builder
project, we will extract the OpenStack image upload and node launching
code into a simple Python script that can be used in diskimage-builder
test jobs in place of Nodepool.
Work Items
----------
* In existing Nodepool convert the following drivers to statemachine:
  gce, kubernetes, openshift, openstack (openstack is the only one
  likely to require substantial effort, the others should be trivial)
* Replace Nodepool with an image upload script in diskimage-builder
test jobs
* Add roles to zuul-jobs to build images using diskimage-builder
* Implement node-related config items in Zuul config and Layout
* Create zuul-launcher executable/component
* Add image-name item data
* Add image-build-name attribute to jobs
* Including job matcher based on item image-name
* Include image format information based on global config
* Add zuul driver pipeline trigger/reporter
* Add image lifecycle manager to zuul-launcher
* Emit image-build events
* Emit image-validate events
* Emit image-delete events
* Add Nodepool driver code to Zuul
* Update zuul-launcher to perform image uploads and deletion
* Implement node launch global request handler
* Implement node launch provider handlers
* Update Zuul nodepool interface to handle both Nodepool and
zuul-launcher node request queues
* Add tenant feature flag to switch between them
* Release a minor version of Zuul with support for both
* Remove Nodepool support from Zuul
* Release a major version of Zuul with only zuul-launcher support
* Retire Nodepool
=========================
Resource Quota per Tenant
=========================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
Problem Description
===================
Zuul is inherently built to be tenant scoped and can be operated as a shared CI
system for a large number of more or less independent projects. As such, one of
its goals is to provide each tenant a fair amount of resources.
If Zuul, and more specifically Nodepool, are pooling build nodes from shared
providers (e.g. a limited number of OpenStack clouds) the principle of a fair
resource share across tenants can hardly be met by the Nodepool side. In large
Zuul installations, it is not uncommon that some tenants request far more
resources and at a higher rate from the Nodepool providers than other tenants.
While Zuuls "fair scheduling" mechanism makes sure each queue item gets treated
justly, there is no mechanism to limit allocated resources on a per-tenant
level. This, however, would be useful in different ways.
For one, in a shared pool of computing resources, it can be necessary to
enforce resource budgets allocated to tenants. That is, a tenant shall only be
able to allocate resources within a defined and paid limit. This is not easily
possible at the moment as Nodepool is not inherently tenant-aware. While it can
limit the number of servers, CPU cores, and RAM allocated on a per-pool level,
this does not directly translate to Zuul tenants. Configuring a separate pool
per tenant would not only lead to much more complex Nodepool configurations,
but also induce performance penalties as each pool runs in its own Python
thread.
Also, in scenarios where Zuul and auxiliary services (e.g. GitHub or
Artifactory) are operated near or at their limits, the system can become
unstable. In such a situation, a common measure is to lower Nodepool's resource
quota to limit the number of concurrent builds and thereby reduce the load on
Zuul and other involved services. However, this can currently be done only on
a per-provider or per-pool level, most probably affecting all tenants. This
would contradict the principle of fair resource pooling as there might be less
eager tenants that do not, or rather insignificantly, contribute to the overall
high load. It would therefore be more advisable to limit only those tenants'
resources that induce the most load.
Therefore, it is suggested to implement a mechanism in Nodepool that allows
operators to define and enforce limits on currently allocated resources on a per-tenant
level. This specification describes how resource quota can be enforced in
Nodepool with minimal additional configuration and execution overhead and with
little to no impact on existing Zuul installations. A per-tenant resource limit
is then applied additionally to already existing pool-level limits and treated
globally across all providers.
Proposed Change
===============
The proposed change consists of several parts in both Zuul and Nodepool. As
Zuul is the only source of truth for tenants, it must pass the name of the
tenant with each NodeRequest to Nodepool. The Nodepool side must consider this
information and adhere to any resource limits configured for the corresponding
tenant. However, this shall be backwards compatible, i.e., if no tenant name is
passed with a NodeRequest, tenant quotas shall be ignored for this request.
Vice versa, if no resource limit is configured for a tenant, the tenant on the
NodeRequest does not add any additional behaviour.
To keep record of currently consumed resources globally, i.e., across all
providers, the number of CPU cores and main memory (RAM) of a Node shall be
stored with its representation in ZooKeeper by Nodepool. This allows for
a cheap and provider agnostic aggregation of the currently consumed resources
per tenant from any provider. The OpenStack driver already stores the resources
in terms of cores, ram, and instances per ``zk.Node`` in a separate property in
ZooKeeper. This is to be expanded to other drivers where applicable (cf.
"Implementation Caveats" below).
Make Nodepool Tenant Aware
--------------------------
1. Add ``tenant`` attribute to ``zk.NodeRequest`` (applies to Zuul and
Nodepool)
2. Add ``tenant`` attribute to ``zk.Node`` (applies to Nodepool)
Introduce Tenant Quotas in Nodepool
-----------------------------------
1. introduce new top-level config item ``tenant-resource-limits`` for Nodepool
config
.. code-block:: yaml
tenant-resource-limits:
- tenant-name: tenant1
max-servers: 10
max-cores: 200
max-ram: 800
- tenant-name: tenant2
max-servers: 100
max-cores: 1500
max-ram: 6000
2. for each node request that has the tenant attribute set and a corresponding
``tenant-resource-limits`` config exists
- get quota information from current active and planned nodes of same tenant
- if quota for current tenant would be exceeded
- defer node request
- do not pause the pool (as opposed to exceeded pool quota)
- leave the node request unfulfilled (REQUESTED state)
- return from handler for another iteration to fulfill request when tenant
quota allows eventually
- if quota for current tenant would not be exceeded
- proceed with normal process
3. for each node request that does not have the tenant attribute or a tenant
for which no ``tenant-resource-limits`` config exists
- do not calculate the per-tenant quota and proceed with normal process
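A minimal sketch of that handler check follows; the request, limit, and usage
structures are simplified stand-ins, with the limit keys taken from the
``tenant-resource-limits`` example above:

.. code-block:: python

   def exceeds_tenant_quota(request, tenant_limits, tenant_usage):
       # `request` carries the tenant name and the resources the requested
       # nodes would consume; `tenant_limits` mirrors the config above;
       # `tenant_usage` is the aggregation of the tenant's current nodes.
       limits = tenant_limits.get(getattr(request, "tenant", None))
       if not limits:
           # No tenant on the request, or no limit configured: keep the
           # existing behaviour.
           return False
       needed = request.resources  # e.g. {"cores": 8, "ram": 8192, "instances": 1}
       for limit_key, resource_key in (("max-cores", "cores"),
                                       ("max-ram", "ram"),
                                       ("max-servers", "instances")):
           limit = limits.get(limit_key)
           if limit is None:
               continue
           if tenant_usage.get(resource_key, 0) + needed.get(resource_key, 0) > limit:
               return True
       return False

When this returns true the handler simply defers the request, leaving it in
the REQUESTED state without pausing the pool, as described in the list above.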
Implementation Caveats
----------------------
This implementation ought to be driver agnostic and therefore not to be
implemented separately for each Nodepool driver. For the Kubernetes, OpenShift,
and Static drivers, however, it is not easily possible to find the currently
allocated resources. The proposed change therefore does not currently apply to
these. The Kubernetes and OpenShift(Pods) drivers would need to enforce
resource request attributes on their labels which are optional at the moment
(cf. `Kubernetes Driver Doc`_). Another option would be to enforce resource
limits on a per Kubernetes namespace level. How such limits can be implemented
in this case needs to be addressed separately. Similarly, the AWS, Azure, and
GCE drivers do not fully implement quota information for their nodes. E.g. the
AWS driver only considers the number of servers, not the number of cores or
RAM. Therefore, nodes from these providers also cannot be fully taken into
account when calculating a global resource limit beyond the number of servers.
Implementing full quota support in those drivers is not within the scope of
this change. However, following this spec, implementing quota support there to
support a per-tenant limit would be straight forward. It just requires them to
set the corresponding ``zk.Node.resources`` attributes. As for now, only the
OpenStack driver exports resource information about its nodes to ZooKeeper, but
as other drivers get enhanced with this feature, they will inherently be
considered for such global limits as well.
In the `QuotaSupport`_ mixin class, we already query ZooKeeper for the used and
planned resources. Ideally, we can extend this method to also return the
resources currently allocated by each tenant without additional costs and
account for this additional quota information as we already do for provider and
pool quotas (cf. `SimpleTaskManagerHandler`_). However, calculation of
currently consumed resources by a provider is done only for nodes of the same
provider. This does not easily work for global limits as intended for tenant
quotas. Therefore, this information (``cores``, ``ram``, ``instances``) will be
stored in a generic way on ``zk.Node.resources`` objects for any provider to
evaluate these quotas upon an incoming node request.
.. _`Kubernetes Driver Doc`: https://zuul-ci.org/docs/nodepool/kubernetes.html#attr-providers.[kubernetes].pools.labels.cpu
.. _`QuotaSupport`: https://opendev.org/zuul/nodepool/src/branch/master/nodepool/driver/utils.py#L180
.. _`SimpleTaskManagerHandler`: https://opendev.org/zuul/nodepool/src/branch/master/nodepool/driver/simple.py#L218
===========================
Tenant-scoped admin web API
===========================
https://storyboard.openstack.org/#!/story/2001771
The aim of this spec is to extend the existing web API of Zuul to
privileged actions, and to scope these actions to tenants, projects and privileged users.
Problem Description
===================
Zuul 3 introduced tenant isolation, and most privileged actions, being scoped
to a specific tenant, reflect that change. However the only way to trigger
these actions is through the Zuul CLI, which assumes either access to the
environment of a Zuul component or to Zuul's configuration itself. This is a
problem as being allowed to perform privileged actions on a tenant or for a
specific project should not entail full access to Zuul's admin capabilities.
.. Likewise, Nodepool provides actions that could be scoped to a tenant:
* Ability to trigger an image build when the definition of an image used by
that tenant has changed
* Ability to delete nodesets that have been put on autohold (this is mitigated
by the max-hold-age setting in Nodepool, if set)
These actions can only be triggered through Nodepool's CLI, with the same
problems as Zuul. Another important blocker is that Nodepool has no notion of
tenancy as defined by Zuul.
Proposed Change
===============
Zuul will expose privileged actions through its web API. In order to do so, Zuul
needs to support user authentication. A JWT (JSON Web Token) will be used to carry
user information; from now on it will be called the **Authentication Token** for the
rest of this specification.
Zuul also needs to support authorization and access control. Zuul's configuration
will be modified to include access control rules.
A Zuul operator will also be able to generate an Authentication Token manually
for a user, and communicate the Authentication Token to said user. This Authentication
Token can optionally include authorization claims that override Zuul's authorization
configuration, so that an operator can provide privileges temporarily to a user.
By querying Zuul's web API with the Authentication Token set in an
"Authorization" header, the user can perform administration tasks.
Zuul will need to provide the following minimal new features:
* JWT validation
* Access control configuration
* Administration web API
The manual generation of Authentication Tokens can also be used for testing
purposes or non-production environments.
JWT Validation
--------------
Expected Format
...............
Note that JWTs can be arbitrarily extended with custom claims, giving flexibility
in their contents. This also allows the format to be extended as needed for future
features.
In its minimal form, the Authentication Token's contents will have the following
format:
.. code-block:: javascript
{
'iss': 'jwt_provider',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice'
}
* **iss** is the issuer of the Authentication Token. This can be logged for
auditing purposes, and it can be used to filter Identity Providers.
* **aud**, as the intended audience, is the client id for the Zuul deployment in the
issuer.
* **exp** is the Authentication Token's expiry timestamp.
* **iat** is the Authentication Token's date of issuance timestamp.
* **sub** is the default, unique identifier of the user.
These are standard JWT claims and ensure that Zuul can consume JWTs issued
by external authentication systems as Authentication Tokens, assuming the claims
are set correctly.
Authentication Tokens lacking any of these claims will be rejected.
Authentication Tokens with an ``iss`` claim not matching the white list of
accepted issuers in Zuul's configuration will be rejected.
Authentication Tokens addressing a different audience than the expected one
for the specific issuer will be rejected.
Unsigned or incorrectly signed Authentication Tokens will be rejected.
Authentication Tokens with an expired timestamp will be rejected.
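These rejection rules map fairly directly onto a PyJWT-based check. The sketch
below assumes PyJWT 2.x and uses an invented in-memory issuer table standing
in for the authenticator configuration described later in this document:

.. code-block:: python

   import jwt  # PyJWT

   # Invented in-memory issuer table; a real deployment would build this
   # from the authenticator configuration.
   ISSUERS = {
       "zuul_operator": {"secret": "exampleSecret",
                         "algorithm": "HS256",
                         "audience": "zuul.openstack.org"},
   }


   def validate_token(raw_token):
       # Peek at the unverified claims only to pick the issuer configuration.
       unverified = jwt.decode(raw_token, options={"verify_signature": False})
       config = ISSUERS.get(unverified.get("iss"))
       if config is None:
           raise jwt.InvalidIssuerError("unknown or missing iss claim")
       # This verifies the signature, audience, issuer and expiry, and
       # requires the standard claims listed above to be present.
       return jwt.decode(
           raw_token,
           config["secret"],
           algorithms=[config["algorithm"]],
           audience=config["audience"],
           issuer=unverified["iss"],
           options={"require": ["iss", "aud", "exp", "iat", "sub"]},
       )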
Extra Authentication Claims
...........................
Some JWT Providers can issue extra claims about a user, like *preferred_username*
or *email*. Zuul will allow an operator to set such an extra claim as the default,
unique user identifier in place of *sub* if it is more convenient.
If the chosen claim is missing from the Authentication Token, it will be rejected.
Authorization Claims
....................
If the Authentication Token is issued manually by a Zuul Operator, it can include
extra claims extending Zuul's authorization rules for the Authentication Token's
bearer:
.. code-block:: javascript
{
'iss': 'zuul_operator',
'aud': 'zuul.openstack.org',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice',
'zuul': {
'admin': ['tenantA', 'tenantB']
}
}
* **zuul** is a claim reserved for zuul-specific information about the user.
It is a dictionary, the only currently supported key is **admin**.
* **zuul.admin** is a list of tenants on which the user is allowed privileged
actions.
In the previous example, user **alice** can perform privileged actions
on every project of **tenantA** and **tenantB**. This is on top of alice's
default authorizations.
These are intended to be **whitelists**: if a tenant is unlisted the user is
assumed not to be allowed to perform a privileged action (unless the
authorization rules in effect for this deployment of Zuul allow it.)
Note that **iss** is set to ``zuul_operator``. This can be used to reject Authentication
Tokens with a ``zuul`` claim if they come from other issuers.
Access Control Configuration
----------------------------
The Zuul main.yaml configuration file will accept new **admin-rule** objects
describing access rules for privileged actions.
Authorization rules define conditions on the claims
in an Authentication Token; if these conditions are met the action is authorized.
In order to allow the parsing of claims with complex structures like dictionaries,
an XPath-like format will be supported.
Here is an example of how rules can be defined:
.. code-block:: yaml
- admin-rule:
name: affiliate_or_admin
conditions:
- resources_access.account.roles: "affiliate"
iss: external_institution
- resources_access.account.roles: "admin"
- admin-rule:
name: alice_or_bob
conditions:
- zuul_uid: alice
- zuul_uid: bob
* **name** is how the authorization rule will be referred to in Zuul's tenant
configuration.
* **conditions** is the list of conditions that define a rule. An Authentication
Token must match **at least one** of the conditions for the rule to apply. A
condition is a dictionary where keys are claims. **All** the associated values must
match the claims in the user's Authentication Token.
Zuul's authorization engine will adapt matching tests depending on the nature of
the claim in the Authentication Token, e.g.:
* if the claim is a JSON list, check that the condition value is in the claim
* if the claim is a string, check that the condition value is equal to the claim's value
The special ``zuul_uid`` claim refers to the ``uid_claim`` setting in an
authenticator's configuration, as will be explained below. By default it refers
to the ``sub`` claim of an Authentication Token.
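A sketch of that matching logic is shown below; the dotted-path lookup is one
plausible reading of the XPath-like format and is not meant to pin down the
final implementation:

.. code-block:: python

   def claim_matches(claim_value, condition_value):
       # Adapt the comparison to the claim's type, as described above.
       if isinstance(claim_value, list):
           return condition_value in claim_value
       return claim_value == condition_value


   def rule_matches(conditions, token_claims, uid_claim="sub"):
       # A rule matches if at least one condition matches; a condition
       # matches only if all of its key/value pairs match the token.
       def lookup(claims, path):
           if path == "zuul_uid":
               path = uid_claim
           value = claims
           for part in path.split("."):
               if not isinstance(value, dict):
                   return None
               value = value.get(part)
           return value

       return any(
           all(claim_matches(lookup(token_claims, key), expected)
               for key, expected in condition.items())
           for condition in conditions
       )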
This configuration file is completely optional, if the ``zuul.admin`` claim
is set in the Authentication Token to define tenants on which privileged actions
are allowed.
Under the above example, the following Authentication Token would match rules
``affiliate_or_admin`` and ``alice_or_bob``:
.. code-block:: javascript
{
'iss': 'external_institution',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice',
'resources_access': {
'account': {
'roles': ['affiliate', 'other_role']
}
},
}
And this Authentication Token would only match rule ``affiliate_or_admin``:
.. code-block:: javascript
{
'iss': 'some_hellish_dimension',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'sub': 'carol',
'iat': 1234556780,
'resources_access': {
'account': {
'roles': ['admin', 'other_role']
}
},
}
Privileged actions are tenant-scoped. Therefore the access control will be set
in tenant definitions, e.g.:
.. code-block:: yaml
- tenant:
name: tenantA
admin_rules:
- an_authz_rule
- another_authz_rule
source:
gerrit:
untrusted-projects:
- org/project1:
- org/project2
- ...
- tenant:
name: tenantB
admin_rules:
- yet_another_authz_rule
source:
gerrit:
untrusted-projects:
- org/project1
- org/project3
- ...
An action on the ``tenantA`` tenant will be allowed if ``an_authz_rule`` OR
``another_authz_rule`` is matched.
An action on the ``tenantB`` tenant will be authorized if ``yet_another_authz_rule``
is matched.
Administration Web API
----------------------
Unless specified, all the following endpoints require the presence of the ``Authorization``
header in the HTTP query.
Unless specified, all calls to the endpoints return with HTTP status code 201 if
successful, 401 if unauthenticated, 403 if the user is not allowed to perform the
action, and 400 with a JSON error description otherwise.
In case of a 401 code, an additional ``WWW-Authenticate`` header is emitted, for example::
WWW-Authenticate: Bearer realm="zuul.openstack.org"
error="invalid_token"
error_description="Token expired"
Zuul's web API will be extended to provide the following endpoints:
POST /api/tenant/{tenant}/project/{project}/enqueue
...................................................
This call allows a user to re-enqueue a buildset, like the *enqueue* or
*enqueue-ref* subcommands of Zuul's CLI.
To trigger the re-enqueue of a change, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"trigger": <Zuul trigger>,
"change": <changeID>,
"pipeline": <pipeline>}
To trigger the re-enqueue of a ref, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"trigger": <Zuul trigger>,
"ref": <ref>,
"oldrev": <oldrev>,
"newrev": <newrev>,
"pipeline": <pipeline>}
POST /api/tenant/{tenant}/project/{project}/dequeue
...................................................
This call allows a user to dequeue a buildset, like the *dequeue* subcommand of
Zuul's CLI.
To dequeue a change, the following JSON body must be sent in the query:
.. code-block:: javascript
{"change": <changeID>,
"pipeline": <pipeline>}
To dequeue a ref, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"ref": <ref>,
"pipeline": <pipeline>}
POST /api/tenant/{tenant}/project/{project}/autohold
..............................................................
This call allows a user to automatically put a node set on hold in case of
a build failure on the chosen job, like the *autohold* subcommand of Zuul's
CLI.
Any of the following JSON bodies must be sent in the query:
.. code-block:: javascript
{"change": <changeID>,
"reason": <reason>,
"count": <count>,
"node_hold_expiration": <expiry>,
"job": <job>}
or
.. code-block:: javascript
{"ref": <ref>,
"reason": <reason>,
"count": <count>,
"node_hold_expiration": <expiry>,
"job": <job>}
GET /api/user/authorizations
.........................................
This call returns the list of tenants the authenticated user can perform privileged
actions on.
This endpoint can be consumed by web clients in order to know which actions to display
according to the user's authorizations, either from Zuul's configuration or
from the valid Authentication Token's ``zuul.admin`` claim if present.
The return value is similar in form to the `zuul.admin` claim:
.. code-block:: javascript
{
'zuul': {
'admin': ['tenantA', 'tenantB']
}
}
The call needs authentication and returns with HTTP code 200, or 401 if no valid
Authentication Token is passed in the request's headers. If no rule applies to
the user, the return value is
.. code-block:: javascript
{
'zuul': {
'admin': []
}
}
Logging
.......
Zuul will log an event when a user presents an Authentication Token with a
``zuul.admin`` claim, and if the authorization override is granted or denied:
.. code-block:: bash
Issuer %{iss}s attempt to override user %{sub}s admin rules granted|denied
At DEBUG level the log entry will also contain the ``zuul.admin`` claim.
Zuul will log an event when a user presents a valid Authentication Token to
perform a privileged action:
.. code-block:: bash
User %{sub}s authenticated from %{iss}s requesting %{action}s on %{tenant}s/%{project}s
At DEBUG level the log entry will also contain the JSON body passed to the query.
The events will be logged at zuul.web's level but a new handler focused on auditing
could also be created.
Zuul Client CLI and Admin Web API
.................................
The CLI will be modified to call the REST API instead of using a Gearman server
if the CLI's configuration file is lacking a ``[gearman]`` section but has a
``[web]`` section.
In that case the CLI will take the --auth-token argument on
the ``autohold``, ``enqueue``, ``enqueue-ref`` and ``dequeue`` commands. The
Authentication Token will be used to query the web API to execute these
commands; allowing non-privileged users to use the CLI remotely.
.. code-block:: bash
$ zuul --auth-token AaAa.... autohold --tenant openstack --project example_project --job example_job --reason "reason text" --count 1
Connecting to https://zuul.openstack.org...
<usual autohold output>
JWT Generation by Zuul
-----------------------
Client CLI
..........
A new command will be added to the Zuul Client CLI to allow an operator to generate
an Authorization Token for a third party. It will return the contents of the
``Authorization`` header as it should be set when querying the admin web API.
.. code-block:: bash
$ zuul create-auth-token --auth-config zuul-operator --user alice --tenant tenantA --expires-in 1800
bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwOi8vbWFuYWdlc2Yuc2ZyZG90ZXN0aW5zdGFuY2Uub3JnIiwienV1bC50ZW5hbnRzIjp7ImxvY2FsIjoiKiJ9LCJleHAiOjE1Mzc0MTcxOTguMzc3NTQ0fQ.DLbKx1J84wV4Vm7sv3zw9Bw9-WuIka7WkPQxGDAHz7s
The ``auth-config`` argument refers to the authenticator configuration to use
(see configuration changes below). The configuration must mention the secret
to use to sign the Token.
This way of generating Authorization Tokens is meant for testing
purposes only and should not be used in production, where the use of an
external Identity Provider is preferred.
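
For reference, generating such a Token essentially amounts to signing the
relevant claims with pyJWT. The following is a rough sketch, not the actual
CLI implementation, assuming the ``zuul_operator`` authenticator defined in
the next section with the shared secret ``exampleSecret`` (all claim values
are placeholders):

.. code-block:: python

   import time
   import jwt  # pyJWT

   claims = {
       "iss": "zuul_operator",          # must match the authenticator's issuer_id
       "aud": "zuul.openstack.org",     # must match the authenticator's client_id
       "sub": "alice",                  # the user identifier (uid_claim)
       "exp": int(time.time()) + 1800,  # expire in 30 minutes
       "zuul": {"admin": ["tenantA"]},  # optional authorization override claim
   }

   token = jwt.encode(claims, "exampleSecret", algorithm="HS256")
   print("Authorization: Bearer %s" % token)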
Configuration Changes
.....................
JWT creation and validation require a secret and an algorithm. While several
algorithms are supported by the pyJWT library, ``RS256`` relies on asymmetric
cryptography, which allows the public key to be used in untrusted contexts
like JavaScript code running browser-side. Therefore this should be the
preferred algorithm for issuers. Zuul will also support ``HS256`` as the most
widely used algorithm.
Some identity providers use key sets (also known as **JWKS**), therefore the key to
use when verifying the Authentication Token's signatures cannot be known in advance.
Zuul must support the ``RS256`` algorithm with JWKS as well.
Here is an example defining the three supported types of authenticators:
.. code-block:: ini
[web]
listen_address=127.0.0.1
port=9000
static_cache_expiry=0
status_url=https://zuul.example.com/status
# symmetrical encryption
[auth "zuul_operator"]
driver=HS256
# symmetrical encryption only needs a shared secret
secret=exampleSecret
# accept "zuul.actions" claim in Authentication Token
allow_authz_override=true
# what the "aud" claim must be in Authentication Token
client_id=zuul.openstack.org
# what the "iss" claim must be in Authentication Token
issuer_id=zuul_operator
# the claim to use as the unique user identifier, defaults to "sub"
uid_claim=sub
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Ensure a Token cannot be valid for longer than this amount of time, in seconds
max_validity_time = 1800000
# (optional) Account for skew between clocks, in seconds
skew = 3
# asymmetrical encryption
[auth "my_oidc_idp"]
driver=RS256
public_key=/path/to/key.pub
# optional, needed only if Authentication Token must be generated manually as well
private_key=/path/to/key
# if not explicitly set, allow_authz_override defaults to False
# what the "aud" claim must be in Authentication Token
client_id=my_zuul_deployment_id
# what the "iss" claim must be in Authentication Token
issuer_id=my_oidc_idp_id
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Ensure a Token cannot be valid for longer than this amount of time, in seconds
max_validity_time = 1800000
# (optional) Account for skew between clocks, in seconds
skew = 3
# asymmetrical encryption using JWKS for validation
   # Since the signing secret is known only to the Identity Provider, this
   # authenticator cannot be used to issue Tokens manually with the CLI
[auth google_oauth_playground]
driver=RS256withJWKS
# URL of the JWKS; usually found in the .well-known config of the Identity Provider
keys_url=https://www.googleapis.com/oauth2/v3/certs
# what the "aud" claim must be in Authentication Token
client_id=XXX.apps.googleusercontent.com
# what the "iss" claim must be in Authentication Token
issuer_id=https://accounts.google.com
uid_claim=name
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Account for skew between clocks, in seconds
skew = 3
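
To illustrate how these settings map onto token validation, here is a rough
sketch (not the actual implementation) of what verification with pyJWT could
look like for the ``my_oidc_idp`` authenticator above, with error handling
omitted:

.. code-block:: python

   import jwt  # pyJWT

   token = "eyJhbGciOi..."  # taken from the request's Authorization header

   with open("/path/to/key.pub") as f:
       public_key = f.read()

   claims = jwt.decode(
       token,
       public_key,
       algorithms=["RS256"],
       audience="my_zuul_deployment_id",  # checked against the "aud" claim
       issuer="my_oidc_idp_id",           # checked against the "iss" claim
       leeway=3,                          # tolerate the configured clock skew
   )
   uid = claims.get("sub")                # the configured uid_claim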
Implementation
==============
Assignee(s)
-----------
Primary assignee:
mhu
.. feel free to add yourself as an assignee, the more eyes/help the better
Gerrit Topic
------------
Use Gerrit topic "zuul_admin_web" for all patches related to this spec.
.. code-block:: bash
git-review -t zuul_admin_web
Work Items
----------
Due to its complexity the spec should be implemented in smaller "chunks":
* https://review.openstack.org/576907 - Add admin endpoints, support for JWT
providers declaration in the configuration, JWT validation mechanism
* https://review.openstack.org/636197 - Allow Auth Token generation from
Zuul's CLI
* https://review.openstack.org/636315 - Allow users to use the REST API from
the CLI (instead of Gearman), with a bearer token
* https://review.openstack.org/#/c/639855 - Authorization configuration objects declaration and validation
* https://review.openstack.org/640884 - Authorization engine
* https://review.openstack.org/641099 - REST API: add /api/user/authorizations route
Documentation
-------------
* The changes in the configuration will need to be documented:
* configuring authenticators in zuul.conf, supported algorithms and their
specific configuration options
* creating authorization rules
* The additions to the web API need to be documented.
* The additions to the Zuul Client CLI need to be documented.
* The potential impacts of exposing administration tasks in terms of build results
  or resource management need to be clearly documented for operators (see below).
Security
--------
Anybody with a valid Authentication Token can perform administration tasks exposed
through the Web API. Revoking JWTs is not trivial, and is not in the scope of this
spec.

As a mitigation, Authentication Tokens should be generated with a short time to
live, like 30 minutes or less. This is especially important if the Authentication
Token overrides predefined authorizations with a ``zuul.admin`` claim. This could
be the default lifetime when generating Tokens with the CLI; for external issuers
it will depend on their configuration. If using the ``zuul.admin`` claim, the
Authentication Token should also be generated with as small a scope as possible
(one tenant only) to reduce the attack surface should the Authentication Token be
compromised.
Exposing administration tasks can impact build results (dequeueing buildsets),
and can pose resource problems with Nodepool if the ``autohold`` feature
is abused, leading to a significant number of nodes remaining in "hold" state for
extended periods of time. Such power should be handed over responsibly.
These security considerations concern operators and the way they handle this
feature, and do not impact development. They do, however, need to be clearly documented,
as operators need to be aware of the potential side effects of delegating privileges
to other users.
Testing
-------
* Unit testing of the new web endpoints will be needed.
* Validation of the new configuration parameters will be needed.
Follow-up work
--------------
The following items fall outside of the scope of this spec but are logical features
to implement once the tenant-scoped admin REST API gets finalized:
* Web UI: log-in, log-out and token refresh support with an external Identity Provider
* Web UI: dequeue button near a job's status on the status page, if the authenticated
user has sufficient authorization
* autohold button near a job's build result on the builds page, if the authenticated
user has sufficient authorization
* reenqueue button near a buildset on a buildsets page, if the authenticated user
has sufficient authorization
Dependencies
============
* This implementation will use Zuul's existing dependency on **pyJWT**.
* A new dependency on **jsonpath-rw** will be added to support XPath-like parsing
  of complex claims.
Tracing
=======
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
It can be difficult for a user to understand what steps were involved
between a trigger event (such as a patchset upload or recheck comment)
and a buildset report. If the process took an unusually long time, it can be
difficult to determine why. At present, an operator would need to
examine logs to determine what steps were involved and the sources of
any potential delays. Even experienced operators and developers can
take quite some time to first collect and then analyze logs to answer
these questions.
Sometimes these answers may point to routine system operation (such as
a delay caused by many gate resets, or preparing a large number of
repositories). Other times they may point to deficiencies in the
system (insufficient mergers) or bugs in the code.
Being able to visualize the activities of a Zuul system can help
operators (and potentially users) triage and diagnose issues more
quickly and accurately. Even if examining logs is ultimately required
in order to fully diagnose an issue, being able to narrow down the
scope using analysis tools can greatly simplify the process.
Proposed Solution
-----------------
Implementing distributed tracing in Zuul can help improve the
observability of the system and aid operators and potentially users in
understanding the sequence of events.
By exporting information about the processing Zuul performs using the
OpenTelemetry API, information about Zuul operations can be collected
in any of several tools for analysis.
OpenTelemetry is an Open Source protocol for exchanging observability
data, an SDK implementing that protocol, as well as an implementation
of a collector for distributing information to multiple backends.
It supports three kinds of observability data: `traces`, `metrics`,
and `logs`. Since Zuul already has support for metrics and logs, this
specification proposes that we use only the support in OpenTelemetry
for `traces`.
Usage Scenarios
~~~~~~~~~~~~~~~
Usage of OpenTelemetry should be entirely optional and supplementary
for any Zuul deployment. Log messages alone should continue to be
sufficient to analyze any potential problem.
Should a deployer wish to use OpenTelemetry tracing data, a very
simple deployment for smaller sites may be constructed by running only
Jaeger. Jaeger is a service that can receive, store, and display
tracing information. The project distributes an all-in-one container
image which can store data in local filesystem storage.
https://www.jaegertracing.io/
Larger sites may wish to run multiple collectors and feed data to
larger, distributed storage backends (such as Cassandra,
Elasticsearch, etc).
Suitability to Zuul
~~~~~~~~~~~~~~~~~~~
OpenTelemetry tracing, at a high level, is designed to record
information about events, their timing, and their relation to other
events. At first this seems like a natural fit for Zuul, which reacts
to events, processes events, and generates more events. However,
OpenTelemetry's bias toward small and simple web applications is
evident throughout its documentation and the SDK implementation.
Traces give us the big picture of what happens when a request is
made by user or an application.
Zuul is not driven by user or application requests, and a system
designed to record several millisecond-long events which make up the
internal response to a user request of a web app is not necessarily
the obvious right choice for recording sequences and combinations of
events which frequently take hours (and sometimes days) to play out
across multiple systems.
Fortunately, the concepts and protocol implementation of OpenTelemetry
are sufficiently well-designed for the general case to be able to
accommodate a system like Zuul, even if the SDK makes incompatible
assumptions that make integration difficult. There are some
challenges to implementation, but because the concepts appear to be
well matched, we should proceed with using the OpenTelemetry protocol
and SDK.
Spans
~~~~~
The key tracing concepts in OpenTelemetry are `traces` and `spans`.
From a data model perspective, the unit of data storage is a `span`.
A trace itself is really just a unique ID that is common to multiple
spans.
Spans can relate to other spans as either children or links. A trace
is generally considered to have a single 'root' span, and within the
time period represented by that span, it may have any number of child
spans (which may further have their own child spans).
OpenTelemetry anticipates that a span on one system may spawn a child
span on another system and includes facilities for transferring enough
information about the parent span to a child system that the child
system alone can emit traces for its span and any children that it
spawns in turn.
For a concrete example in Zuul, we might have a Zuul scheduler start a
span for a buildset, and then a merger might emit a child span for
performing the initial merge, and an executor might emit a child span
for executing a build.
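
A rough sketch of what this could look like with the OpenTelemetry Python API
follows; the span names and attributes are illustrative rather than the final
instrumentation, and in reality each span would be started on a different
component rather than nested in one process:

.. code-block:: python

   from opentelemetry import trace

   tracer = trace.get_tracer("zuul")

   # Scheduler: a span covering the whole buildset.
   with tracer.start_as_current_span("buildset") as buildset_span:
       buildset_span.set_attribute("zuul.event_id", "abc123")  # placeholder

       # Merger: a child span for the initial merge.
       with tracer.start_as_current_span("merge"):
           pass  # ... perform the merge ...

       # Executor: a child span for running a build.
       with tracer.start_as_current_span("build") as build_span:
           build_span.set_attribute("zuul.job", "example-job")  # placeholder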
Spans can relate to other spans (including spans in other traces), so
sequences of events can be chained together without necessitating that
they all be part of the same span or trace.
Because Zuul processes series of events which may stretch for long
periods of time, we should specify what events and actions should
correspond to spans and traces. Spans can have arbitrary metadata
associated with them, so we will be able to search by event or job
ids.
The following sections describe traces and their child spans.
Event Ingestion
+++++++++++++++
A trace will begin when Zuul receives an event and end when that event
has been enqueued into scheduler queues (or discarded). A driver
completing processing of an event is a definitive point in time so it
is easy to know when to close the root span for that event's trace
(whereas if we kept the trace open to include scheduler processing, we
would need to know when the last trigger event spawned by the
connection event was complete).
This may include processing in internal queues by a given driver, and
these processing steps/queues should appear as their own child spans.
The spans should include event IDs (and potentially other information
about the event such as change or pull request numbers) as metadata.
Tenant Event Processing
+++++++++++++++++++++++
A trace will begin when a scheduler begins processing a tenant event
and ends when it has forwarded the event to all pipelines within a
tenant. It will link to the event ingestion trace as a follow-on
span.
Queue Item
++++++++++
A trace will begin when an item is enqueued and end when it is
dequeued. This will be quite a long trace (hours or days). It is
expected to be the primary benefit of this telemetry effort as it will
show the entire lifetime of a queue item. It will link to the tenant
event processing trace as a follow-on span.
Within the root span, there will be a span for each buildset (so that
if a gate reset happens and a new buildset is created, users will see
a series of buildset spans). Within a buildset, there will be spans
for all of the major processing steps, such as merge operations,
layout calculation, freezing the job graph, and freezing jobs. Each
build will also merit a span (retried builds will get their own spans
as well), and within a job span, there will be child spans for git
repo prep, job setup, individual playbooks, and cleanup.
SDK Challenges
~~~~~~~~~~~~~~
As a high-level concept, the idea of spans for each of these
operations makes sense. In practice, the SDK makes implementation
challenging.
The OpenTelemetry SDK makes no provision for beginning a span on one
system and ending it on another, so the fact that one Zuul scheduler
might start a buildset span while another ends it is problematic.
Fortunately, the OpenTelemetry API only reports spans when they end,
not when they start. This means that we don't need to coordinate a
"start" API call on one scheduler with an "end" API call on another.
We can simply emit the trace with its root span at the end. However,
any child spans emitted during that time need to know the trace ID
they should use, which means that we at least need to store a trace ID
and start timestamp on our starting scheduler for use by any child
spans as well as the "end span" API call.
The SDK does not support creating a span with a specific trace ID or
start timestamp (most timestamps are automatic), but it has
well-defined interfaces for spans and we can subclass the
implementation to allow us to specify trace IDs and timestamps. With
this approach, we can "virtually" start a span on one host, store its
information in ZooKeeper with whatever long-lived object it is
associated with (such as a QueueItem) and then make it concrete on
another host when we end it.
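
A sketch of the general idea, using the public context-propagation API to
stand in for the SDK subclasses described above (the stored IDs are
placeholders):

.. code-block:: python

   from opentelemetry import trace
   from opentelemetry.trace import NonRecordingSpan, SpanContext, TraceFlags

   tracer = trace.get_tracer("zuul")

   # Scheduler A: generate IDs for the "virtual" parent span and store them
   # in ZooKeeper alongside the long-lived object (e.g. the QueueItem).
   stored = {
       "trace_id": 0x1234567890abcdef1234567890abcdef,  # placeholder
       "span_id": 0x1234567890abcdef,                   # placeholder
   }

   # Scheduler B (or an executor): recreate a context pointing at the stored
   # parent so that child spans end up in the same trace.
   parent = SpanContext(
       trace_id=stored["trace_id"],
       span_id=stored["span_id"],
       is_remote=True,
       trace_flags=TraceFlags(TraceFlags.SAMPLED),
   )
   ctx = trace.set_span_in_context(NonRecordingSpan(parent))

   with tracer.start_as_current_span("merge", context=ctx):
       pass  # ... child work happens here ...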
Alternatives
++++++++++++
This section describes some alternative ideas for dealing with the
SDK's mismatch with Zuul concepts as well as why they weren't
selected.
* Multiple root spans with the same trace ID
Jaeger handles this relatively well, and the timeline view appears
as expected (multiple events with whitespace between them). The
graph view in Jaeger may have some trouble displaying this.
It is not clear that OpenTelemetry anticipates having multiple
"root" spans, so it may be best to avoid this in order to avoid
potential problems with other tools.
* Child spans without a parent
If we emit spans that specify a parent which does not exist, Jaeger
will display these traces but show a warning that the parent is
invalid. This may occur naturally while the system is operating
(builds complete while a buildset is running), but should be
eventually corrected once an item is dequeued. In case of a serious
error, we may never close a parent span, which would cause this to
persist. We should accept that this may happen, but try to avoid it
happening intentionally.
Links
~~~~~
Links between spans are fairly primitive in Jaeger. While the
OpenTelemetry API includes attributes for links (so that when we link
a queue item to an event, we could specify that it was a forwarded
event), Jaeger does not store or render them. Instead, we are only
left with a reference to a ``< span in another trace >`` with a
reference type of ``FOLLOWS_FROM``. Clicking on that link will
immediately navigate to the other trace where metadata about the trace
will be visible, but before clicking on it, users will have little
idea of what awaits on the other side.
For this reason, we should use span links sparingly so that when they
are encountered, users are likely to intuit what they are for and are
not overwhelmed by multiple indistinguishable links.
Events and Exceptions
~~~~~~~~~~~~~~~~~~~~~
OpenTelemetry allows events to be added to spans. Events have their
own timestamp and attributes. These can be used to add additional
context to spans (representing single points in time rather than
events with duration that should be child spans). Examples might
include receiving a request to cancel a job or dequeue an item.
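
Recording such a point-in-time occurrence on the current span might look like
the following sketch (the event name and attribute are illustrative):

.. code-block:: python

   from opentelemetry import trace

   span = trace.get_current_span()
   span.add_event("dequeue requested", attributes={"zuul.event_id": "abc123"})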
Events should not be used as an alternative to logs, nor should all
log messages be copied as events. Events should be used sparingly to
avoid overwhelming the tracing storage with data and the user with
information.
Exceptions may also be included in spans. This happens automatically
and by default when using the context managers supplied by the SDK.
Because many spans in Zuul will be unable to use the SDK context
managers and any exception information would need to be explicitly
handled and stored in ZooKeeper, we will disable inclusion of
exception information in spans. This will provide a more consistent
experience (so that users don't see the absence of an exception in
tracing information to indicate the absence of an error in logs) and
reduce the cost of supporting traces (extra storage in ZooKeeper and
in the telemetry storage).
If we decide that exception information is worth including in the
future, this decision will be easy to revisit and reverse.
Sensitive Information
~~~~~~~~~~~~~~~~~~~~~
No sensitive information (secrets, passwords, job variables, etc)
should be included in tracing output. All output should be suitable
for an audience of Zuul users (that is, if someone has access to the
Zuul dashboard, then tracing data should not have any more sensitive
information than they already have access to). For public-facing Zuul
systems (such as OpenDev), the information should be suitable for
public use.
Protobuf and gRPC
~~~~~~~~~~~~~~~~~
The most efficient and straightforward method of transmitting data
from Zuul to a collector (including Jaeger) is using OTLP with gRPC
(OpenTelemetry Protocol + gRPC Remote Procedure Calls). Because
Protobuf applications include automatically generated code, we may
encounter the occasional version inconsistency. We may need to
navigate package requirements more than normal due to this (especially
if we have multiple packages that depend on protobuf).
For a contemporary example, the OpenTelemetry project is in the
process of pinning to an older version of protobuf:
https://github.com/open-telemetry/opentelemetry-python/issues/2717
There is an HTTP+JSON exporter as well, so in the case that something
goes very wrong with protobuf+gRPC, that may be available as a fallback.
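
For reference, wiring the SDK up to an OTLP/gRPC collector is expected to
look roughly like the following sketch; the endpoint is a placeholder, and
the real settings will come from ``zuul.conf``:

.. code-block:: python

   from opentelemetry import trace
   from opentelemetry.sdk.trace import TracerProvider
   from opentelemetry.sdk.trace.export import BatchSpanProcessor
   from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
       OTLPSpanExporter)

   provider = TracerProvider()
   provider.add_span_processor(
       BatchSpanProcessor(OTLPSpanExporter(endpoint="collector:4317",
                                           insecure=True)))
   trace.set_tracer_provider(provider)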
Work Items
----------
* Add OpenTelemetry SDK and support for configuring an exporter to
zuul.conf
* Implement SDK subclasses to support opening and closing spans on
different hosts
* Instrument event processing in each driver
* Instrument event processing in scheduler
* Instrument queue items and related spans
* Document a simple Jaeger setup as a quickstart add-on (similar to
authz)
* Optional: work with OpenDev to run a public Jaeger server for
OpenDev
The last item is not required for this specification (and not our
choice as Zuul developers to make) but it would be nice if there were
one available so that all Zuul users and developers have a reference
implementation available for community collaboration.
Zuul Runner
===========
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
While Zuul can be deployed locally to reproduce a job, it is a fairly
complex system to set up. Since Zuul jobs are written in Ansible, we
shouldn't have to set up Zookeeper, Nodepool and Zuul services just to run
a job locally.
To that end, the Zuul Project should create a command line utility
to run a job locally using direct ansible-playbook commands execution.
The scope includes two use cases:
* Running a local build of a job that has already run, for example to
  recreate a build that failed in the gate, by using either
  a `zuul-info/inventory.yaml` file or the `--change-url` command
  line argument.
* Being able to run any job from any Zuul instance, tenant, project
  or pipeline, regardless of whether it has run or not.
Zuul Job Execution Context
--------------------------
One of the key parts of making the Zuul Runner command line utility
is to reproduce the Zuul service environment as closely as possible.
A Zuul job requires:
- Test resources
- Copies of the required projects
- Ansible configuration
- Decrypted copies of the secrets
Test Resources
~~~~~~~~~~~~~~
The Zuul Runner shall require the user to provide test resources
as an Ansible inventory, similarly to what Nodepool provides to the
Zuul Executor. The Runner would enrich the inventory with the zuul
vars.
For example, if a job needs two nodes, then the user provides
a resource file like this:
.. code-block:: yaml
all:
hosts:
controller:
ansible_host: ip-node-1
ansible_user: user-node-1
worker:
ansible_host: ip-node-2
ansible_user: user-node-2
Required Projects
~~~~~~~~~~~~~~~~~
The Zuul Runner shall query an existing Zuul API to get the list
of projects required to run a job. This is implemented as part of
the `topic:freeze_job` changes to expose the executor gearman parameters.
The CLI would then perform the executor service task to clone and merge
the required projects locally.
Ansible Configuration
~~~~~~~~~~~~~~~~~~~~~
The CLI would also perform the executor service tasks to set up the
execution context.
Playbooks
~~~~~~~~~
In some cases, running all the job playbooks is not desirable. In this
situation the CLI provides a way to select playbooks and filter out the
unneeded ones.

Running "zuul-runner --list-playbooks" would print out:
.. code-block:: console
0: opendev.org/base-jobs/playbooks/pre.yaml
...
10: opendev.org/base-jobs/playbooks/post.yaml
To avoid running playbook 10, the user would use:
* "--no-playbook 10"
* "--no-playbook -1"
* "--playbook 1..9"
Alternatively, a matcher may be implemented to express:
* "--skip 'opendev.org/base-jobs/playbooks/post.yaml'"
Secrets
~~~~~~~
The Zuul Runner shall require the user to provide copies of
any secrets required by the job.
Implementation
--------------
The process of exposing gearman parameters and refactoring the executor
code to support local/direct usage has already started here:
https://review.opendev.org/#/q/topic:freeze_job+(status:open+OR+status:merged)
Zuul Runner CLI
---------------
Here is the proposed usage for the CLI:
.. code-block:: console
usage: zuul-runner [-h] [-c CONFIG] [--version] [-v] [-e FILE] [-a API]
[-t TENANT] [-j JOB] [-P PIPELINE] [-p PROJECT] [-b BRANCH]
[-g GIT_DIR] [-D DEPENDS_ON]
{prepare-workspace,execute} ...
A helper script for running zuul jobs locally.
optional arguments:
-h, --help show this help message and exit
-c CONFIG specify the config file
--version show zuul version
-v, --verbose verbose output
-e FILE, --extra-vars FILE
global extra vars file
-a API, --api API the zuul server api to query against
-t TENANT, --tenant TENANT
the zuul tenant name
-j JOB, --job JOB the zuul job name
-P PIPELINE, --pipeline PIPELINE
the zuul pipeline name
-p PROJECT, --project PROJECT
the zuul project name
-b BRANCH, --branch BRANCH
the zuul project's branch name
-g GIT_DIR, --git-dir GIT_DIR
the git merger dir
-C CHANGE_URL, --change-url CHANGE_URL
reproduce job with speculative change content
commands:
valid commands
{prepare-workspace,execute}
prepare-workspace checks out all of the required playbooks and roles
into a given workspace and returns the order of
execution
execute prepare and execute a zuul jobs
And here is an example execution:
.. code-block:: console
$ pip install --user zuul
$ zuul-runner --api https://zuul.openstack.org --project openstack/nova --job tempest-full-py3 execute
[...]
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'PLAY RECAP *********************************************************************'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'instance-ip : ok=9 changed=5 unreachable=0 failed=0'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'localhost : ok=12 changed=9 unreachable=0 failed=0'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b''
2019-05-07 06:08:01,218 DEBUG zuul.Runner - Ansible output terminated
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Ansible cpu times: user=0.00, system=0.00, children_user=0.00, children_system=0.00
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Ansible exit code: 0
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Stopped disk job killer
2019-05-07 06:08:01,220 DEBUG zuul.Runner - Ansible complete, result RESULT_NORMAL code 0
2019-05-07 06:08:01,220 DEBUG zuul.ExecutorServer - Sent SIGTERM to SSH Agent, {'SSH_AUTH_SOCK': '/tmp/ssh-SYKgxg36XMBa/agent.18274', 'SSH_AGENT_PID': '18275'}
SUCCESS
.. _quick-start:
Quick-Start Installation and Tutorial
=====================================
Zuul is not like other CI or CD systems. It is a project gating
system designed to assist developers in taking a change from proposal
through deployment. Zuul can support any number of workflow processes
and systems, but to help you get started with Zuul, this tutorial will
walk through setting up a basic gating configuration which protects
projects from merging broken code.
This tutorial is entirely self-contained and may safely be run on a
workstation. The only requirements are a network connection, the
ability to run containers, and at least 2GiB of RAM.
This tutorial supplies a working Gerrit for code review, though the
concepts you will learn apply equally to GitHub.
.. note:: Even if you don't ultimately intend to use Gerrit, you are
encouraged to follow this tutorial to learn how to set up
and use Zuul.
At the end of the tutorial, you will find further information about
how to configure your Zuul to interact with GitHub.
Start Zuul Containers
---------------------
Before you start, ensure that some needed packages are installed.
.. code-block:: shell
# Red Hat / CentOS:
sudo yum install podman git python3
sudo python3 -m pip install git-review podman-compose
# Fedora:
sudo dnf install podman git python3
sudo python3 -m pip install git-review podman-compose
# OpenSuse:
sudo zypper install podman git python3
sudo python3 -m pip install git-review podman-compose
# Ubuntu / Debian:
sudo apt-get update
sudo apt-get install podman git python3-pip
sudo python3 -m pip install git-review podman-compose
Clone the Zuul repository:
.. code-block:: shell
git clone https://opendev.org/zuul/zuul
Then cd into the directory containing this document, and run
podman-compose in order to start Zuul, Nodepool and Gerrit.
.. code-block:: shell
cd zuul/doc/source/examples
podman-compose -p zuul-tutorial up
For reference, the files in that directory are also `browsable on the web
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples>`_.
All of the services will be started with debug-level logging sent to
the standard output of the terminal where podman-compose is running.
You will see a considerable amount of information scroll by, including
some errors. Zuul will immediately attempt to connect to Gerrit and
begin processing, even before Gerrit has fully initialized. The
podman composition includes scripts to configure Gerrit and create an
account for Zuul. Once this has all completed, the system should
automatically connect, stabilize and become idle. When this is
complete, you will have the following services running:
* Zookeeper
* Gerrit
* Nodepool Launcher
* Zuul Scheduler
* Zuul Web Server
* Zuul Executor
* Apache HTTPD
And a long-running static test node used by Nodepool and Zuul upon
which to run tests.
The Zuul scheduler is configured to connect to Gerrit via a connection
named ``gerrit``. Zuul can interact with as many systems as
necessary, each such connection is assigned a name for use in the Zuul
configuration.
Zuul is a multi-tenant application, so that differing needs of
independent work-groups can be supported from one system. This
example configures a single tenant named ``example-tenant``. Assigned
to this tenant are three projects: ``zuul-config``, ``test1`` and
``test2``. These have already been created in Gerrit and are ready
for us to begin using.
Add Your Gerrit Account
-----------------------
Before you can interact with Gerrit, you will need to create an
account. The initialization script has already created an account for
Zuul, but has left the task of creating your own account to you so
that you can provide your own SSH key. You may safely use any
existing SSH key on your workstation, or you may create a new one by
running ``ssh-keygen``.
Gerrit is configured in a development mode where passwords are not
required in the web interface and you may become any user in the
system at any time.
To create your Gerrit account, visit http://localhost:8080 in your
browser and click `Sign in` in the top right corner.
.. image:: /images/sign-in.png
:align: center
Then click `New Account` under `Register`.
.. image:: /images/register.png
:align: center
Don't bother to enter anything into the confirmation dialog that pops
up, instead, click the `settings` link at the bottom.
.. image:: /images/confirm.png
:align: center
In the `Profile` section at the top, enter the username you use to log
into your workstation in the `Username` field and your full name in
the `Full name` field, then click `Save Changes`.
.. image:: /images/profile.png
:align: center
Scroll down to the `Email Addresses` section and enter your email
address into the `New email address` field, then click `Send
Verification`. Since Gerrit is in developer mode, it will not
actually send any email, and the address will be automatically
confirmed. This step is useful since several parts of the Gerrit user
interface expect to be able to display email addresses.
.. image:: /images/email.png
:align: center
Scroll down to the `SSH keys` section and copy and paste the contents
of ``~/.ssh/id_rsa.pub`` into the `New SSH key` field and click `Add
New SSH Key`.
.. image:: /images/sshkey.png
:align: center
.. We ask them to click reload so that the page refreshes and their
avatar appears in the top right. Otherwise it's difficult to see
that there's anything there to click.
Click the `Reload` button in your browser to reload the page with the
new settings in effect. At this point you have created and logged
into your personal account in Gerrit and are ready to begin
configuring Zuul.
Configure Zuul Pipelines
------------------------
Zuul recognizes two types of projects: :term:`config
projects<config-project>` and :term:`untrusted
projects<untrusted-project>`. An *untrusted project* is a normal
project from Zuul's point of view. In a gating system, it contains
the software under development and/or most of the job content that
Zuul will run. A *config project* is a special project that contains
Zuul's configuration. Because it has access to normally
restricted features in Zuul, changes to this repository are not
dynamically evaluated by Zuul. The security and functionality of the
rest of the system depends on this repository, so it is best to limit
what is contained within it to the minimum, and ensure thorough code
review practices when changes are made.
Zuul has no built-in workflow definitions, so in order for it to do
anything, you will need to begin by making changes to a *config
project*. The initialization script has already created a project
named ``zuul-config`` which you should now clone onto your workstation:
.. code-block:: shell
git clone http://localhost:8080/zuul-config
You will find that this repository is empty. Zuul reads its
configuration from either a single file or a directory. In a *Config
Project* with substantial Zuul configuration, you may find it easiest
to use the ``zuul.d`` directory for Zuul configuration. Later, in
*Untrusted Projects* you will use a single file for in-repo
configuration. Make the directory:
.. code-block:: shell
cd zuul-config
mkdir zuul.d
The first type of configuration items we need to add are the Pipelines
we intend to use. In Zuul, a Pipeline represents a workflow action.
It is triggered by some action on a connection. Projects are able to
attach jobs to run in that pipeline, and when they complete, the
results are reported along with actions which may trigger further
Pipelines. In a gating system two pipelines are required:
:term:`check` and :term:`gate`. In our system, ``check`` will be
triggered when a patch is uploaded to Gerrit, so that we are able to
immediately run tests and report whether the change works and is
therefore able to merge. The ``gate`` pipeline is triggered when a code
reviewer approves the change in Gerrit. It will run test jobs again
(in case other changes have merged since the change in question was
uploaded) and if these final tests pass, will automatically merge the
change. To configure these pipelines, copy the following file into
``zuul.d/pipelines.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/pipelines.yaml
:language: yaml
Once we have bootstrapped our initial Zuul configuration, we will want
to use the gating process on this repository too, so we need to attach
the ``zuul-config`` repository to the ``check`` and ``gate`` pipelines
we are about to create. There are no jobs defined yet, so we must use
the internally defined ``noop`` job, which always returns success.
Later on we will be configuring some other projects, and while we will
be able to dynamically add jobs to their pipelines, those projects
must first be attached to the pipelines in order for that to work. In
our system, we want all of the projects in Gerrit to participate in
the check and gate pipelines, so we can use a regular expression to
apply this to all projects. To configure the ``check`` and ``gate``
pipelines for ``zuul-config`` to run the ``noop`` job, and add all
projects to those pipelines (with no jobs), copy the following file
into ``zuul.d/projects.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/projects.yaml
:language: yaml
Every real job (i.e., all jobs other than ``noop``) must inherit from a
:term:`base job`, and base jobs may only be defined in a
:term:`config-project`. Let's go ahead and add a simple base job that
we can build on later. Copy the following into ``zuul.d/jobs.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/jobs.yaml
:language: yaml
Commit the changes and push them up for review:
.. code-block:: shell
git add zuul.d
git commit -m "Add initial Zuul configuration"
git review
Because Zuul is currently running with no configuration whatsoever, it
will ignore this change. For this initial change which bootstraps the
entire system, we will need to bypass code review (hopefully for the
last time). To do this, you need to switch to the Administrator
account in Gerrit. Visit http://localhost:8080 in your browser and
then:
Click the avatar image in the top right corner then click `Sign out`.
.. image:: /images/sign-out-user.png
:align: center
Then click the `Sign in` link again.
.. image:: /images/sign-in.png
:align: center
Click `admin` to log in as the `admin` user.
.. image:: /images/become-select.png
:align: center
You will then see a list of open changes; click on the change you
uploaded.
.. image:: /images/open-changes.png
:align: center
Click `Reply...` at the top center of the change screen. This will
open a dialog where you can leave a review message and vote on the
change. As the administrator, you have access to vote in all of the
review categories, even `Verified` which is normally reserved for
Zuul. Vote Code-Review: +2, Verified: +2, Workflow: +1, and then
click `Send` to leave your approval votes.
.. image:: /images/review-1001.png
:align: center
Once the required votes have been set, the `Submit` button will appear
in the top right; click it. This will cause the change to be merged
immediately. This is normally handled by Zuul, but as the
administrator you can bypass Zuul to forcibly merge a change.
.. image:: /images/submit-1001.png
:align: center
Now that the initial configuration has been bootstrapped, you should
not need to bypass testing and code review again, so switch back to
the account you created for yourself. Click on the avatar image in
the top right corner then click `Sign out`.
.. image:: /images/sign-out-admin.png
:align: center
Then click the `Sign in` link again.
.. image:: /images/sign-in.png
:align: center
And click your username to log into your account.
.. image:: /images/become-select.png
:align: center
Test Zuul Pipelines
-------------------
Zuul is now running with a basic :term:`check` and :term:`gate`
configuration. Now is a good time to take a look at Zuul's web
interface. Visit http://localhost:9000/t/example-tenant/status to see
the current status of the system. It should be idle, but if you leave
this page open during the following steps, you will see it update
automatically.
We can now begin adding Zuul configuration to one of our
:term:`untrusted projects<untrusted-project>`. Start by cloning the
`test1` project which was created by the setup script.
.. code-block:: shell
cd ..
git clone http://localhost:8080/test1
Every Zuul job that runs needs a playbook, so let's create a
sub-directory in the project to hold playbooks:
.. code-block:: shell
cd test1
mkdir playbooks
Start with a simple playbook which just outputs a debug message. Copy
the following to ``playbooks/testjob.yaml``:
.. literalinclude:: /examples/test1/playbooks/testjob.yaml
:language: yaml
Now define a Zuul job which runs that playbook. Zuul will read its
configuration from any of ``zuul.d/`` or ``.zuul.d/`` directories, or
the files ``zuul.yaml`` or ``.zuul.yaml``. Generally in an *untrusted
project* which isn't dedicated entirely to Zuul, it's best to put
Zuul's configuration in a hidden file. Copy the following to
``.zuul.yaml`` in the root of the project:
.. literalinclude:: /examples/test1/zuul.yaml
:language: yaml
Commit the changes and push them up to Gerrit for review:
.. code-block:: shell
git add .zuul.yaml playbooks
git commit -m "Add test Zuul job"
git review
Zuul will dynamically evaluate proposed changes to its configuration
in *untrusted projects* immediately, so shortly after your change is
uploaded, Zuul will run the new job and report back on the change.
Visit http://localhost:8080/dashboard/self and open the change you
just uploaded. If the build is complete, Zuul should have left a
Verified: +1 vote on the change, along with a comment at the bottom.
Expand the comments and you should see that the job succeeded, and a
link to the build result in Zuul is provided. You can follow that
link to see some information about the build, but you won't find any
logs since Zuul hasn't been told where to save them yet.
.. image:: /images/check1-1002.png
:align: center
This means everything is working so far, but we need to configure a
bit more before we have a useful job.
Configure a Base Job
--------------------
Every Zuul tenant needs at least one base job. Zuul administrators
can use a base job to customize Zuul to the local environment. This
may include tasks which run both before jobs, such as setting up
package mirrors or networking configuration, or after jobs, such as
artifact and log storage.
Zuul doesn't take anything for granted, and even tasks such as copying
the git repos for the project being tested onto the remote node must
be explicitly added to a base job (and can therefore be customized as
needed). The Zuul in this tutorial is pre-configured to use the `zuul
jobs`_ repository which is the "standard library" of Zuul jobs and
roles. We will make use of it to quickly create a base job which
performs the necessary set up actions and stores build logs.
.. _zuul jobs: https://zuul-ci.org/docs/zuul-jobs/
Return to the ``zuul-config`` repo that you were working in earlier.
We're going to add some playbooks to the empty base job we created
earlier. Start by creating a directory to store those playbooks:
.. code-block:: shell
cd ..
cd zuul-config
mkdir -p playbooks/base
Zuul supports running any number of playbooks before a job (called
*pre-run* playbooks) or after a job (called *post-run* playbooks).
We're going to add a single *pre-run* playbook now. Copy the
following to ``playbooks/base/pre.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/pre.yaml
:language: yaml
This playbook does two things; first it creates a new SSH key and adds
it to all of the hosts in the inventory, and removes the private key
that Zuul normally uses to log into nodes from the running SSH agent.
This is just an extra bit of protection which ensures that if Zuul's
SSH key has access to any important systems, normal Zuul jobs can't
use it. The second thing the playbook does is copy the git
repositories that Zuul has prepared (which may have one or more
changes being tested) to all of the nodes used in the job.
Next, add a *post-run* playbook to remove the per-build SSH key. Copy
the following to ``playbooks/base/post-ssh.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/post-ssh.yaml
:language: yaml
This is the complement of the `add-build-sshkey` role in the pre-run
playbook -- it simply removes the per-build ssh key from any remote
systems. Zuul always tries to run all of the post-run playbooks
regardless of whether any previous playbooks have failed. Because we
always want log collection to run and we want it to run last, we
create a second post-run playbook for it. Copy the following to
``playbooks/base/post-logs.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/post-logs.yaml
:language: yaml
The first role in this playbook generates some metadata about the logs
which are about to be uploaded. Zuul uses this metadata in its web
interface to nicely render the logs and other information about the
build.
This tutorial is running an Apache webserver in a container which will
serve build logs from a volume that is shared with the Zuul executor.
That volume is mounted at `/srv/static/logs`, which is the default
location in the `upload-logs`_ role. The role also supports copying
files to a remote server via SCP; see the role documentation for how
to configure it. For this simple case, the only option we need to
provide is the URL where the logs can ultimately be found.
.. note:: Zuul-jobs also contains `roles
<https://zuul-ci.org/docs/zuul-jobs/log-roles.html>`_ to
upload logs to a OpenStack Object Storage (swift) or Google
Cloud Storage containers. If you create a role to upload
logs to another system, please feel free to contribute it to
the zuul-jobs repository for others to use.
.. _upload-logs: https://zuul-ci.org/docs/zuul-jobs/roles.html#role-upload-logs
Now that the new playbooks are in place, update the ``base`` job
definition to include them. Overwrite ``zuul.d/jobs.yaml`` with the
following:
.. literalinclude:: /examples/zuul-config/zuul.d/jobs2.yaml
:language: yaml
Then commit the change and upload it to Gerrit for review:
.. code-block:: shell
git add playbooks zuul.d/jobs.yaml
git commit -m "Update Zuul base job"
git review
Visit http://localhost:8080/dashboard/self and open the
``zuul-config`` change you just uploaded.
You should see a Verified +1 vote from Zuul. Click `Reply` then vote
Code-Review: +2 and Workflow: +1 then click `Send`.
.. image:: /images/review-1003.png
:align: center
Wait a few moments for Zuul to process the event, and then reload the
page. The change should have been merged.
Visit http://localhost:8080/dashboard/self and return to the
``test1`` change you uploaded earlier. Click `Reply` then type
`recheck` into the text field and click `Send`.
.. image:: /images/recheck-1002.png
:align: center
This will cause Zuul to re-run the test job we created earlier. This
time it will run with the updated base job configuration, and when
complete, it will report the published log location as a comment on
the change:
.. image:: /images/check2-1002.png
:align: center
Follow the link and you will be directed to the build result page. If
you click on the `Logs` tab, you'll be able to browse the console log
for the job. In the middle of the log, you should see the "Hello,
world!" output from the job's playbook.
Also try the `Console` tab for a more structured view of the log.
Click on the `OK` button in the middle of the page to see the output
of just the task we're interested in.
Further Steps
-------------
You now have a Zuul system up and running, congratulations!
The Zuul community would love to hear about how you plan to use Zuul.
Please take a few moments to fill out the `Zuul User Survey
<https://www.surveymonkey.com/r/K2B2MWL>`_ to provide feedback and
information around your deployment. All information is confidential
to the OpenStack Foundation unless you designate that it can be
public.
If you would like to make further changes to Zuul, its configuration
files are located in the ``zuul/doc/source/examples`` directory
and are bind-mounted into the running containers. You may edit them
and restart the Zuul containers to make changes.
If you would like to connect your Zuul to GitHub, see
:ref:`github_driver`.
.. TODO: write an extension to this tutorial to connect to github
Jaeger Tracing Tutorial
=======================
Zuul includes support for `distributed tracing`_ as described by the
OpenTelemetry project. This allows operators (and potentially users)
to visualize the progress of events and queue items through the
various Zuul components as an aid to debugging.
Zuul supports the OpenTelemetry Protocol (OTLP) for exporting traces.
Many observability systems support receiving traces via OTLP. One of
these is Jaeger. Because it can be run as a standalone service with
local storage, this tutorial describes how to set up a Jaeger server
and configure Zuul to export data to it.
For more information about tracing in Zuul, see :ref:`tracing`.
To get started, first run the :ref:`quick-start` and then follow the
steps in this tutorial to add a Jaeger server.
Restart Zuul Containers
-----------------------
After completing the initial tutorial, stop the Zuul containers so
that we can update Zuul's configuration to enable tracing.
.. code-block:: shell
cd zuul/doc/source/examples
sudo -E podman-compose -p zuul-tutorial stop
Restart the containers with a new Zuul configuration.
.. code-block:: shell
cd zuul/doc/source/examples
ZUUL_TUTORIAL_CONFIG="./tracing/etc_zuul/" sudo -E podman-compose -p zuul-tutorial up -d
This tells podman-compose to use these Zuul `config files
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples/tracing>`_.
The only change compared to the quick-start is to add a
:attr:`tracing` section to ``zuul.conf``:
.. code-block:: ini
[tracing]
enabled=true
endpoint=jaeger:4317
insecure=true
This instructs Zuul to send tracing information to the Jaeger server
we will start below.
Start Jaeger
------------
A separate docker-compose file is provided to run Jaeger. Start it
with this command:
.. code-block:: shell
cd zuul/doc/source/examples/tracing
sudo -E podman-compose -p zuul-tutorial-tracing up -d
You can visit http://localhost:16686/search to verify it is running.
Recheck a change
----------------
Visit Gerrit at http://localhost:8080/dashboard/self and return to the
``test1`` change you uploaded earlier. Click `Reply` then type
`recheck` into the text field and click `Send`. This will tell Zuul
to run the test job once again. When the job is complete, you should
have a trace available in Jaeger.
To see the trace, visit http://localhost:16686/search and select the
`zuul` service (reload the page if it doesn't show up at first).
Press `Find Traces` and you should see the trace for your build
appear.
.. _distributed tracing: https://opentelemetry.io/docs/concepts/observability-primer/#distributed-traces
Keycloak Tutorial
=================
Zuul supports an authenticated API accessible via its web app which
can be used to perform some administrative actions. To see this in
action, first run the :ref:`quick-start` and then follow the steps in
this tutorial to add a Keycloak server.
Zuul supports any identity provider that can supply a JWT using OpenID
Connect. Keycloak is used here because it is entirely self-contained.
Google authentication is one additional option described elsewhere in
the documentation.
Gerrit can be updated to use the same authentication system as Zuul,
but this tutorial does not address that.
Update /etc/hosts
-----------------
The Zuul containers will use the internal container network to connect to
keycloak, but you will use a mapped port to access it in your web
browser. There is no way to have Zuul use the internal hostname when
it validates the token yet redirect your browser to `localhost` to
obtain the token, therefore you will need to add a matching host entry
to `/etc/hosts`. Make sure you have a line that looks like this:
.. code-block::
127.0.0.1 localhost keycloak
If you are using podman, you need to add the following option in $HOME/.config/containers/containers.conf:
.. code-block::
[containers]
no_hosts=true
This way your /etc/hosts settings will not interfere with podman's networking.
Restart Zuul Containers
-----------------------
After completing the initial tutorial, stop the Zuul containers so
that we can update Zuul's configuration to add authentication.
.. code-block:: shell
cd zuul/doc/source/examples
sudo -E podman-compose -p zuul-tutorial stop
Restart the containers with a new Zuul configuration.
.. code-block:: shell
cd zuul/doc/source/examples
ZUUL_TUTORIAL_CONFIG="./keycloak/etc_zuul/" sudo -E podman-compose -p zuul-tutorial up -d
This tells podman-compose to use these Zuul `config files
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples/keycloak>`_.
Start Keycloak
--------------
A separate docker-compose file is supplied to run Keycloak. Start it
with this command:
.. code-block:: shell
cd zuul/doc/source/examples/keycloak
sudo -E podman-compose -p zuul-tutorial-keycloak up -d
Once Keycloak is running, you can visit the web interface at
http://localhost:8082/
The Keycloak administrative user is `admin` with a password of
`kcadmin`.
Log Into Zuul
-------------
Visit http://localhost:9000/t/example-tenant/autoholds and click the
login icon on the top right. You will be directed to Keycloak, where
you can log into the Zuul realm with the user `admin` and password
`admin`.
Once you return to Zuul, you should see the option to create an
autohold -- an admin-only option.
# Copyright 2020 BMW Group
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import os
import subprocess
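
# This tool emulates enough of the bubblewrap (bwrap) command-line interface
# to run a wrapped command without any actual sandboxing: the isolation
# options below are parsed and ignored, --file descriptors are written out to
# disk (except for paths under /etc), --chdir is honored, and the remaining
# arguments are executed directly.
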
def main():
pos_args = {
'--dir': 1,
'--tmpfs': 1,
'--ro-bind': 2,
'--bind': 2,
'--chdir': 1,
'--uid': 1,
'--gid': 1,
'--file': 2,
'--proc': 1,
'--dev': 1,
}
bool_args = [
'--unshare-all',
'--unshare-user',
'--unshare-user-try',
'--unshare-ipc',
'--unshare-pid',
'--unshare-net',
'--unshare-uts',
'--unshare-cgroup',
'--unshare-cgroup-try',
'--share-net',
'--die-with-parent',
]
parser = argparse.ArgumentParser()
for arg, nargs in pos_args.items():
parser.add_argument(arg, nargs=nargs, action='append')
for arg in bool_args:
parser.add_argument(arg, action='store_true')
parser.add_argument('args', metavar='args', nargs=argparse.REMAINDER,
help='Command')
args = parser.parse_args()
    for fd, path in (args.file or []):  # --file options may be absent
fd = int(fd)
if path.startswith('/etc'):
# Ignore write requests to /etc
continue
print('Writing file from %s to %s' % (fd, path))
count = 0
with open(path, 'wb') as output:
data = os.read(fd, 32000)
while data:
count += len(data)
output.write(data)
data = os.read(fd, 32000)
print('Wrote file (%s bytes)' % count)
if args.chdir:
os.chdir(args.chdir[0][0])
result = subprocess.run(args.args, shell=False, check=False)
exit(result.returncode)
if __name__ == '__main__':
    main()
# This script updates the Zuul v3 Storyboard. It uses a .boartty.yaml
# file to get credential information.
import requests
import boartty.config
import boartty.sync
import logging # noqa
from pprint import pprint as p # noqa
class App(object):
pass
def get_tasks(sync):
task_list = []
for story in sync.get('/v1/stories?tags=zuulv3'):
print("Story %s: %s" % (story['id'], story['title']))
for task in sync.get('/v1/stories/%s/tasks' % (story['id'])):
print(" %s" % (task['title'],))
task_list.append(task)
return task_list
def task_in_lane(task, lane):
for item in lane['worklist']['items']:
if 'task' in item and item['task']['id'] == task['id']:
return True
return False
def add_task(sync, task, lane):
print("Add task %s to %s" % (task['id'], lane['worklist']['id']))
r = sync.post('v1/worklists/%s/items/' % lane['worklist']['id'],
dict(item_id=task['id'],
item_type='task',
list_position=0))
print(r)
def remove_task(sync, task, lane):
print("Remove task %s from %s" % (task['id'], lane['worklist']['id']))
for item in lane['worklist']['items']:
if 'task' in item and item['task']['id'] == task['id']:
r = sync.delete('v1/worklists/%s/items/' % lane['worklist']['id'],
dict(item_id=item['id']))
print(r)
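
# Map each StoryBoard task status to the board lanes it may appear in; a
# value of None means tasks with that status are removed from every lane.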
MAP = {
'todo': ['New', 'Backlog', 'Todo'],
'inprogress': ['In Progress', 'Blocked'],
'review': ['In Progress', 'Blocked'],
'merged': None,
'invalid': None,
}
def main():
requests.packages.urllib3.disable_warnings()
# logging.basicConfig(level=logging.DEBUG)
app = App()
app.config = boartty.config.Config('openstack')
sync = boartty.sync.Sync(app, False)
board = sync.get('v1/boards/41')
tasks = get_tasks(sync)
lanes = dict()
for lane in board['lanes']:
lanes[lane['worklist']['title']] = lane
for task in tasks:
ok_lanes = MAP[task['status']]
task_found = False
for lane_name, lane in lanes.items():
if task_in_lane(task, lane):
if ok_lanes and lane_name in ok_lanes:
task_found = True
else:
remove_task(sync, task, lane)
if ok_lanes and not task_found:
add_task(sync, task, lanes[ok_lanes[0]])
if __name__ == '__main__':
main() | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/update-storyboard.py | update-storyboard.py |
# Analyze the contents of the ZK tree (whether in ZK or a dump on the
# local filesystem) to identify large objects.
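#
# Example invocations (hosts, paths and certificate names are illustrative):
#
#   Analyze a previously dumped copy of the tree on the local filesystem,
#   printing human-readable sizes and only nodes larger than 1 MiB:
#     python3 zk-analyze.py --path /tmp/zk-dump -H -l 1M
#
#   Analyze a live cluster over TLS, limiting output to three levels deep:
#     python3 zk-analyze.py --host zk01.example.com:2281 \
#         --cert client.pem --key clientkey.pem --ca cacert.pem -H -d 3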
import argparse
import json
import os
import sys
import zlib
import kazoo.client
KB = 1024
MB = 1024**2
GB = 1024**3
def convert_human(size):
if size >= GB:
return f'{int(size/GB)}G'
if size >= MB:
return f'{int(size/MB)}M'
if size >= KB:
return f'{int(size/KB)}K'
if size > 0:
return f'{size}B'
return '0'
def convert_null(size):
return size
def unconvert_human(size):
suffix = size[-1]
val = size[:-1]
if suffix in ['G', 'g']:
return int(val) * GB
if suffix in ['M', 'm']:
return int(val) * MB
if suffix in ['K', 'k']:
return int(val) * KB
return int(size)
class SummaryLine:
def __init__(self, kind, path, size=0, zk_size=0):
self.kind = kind
self.path = path
self.size = size
self.zk_size = zk_size
self.attrs = {}
self.children = []
@property
def tree_size(self):
return sum([x.tree_size for x in self.children] + [self.size])
@property
def zk_tree_size(self):
return sum([x.zk_tree_size for x in self.children] + [self.zk_size])
def add(self, child):
self.children.append(child)
def __str__(self):
indent = 0
return self.toStr(indent)
def matchesLimit(self, limit, zk):
if not limit:
return True
if zk:
size = self.zk_size
else:
size = self.size
if size >= limit:
return True
for child in self.children:
if child.matchesLimit(limit, zk):
return True
return False
def toStr(self, indent, depth=None, conv=convert_null, limit=0, zk=False):
"""Convert this item and its children to a str representation
:param indent int: How many levels to indent
:param depth int: How many levels deep to display
:param conv func: A function to convert sizes to text
:param limit int: Don't display items smaller than this
:param zk bool: Whether to use the data size (False)
or ZK storage size (True)
"""
if depth and indent >= depth:
return ''
if self.matchesLimit(limit, zk):
attrs = ' '.join([f'{k}={conv(v)}' for k, v in self.attrs.items()])
if attrs:
attrs = ' ' + attrs
if zk:
size = conv(self.zk_size)
tree_size = conv(self.zk_tree_size)
else:
size = conv(self.size)
tree_size = conv(self.tree_size)
ret = (' ' * indent + f"{self.kind} {self.path} "
f"size={size} tree={tree_size}{attrs}\n")
for child in self.children:
ret += child.toStr(indent + 1, depth, conv, limit, zk)
else:
ret = ''
return ret
class Data:
def __init__(self, path, raw, zk_size=None, failed=False):
self.path = path
self.raw = raw
self.failed = failed
self.zk_size = zk_size or len(raw)
if not failed:
self.data = json.loads(raw)
else:
print(f"!!! {path} failed to load data")
self.data = {}
@property
def size(self):
return len(self.raw)
class Tree:
def getNode(self, path):
pass
def listChildren(self, path):
pass
def listConnections(self):
return self.listChildren('/zuul/cache/connection')
def getBranchCache(self, connection):
return self.getShardedNode(f'/zuul/cache/connection/{connection}'
'/branches/data')
def listCacheKeys(self, connection):
return self.listChildren(f'/zuul/cache/connection/{connection}/cache')
def getCacheKey(self, connection, key):
return self.getNode(f'/zuul/cache/connection/{connection}/cache/{key}')
def listCacheData(self, connection):
return self.listChildren(f'/zuul/cache/connection/{connection}/data')
def getCacheData(self, connection, key):
return self.getShardedNode(f'/zuul/cache/connection/{connection}'
f'/data/{key}')
def listTenants(self):
return self.listChildren('/zuul/tenant')
def listPipelines(self, tenant):
return self.listChildren(f'/zuul/tenant/{tenant}/pipeline')
def getPipeline(self, tenant, pipeline):
return self.getNode(f'/zuul/tenant/{tenant}/pipeline/{pipeline}')
def getItems(self, tenant, pipeline):
pdata = self.getPipeline(tenant, pipeline)
for queue in pdata.data.get('queues', []):
qdata = self.getNode(queue)
for item in qdata.data.get('queue', []):
idata = self.getNode(item)
yield idata
def listBuildsets(self, item):
return self.listChildren(f'{item}/buildset')
def getBuildset(self, item, buildset):
return self.getNode(f'{item}/buildset/{buildset}')
def listJobs(self, buildset):
return self.listChildren(f'{buildset}/job')
def getJob(self, buildset, job_name):
return self.getNode(f'{buildset}/job/{job_name}')
def listBuilds(self, buildset, job_name):
return self.listChildren(f'{buildset}/job/{job_name}/build')
def getBuild(self, buildset, job_name, build):
return self.getNode(f'{buildset}/job/{job_name}/build/{build}')
class FilesystemTree(Tree):
def __init__(self, root):
self.root = root
def getNode(self, path):
path = path.lstrip('/')
fullpath = os.path.join(self.root, path)
if not os.path.exists(fullpath):
return Data(path, '', failed=True)
try:
with open(os.path.join(fullpath, 'ZKDATA'), 'rb') as f:
zk_data = f.read()
data = zk_data
try:
data = zlib.decompress(zk_data)
except Exception:
pass
return Data(path, data, zk_size=len(zk_data))
except Exception:
return Data(path, '', failed=True)
def getShardedNode(self, path):
path = path.lstrip('/')
fullpath = os.path.join(self.root, path)
if not os.path.exists(fullpath):
return Data(path, '', failed=True)
shards = sorted([x for x in os.listdir(fullpath)
if x != 'ZKDATA'])
data = b''
compressed_data_len = 0
try:
for shard in shards:
with open(os.path.join(fullpath, shard, 'ZKDATA'), 'rb') as f:
compressed_data = f.read()
compressed_data_len += len(compressed_data)
data += zlib.decompress(compressed_data)
return Data(path, data, zk_size=compressed_data_len)
except Exception:
return Data(path, data, failed=True)
def listChildren(self, path):
path = path.lstrip('/')
fullpath = os.path.join(self.root, path)
if not os.path.exists(fullpath):
return []
return [x for x in os.listdir(fullpath)
if x != 'ZKDATA']
class ZKTree(Tree):
def __init__(self, host, cert, key, ca):
kwargs = {}
if cert:
kwargs['use_ssl'] = True
kwargs['keyfile'] = key
kwargs['certfile'] = cert
kwargs['ca'] = ca
self.client = kazoo.client.KazooClient(host, **kwargs)
self.client.start()
def getNode(self, path):
path = path.lstrip('/')
if not self.client.exists(path):
return Data(path, '', failed=True)
try:
zk_data, _ = self.client.get(path)
data = zk_data
try:
data = zlib.decompress(zk_data)
except Exception:
pass
return Data(path, data, zk_size=len(zk_data))
except Exception:
return Data(path, '', failed=True)
def getShardedNode(self, path):
path = path.lstrip('/')
if not self.client.exists(path):
return Data(path, '', failed=True)
shards = sorted(self.listChildren(path))
data = b''
compressed_data_len = 0
try:
for shard in shards:
compressed_data, _ = self.client.get(os.path.join(path, shard))
compressed_data_len += len(compressed_data)
data += zlib.decompress(compressed_data)
return Data(path, data, zk_size=compressed_data_len)
except Exception:
return Data(path, data, failed=True)
def listChildren(self, path):
path = path.lstrip('/')
try:
return self.client.get_children(path)
except kazoo.client.NoNodeError:
return []
class Analyzer:
def __init__(self, args):
if args.path:
self.tree = FilesystemTree(args.path)
else:
self.tree = ZKTree(args.host, args.cert, args.key, args.ca)
if args.depth is not None:
self.depth = int(args.depth)
else:
self.depth = None
if args.human:
self.conv = convert_human
else:
self.conv = convert_null
if args.limit:
self.limit = unconvert_human(args.limit)
else:
self.limit = 0
self.use_zk_size = args.zk_size
def summarizeItem(self, item):
# Start with an item
item_summary = SummaryLine('Item', item.path, item.size, item.zk_size)
buildsets = self.tree.listBuildsets(item.path)
for bs_i, bs_id in enumerate(buildsets):
# Add each buildset
buildset = self.tree.getBuildset(item.path, bs_id)
buildset_summary = SummaryLine(
'Buildset', buildset.path,
buildset.size, buildset.zk_size)
item_summary.add(buildset_summary)
# Some attributes are offloaded, gather them and include
# the size.
for x in ['merge_repo_state', 'extra_repo_state', 'files',
'config_errors']:
if buildset.data.get(x):
node = self.tree.getShardedNode(buildset.data.get(x))
buildset_summary.attrs[x] = \
self.use_zk_size and node.zk_size or node.size
buildset_summary.size += node.size
buildset_summary.zk_size += node.zk_size
jobs = self.tree.listJobs(buildset.path)
for job_i, job_name in enumerate(jobs):
# Add each job
job = self.tree.getJob(buildset.path, job_name)
job_summary = SummaryLine('Job', job.path,
job.size, job.zk_size)
buildset_summary.add(job_summary)
# Handle offloaded job data
for job_attr in ('artifact_data',
'extra_variables',
'group_variables',
'host_variables',
'secret_parent_data',
'variables',
'parent_data',
'secrets'):
job_data = job.data.get(job_attr, None)
if job_data and job_data['storage'] == 'offload':
node = self.tree.getShardedNode(job_data['path'])
job_summary.attrs[job_attr] = \
self.use_zk_size and node.zk_size or node.size
job_summary.size += node.size
job_summary.zk_size += node.zk_size
builds = self.tree.listBuilds(buildset.path, job_name)
for build_i, build_id in enumerate(builds):
# Add each build
build = self.tree.getBuild(
buildset.path, job_name, build_id)
build_summary = SummaryLine(
'Build', build.path, build.size, build.zk_size)
job_summary.add(build_summary)
# Add the offloaded build attributes
result_len = 0
result_zk_len = 0
if build.data.get('_result_data'):
result_data = self.tree.getShardedNode(
build.data['_result_data'])
result_len += result_data.size
result_zk_len += result_data.zk_size
if build.data.get('_secret_result_data'):
secret_result_data = self.tree.getShardedNode(
build.data['_secret_result_data'])
result_len += secret_result_data.size
result_zk_len += secret_result_data.zk_size
build_summary.attrs['results'] = \
self.use_zk_size and result_zk_len or result_len
build_summary.size += result_len
build_summary.zk_size += result_zk_len
sys.stdout.write(item_summary.toStr(0, self.depth, self.conv,
self.limit, self.use_zk_size))
def summarizePipelines(self):
for tenant_name in self.tree.listTenants():
for pipeline_name in self.tree.listPipelines(tenant_name):
for item in self.tree.getItems(tenant_name, pipeline_name):
self.summarizeItem(item)
def summarizeConnectionCache(self, connection_name):
connection_summary = SummaryLine('Connection', connection_name, 0, 0)
branch_cache = self.tree.getBranchCache(connection_name)
branch_summary = SummaryLine(
'Branch Cache', connection_name,
branch_cache.size, branch_cache.zk_size)
connection_summary.add(branch_summary)
cache_key_summary = SummaryLine(
'Change Cache Keys', connection_name, 0, 0)
cache_key_summary.attrs['count'] = 0
connection_summary.add(cache_key_summary)
for key in self.tree.listCacheKeys(connection_name):
cache_key = self.tree.getCacheKey(connection_name, key)
cache_key_summary.size += cache_key.size
cache_key_summary.zk_size += cache_key.zk_size
cache_key_summary.attrs['count'] += 1
cache_data_summary = SummaryLine(
'Change Cache Data', connection_name, 0, 0)
cache_data_summary.attrs['count'] = 0
connection_summary.add(cache_data_summary)
for key in self.tree.listCacheData(connection_name):
cache_data = self.tree.getCacheData(connection_name, key)
cache_data_summary.size += cache_data.size
cache_data_summary.zk_size += cache_data.zk_size
cache_data_summary.attrs['count'] += 1
sys.stdout.write(connection_summary.toStr(
0, self.depth, self.conv, self.limit, self.use_zk_size))
def summarizeConnections(self):
for connection_name in self.tree.listConnections():
self.summarizeConnectionCache(connection_name)
def summarize(self):
self.summarizeConnections()
self.summarizePipelines()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--path',
help='Filesystem path for previously dumped data')
parser.add_argument('--host',
help='ZK host string (exclusive with --path)')
parser.add_argument('--cert', help='Path to TLS certificate')
parser.add_argument('--key', help='Path to TLS key')
parser.add_argument('--ca', help='Path to TLS CA cert')
parser.add_argument('-d', '--depth', help='Limit depth when printing')
parser.add_argument('-H', '--human', dest='human', action='store_true',
help='Use human-readable sizes')
parser.add_argument('-l', '--limit', dest='limit',
help='Only print nodes greater than limit')
parser.add_argument('-Z', '--zksize', dest='zk_size', action='store_true',
help='Use the possibly compressed ZK storage size '
'instead of plain data size')
args = parser.parse_args()
az = Analyzer(args)
az.summarize() | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/zk-analyze.py | zk-analyze.py |
import gzip
import os
import re
import yaml
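# Scrape Zuul scheduler logs (/var/log/zuul/zuul.log*) for "Nodeset ... was in
# use for ..." lines and summarize node-time consumption by repository, job,
# and (optionally, via a projects.yaml file) logical project. The script is
# meant to be run directly on a scheduler host; see
# LogScraper.calculate_project_usage for the expected projects.yaml format.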
def get_log_age(path):
filename = os.path.basename(path)
parts = filename.split('.')
if len(parts) < 4:
return 0
else:
return int(parts[2])
class LogScraper(object):
# Example log line
# 2018-10-26 16:14:47,527 INFO zuul.nodepool: Nodeset <NodeSet two-centos-7-nodes [<Node 0000058431 ('primary',):centos-7>, <Node 0000058468 ('secondary',):centos-7>]> with 2 nodes was in use for 6241.08082151413 seconds for build <Build 530c4ca7af9e44dcb535e7074258e803 of tripleo-ci-centos-7-scenario008-multinode-oooq-container voting:False on <Worker ze05.openstack.org>> for project openstack/tripleo-quickstart-extras # noqa
r = re.compile(r'(?P<timestamp>\d+-\d+-\d+ \d\d:\d\d:\d\d,\d\d\d) INFO zuul.nodepool: Nodeset <.*> with (?P<nodes>\d+) nodes was in use for (?P<secs>\d+(.[\d\-e]+)?) seconds for build <Build \w+ of (?P<job>[^\s]+) voting:\w+ on .* for project (?P<repos>[^\s]+)') # noqa
def __init__(self):
self.repos = {}
self.sorted_repos = []
self.jobs = {}
self.sorted_jobs = []
self.total_usage = 0.0
self.projects = {}
self.sorted_projects = []
self.start_time = None
self.end_time = None
def scrape_file(self, fn):
if fn.endswith('.gz'):
open_f = gzip.open
else:
open_f = open
with open_f(fn, 'rt') as f:
for line in f:
if 'nodes was in use for' in line:
m = self.r.match(line)
if not m:
continue
g = m.groupdict()
repo = g['repos']
secs = float(g['secs'])
nodes = int(g['nodes'])
job = g['job']
if not self.start_time:
self.start_time = g['timestamp']
self.end_time = g['timestamp']
if repo not in self.repos:
self.repos[repo] = {}
self.repos[repo]['total'] = 0.0
node_time = nodes * secs
self.total_usage += node_time
self.repos[repo]['total'] += node_time
if job not in self.jobs:
self.jobs[job] = 0.0
if job not in self.repos[repo]:
self.repos[repo][job] = 0.0
self.jobs[job] += node_time
self.repos[repo][job] += node_time
def list_log_files(self, path='/var/log/zuul'):
ret = []
entries = os.listdir(path)
prefix = os.path.join(path, 'zuul.log')
for entry in entries:
entry = os.path.join(path, entry)
if os.path.isfile(entry) and entry.startswith(prefix):
ret.append(entry)
ret.sort(key=get_log_age, reverse=True)
return ret
def sort_repos(self):
for repo in self.repos:
self.sorted_repos.append((repo, self.repos[repo]['total']))
self.sorted_repos.sort(key=lambda x: x[1], reverse=True)
def sort_jobs(self):
for job, usage in self.jobs.items():
self.sorted_jobs.append((job, usage))
self.sorted_jobs.sort(key=lambda x: x[1], reverse=True)
def calculate_project_usage(self):
'''Group usage by logical project/effort
It is often the case that a single repo doesn't capture the work
of a logical project or effort. If this is the case in your situation
you can create a projects.yaml file that groups together repos
under logical project names to report usage by that logical grouping.
The projects.yaml should be in your current directory and have this
format:
project_name:
deliverables:
logical_deliverable_name:
repos:
- repo1
- repo2
project_name2:
deliverables:
logical_deliverable_name2:
repos:
- repo3
- repo4
'''
if not os.path.exists('projects.yaml'):
return self.sorted_projects
with open('projects.yaml') as f:
            y = yaml.safe_load(f)
for name, v in y.items():
self.projects[name] = 0.0
for deliverable in v['deliverables'].values():
for repo in deliverable['repos']:
if repo in self.repos:
self.projects[name] += self.repos[repo]['total']
for project, usage in self.projects.items():
self.sorted_projects.append((project, usage))
self.sorted_projects.sort(key=lambda x: x[1], reverse=True)
scraper = LogScraper()
for fn in scraper.list_log_files():
scraper.scrape_file(fn)
print('For period from %s to %s' % (scraper.start_time, scraper.end_time))
print('Total node time used: %.2fs' % scraper.total_usage)
print()
scraper.calculate_project_usage()
if scraper.sorted_projects:
print('Top 20 logical projects by resource usage:')
for project, total in scraper.sorted_projects[:20]:
percentage = (total / scraper.total_usage) * 100
print('%s: %.2fs, %.2f%%' % (project, total, percentage))
print()
scraper.sort_repos()
print('Top 20 repos by resource usage:')
for repo, total in scraper.sorted_repos[:20]:
percentage = (total / scraper.total_usage) * 100
print('%s: %.2fs, %.2f%%' % (repo, total, percentage))
print()
scraper.sort_jobs()
print('Top 20 jobs by resource usage:')
for job, total in scraper.sorted_jobs[:20]:
percentage = (total / scraper.total_usage) * 100
print('%s: %.2fs, %.2f%%' % (job, total, percentage))
print() | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/node_usage.py | node_usage.py |
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import base64
import json
import math
import os
import re
import subprocess
import sys
import tempfile
import textwrap
import ssl
# We need to import Request and urlopen differently for Python 2 and 3.
try:
from urllib.request import Request
from urllib.request import urlopen
from urllib.parse import urlparse
except ImportError:
from urllib2 import Request
from urllib2 import urlopen
from urlparse import urlparse
DESCRIPTION = """Encrypt a secret for Zuul.
This program fetches a project-specific public key from a Zuul server and
uses that to encrypt a secret. The only pre-requisite is an installed
OpenSSL binary.
"""
def main():
parser = argparse.ArgumentParser(description=DESCRIPTION)
parser.add_argument('url',
help="The base URL of the zuul server. "
"E.g., https://zuul.example.com/ or path"
" to project public key file. E.g.,"
" file:///path/to/key.pub")
parser.add_argument('project', default=None, nargs="?",
help="The name of the project. Required when using"
" the Zuul API to fetch the public key.")
parser.add_argument('--tenant',
default=None,
help="The name of the Zuul tenant. This may be "
"required in a multi-tenant environment.")
parser.add_argument('--strip', default=None,
help='Unused, kept for backward compatibility.')
parser.add_argument('--no-strip', action='store_true', default=False,
help="Do not strip whitespace from beginning or "
"end of input.")
parser.add_argument('--infile',
default=None,
help="A filename whose contents will be encrypted. "
"If not supplied, the value will be read from "
"standard input.")
parser.add_argument('--outfile',
default=None,
help="A filename to which the encrypted value will be "
"written. If not supplied, the value will be written "
"to standard output.")
parser.add_argument('--insecure', action='store_true', default=False,
help="Do not verify remote certificate")
args = parser.parse_args()
# We should not use unencrypted connections for retrieving the public key.
    # Otherwise our secret can be compromised. Only the 'file' and 'https'
    # schemes are considered safe.
url = urlparse(args.url)
if url.scheme not in ('file', 'https'):
sys.stderr.write("WARNING: Retrieving encryption key via an "
"unencrypted connection. Your secret may get "
"compromised.\n")
ssl_ctx = None
if url.scheme == 'file':
req = Request(args.url)
else:
if args.insecure:
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE
# Check if tenant is white label
req = Request("%s/api/info" % (args.url.rstrip('/'),))
info = json.loads(urlopen(req, context=ssl_ctx).read().decode('utf8'))
api_tenant = info.get('info', {}).get('tenant')
if not api_tenant and not args.tenant:
print("Error: the --tenant argument is required")
exit(1)
if api_tenant:
req = Request("%s/api/key/%s.pub" % (
args.url.rstrip('/'), args.project))
else:
req = Request("%s/api/tenant/%s/key/%s.pub" % (
args.url.rstrip('/'), args.tenant, args.project))
try:
pubkey = urlopen(req, context=ssl_ctx)
except Exception:
sys.stderr.write(
"ERROR: Couldn't retrieve project key via %s\n" % req.full_url)
raise
if args.infile:
with open(args.infile) as f:
plaintext = f.read()
else:
plaintext = sys.stdin.read()
plaintext = plaintext.encode("utf-8")
if not args.no_strip:
plaintext = plaintext.strip()
pubkey_file = tempfile.NamedTemporaryFile(delete=False)
try:
pubkey_file.write(pubkey.read())
pubkey_file.close()
p = subprocess.Popen(['openssl', 'rsa', '-text',
'-pubin', '-in',
pubkey_file.name],
stdout=subprocess.PIPE)
(stdout, stderr) = p.communicate()
if p.returncode != 0:
raise Exception("Return code %s from openssl" % p.returncode)
output = stdout.decode('utf-8')
openssl_version = subprocess.check_output(
['openssl', 'version']).split()[1]
if openssl_version.startswith(b'0.'):
key_length_re = r'^Modulus \((?P<key_length>\d+) bit\):$'
else:
key_length_re = r'^(|RSA )Public-Key: \((?P<key_length>\d+) bit\)$'
m = re.match(key_length_re, output, re.MULTILINE)
nbits = int(m.group('key_length'))
nbytes = int(nbits / 8)
max_bytes = nbytes - 42 # PKCS1-OAEP overhead
chunks = int(math.ceil(float(len(plaintext)) / max_bytes))
ciphertext_chunks = []
print("Public key length: {} bits ({} bytes)".format(nbits, nbytes))
print("Max plaintext length per chunk: {} bytes".format(max_bytes))
print("Input plaintext length: {} bytes".format(len(plaintext)))
print("Number of chunks: {}".format(chunks))
for count in range(chunks):
chunk = plaintext[int(count * max_bytes):
int((count + 1) * max_bytes)]
p = subprocess.Popen(['openssl', 'rsautl', '-encrypt',
'-oaep', '-pubin', '-inkey',
pubkey_file.name],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
(stdout, stderr) = p.communicate(chunk)
if p.returncode != 0:
raise Exception("Return code %s from openssl" % p.returncode)
ciphertext_chunks.append(base64.b64encode(stdout).decode('utf-8'))
finally:
os.unlink(pubkey_file.name)
output = textwrap.dedent(
'''
- secret:
name: <name>
data:
<fieldname>: !encrypted/pkcs1-oaep
''')
twrap = textwrap.TextWrapper(width=79,
initial_indent=' ' * 8,
subsequent_indent=' ' * 10)
for chunk in ciphertext_chunks:
chunk = twrap.fill('- ' + chunk)
output += chunk + '\n'
if args.outfile:
with open(args.outfile, "w") as f:
f.write(output)
else:
print(output)
if __name__ == '__main__':
print(
"This script is deprecated. Use `zuul-client encrypt` instead. "
"Please refer to https://zuul-ci.org/docs/zuul-client/ "
"for more details on how to use zuul-client."
)
main() | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/encrypt_secret.py | encrypt_secret.py |
# Copyright 2020 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Manage a CA for Zookeeper
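# Example usage (paths and hostnames are illustrative; CAROOT must already
# exist as a directory):
#   ./zk-ca.sh /etc/zookeeper/ca                   # create the CA + client cert
#   ./zk-ca.sh /etc/zookeeper/ca zk01.example.com  # also issue a server cert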
CAROOT=$1
SERVER=$2
SUBJECT='/C=US/ST=California/L=Oakland/O=Company Name/OU=Org'
TOOLSDIR=$(dirname $0)
ABSTOOLSDIR=$(cd $TOOLSDIR ;pwd)
CONFIG="-config $ABSTOOLSDIR/openssl.cnf"
make_ca() {
mkdir $CAROOT/demoCA
mkdir $CAROOT/demoCA/reqs
mkdir $CAROOT/demoCA/newcerts
mkdir $CAROOT/demoCA/crl
mkdir $CAROOT/demoCA/private
chmod 700 $CAROOT/demoCA/private
touch $CAROOT/demoCA/index.txt
touch $CAROOT/demoCA/index.txt.attr
mkdir $CAROOT/certs
mkdir $CAROOT/keys
mkdir $CAROOT/keystores
chmod 700 $CAROOT/keys
chmod 700 $CAROOT/keystores
openssl req $CONFIG -new -nodes -subj "$SUBJECT/CN=caroot" \
-keyout $CAROOT/demoCA/private/cakey.pem \
-out $CAROOT/demoCA/reqs/careq.pem
openssl ca $CONFIG -create_serial -days 3560 -batch -selfsign -extensions v3_ca \
-out $CAROOT/demoCA/cacert.pem \
-keyfile $CAROOT/demoCA/private/cakey.pem \
-infiles $CAROOT/demoCA/reqs/careq.pem
cp $CAROOT/demoCA/cacert.pem $CAROOT/certs
}
make_client() {
openssl req $CONFIG -new -nodes -subj "$SUBJECT/CN=client" \
-keyout $CAROOT/keys/clientkey.pem \
-out $CAROOT/demoCA/reqs/clientreq.pem
openssl ca $CONFIG -batch -policy policy_anything -days 3560 \
-out $CAROOT/certs/client.pem \
-infiles $CAROOT/demoCA/reqs/clientreq.pem
}
make_server() {
openssl req $CONFIG -new -nodes -subj "$SUBJECT/CN=$SERVER" \
-keyout $CAROOT/keys/${SERVER}key.pem \
-out $CAROOT/demoCA/reqs/${SERVER}req.pem
openssl ca $CONFIG -batch -policy policy_anything -days 3560 \
-out $CAROOT/certs/$SERVER.pem \
-infiles $CAROOT/demoCA/reqs/${SERVER}req.pem
cat $CAROOT/certs/$SERVER.pem $CAROOT/keys/${SERVER}key.pem \
> $CAROOT/keystores/$SERVER.pem
}
help() {
echo "$0 CAROOT [SERVER]"
echo
echo " CAROOT is the path to a directory in which to store the CA"
echo " and certificates."
echo " SERVER is the FQDN of a server for which a certificate should"
echo " be generated"
}
if [ ! -d "$CAROOT" ]; then
echo "CAROOT must be a directory"
help
exit 1
fi
cd $CAROOT
CAROOT=`pwd`
if [ ! -d "$CAROOT/demoCA" ]; then
echo 'Generate CA'
make_ca
echo 'Generate client certificate'
make_client
fi
if [ -f "$CAROOT/certs/$SERVER.pem" ]; then
echo "Certificate for $SERVER already exists"
exit 0
fi
if [ "$SERVER" != "" ]; then
make_server
fi | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/zk-ca.sh | zk-ca.sh |
# pylint: disable=locally-disabled, invalid-name
"""
Zuul references cleaner.
Clear up references under /refs/zuul/ by inspecting the age of the commit the
reference points to. If the commit date is older than a number of days
specified by --until, the reference is deleted from the git repository.
Use --dry-run --verbose to finely inspect the script behavior.
"""
import argparse
import git
import logging
import time
import sys
NOW = int(time.time())
DEFAULT_DAYS = 360
ZUUL_REF_PREFIX = 'refs/zuul/'
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('--until', dest='days_ago', default=DEFAULT_DAYS, type=int,
                    help='references older than this number of days will '
'be deleted. Default: %s' % DEFAULT_DAYS)
parser.add_argument('-n', '--dry-run', dest='dryrun', action='store_true',
help='do not delete references')
parser.add_argument('-v', '--verbose', dest='verbose', action='store_true',
help='set log level from info to debug')
parser.add_argument('gitrepo', help='path to a Zuul git repository')
args = parser.parse_args()
logging.basicConfig()
log = logging.getLogger('zuul-clear-refs')
if args.verbose:
log.setLevel(logging.DEBUG)
else:
log.setLevel(logging.INFO)
try:
repo = git.Repo(args.gitrepo)
except git.exc.InvalidGitRepositoryError:
log.error("Invalid git repo: %s" % args.gitrepo)
sys.exit(1)
for ref in repo.references:
if not ref.path.startswith(ZUUL_REF_PREFIX):
continue
if type(ref) is not git.refs.reference.Reference:
# Paranoia: ignore heads/tags/remotes ..
continue
try:
commit_ts = ref.commit.committed_date
except LookupError:
# GitPython does not properly handle PGP signed tags
log.exception("Error in commit: %s, ref: %s. Type: %s",
ref.commit, ref.path, type(ref))
continue
commit_age = int((NOW - commit_ts) / 86400) # days
log.debug(
"%s at %s is %3s days old",
ref.commit,
ref.path,
commit_age,
)
if commit_age > args.days_ago:
if args.dryrun:
log.info("Would delete old ref: %s (%s)", ref.path, ref.commit)
else:
log.info("Deleting old ref: %s (%s)", ref.path, ref.commit)
ref.delete(repo, ref.path) | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/zuul-clear-refs.py | zuul-clear-refs.py |
# Inspect ZK contents like zk-shell; handles compressed and sharded
# data.
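#
# Example session (host and certificate paths are illustrative):
#   python3 zk-shell.py zk01.example.com:2281 --cert client.pem \
#       --key clientkey.pem --ca cacert.pem
#   /> ls zuul
#   /> cd zuul/tenant
#   /zuul/tenant> get <node> -v
#   /zuul/tenant> unshard <sharded-node> -v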
import argparse
import pathlib
import cmd
import sys
import textwrap
import zlib
import kazoo.client
from kazoo.exceptions import NoNodeError
def resolve_path(path, rest):
newpath = path / rest
newparts = []
for part in newpath.parts:
if part == '.':
continue
elif part == '..':
newparts.pop()
else:
newparts.append(part)
return pathlib.PurePosixPath(*newparts)
class REPL(cmd.Cmd):
def __init__(self, args):
self.path = pathlib.PurePosixPath('/')
super().__init__()
kwargs = {}
if args.cert:
kwargs['use_ssl'] = True
kwargs['keyfile'] = args.key
kwargs['certfile'] = args.cert
kwargs['ca'] = args.ca
self.client = kazoo.client.KazooClient(args.host, **kwargs)
self.client.start()
@property
def prompt(self):
return f'{self.path}> '
def do_EOF(self, path):
sys.exit(0)
def do_ls(self, path):
'List znodes: ls [PATH]'
if path:
mypath = self.path / path
else:
mypath = self.path
try:
for child in self.client.get_children(str(mypath)):
print(child)
except NoNodeError:
print(f'No such node: {mypath}')
def do_cd(self, path):
'Change the working path: cd PATH'
if path:
newpath = resolve_path(self.path, path)
if self.client.exists(str(newpath)):
self.path = newpath
else:
print(f'No such node: {newpath}')
    def do_pwd(self, arg):
'Print the working path'
print(self.path)
def help_get(self):
print(textwrap.dedent(self.do_get.__doc__))
def do_get(self, args):
"""\
Get znode value: get PATH [-v]
-v: output metadata about the path
"""
args = args.split(' ')
path = args[0]
args = args[1:]
path = resolve_path(self.path, path)
try:
compressed_data, zstat = self.client.get(str(path))
except NoNodeError:
print(f'No such node: {path}')
return
was_compressed = False
try:
data = zlib.decompress(compressed_data)
was_compressed = True
except zlib.error:
data = compressed_data
if '-v' in args:
print(f'Compressed: {was_compressed}')
print(f'Size: {len(data)}')
print(f'Compressed size: {len(compressed_data)}')
print(f'Zstat: {zstat}')
print(data)
def help_unshard(self):
print(textwrap.dedent(self.do_unshard.__doc__))
def do_unshard(self, args):
"""\
Get the unsharded value: get PATH [-v]
-v: output metadata about the path
"""
args = args.split(' ')
path = args[0]
args = args[1:]
path = resolve_path(self.path, path)
try:
shards = sorted(self.client.get_children(str(path)))
except NoNodeError:
print(f'No such node: {path}')
return
compressed_data = b''
data = b''
for shard in shards:
d, _ = self.client.get(str(path / shard))
compressed_data += d
if compressed_data:
data = zlib.decompress(compressed_data)
if '-v' in args:
print(f'Size: {len(data)}')
print(f'Compressed size: {len(compressed_data)}')
print(data)
def do_rm(self, args):
'Delete znode: rm PATH [-r]'
args = args.split(' ')
path = args[0]
args = args[1:]
path = resolve_path(self.path, path)
if '-r' in args:
recursive = True
else:
recursive = False
try:
self.client.delete(str(path), recursive=recursive)
except NoNodeError:
print(f'No such node: {path}')
def main():
parser = argparse.ArgumentParser()
parser.add_argument('host', help='ZK host string')
parser.add_argument('--cert', help='Path to TLS certificate')
parser.add_argument('--key', help='Path to TLS key')
parser.add_argument('--ca', help='Path to TLS CA cert')
args = parser.parse_args()
repl = REPL(args)
repl.cmdloop()
if __name__ == '__main__':
main() | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/zk-shell.py | zk-shell.py |
# Copyright (c) 2016 NodeSource LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# The above license is inferred from the
# https://github.com/nodesource/distributions source repository.
# Discussion, issues and change requests at:
# https://github.com/nodesource/distributions
#
# Script to install the NodeSource Node.js 10.x repo onto an
# Enterprise Linux or Fedora Core based system.
#
# This was downloaded from https://rpm.nodesource.com/setup_10.x
# A few modifications have been made.
SCRSUFFIX="_10.x"
NODENAME="Node.js 10.x"
NODEREPO="pub_10.x"
NODEPKG="nodejs"
print_status() {
local outp=$(echo "$1") # | sed -r 's/\\n/\\n## /mg')
echo
echo -e "## ${outp}"
echo
}
if test -t 1; then # if terminal
ncolors=$(which tput > /dev/null && tput colors) # supports color
if test -n "$ncolors" && test $ncolors -ge 8; then
termcols=$(tput cols)
bold="$(tput bold)"
underline="$(tput smul)"
standout="$(tput smso)"
normal="$(tput sgr0)"
black="$(tput setaf 0)"
red="$(tput setaf 1)"
green="$(tput setaf 2)"
yellow="$(tput setaf 3)"
blue="$(tput setaf 4)"
magenta="$(tput setaf 5)"
cyan="$(tput setaf 6)"
white="$(tput setaf 7)"
fi
fi
print_bold() {
title="$1"
text="$2"
echo
echo "${red}================================================================================${normal}"
echo "${red}================================================================================${normal}"
echo
echo -e " ${bold}${yellow}${title}${normal}"
echo
echo -en " ${text}"
echo
echo "${red}================================================================================${normal}"
echo "${red}================================================================================${normal}"
}
bail() {
echo 'Error executing command, exiting'
exit 1
}
exec_cmd_nobail() {
echo "+ $1"
bash -c "$1"
}
exec_cmd() {
exec_cmd_nobail "$1" || bail
}
node_deprecation_warning() {
if [[ "X${NODENAME}" == "Xio.js 1.x" ||
"X${NODENAME}" == "Xio.js 2.x" ||
"X${NODENAME}" == "Xio.js 3.x" ||
"X${NODENAME}" == "XNode.js 0.10" ||
"X${NODENAME}" == "XNode.js 0.12" ||
"X${NODENAME}" == "XNode.js 4.x LTS Argon" ||
"X${NODENAME}" == "XNode.js 5.x" ||
"X${NODENAME}" == "XNode.js 7.x" ]]; then
print_bold \
" DEPRECATION WARNING " "\
${bold}${NODENAME} is no longer actively supported!${normal}
${bold}You will not receive security or critical stability updates${normal} for this version.
You should migrate to a supported version of Node.js as soon as possible.
Use the installation script that corresponds to the version of Node.js you
wish to install. e.g.
* ${green}https://deb.nodesource.com/setup_8.x — Node.js v8 LTS \"Carbon\"${normal} (recommended)
* ${green}https://deb.nodesource.com/setup_10.x — Node.js v10 Current${normal}
Please see ${bold}https://github.com/nodejs/Release${normal} for details about which
version may be appropriate for you.
The ${bold}NodeSource${normal} Node.js distributions repository contains
information both about supported versions of Node.js and supported Linux
distributions. To learn more about usage, see the repository:
${bold}https://github.com/nodesource/distributions${normal}
"
echo
echo "Continuing in 20 seconds ..."
echo
sleep 20
fi
}
script_deprecation_warning() {
if [ "X${SCRSUFFIX}" == "X" ]; then
print_bold \
" SCRIPT DEPRECATION WARNING " "\
This script, located at ${bold}https://rpm.nodesource.com/setup${normal}, used to
install Node.js v0.10, is deprecated and will eventually be made inactive.
You should use the script that corresponds to the version of Node.js you
wish to install. e.g.
* ${green}https://deb.nodesource.com/setup_8.x — Node.js v8 LTS \"Carbon\"${normal} (recommended)
* ${green}https://deb.nodesource.com/setup_10.x — Node.js v10 Current${normal}
Please see ${bold}https://github.com/nodejs/Release${normal} for details about which
version may be appropriate for you.
The ${bold}NodeSource${normal} Node.js Linux distributions GitHub repository contains
information about which versions of Node.js and which Linux distributions
are supported and how to use the install scripts.
${bold}https://github.com/nodesource/distributions${normal}
"
echo
echo "Continuing in 20 seconds (press Ctrl-C to abort) ..."
echo
sleep 20
fi
}
setup() {
script_deprecation_warning
node_deprecation_warning
print_status "Installing the NodeSource ${NODENAME} repo..."
print_status "Inspecting system..."
if [ ! -x /bin/rpm ]; then
print_status """You don't appear to be running an Enterprise Linux based system,
please contact NodeSource at https://github.com/nodesource/distributions/issues
if you think this is incorrect or would like your distribution to be considered
for support.
"""
exit 1
fi
## Annotated section for auto extraction in test.sh
#-check-distro-#
## Check distro and arch
echo "+ rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release"
DISTRO_PKG=$(rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release)
echo "+ uname -m"
UNAME_ARCH=$(uname -m)
if [ "X${UNAME_ARCH}" == "Xi686" ]; then
DIST_ARCH=i386
elif [ "X${UNAME_ARCH}" == "Xx86_64" ]; then
DIST_ARCH=x86_64
else
print_status "\
You don't appear to be running a supported machine architecture: ${UNAME_ARCH}. \
Please contact NodeSource at \
https://github.com/nodesource/distributions/issues if you think this is \
incorrect or would like your architecture to be considered for support. \
"
exit 1
fi
if [[ $DISTRO_PKG =~ ^(redhat|centos|cloudlinux|sl)- ]]; then
DIST_TYPE=el
elif [[ $DISTRO_PKG =~ ^(enterprise|system)-release- ]]; then # Oracle Linux & Amazon Linux
DIST_TYPE=el
elif [[ $DISTRO_PKG =~ ^(fedora|korora)- ]]; then
DIST_TYPE=fc
else
print_status "\
You don't appear to be running a supported version of Enterprise Linux. \
Please contact NodeSource at \
https://github.com/nodesource/distributions/issues if you think this is \
incorrect or would like your architecture to be considered for support. \
Include your 'distribution package' name: ${DISTRO_PKG}. \
"
exit 1
fi
if [[ $DISTRO_PKG =~ ^system-release ]]; then
# Amazon Linux, for 2014.* use el7, older versions are unknown, perhaps el6
DIST_VERSION=7
else
## Using the redhat-release-server-X, centos-release-X, etc. pattern
## extract the major version number of the distro
DIST_VERSION=$(echo $DISTRO_PKG | sed -r 's/^[[:alpha:]]+-release(-server|-workstation|-client)?-([0-9]+).*$/\2/')
if ! [[ $DIST_VERSION =~ ^[0-9][0-9]?$ ]]; then
print_status "\
Could not determine your distribution version, you may not be running a \
supported version of Enterprise Linux. \
Please contact NodeSource at \
https://github.com/nodesource/distributions/issues if you think this is \
incorrect. Include your 'distribution package' name: ${DISTRO_PKG}. \
"
exit 1
fi
fi
## Given the distro, version and arch, construct the url for
## the appropriate nodesource-release package (it's noarch but
## we include the arch in the directory tree anyway)
RELEASE_URL_VERSION_STRING="${DIST_TYPE}${DIST_VERSION}"
RELEASE_URL="\
https://rpm.nodesource.com/${NODEREPO}/\
${DIST_TYPE}/\
${DIST_VERSION}/\
${DIST_ARCH}/\
nodesource-release-${RELEASE_URL_VERSION_STRING}-1.noarch.rpm"
#-check-distro-#
print_status "Confirming \"${DIST_TYPE}${DIST_VERSION}-${DIST_ARCH}\" is supported..."
## Simple fetch & fast-fail to see if the nodesource-release
## file exists for this distro/version/arch
exec_cmd_nobail "curl -sLf -o /dev/null '${RELEASE_URL}'"
RC=$?
if [[ $RC != 0 ]]; then
print_status "\
Your distribution, identified as \"${DISTRO_PKG}\", \
is not currently supported, please contact NodeSource at \
https://github.com/nodesource/distributions/issues \
if you think this is incorrect or would like your distribution to be considered for support"
exit 1
fi
## EPEL is needed for EL5, we don't install it if it's missing but
## we can give guidance
if [ "$DIST_TYPE" == "el" ] && [ "$DIST_VERSION" == "5" ]; then
print_status "Checking if EPEL is enabled..."
echo "+ yum repolist enabled 2> /dev/null | grep epel"
repolist=$(yum repolist enabled 2> /dev/null | grep epel)
if [ "X${repolist}" == "X" ]; then
print_status "Finding current EPEL release RPM..."
## We can scrape the html to find the latest epel-release (likely 5.4)
epel_url="http://dl.fedoraproject.org/pub/epel/5/${DIST_ARCH}/"
epel_release_view="${epel_url}repoview/epel-release.html"
echo "+ curl -s $epel_release_view | grep -oE 'epel-release-[0-9\-]+\.noarch\.rpm'"
epel=$(curl -s $epel_release_view | grep -oE 'epel-release-[0-9\-]+\.noarch\.rpm')
if [ "X${epel}" = "X" ]; then
print_status "Error: Could not find current EPEL release RPM!"
exit 1
fi
print_status """The EPEL (Extra Packages for Enterprise Linux) repository is a
prerequisite for installing Node.js on your operating system. Please
add it and re-run this setup script.
The EPEL repository RPM is available at:
${epel_url}${epel}
You can try installing with: \`rpm -ivh <url>\`
"""
exit 1
fi
fi
print_status "Downloading release setup RPM..."
## Two-step process to install the nodesource-release RPM,
## Download to a tmp file then install it directly with `rpm`.
## We don't rely on RPM's ability to fetch from HTTPS directly
echo "+ mktemp"
RPM_TMP=$(mktemp || bail)
exec_cmd "curl -sL -o '${RPM_TMP}' '${RELEASE_URL}'"
print_status "Installing release setup RPM..."
## --nosignature because nodesource-release contains the signature!
exec_cmd "rpm -i --nosignature --force '${RPM_TMP}'"
print_status "Cleaning up..."
exec_cmd "rm -f '${RPM_TMP}'"
print_status "Checking for existing installations..."
## Nasty consequences if you have an existing Node or npm package
## installed, need to inform if they are there
echo "+ rpm -qa 'node|npm' | grep -v nodesource"
EXISTING_NODE=$(rpm -qa 'node|npm|iojs' | grep -v nodesource)
if [ "X${EXISTING_NODE}" != "X" ]; then
print_status """Your system appears to already have Node.js installed from an alternative source.
Run \`${bold}sudo yum remove -y ${NODEPKG} npm${normal}\` to remove these first.
"""
fi
print_status """Run \`${bold}sudo yum install -y ${NODEPKG}${normal}\` to install ${NODENAME} and npm.
## You may also need development tools to build native addons:
sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install yarn
"""
exit 0
}
## Defer setup until we have the complete script
setup | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/install-js-repos-rpm.sh | install-js-repos-rpm.sh |
import argparse
import json
import sys
import datetime
import requests
from pathlib import Path
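# Scan recent builds of a Zuul deployment for untrusted playbooks that run the
# Ansible 'command' or 'shell' module on localhost. The script walks the
# zuul-web API of every tenant, downloads one job-output.json per unique job
# name (cached under /tmp/zuul-logs) and reports the builds that need fixing.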
def usage(argv):
two_weeks_ago = datetime.datetime.utcnow() - datetime.timedelta(days=14)
parser = argparse.ArgumentParser(
description="Look for unstrusted command in builds log")
parser.add_argument(
"--since", default=two_weeks_ago, help="Date in YYYY-MM-DD format")
parser.add_argument("zuul_url", help="The url of a zuul-web service")
args = parser.parse_args(argv)
args.zuul_url = args.zuul_url.rstrip("/")
if not args.zuul_url.endswith("/api"):
args.zuul_url += "/api"
if not isinstance(args.since, datetime.datetime):
args.since = datetime.datetime.strptime(args.since, "%Y-%m-%d")
return args
def get_tenants(zuul_url):
""" Fetch list of tenant names """
    is_whitelabel = requests.get(
        "%s/info" % zuul_url).json().get('tenant', None) is not None
    if is_whitelabel:
raise RuntimeError("Need multitenant api")
return [
tenant["name"]
for tenant in requests.get("%s/tenants" % zuul_url).json()
]
def is_build_in_range(build, since):
""" Check if a build is in range """
try:
build_date = datetime.datetime.strptime(
build["start_time"], "%Y-%m-%dT%H:%M:%S")
return build_date > since
except TypeError:
return False
def get_builds(zuul_builds_url, since):
""" Fecth list of builds that are in range """
builds = []
pos = 0
step = 50
while not builds or is_build_in_range(builds[-1], since):
url = "%s?skip=%d&limit=%d" % (zuul_builds_url, pos, step)
print("Querying %s" % url)
builds += requests.get(url).json()
pos += step
return builds
def filter_unique_builds(builds):
""" Filter the list of build to keep only one per job name """
jobs = dict()
for build in builds:
if build["job_name"] not in jobs:
jobs[build["job_name"]] = build
unique_builds = list(jobs.values())
print("Found %d unique job builds" % len(unique_builds))
return unique_builds
def download(source_url, local_filename):
""" Download a file using streaming request """
    with requests.get(source_url, stream=True) as r:
r.raise_for_status()
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
def download_build_job_output(zuul_build_url, local_path):
""" Download the job-output.json of a build """
build = requests.get(zuul_build_url).json()
if not build.get("log_url"):
return "No log url"
try:
download(build["log_url"] + "job-output.json", local_path)
except Exception as e:
return str(e)
def examine(path):
""" Look for forbidden tasks in a job-output.json file path """
data = json.load(open(path))
to_fix = False
for playbook in data:
if playbook['trusted']:
continue
for play in playbook['plays']:
for task in play['tasks']:
for hostname, host in task['hosts'].items():
if hostname != 'localhost':
continue
if host['action'] in ['command', 'shell']:
print("Found disallowed task:")
print(" Playbook: %s" % playbook['playbook'])
print(" Role: %s" % task.get('role', {}).get('name'))
print(" Task: %s" % task.get('task', {}).get('name'))
to_fix = True
return to_fix
def main(argv):
args = usage(argv)
cache_dir = Path("/tmp/zuul-logs")
if not cache_dir.exists():
cache_dir.mkdir()
to_fix = set()
failed_to_examine = set()
for tenant in get_tenants(args.zuul_url):
zuul_tenant_url = args.zuul_url + "/tenant/" + tenant
print("Looking for unique build in %s" % zuul_tenant_url)
for build in filter_unique_builds(
get_builds(zuul_tenant_url + "/builds", args.since)):
if not build.get("uuid"):
# Probably a SKIPPED build, no need to examine
continue
local_path = cache_dir / (build["uuid"] + ".json")
build_url = zuul_tenant_url + "/build/" + build["uuid"]
if not local_path.exists():
err = download_build_job_output(build_url, str(local_path))
if err:
failed_to_examine.add((build_url, err))
continue
try:
if not examine(str(local_path)):
print("%s: ok" % build_url)
else:
to_fix.add(build_url)
except Exception as e:
failed_to_examine.add((build_url, str(e)))
if failed_to_examine:
print("The following builds could not be examined:")
for build_url, err in failed_to_examine:
print("%s: %s" % (build_url, err))
if not to_fix:
exit(1)
if to_fix:
print("The following builds are using localhost command:")
for build in to_fix:
print(build.replace("/api/", "/t/"))
exit(1)
if __name__ == "__main__":
main(sys.argv[1:]) | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/find-untrusted-exec.py | find-untrusted-exec.py |
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
import json
import argparse
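# Print zuul-client commands that would re-enqueue every live change and ref
# currently in the given Zuul instance (optionally limited to one tenant and
# pipeline). Typically used to capture the queue contents before a scheduler
# restart so they can be restored afterwards.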
parser = argparse.ArgumentParser()
parser.add_argument('url', help='The URL of the running Zuul instance')
parser.add_argument('tenant', help='The Zuul tenant', nargs='?')
parser.add_argument('pipeline', help='The name of the Zuul pipeline',
nargs='?')
parser.add_argument('--use-config',
metavar='CONFIG',
help='The name of the zuul-client config to use')
options = parser.parse_args()
command = 'zuul-client'
if options.use_config:
command += f' --use-config {options.use_config}'
# Check if tenant is white label
info = json.loads(urlopen('%s/api/info' % options.url).read())
api_tenant = info.get('info', {}).get('tenant')
tenants = []
if api_tenant:
if api_tenant == options.tenant:
tenants.append(None)
else:
print("Error: %s doesn't match tenant %s (!= %s)" % (
options.url, options.tenant, api_tenant))
exit(1)
else:
tenants_url = '%s/api/tenants' % options.url
data = json.loads(urlopen(tenants_url).read())
for tenant in data:
tenants.append(tenant['name'])
for tenant in tenants:
if tenant is None:
status_url = '%s/api/status' % options.url
else:
status_url = '%s/api/tenant/%s/status' % (options.url, tenant)
data = json.loads(urlopen(status_url).read())
for pipeline in data['pipelines']:
if options.pipeline and pipeline['name'] != options.pipeline:
continue
for queue in pipeline.get('change_queues', []):
for head in queue['heads']:
for change in head:
if not change['live']:
continue
if change['id'] and ',' in change['id']:
# change triggered
cid, cps = change['id'].split(',')
print("%s enqueue"
" --tenant %s"
" --pipeline %s"
" --project %s"
" --change %s,%s" % (command, tenant,
pipeline['name'],
change['project_canonical'],
cid, cps))
else:
# ref triggered
cmd = '%s enqueue-ref' \
' --tenant %s' \
' --pipeline %s' \
' --project %s' \
' --ref %s' % (command, tenant,
pipeline['name'],
change['project_canonical'],
change['ref'])
if change['id']:
cmd += ' --newrev %s' % change['id']
print(cmd) | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/zuul-changes.py | zuul-changes.py |
import logging
from collections import UserDict
from zuul.driver.github.githubconnection import GithubConnection
from zuul.driver.github import GithubDriver
from zuul.model import Change
from zuul.zk.change_cache import ChangeKey
# This is a template with boilerplate code for debugging github issues
# TODO: for real use override the following variables
server = 'github.com'
api_token = 'xxxx'
appid = 2
appkey = '/opt/project/appkey'
org = 'example'
repo = 'sandbox'
pull_nr = 8
class DummyChangeCache(UserDict):
def updateChangeWithRetry(self, key, change, update_func, retry_count=5):
update_func(change)
self[key] = change
return change
def configure_logging(context):
stream_handler = logging.StreamHandler()
logger = logging.getLogger(context)
logger.addHandler(stream_handler)
logger.setLevel(logging.DEBUG)
# uncomment for more logging
# configure_logging('urllib3')
# configure_logging('github3')
# configure_logging('cachecontrol')
# This is all that's needed for getting a usable github connection
def create_connection(server, api_token):
driver = GithubDriver()
connection_config = {
'server': server,
'api_token': api_token,
}
conn = GithubConnection(driver, 'github', connection_config)
conn._github_client_manager.initialize()
conn._change_cache = DummyChangeCache()
return conn
def create_connection_app(server, appid, appkey):
driver = GithubDriver()
connection_config = {
'server': server,
'app_id': appid,
'app_key': appkey,
}
conn = GithubConnection(driver, 'github', connection_config)
conn._github_client_manager.initialize()
conn._change_cache = DummyChangeCache()
return conn
def get_change(connection: GithubConnection,
org: str,
repo: str,
pull: int) -> Change:
project_name = f"{org}/{repo}"
github = connection.getGithubClient(project_name)
pr = github.pull_request(org, repo, pull)
sha = pr.head.sha
change_key = ChangeKey('github', project_name, 'PullRequest', pull, sha)
    return connection._getChange(change_key, refresh=True)
# create github connection with api token
conn = create_connection(server, api_token)
# create github connection with app key
# conn = create_connection_app(server, appid, appkey)
# Now we can do anything we want with the connection, e.g. check canMerge for
# a pull request.
change = get_change(conn, org, repo, pull_nr)
print(conn.canMerge(change, {'cc/gate2'}))
# Or just use the github object.
# github = conn.getGithubClient()
#
# repository = github.repository(org, repo)
# print(repository.as_dict()) | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/tools/github-debugging.py | github-debugging.py |
import Axios from 'axios'
let authToken = undefined
export function setAuthToken(token) {
authToken = token
}
function getHomepageUrl() {
//
// Discover serving location from href.
//
  // This is only needed for sub-directory serving. When the application is
  // served from 'scheme://domain/', the homepage URL is simply
  // 'scheme://domain/'.
  //
  // Note that this is not enough for sub-directory serving: the static files
  // location also needs to be adapted via the 'homepage' setting of the
  // package.json file.
//
// This homepage url is used for the Router and Link resolution logic
//
let url = new URL(window.location.href)
if ('PUBLIC_URL' in process.env) {
url.pathname = process.env.PUBLIC_URL
} else {
url.pathname = ''
}
if (!url.pathname.endsWith('/')) {
url.pathname = url.pathname + '/'
}
return url.origin + url.pathname
}
function getZuulUrl() {
// Return the zuul root api absolute url
const ZUUL_API = process.env.REACT_APP_ZUUL_API
let apiUrl
if (ZUUL_API) {
// Api url set at build time, use it
apiUrl = ZUUL_API
} else {
// Api url is relative to homepage path
apiUrl = getHomepageUrl() + 'api/'
}
if (!apiUrl.endsWith('/')) {
apiUrl = apiUrl + '/'
}
if (!apiUrl.endsWith('/api/')) {
apiUrl = apiUrl + 'api/'
}
// console.log('Api url is ', apiUrl)
return apiUrl
}
const apiUrl = getZuulUrl()
function getStreamUrl(apiPrefix) {
const streamUrl = (apiUrl + apiPrefix)
.replace(/(http)(s)?:\/\//, 'ws$2://') + 'console-stream'
// console.log('Stream url is ', streamUrl)
return streamUrl
}
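// Perform a request against the Zuul API. 'url' is relative to apiUrl,
// 'method' defaults to 'get' and 'data' is sent as the request body. The
// returned promise resolves to the Axios response; the logic below also
// detects proxy-authentication related CORS failures and retries or reloads.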
function makeRequest(url, method, data) {
if (method === undefined) {
method = 'get'
}
// This performs a simple GET and tries to detect if CORS errors are
// due to proxy authentication errors.
const instance = Axios.create({
baseURL: apiUrl
})
if (authToken) {
instance.defaults.headers.common['Authorization'] = 'Bearer ' + authToken
}
const config = {method, url, data}
// First try the request as normal
let res = instance.request(config).catch(err => {
if (err.response === undefined) {
// This is either a Network, DNS, or CORS error, but we can't tell which.
// If we're behind an authz proxy, it's possible our creds have timed out
// and the CORS error is because we're getting a redirect.
// Apache mod_auth_mellon (and possibly other authz proxies) will avoid
// issuing a redirect if X-Requested-With is set to 'XMLHttpRequest' and
// will instead issue a 403. We can use this to detect that case.
instance.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest'
let res2 = instance.request(config).catch(err2 => {
if (err2.response && err2.response.status === 403) {
// We might be getting a redirect or something else,
// so reload the page.
console.log('Received 403 after unknown error; reloading')
window.location.reload()
}
// If we're still getting an error, we don't know the cause,
// it could be a transient network error, so we won't reload, we'll just
// wait for it to clear.
throw (err2)
})
return res2
}
throw (err)
})
return res
}
// Direct APIs
function fetchInfo() {
return makeRequest('info')
}
function fetchComponents() {
return makeRequest('components')
}
function fetchTenantInfo(apiPrefix) {
return makeRequest(apiPrefix + 'info')
}
function fetchOpenApi() {
return Axios.get(getHomepageUrl() + 'openapi.yaml')
}
function fetchTenants() {
return makeRequest(apiUrl + 'tenants')
}
function fetchConfigErrors(apiPrefix) {
return makeRequest(apiPrefix + 'config-errors')
}
function fetchStatus(apiPrefix) {
return makeRequest(apiPrefix + 'status')
}
function fetchChangeStatus(apiPrefix, changeId) {
return makeRequest(apiPrefix + 'status/change/' + changeId)
}
function fetchFreezeJob(apiPrefix, pipelineName, projectName, branchName, jobName) {
return makeRequest(apiPrefix +
'pipeline/' + pipelineName +
'/project/' + projectName +
'/branch/' + branchName +
'/freeze-job/' + jobName)
}
function fetchBuild(apiPrefix, buildId) {
return makeRequest(apiPrefix + 'build/' + buildId)
}
function fetchBuilds(apiPrefix, queryString) {
let path = 'builds'
if (queryString) {
path += '?' + queryString.slice(1)
}
return makeRequest(apiPrefix + path)
}
function fetchBuildset(apiPrefix, buildsetId) {
return makeRequest(apiPrefix + 'buildset/' + buildsetId)
}
function fetchBuildsets(apiPrefix, queryString) {
let path = 'buildsets'
if (queryString) {
path += '?' + queryString.slice(1)
}
return makeRequest(apiPrefix + path)
}
function fetchPipelines(apiPrefix) {
return makeRequest(apiPrefix + 'pipelines')
}
function fetchProject(apiPrefix, projectName) {
return makeRequest(apiPrefix + 'project/' + projectName)
}
function fetchProjects(apiPrefix) {
return makeRequest(apiPrefix + 'projects')
}
function fetchJob(apiPrefix, jobName) {
return makeRequest(apiPrefix + 'job/' + jobName)
}
function fetchJobGraph(apiPrefix, projectName, pipelineName, branchName) {
return makeRequest(apiPrefix +
'pipeline/' + pipelineName +
'/project/' + projectName +
'/branch/' + branchName +
'/freeze-jobs')
}
function fetchJobs(apiPrefix) {
return makeRequest(apiPrefix + 'jobs')
}
function fetchLabels(apiPrefix) {
return makeRequest(apiPrefix + 'labels')
}
function fetchNodes(apiPrefix) {
return makeRequest(apiPrefix + 'nodes')
}
function fetchSemaphores(apiPrefix) {
return makeRequest(apiPrefix + 'semaphores')
}
function fetchAutoholds(apiPrefix) {
return makeRequest(apiPrefix + 'autohold')
}
function fetchAutohold(apiPrefix, requestId) {
return makeRequest(apiPrefix + 'autohold/' + requestId)
}
function fetchUserAuthorizations(apiPrefix) {
return makeRequest(apiPrefix + 'authorizations')
}
function dequeue(apiPrefix, projectName, pipeline, change) {
return makeRequest(
apiPrefix + 'project/' + projectName + '/dequeue',
'post',
{
pipeline: pipeline,
change: change,
}
)
}
function dequeue_ref(apiPrefix, projectName, pipeline, ref) {
return makeRequest(
apiPrefix + 'project/' + projectName + '/dequeue',
'post',
{
pipeline: pipeline,
ref: ref,
}
)
}
function enqueue(apiPrefix, projectName, pipeline, change) {
return makeRequest(
apiPrefix + 'project/' + projectName + '/enqueue',
'post',
{
pipeline: pipeline,
change: change,
}
)
}
function enqueue_ref(apiPrefix, projectName, pipeline, ref, oldrev, newrev) {
return makeRequest(
apiPrefix + 'project/' + projectName + '/enqueue',
'post',
{
pipeline: pipeline,
ref: ref,
oldrev: oldrev,
newrev: newrev,
}
)
}
function autohold(apiPrefix, projectName, job, change, ref,
reason, count, node_hold_expiration) {
return makeRequest(
apiPrefix + 'project/' + projectName + '/autohold',
'post',
{
change: change,
job: job,
ref: ref,
reason: reason,
count: count,
node_hold_expiration: node_hold_expiration,
}
)
}
function autohold_delete(apiPrefix, requestId) {
return makeRequest(
apiPrefix + '/autohold/' + requestId,
'delete'
)
}
function promote(apiPrefix, pipeline, changes) {
return makeRequest(
apiPrefix + '/promote',
'post',
{
pipeline: pipeline,
changes: changes,
}
)
}
export {
apiUrl,
getHomepageUrl,
getStreamUrl,
fetchChangeStatus,
fetchConfigErrors,
fetchStatus,
fetchBuild,
fetchBuilds,
fetchBuildset,
fetchBuildsets,
fetchFreezeJob,
fetchPipelines,
fetchProject,
fetchProjects,
fetchJob,
fetchJobGraph,
fetchJobs,
fetchLabels,
fetchNodes,
fetchOpenApi,
fetchSemaphores,
fetchTenants,
fetchInfo,
fetchComponents,
fetchTenantInfo,
fetchUserAuthorizations,
fetchAutoholds,
fetchAutohold,
autohold,
autohold_delete,
dequeue,
dequeue_ref,
enqueue,
enqueue_ref,
promote,
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/api.js | api.js |
// This lets the app load faster on subsequent visits in production, and gives
// it offline capabilities. However, it also means that developers (and users)
// will only see deployed updates on the "N+1" visit to a page, since previously
// cached resources are updated in the background.
// To learn more about the benefits of this model, read
// https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#making-a-progressive-web-app
// This link also includes instructions on opting out of this behavior.
const isLocalhost = Boolean(
window.location.hostname === 'localhost' ||
// [::1] is the IPv6 localhost address.
window.location.hostname === '[::1]' ||
// 127.0.0.1/8 is considered localhost for IPv4.
window.location.hostname.match(
/^127(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$/
)
)
export default function register () {
if (process.env.REACT_APP_ENABLE_SERVICE_WORKER !== 'true') {
console.log('Disabled service worker')
unregister()
return
}
if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
// The URL constructor is available in all browsers that support SW.
const publicUrl = new URL(process.env.PUBLIC_URL, window.location)
if (publicUrl.origin !== window.location.origin) {
// Our service worker won't work if PUBLIC_URL is on a different origin
// from what our page is served on. This might happen if a CDN is used to
// serve assets; see https://github.com/facebookincubator/create-react-app/issues/2374
return
}
window.addEventListener('load', () => {
const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`
if (isLocalhost) {
        // This is running on localhost. Let's check whether a service worker still exists or not.
checkValidServiceWorker(swUrl)
// Add some additional logging to localhost, pointing developers to the
// service worker/PWA documentation.
navigator.serviceWorker.ready.then(() => {
console.log(
'This web app is being served cache-first by a service ' +
'worker. To learn more, visit https://goo.gl/SC7cgQ'
)
})
} else {
// Is not local host. Just register service worker
registerValidSW(swUrl)
}
})
}
}
function registerValidSW (swUrl) {
navigator.serviceWorker
.register(swUrl)
.then(registration => {
registration.onupdatefound = () => {
const installingWorker = registration.installing
installingWorker.onstatechange = () => {
if (installingWorker.state === 'installed') {
if (navigator.serviceWorker.controller) {
// At this point, the old content will have been purged and
// the fresh content will have been added to the cache.
// It's the perfect time to display a "New content is
// available; please refresh." message in your web app.
console.log('New content is available; please refresh.')
} else {
// At this point, everything has been precached.
// It's the perfect time to display a
// "Content is cached for offline use." message.
console.log('Content is cached for offline use.')
}
}
}
}
})
.catch(error => {
console.error('Error during service worker registration:', error)
})
}
function checkValidServiceWorker (swUrl) {
  // Check if the service worker can be found. If it can't, reload the page.
fetch(swUrl)
.then(response => {
// Ensure service worker exists, and that we really are getting a JS file.
if (
response.status === 404 ||
response.headers.get('content-type').indexOf('javascript') === -1
) {
// No service worker found. Probably a different app. Reload the page.
navigator.serviceWorker.ready.then(registration => {
registration.unregister().then(() => {
window.location.reload()
})
})
} else {
// Service worker found. Proceed as normal.
registerValidSW(swUrl)
}
})
.catch(() => {
console.log(
'No internet connection found. App is running in offline mode.'
)
})
}
export function unregister () {
if ('serviceWorker' in navigator) {
navigator.serviceWorker.ready.then(registration => {
registration.unregister()
})
}
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/registerServiceWorker.js | registerServiceWorker.js |
import ComponentsPage from './pages/Components'
import FreezeJobPage from './pages/FreezeJob'
import StatusPage from './pages/Status'
import ChangeStatusPage from './pages/ChangeStatus'
import ProjectPage from './pages/Project'
import ProjectsPage from './pages/Projects'
import JobPage from './pages/Job'
import JobsPage from './pages/Jobs'
import LabelsPage from './pages/Labels'
import NodesPage from './pages/Nodes'
import SemaphorePage from './pages/Semaphore'
import SemaphoresPage from './pages/Semaphores'
import AutoholdsPage from './pages/Autoholds'
import AutoholdPage from './pages/Autohold'
import BuildPage from './pages/Build'
import BuildsPage from './pages/Builds'
import BuildsetPage from './pages/Buildset'
import BuildsetsPage from './pages/Buildsets'
import ConfigErrorsPage from './pages/ConfigErrors'
import TenantsPage from './pages/Tenants'
import StreamPage from './pages/Stream'
import OpenApiPage from './pages/OpenApi'
// The Route objects are created in the App component.
// Objects with a title are created in the menu.
// Objects with globalRoute are not tenant scoped.
// Remember to update the api getHomepageUrl subDir list for routes with params
const routes = () => [
{
title: 'Status',
to: '/status',
component: StatusPage
},
{
title: 'Projects',
to: '/projects',
component: ProjectsPage
},
{
title: 'Jobs',
to: '/jobs',
component: JobsPage
},
{
title: 'Labels',
to: '/labels',
component: LabelsPage
},
{
title: 'Nodes',
to: '/nodes',
component: NodesPage
},
{
title: 'Autoholds',
to: '/autoholds',
component: AutoholdsPage
},
{
title: 'Semaphores',
to: '/semaphores',
component: SemaphoresPage
},
{
title: 'Builds',
to: '/builds',
component: BuildsPage
},
{
title: 'Buildsets',
to: '/buildsets',
component: BuildsetsPage
},
{
to: '/freeze-job',
component: FreezeJobPage
},
{
to: '/status/change/:changeId',
component: ChangeStatusPage
},
{
to: '/stream/:buildId',
component: StreamPage
},
{
to: '/project/:projectName*',
component: ProjectPage
},
{
to: '/job/:jobName',
component: JobPage
},
{
to: '/build/:buildId',
component: BuildPage,
props: { 'activeTab': 'results' },
},
{
to: '/build/:buildId/artifacts',
component: BuildPage,
props: { 'activeTab': 'artifacts' },
},
{
to: '/build/:buildId/logs',
component: BuildPage,
props: { 'activeTab': 'logs' },
},
{
to: '/build/:buildId/console',
component: BuildPage,
props: { 'activeTab': 'console' },
},
{
to: '/build/:buildId/log/:file*',
component: BuildPage,
props: { 'activeTab': 'logs', 'logfile': true },
},
{
to: '/buildset/:buildsetId',
component: BuildsetPage
},
{
to: '/autohold/:requestId',
component: AutoholdPage
},
{
to: '/semaphore/:semaphoreName',
component: SemaphorePage
},
{
to: '/config-errors',
component: ConfigErrorsPage,
},
{
to: '/tenants',
component: TenantsPage,
globalRoute: true
},
{
to: '/openapi',
component: OpenApiPage,
noTenantPrefix: true,
},
{
to: '/components',
component: ComponentsPage,
noTenantPrefix: true,
},
// auth_callback is handled in App.jsx
]
export { routes }
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/routes.js | routes.js |
// The index is the main entry point of the project. The App is wrapped with
// a Provider to share the redux store and a Router to manage the location.
import React from 'react'
import ReactDOM from 'react-dom'
import { BrowserRouter as Router } from 'react-router-dom'
import { Provider } from 'react-redux'
import { BroadcastChannel, createLeaderElection } from 'broadcast-channel'
import 'patternfly/dist/css/patternfly.min.css'
import 'patternfly/dist/css/patternfly-additions.min.css'
// NOTE (felix): The Patternfly 4 CSS file must be imported before the App
// component. Otherwise, the CSS rules are imported in the wrong order and some
// wildcard expressions could break the layout:
// https://forum.patternfly.org/t/wildcard-selector-more-specific-after-upgrade-to-patternfly-4-react-version-3-75-2/261
// Usually it should be imported at the uppermost position, but as we don't want
// PF3 to overrule PF4, we import PF4 styles after PF3.
import '@patternfly/react-core/dist/styles/base.css'
import '@patternfly/react-styles/css/utilities/Sizing/sizing.css'
import '@patternfly/react-styles/css/utilities/Spacing/spacing.css'
// To avoid that PF4 breaks existing PF3 components by some wildcard CSS rules,
// we include our own migration CSS file that restores relevant parts of those
// rules.
// TODO (felix): Remove this import after the PF4 migration
import './pf4-migration.css'
import { getHomepageUrl } from './api'
import registerServiceWorker from './registerServiceWorker'
import { fetchInfoIfNeeded } from './actions/info'
import configureStore from './store'
import App from './App'
// Importing our custom css file after the App allows us to also overwrite the
// style attributes of PF4 component (as their CSS is loaded when the component
// is imported within the App).
import './index.css'
import ZuulAuthProvider from './ZuulAuthProvider'
import SilentCallback from './pages/SilentCallback'
// Uncomment the next 3 lines to enable debug-level logging from
// oidc-client.
// import { Log } from 'oidc-client'
// Log.logger = console
// Log.level = Log.DEBUG
// Don't render the entire application to handle a silent
// authentication callback.
if ((window.location.origin + window.location.pathname) ===
(getHomepageUrl() + 'silent_callback')) {
ReactDOM.render(
<SilentCallback/>,
document.getElementById('root'))
} else {
const store = configureStore()
// Load info endpoint
store.dispatch(fetchInfoIfNeeded())
// Create a broadcast channel for sending auth (or other)
// information between tabs.
const channel = new BroadcastChannel('zuul')
// Create an election so that only one tab will renew auth tokens. We run the
// election perpetually and just check whether we are the leader when it's time
// to renew tokens.
const auth_election = createLeaderElection(channel)
const waitForever = new Promise(function () {})
auth_election.awaitLeadership().then(()=> {
waitForever.then(function() {})
})
ReactDOM.render(
<Provider store={store}>
<ZuulAuthProvider channel={channel} election={auth_election}>
<Router basename={new URL(getHomepageUrl()).pathname}><App /></Router>
</ZuulAuthProvider>
</Provider>, document.getElementById('root'))
registerServiceWorker()
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/index.js | index.js |
import {
BUILD_FETCH_FAIL,
BUILD_FETCH_REQUEST,
BUILD_FETCH_SUCCESS,
BUILDSET_FETCH_FAIL,
BUILDSET_FETCH_REQUEST,
BUILDSET_FETCH_SUCCESS,
BUILD_OUTPUT_FAIL,
BUILD_OUTPUT_REQUEST,
BUILD_OUTPUT_SUCCESS,
BUILD_OUTPUT_NOT_AVAILABLE,
BUILD_MANIFEST_FAIL,
BUILD_MANIFEST_REQUEST,
BUILD_MANIFEST_SUCCESS,
BUILD_MANIFEST_NOT_AVAILABLE,
} from '../actions/build'
import initialState from './initialState'
export default (state = initialState.build, action) => {
switch (action.type) {
case BUILD_FETCH_REQUEST:
case BUILDSET_FETCH_REQUEST:
return { ...state, isFetching: true }
case BUILD_FETCH_SUCCESS:
return {
...state,
builds: { ...state.builds, [action.buildId]: action.build },
isFetching: false,
}
case BUILDSET_FETCH_SUCCESS:
return {
...state,
buildsets: { ...state.buildsets, [action.buildsetId]: action.buildset },
isFetching: false,
}
case BUILD_FETCH_FAIL:
return {
...state,
builds: { ...state.builds, [action.buildId]: null },
isFetching: false,
}
case BUILDSET_FETCH_FAIL:
return {
...state,
buildsets: { ...state.buildsets, [action.buildsetId]: null },
isFetching: false,
}
case BUILD_OUTPUT_REQUEST:
return { ...state, isFetchingOutput: true }
case BUILD_OUTPUT_SUCCESS:
return {
...state,
outputs: { ...state.outputs, [action.buildId]: action.output },
errorIds: { ...state.errorIds, [action.buildId]: action.errorIds },
hosts: { ...state.hosts, [action.buildId]: action.hosts },
isFetchingOutput: false,
}
case BUILD_OUTPUT_FAIL:
case BUILD_OUTPUT_NOT_AVAILABLE:
return {
...state,
outputs: { ...state.outputs, [action.buildId]: null },
errorIds: { ...state.errorIds, [action.buildId]: null },
hosts: { ...state.hosts, [action.buildId]: null },
isFetchingOutput: false,
}
case BUILD_MANIFEST_REQUEST:
return { ...state, isFetchingManifest: true }
case BUILD_MANIFEST_SUCCESS:
return {
...state,
manifests: { ...state.manifests, [action.buildId]: action.manifest },
isFetchingManifest: false,
}
case BUILD_MANIFEST_FAIL:
case BUILD_MANIFEST_NOT_AVAILABLE:
return {
...state,
manifests: { ...state.manifests, [action.buildId]: null },
isFetchingManifest: false,
}
default:
return state
}
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/reducers/build.js | build.js |
import * as API from '../api'
export const PROJECT_FETCH_REQUEST = 'PROJECT_FETCH_REQUEST'
export const PROJECT_FETCH_SUCCESS = 'PROJECT_FETCH_SUCCESS'
export const PROJECT_FETCH_FAIL = 'PROJECT_FETCH_FAIL'
export const requestProject = () => ({
type: PROJECT_FETCH_REQUEST
})
export const receiveProject = (tenant, projectName, project) => {
// TODO: fix api to return template name or merge them
  // in the meantime, merge the jobs in project configs
const templateIdx = []
let idx
project.configs.forEach((config, idx) => {
if (config.is_template === true) {
// This must be a template
templateIdx.push(idx)
config.pipelines.forEach(templatePipeline => {
let pipeline = project.configs[idx - 1].pipelines.filter(
item => item.name === templatePipeline.name)
if (pipeline.length === 0) {
// Pipeline doesn't exist in project config
project.configs[idx - 1].pipelines.push(templatePipeline)
} else {
if (pipeline[0].queue_name === null) {
pipeline[0].queue_name = templatePipeline.queue_name
}
templatePipeline.jobs.forEach(job => {
pipeline[0].jobs.push(job)
})
}
})
}
})
for (idx = templateIdx.length - 1; idx >= 0; idx -= 1) {
project.configs.splice(templateIdx[idx], 1)
}
return {
type: PROJECT_FETCH_SUCCESS,
tenant: tenant,
projectName: projectName,
project: project,
receivedAt: Date.now(),
}
}
const failedProject = error => ({
type: PROJECT_FETCH_FAIL,
error
})
const fetchProject = (tenant, project) => dispatch => {
dispatch(requestProject())
return API.fetchProject(tenant.apiPrefix, project)
.then(response => dispatch(receiveProject(
tenant.name, project, response.data)))
.catch(error => dispatch(failedProject(error)))
}
const shouldFetchProject = (tenant, projectName, state) => {
const tenantProjects = state.project.projects[tenant.name]
if (tenantProjects) {
const project = tenantProjects[projectName]
if (!project) {
return true
}
if (project.isFetching) {
return false
}
return false
}
return true
}
export const fetchProjectIfNeeded = (tenant, project, force) => (
dispatch, getState) => {
if (force || shouldFetchProject(tenant, project, getState())) {
return dispatch(fetchProject(tenant, project))
}
return Promise.resolve()
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/project.js | project.js |
import * as API from '../api'
export const FREEZE_JOB_FETCH_REQUEST = 'FREEZE_JOB_FETCH_REQUEST'
export const FREEZE_JOB_FETCH_SUCCESS = 'FREEZE_JOB_FETCH_SUCCESS'
export const FREEZE_JOB_FETCH_FAIL = 'FREEZE_JOB_FETCH_FAIL'
export const requestFreezeJob = () => ({
type: FREEZE_JOB_FETCH_REQUEST
})
export function makeFreezeJobKey(pipeline, project, branch, job) {
return JSON.stringify({
pipeline, project, branch, job
})
}
export const receiveFreezeJob = (tenant, freezeJobKey, freezeJob) => {
return {
type: FREEZE_JOB_FETCH_SUCCESS,
tenant: tenant,
freezeJobKey: freezeJobKey,
freezeJob: freezeJob,
receivedAt: Date.now(),
}
}
const failedFreezeJob = error => ({
type: FREEZE_JOB_FETCH_FAIL,
error
})
const fetchFreezeJob = (tenant, pipeline, project, branch, job) => dispatch => {
dispatch(requestFreezeJob())
const freezeJobKey = makeFreezeJobKey(pipeline, project, branch, job)
return API.fetchFreezeJob(tenant.apiPrefix,
pipeline,
project,
branch,
job)
.then(response => dispatch(receiveFreezeJob(
tenant.name, freezeJobKey, response.data)))
.catch(error => dispatch(failedFreezeJob(error)))
}
const shouldFetchFreezeJob = (tenant, pipeline, project, branch, job, state) => {
const freezeJobKey = makeFreezeJobKey(pipeline, project, branch, job)
const tenantFreezeJobs = state.freezejob.freezeJobs[tenant.name]
if (tenantFreezeJobs) {
const freezeJob = tenantFreezeJobs[freezeJobKey]
if (!freezeJob) {
return true
}
if (freezeJob.isFetching) {
return false
}
return false
}
return true
}
export const fetchFreezeJobIfNeeded = (tenant, pipeline, project, branch, job,
force) => (
dispatch, getState) => {
if (force || shouldFetchFreezeJob(tenant, pipeline, project, branch, job,
getState())) {
return dispatch(fetchFreezeJob(tenant, pipeline, project, branch, job))
}
return Promise.resolve()
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/freezejob.js | freezejob.js |
import Axios from 'axios'
export const LOGFILE_FETCH_REQUEST = 'LOGFILE_FETCH_REQUEST'
export const LOGFILE_FETCH_SUCCESS = 'LOGFILE_FETCH_SUCCESS'
export const LOGFILE_FETCH_FAIL = 'LOGFILE_FETCH_FAIL'
export const requestLogfile = (url) => ({
type: LOGFILE_FETCH_REQUEST,
url: url,
})
const SYSLOGDATE = '\\w+\\s+\\d+\\s+\\d{2}:\\d{2}:\\d{2}((\\.|\\,)\\d{3,6})?'
const DATEFMT = '\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}((\\.|\\,)\\d{3,6})?'
const STATUSFMT = '(DEBUG|INFO|WARNING|ERROR|TRACE|AUDIT|CRITICAL)'
const severityMap = {
DEBUG: 1,
INFO: 2,
WARNING: 3,
ERROR: 4,
TRACE: 5,
AUDIT: 6,
CRITICAL: 7,
}
const OSLO_LOGMATCH = new RegExp(`^(${DATEFMT})(( \\d+)? (${STATUSFMT}).*)`)
const SYSTEMD_LOGMATCH = new RegExp(`^(${SYSLOGDATE})( (\\S+) \\S+\\[\\d+\\]\\: (${STATUSFMT}).*)`)
const receiveLogfile = (buildId, file, data) => {
const out = data.split(/\r?\n/).map((line, idx) => {
let m = null
let sev = null
m = SYSTEMD_LOGMATCH.exec(line)
if (m) {
sev = severityMap[m[7]]
} else {
      m = OSLO_LOGMATCH.exec(line)
if (m) {
sev = severityMap[m[7]]
}
}
return {
text: line,
index: idx+1,
severity: sev
}
})
return {
type: LOGFILE_FETCH_SUCCESS,
buildId,
fileName: file,
fileContent: out,
receivedAt: Date.now()
}
}
const failedLogfile = (error, url) => {
error.url = url
return {
type: LOGFILE_FETCH_FAIL,
error
}
}
export function fetchLogfile(buildId, file, state) {
return async function (dispatch) {
// Don't do anything if the logfile is already part of our local state
if (
buildId in state.logfile.files &&
file in state.logfile.files[buildId]
) {
return Promise.resolve()
}
// Since this method is only called after fetchBuild() and fetchManifest(),
// we can assume both are there.
const build = state.build.builds[buildId]
const manifest = state.build.manifests[buildId]
const item = manifest.index['/' + file]
if (!item) {
return dispatch(
failedLogfile(Error(`No manifest entry found for logfile "${file}"`))
)
}
if (item.mimetype !== 'text/plain') {
return dispatch(
failedLogfile(Error(`Logfile "${file}" has invalid mimetype`))
)
}
const url = build.log_url + file
dispatch(requestLogfile())
try {
const response = await Axios.get(url, { transformResponse: [] })
dispatch(receiveLogfile(buildId, file, response.data))
} catch(error) {
dispatch(failedLogfile(error, url))
}
}
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/logfile.js | logfile.js |
import Axios from 'axios'
import * as API from '../api'
import { fetchLogfile } from './logfile'
export const BUILD_FETCH_REQUEST = 'BUILD_FETCH_REQUEST'
export const BUILD_FETCH_SUCCESS = 'BUILD_FETCH_SUCCESS'
export const BUILD_FETCH_FAIL = 'BUILD_FETCH_FAIL'
export const BUILDSET_FETCH_REQUEST = 'BUILDSET_FETCH_REQUEST'
export const BUILDSET_FETCH_SUCCESS = 'BUILDSET_FETCH_SUCCESS'
export const BUILDSET_FETCH_FAIL = 'BUILDSET_FETCH_FAIL'
export const BUILD_OUTPUT_REQUEST = 'BUILD_OUTPUT_FETCH_REQUEST'
export const BUILD_OUTPUT_SUCCESS = 'BUILD_OUTPUT_FETCH_SUCCESS'
export const BUILD_OUTPUT_FAIL = 'BUILD_OUTPUT_FETCH_FAIL'
export const BUILD_OUTPUT_NOT_AVAILABLE = 'BUILD_OUTPUT_NOT_AVAILABLE'
export const BUILD_MANIFEST_REQUEST = 'BUILD_MANIFEST_FETCH_REQUEST'
export const BUILD_MANIFEST_SUCCESS = 'BUILD_MANIFEST_FETCH_SUCCESS'
export const BUILD_MANIFEST_FAIL = 'BUILD_MANIFEST_FETCH_FAIL'
export const BUILD_MANIFEST_NOT_AVAILABLE = 'BUILD_MANIFEST_NOT_AVAILABLE'
export const requestBuild = () => ({
type: BUILD_FETCH_REQUEST
})
export const receiveBuild = (buildId, build) => ({
type: BUILD_FETCH_SUCCESS,
buildId: buildId,
build: build,
receivedAt: Date.now()
})
const failedBuild = (buildId, error, url) => {
error.url = url
return {
type: BUILD_FETCH_FAIL,
buildId,
error
}
}
export const requestBuildOutput = () => ({
type: BUILD_OUTPUT_REQUEST
})
// job-output processing functions
export function renderTree(tenant, build, path, obj, textRenderer, defaultRenderer) {
const node = {}
let name = obj.name
if ('children' in obj && obj.children) {
node.nodes = obj.children.map(
n => renderTree(tenant, build, path+obj.name+'/', n,
textRenderer, defaultRenderer))
}
if (obj.mimetype === 'application/directory') {
name = obj.name + '/'
} else {
node.icon = 'fa fa-file-o'
}
let log_url = build.log_url
if (log_url.endsWith('/')) {
log_url = log_url.slice(0, -1)
}
if (obj.mimetype === 'text/plain') {
node.text = textRenderer(tenant, build, path, name, log_url, obj)
} else {
node.text = defaultRenderer(log_url, path, name, obj)
}
return node
}
export function didTaskFail(task) {
if (task.failed) {
return true
}
if (task.results) {
for (let result of task.results) {
if (didTaskFail(result)) {
return true
}
}
}
return false
}
export function hasInterestingKeys (obj, keys) {
return Object.entries(obj).filter(
([k, v]) => (keys.includes(k) && v !== '')
).length > 0
}
export function findLoopLabel(item) {
const label = item._ansible_item_label
return typeof(label) === 'string' ? label : ''
}
export function shouldIncludeKey(key, value, ignore_underscore, included) {
if (ignore_underscore && key[0] === '_') {
return false
}
if (included) {
if (!included.includes(key)) {
return false
}
if (value === '') {
return false
}
}
return true
}
export function makeTaskPath (path) {
return path.join('/')
}
export function taskPathMatches (ref, test) {
if (test.length < ref.length)
return false
for (let i=0; i < ref.length; i++) {
if (ref[i] !== test[i])
return false
}
return true
}
export const receiveBuildOutput = (buildId, output) => {
const hosts = {}
const taskFailed = (taskResult) => {
if (taskResult.rc && taskResult.failed_when_result !== false)
return true
else if (taskResult.failed)
return true
else
return false
}
// Compute stats
output.forEach(phase => {
Object.entries(phase.stats).forEach(([host, stats]) => {
if (!hosts[host]) {
hosts[host] = stats
hosts[host].failed = []
} else {
hosts[host].changed += stats.changed
hosts[host].failures += stats.failures
hosts[host].ok += stats.ok
}
if (stats.failures > 0) {
// Look for failed tasks
phase.plays.forEach(play => {
play.tasks.forEach(task => {
if (task.hosts[host]) {
if (task.hosts[host].results &&
task.hosts[host].results.length > 0) {
task.hosts[host].results.forEach(result => {
if (taskFailed(result)) {
result.name = task.task.name
hosts[host].failed.push(result)
}
})
} else if (taskFailed(task.hosts[host])) {
let result = task.hosts[host]
result.name = task.task.name
hosts[host].failed.push(result)
}
}
})
})
}
})
})
// Identify all of the hosttasks (and therefore tasks, plays, and
// playbooks) which have failed. The errorIds are either task or
// play uuids, or the phase+index for the playbook. Since they are
// different formats, we can store them in the same set without
// collisions.
const errorIds = new Set()
output.forEach(playbook => {
playbook.plays.forEach(play => {
play.tasks.forEach(task => {
Object.entries(task.hosts).forEach(([, host]) => {
if (didTaskFail(host)) {
errorIds.add(task.task.id)
errorIds.add(play.play.id)
errorIds.add(playbook.phase + playbook.index)
}
})
})
})
})
return {
type: BUILD_OUTPUT_SUCCESS,
buildId: buildId,
hosts: hosts,
output: output,
errorIds: errorIds,
receivedAt: Date.now()
}
}
const failedBuildOutput = (buildId, error, url) => {
error.url = url
return {
type: BUILD_OUTPUT_FAIL,
buildId,
error
}
}
export const requestBuildManifest = () => ({
type: BUILD_MANIFEST_REQUEST
})
export const receiveBuildManifest = (buildId, manifest) => {
const index = {}
const renderNode = (root, object) => {
const path = root + '/' + object.name
if ('children' in object && object.children) {
object.children.map(n => renderNode(path, n))
} else {
index[path] = object
}
}
manifest.tree.map(n => renderNode('', n))
return {
type: BUILD_MANIFEST_SUCCESS,
buildId: buildId,
manifest: {tree: manifest.tree, index: index,
index_links: manifest.index_links},
receivedAt: Date.now()
}
}
const failedBuildManifest = (buildId, error, url) => {
error.url = url
return {
type: BUILD_MANIFEST_FAIL,
buildId,
error
}
}
function buildOutputNotAvailable(buildId) {
return {
type: BUILD_OUTPUT_NOT_AVAILABLE,
buildId: buildId,
}
}
function buildManifestNotAvailable(buildId) {
return {
type: BUILD_MANIFEST_NOT_AVAILABLE,
buildId: buildId,
}
}
export function fetchBuild(tenant, buildId, state) {
return async function (dispatch) {
// Although it feels a little weird to not do anything in an action creator
// based on the redux state, we do this in here because the function is
// called from multiple places and it's easier to check for the build in
// here than in all the other places before calling this function.
if (state.build.builds[buildId]) {
return Promise.resolve()
}
dispatch(requestBuild())
try {
const response = await API.fetchBuild(tenant.apiPrefix, buildId)
dispatch(receiveBuild(buildId, response.data))
} catch (error) {
dispatch(failedBuild(buildId, error, tenant.apiPrefix))
// Raise the error again, so fetchBuildAllInfo() doesn't call the
// remaining fetch methods.
throw error
}
}
}
function fetchBuildOutput(buildId, state) {
return async function (dispatch) {
// In case the value is already set in our local state, directly resolve the
// promise. A null value means that the output could not be found for this
// build id.
if (state.build.outputs[buildId] !== undefined) {
return Promise.resolve()
}
// As this function is only called after fetchBuild() we can assume that
// the build is in the state. Otherwise an error would have been thrown and
// this function wouldn't be called.
const build = state.build.builds[buildId]
if (!build.log_url) {
// Don't treat a missing log URL as failure as we don't want to show a
// toast for that. The UI already informs about the missing log URL in
// multiple places.
return dispatch(buildOutputNotAvailable(buildId))
}
const url = build.log_url.substr(0, build.log_url.lastIndexOf('/') + 1)
dispatch(requestBuildOutput())
try {
const response = await Axios.get(url + 'job-output.json.gz')
dispatch(receiveBuildOutput(buildId, response.data))
} catch (error) {
if (!error.request) {
dispatch(failedBuildOutput(buildId, error, url))
// Raise the error again, so fetchBuildAllInfo() doesn't call the
// remaining fetch methods.
throw error
}
try {
// Try without compression
const response = await Axios.get(url + 'job-output.json')
dispatch(receiveBuildOutput(buildId, response.data))
} catch (error) {
dispatch(failedBuildOutput(buildId, error, url))
// Raise the error again, so fetchBuildAllInfo() doesn't call the
// remaining fetch methods.
throw error
}
}
}
}
export function fetchBuildManifest(buildId, state) {
return async function(dispatch) {
// In case the value is already set in our local state, directly resolve the
// promise. A null value means that the manifest could not be found for this
// build id.
if (state.build.manifests[buildId] !== undefined) {
return Promise.resolve()
}
// As this function is only called after fetchBuild() we can assume that
// the build is in the state. Otherwise an error would have been thrown and
// this function wouldn't be called.
const build = state.build.builds[buildId]
dispatch(requestBuildManifest())
for (let artifact of build.artifacts) {
if (
'metadata' in artifact &&
'type' in artifact.metadata &&
artifact.metadata.type === 'zuul_manifest'
) {
try {
const response = await Axios.get(artifact.url)
return dispatch(receiveBuildManifest(buildId, response.data))
} catch(error) {
// Show the error since we expected a manifest but did not
// receive it.
dispatch(failedBuildManifest(buildId, error, artifact.url))
}
}
}
// Don't treat a missing manifest file as failure as we don't want to show a
// toast for that.
dispatch(buildManifestNotAvailable(buildId))
}
}
export function fetchBuildAllInfo(tenant, buildId, logfileName) {
// This wraps the calls to fetch the build, output and manifest together as
// this is the common use case we have when loading the build info.
return async function (dispatch, getState) {
try {
// Wait for the build to be available as fetchBuildOutput and
// fetchBuildManifest require information from the build object.
await dispatch(fetchBuild(tenant, buildId, getState()))
dispatch(fetchBuildOutput(buildId, getState()))
// Wait for the manifest info to be available as this is needed in case
// we also download a logfile.
await dispatch(fetchBuildManifest(buildId, getState()))
if (logfileName) {
dispatch(fetchLogfile(buildId, logfileName, getState()))
}
} catch (error) {
dispatch(failedBuild(buildId, error, tenant.apiPrefix))
}
}
}
export const requestBuildset = () => ({
type: BUILDSET_FETCH_REQUEST
})
export const receiveBuildset = (buildsetId, buildset) => ({
type: BUILDSET_FETCH_SUCCESS,
buildsetId: buildsetId,
buildset: buildset,
receivedAt: Date.now()
})
const failedBuildset = (buildsetId, error) => ({
type: BUILDSET_FETCH_FAIL,
buildsetId,
error
})
export function fetchBuildset(tenant, buildsetId) {
return async function(dispatch) {
dispatch(requestBuildset())
try {
const response = await API.fetchBuildset(tenant.apiPrefix, buildsetId)
dispatch(receiveBuildset(buildsetId, response.data))
} catch (error) {
dispatch(failedBuildset(buildsetId, error))
}
}
}
const shouldFetchBuildset = (buildsetId, state) => {
const buildset = state.build.buildsets[buildsetId]
if (!buildset) {
return true
}
if (buildset.isFetching) {
return false
}
return false
}
export const fetchBuildsetIfNeeded = (tenant, buildsetId, force) => (
dispatch, getState) => {
if (force || shouldFetchBuildset(buildsetId, getState())) {
return dispatch(fetchBuildset(tenant, buildsetId))
}
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/build.js | build.js |
import * as API from '../api'
export const AUTOHOLDS_FETCH_REQUEST = 'AUTOHOLDS_FETCH_REQUEST'
export const AUTOHOLDS_FETCH_SUCCESS = 'AUTOHOLDS_FETCH_SUCCESS'
export const AUTOHOLDS_FETCH_FAIL = 'AUTOHOLDS_FETCH_FAIL'
export const AUTOHOLD_FETCH_REQUEST = 'AUTOHOLD_FETCH_REQUEST'
export const AUTOHOLD_FETCH_SUCCESS = 'AUTOHOLD_FETCH_SUCCESS'
export const AUTOHOLD_FETCH_FAIL = 'AUTOHOLD_FETCH_FAIL'
export const requestAutoholds = () => ({
type: AUTOHOLDS_FETCH_REQUEST
})
export const receiveAutoholds = (tenant, json) => ({
type: AUTOHOLDS_FETCH_SUCCESS,
autoholds: json,
receivedAt: Date.now()
})
const failedAutoholds = error => ({
type: AUTOHOLDS_FETCH_FAIL,
error
})
export const fetchAutoholds = (tenant) => dispatch => {
dispatch(requestAutoholds())
return API.fetchAutoholds(tenant.apiPrefix)
.then(response => dispatch(receiveAutoholds(tenant.name, response.data)))
.catch(error => dispatch(failedAutoholds(error)))
}
const shouldFetchAutoholds = (tenant, state) => {
const autoholds = state.autoholds
if (!autoholds || autoholds.autoholds.length === 0) {
return true
}
if (autoholds.isFetching) {
return false
}
if (Date.now() - autoholds.receivedAt > 60000) {
    // Refetch after 1 minute
return true
}
return false
}
export const fetchAutoholdsIfNeeded = (tenant, force) => (
dispatch, getState) => {
if (force || shouldFetchAutoholds(tenant, getState())) {
return dispatch(fetchAutoholds(tenant))
}
}
export const requestAutohold = () => ({
type: AUTOHOLD_FETCH_REQUEST
})
export const receiveAutohold = (tenant, json) => ({
type: AUTOHOLD_FETCH_SUCCESS,
autohold: json,
receivedAt: Date.now()
})
const failedAutohold = error => ({
type: AUTOHOLD_FETCH_FAIL,
error
})
export const fetchAutohold = (tenant, requestId) => dispatch => {
dispatch(requestAutohold())
return API.fetchAutohold(tenant.apiPrefix, requestId)
.then(response => dispatch(receiveAutohold(tenant.name, response.data)))
.catch(error => dispatch(failedAutohold(error)))
} | zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/autoholds.js | autoholds.js |
import * as API from '../api'
export const JOB_GRAPH_FETCH_REQUEST = 'JOB_GRAPH_FETCH_REQUEST'
export const JOB_GRAPH_FETCH_SUCCESS = 'JOB_GRAPH_FETCH_SUCCESS'
export const JOB_GRAPH_FETCH_FAIL = 'JOB_GRAPH_FETCH_FAIL'
export const requestJobGraph = () => ({
type: JOB_GRAPH_FETCH_REQUEST
})
export function makeJobGraphKey(project, pipeline, branch) {
return JSON.stringify({
project: project, pipeline: pipeline, branch: branch
})
}
export const receiveJobGraph = (tenant, jobGraphKey, jobGraph) => {
return {
type: JOB_GRAPH_FETCH_SUCCESS,
tenant: tenant,
jobGraphKey: jobGraphKey,
jobGraph: jobGraph,
receivedAt: Date.now(),
}
}
const failedJobGraph = error => ({
type: JOB_GRAPH_FETCH_FAIL,
error
})
const fetchJobGraph = (tenant, project, pipeline, branch) => dispatch => {
dispatch(requestJobGraph())
const jobGraphKey = makeJobGraphKey(project, pipeline, branch)
return API.fetchJobGraph(tenant.apiPrefix,
project,
pipeline,
branch)
.then(response => dispatch(receiveJobGraph(
tenant.name, jobGraphKey, response.data)))
.catch(error => dispatch(failedJobGraph(error)))
}
const shouldFetchJobGraph = (tenant, project, pipeline, branch, state) => {
const jobGraphKey = makeJobGraphKey(project, pipeline, branch)
const tenantJobGraphs = state.jobgraph.jobGraphs[tenant.name]
if (tenantJobGraphs) {
const jobGraph = tenantJobGraphs[jobGraphKey]
if (!jobGraph) {
return true
}
if (jobGraph.isFetching) {
return false
}
return false
}
return true
}
export const fetchJobGraphIfNeeded = (tenant, project, pipeline, branch,
force) => (
dispatch, getState) => {
if (force || shouldFetchJobGraph(tenant, project, pipeline, branch,
getState())) {
return dispatch(fetchJobGraph(tenant, project, pipeline, branch))
}
return Promise.resolve()
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/jobgraph.js | jobgraph.js |
import * as API from '../api'
export const AUTH_CONFIG_REQUEST = 'AUTH_CONFIG_REQUEST'
export const AUTH_CONFIG_SUCCESS = 'AUTH_CONFIG_SUCCESS'
export const AUTH_CONFIG_FAIL = 'AUTH_CONFIG_FAIL'
export const USER_ACL_REQUEST = 'USER_ACL_REQUEST'
export const USER_ACL_SUCCESS = 'USER_ACL_SUCCESS'
export const USER_ACL_FAIL = 'USER_ACL_FAIL'
export const AUTH_START = 'AUTH_START'
const authConfigRequest = () => ({
type: AUTH_CONFIG_REQUEST
})
function createAuthParamsFromJson(json) {
let auth_info = json.info.capabilities.auth
let auth_params = {
authority: '',
client_id: '',
scope: '',
loadUserInfo: true,
}
if (!auth_info) {
console.log('No auth config')
return auth_params
}
const realm = auth_info.default_realm
const client_config = auth_info.realms[realm]
if (client_config && client_config.driver === 'OpenIDConnect') {
auth_params.client_id = client_config.client_id
auth_params.scope = client_config.scope
auth_params.authority = client_config.authority
auth_params.loadUserInfo = client_config.load_user_info
return auth_params
} else {
console.log('No OpenIDConnect provider found')
return auth_params
}
}
const authConfigSuccess = (json, auth_params) => ({
type: AUTH_CONFIG_SUCCESS,
info: json.info.capabilities.auth,
auth_params: auth_params,
})
const authConfigFail = error => ({
type: AUTH_CONFIG_FAIL,
error
})
export const configureAuthFromTenant = (tenantName) => (dispatch) => {
dispatch(authConfigRequest())
return API.fetchTenantInfo('tenant/' + tenantName + '/')
.then(response => {
dispatch(authConfigSuccess(
response.data,
createAuthParamsFromJson(response.data)))
})
.catch(error => {
dispatch(authConfigFail(error))
})
}
export const configureAuthFromInfo = (info) => (dispatch) => {
try {
dispatch(authConfigSuccess(
{info: info},
createAuthParamsFromJson({info: info})))
} catch(error) {
dispatch(authConfigFail(error))
}
}
| zuul | /zuul-9.1.0.tar.gz/zuul-9.1.0/web/src/actions/auth.js | auth.js |
========
zuul_get
========
The ``zuul_get`` script retrieves status updates from OpenStack's Zuul
deployment and returns the status of a particular CI job. The script now
supports versions 2 and 3 of Zuul.
Installation
------------
The easiest method is to use pip:
.. code-block:: console
pip install zuul_get
Running the script
------------------
Provide a six-digit gerrit review number as an argument to retrieve the CI job
URLs from Zuul's JSON status file. Here's an example:
.. code-block:: console
$ zuul_get 510588
+---------------------------------------------------+---------+----------------------+
| Zuulv2 Jobs for 510588 | | |
+---------------------------------------------------+---------+----------------------+
| gate-ansible-hardening-docs-ubuntu-xenial | Queued | |
| gate-ansible-hardening-linters-ubuntu-xenial | Queued | |
| gate-ansible-hardening-ansible-func-centos-7 | Success | https://is.gd/ifQc2I |
| gate-ansible-hardening-ansible-func-ubuntu-xenial | Queued | |
| gate-ansible-hardening-ansible-func-opensuse-423 | Success | https://is.gd/RiiZFW |
| gate-ansible-hardening-ansible-func-debian-jessie | Success | https://is.gd/gQ0izk |
| gate-ansible-hardening-ansible-func-fedora-26 | Success | https://is.gd/w9zTCa |
+---------------------------------------------------+---------+----------------------+
+-----------------------------------------------------+--------+--+
| Zuulv3 Jobs for 510588 | | |
+-----------------------------------------------------+--------+--+
| build-openstack-sphinx-docs | Queued | |
| openstack-tox-linters | Queued | |
| legacy-ansible-func-centos-7 | Queued | |
| legacy-ansible-func | Queued | |
| legacy-ansible-func-opensuse-423 | Queued | |
| legacy-ansible-hardening-ansible-func-debian-jessie | Queued | |
| legacy-ansible-hardening-ansible-func-fedora-26 | Queued | |
+-----------------------------------------------------+--------+--+
Currently running jobs will have a link displayed which allows you to view
the progress of a particular job. Zuulv2 uses ``telnet://`` links while
Zuulv3 has a continuously updating page in your browser.
Completed jobs will have a link to the job results.
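
If you prefer to pull the same information yourself, the Zuul status endpoint
can also be queried directly with a few lines of Python. The snippet below is
only a rough sketch and is not part of ``zuul_get`` itself: the status URL and
the exact JSON layout are assumptions that vary between Zuul deployments, so
adjust both to match the instance you are querying.

.. code-block:: python

    import requests

    # Assumed values: replace with the status endpoint of your Zuul
    # deployment and the review number you are interested in.
    STATUS_URL = 'https://zuul.example.org/api/status'
    REVIEW = '510588'

    status = requests.get(STATUS_URL).json()
    for pipeline in status.get('pipelines', []):
        for queue in pipeline.get('change_queues', []):
            for head in queue.get('heads', []):
                for change in head:
                    change_id = str(change.get('id') or '')
                    if change_id.startswith(REVIEW + ','):
                        for job in change.get('jobs', []):
                            print(pipeline['name'], job['name'],
                                  job.get('result') or 'In progress',
                                  job.get('report_url') or '')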
Contributing
------------
Pull requests and GitHub issues are always welcome!
| zuul_get | /zuul_get-1.2.tar.gz/zuul_get-1.2/README.rst | README.rst |
====
zuup
====
.. image:: https://travis-ci.org/sileht/zuup.png?branch=master
:target: https://travis-ci.org/sileht/zuup
.. image:: https://img.shields.io/pypi/v/zuup.svg
:target: https://pypi.python.org/pypi/zuup/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/zuup.svg
:target: https://pypi.python.org/pypi/zuup/
:alt: Downloads
Command-line tool to consult OpenStack Zuul status
* Free software: Apache license
* Documentation: http://zuup.readthedocs.org
* Source: http://github.com/sileht/zuup
* Bugs: http://github.com/sileht/zuup/issues
Installation
------------
At the command line::
$ pip install zuup
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv zuup
$ pip install zuup
Usage
-----
To use zuup::
$ zuup --help
usage: zuup [-h] [-D] [-d] [-w DELAY] [-e EXPIRATION] [-u USERNAME]
[-p PROJECTS] [-c CHANGES] [-l] [-r] [-s] [-j JOB]
optional arguments:
-h, --help show this help message and exit
-D Daemonize and exit if no more reviews
-d Daemonize
-w DELAY refresh delay
-e EXPIRATION review expiration in deamon mode
-u USERNAME Username
-p PROJECTS Projects
-c CHANGES changes
-l local changes
-r current repo changes
-s short output
-j JOB show log of a job of a change
Example
-------
Print jobs of projects::
$ zuup -p openstack/ceilometer -p openstack/gnocchi
[openstack/gnocchi] check[0]: https://review.openstack.org/235161
TEST 01:22:14/00:00:00
- SUCCESS --:--:-- gate-gnocchi-pep8 http://logs.openstack.org/61/235161/4/check/gate-gnocchi-pep8/ac6632a
- SUCCESS --:--:-- gate-gnocchi-docs http://logs.openstack.org/61/235161/4/check/gate-gnocchi-docs/ff085e7
- SUCCESS --:--:-- gate-gnocchi-python27 http://logs.openstack.org/61/235161/4/check/gate-gnocchi-python27/9e3fd5e
- SUCCESS --:--:-- gate-gnocchi-python34 http://logs.openstack.org/61/235161/4/check/gate-gnocchi-python34/afcef87
- SUCCESS --:--:-- gate-gnocchi-bashate http://logs.openstack.org/61/235161/4/check/gate-gnocchi-bashate/f7b10d4
- SUCCESS --:--:-- gate-gnocchi-dsvm-functional-file-mysql http://logs.openstack.org/61/235161/4/check/gate-gnocchi-dsvm-functional-file-mysql/d016760
- ======= 00:00:00 gate-gnocchi-dsvm-functional-swift-postgresql https://jenkins06.openstack.org/job/gate-gnocchi-dsvm-functional-swift-postgresql/263/
- SUCCESS --:--:-- gate-gnocchi-dsvm-functional-ceph-mysql http://logs.openstack.org/61/235161/4/check/gate-gnocchi-dsvm-functional-ceph-mysql/2b54187
- SUCCESS --:--:-- gate-ceilometer-dsvm-integration http://logs.openstack.org/61/235161/4/check/gate-ceilometer-dsvm-integration/a937fd5
[openstack/ceilometer] check[0]: https://review.openstack.org/235202
Merge tag '5.0.0' 01:02:46/00:09:20
- SUCCESS --:--:-- gate-ceilometer-pep8 http://logs.openstack.org/02/235202/1/check/gate-ceilometer-pep8/bac67ce
- SUCCESS --:--:-- gate-ceilometer-docs http://logs.openstack.org/02/235202/1/check/gate-ceilometer-docs/1d1eb96
- FAILURE --:--:-- gate-ceilometer-python27 http://logs.openstack.org/02/235202/1/check/gate-ceilometer-python27/d993423
- FAILURE --:--:-- gate-ceilometer-python34 http://logs.openstack.org/02/235202/1/check/gate-ceilometer-python34/5ee29b5
- SUCCESS --:--:-- gate-tempest-dsvm-ceilometer-mongodb-full http://logs.openstack.org/02/235202/1/check/gate-tempest-dsvm-ceilometer-mongodb-full/a55e9e6
- ======. 00:09:20 gate-tempest-dsvm-ceilometer-mysql-neutron-full https://jenkins06.openstack.org/job/gate-tempest-dsvm-ceilometer-mysql-neutron-full/114/
- ======= 00:00:00 gate-tempest-dsvm-ceilometer-mysql-full https://jenkins03.openstack.org/job/gate-tempest-dsvm-ceilometer-mysql-full/36/
- SUCCESS --:--:-- gate-tempest-dsvm-ceilometer-postgresql-full http://logs.openstack.org/02/235202/1/check/gate-tempest-dsvm-ceilometer-postgresql-full/a1eee16
- ======= 00:00:00 gate-ceilometer-dsvm-functional-mongodb https://jenkins03.openstack.org/job/gate-ceilometer-dsvm-functional-mongodb/275/
- ======= 00:00:00 gate-ceilometer-dsvm-functional-postgresql https://jenkins05.openstack.org/job/gate-ceilometer-dsvm-functional-postgresql/146/
- SUCCESS --:--:-- gate-grenade-dsvm-ceilometer http://logs.openstack.org/02/235202/1/check/gate-grenade-dsvm-ceilometer/383ecfb
- SUCCESS --:--:-- gate-ceilometer-dsvm-integration http://logs.openstack.org/02/235202/1/check/gate-ceilometer-dsvm-integration/6758820
...
Print jobs of a user::
$ zuup -u sileht
$ zuup -u sileht -d # Run it in loop
Print jobs of a change-id::
$ zuup -c 235161
or
$ zuup -c https://review.openstack.org/235207
Print jobs of change-ids on your local git branch::
$ zuup -l
Print a summary of jobs::
$ zuup -c https://review.openstack.org/235207 -s
[openstack/ceilometer] check[0]: https://review.openstack.org/235207 Switch to post-versioning 00:59:40/00:04:08 SSFSSSSPPSS
- FAILURE --:--:-- gate-ceilometer-python27 http://logs.openstack.org/07/235207/1/check/gate-ceilometer-python27/546a067
Print running and failed jobs only ::
$ zuup -c https://review.openstack.org/235207 -R
[openstack/ceilometer] check[0]: https://review.openstack.org/235207
Switch to post-versioning 01:00:18/00:03:30
- FAILURE --:--:-- gate-ceilometer-python27 http://logs.openstack.org/07/235207/1/check/gate-ceilometer-python27/546a067
- ======= 00:00:00 gate-ceilometer-dsvm-functional-mongodb https://jenkins03.openstack.org/job/gate-ceilometer-dsvm-functional-mongodb/276/
- ======. 00:03:30 gate-ceilometer-dsvm-functional-postgresql https://jenkins04.openstack.org/job/gate-ceilometer-dsvm-functional-postgresql/140/
| zuup | /zuup-1.0.7.tar.gz/zuup-1.0.7/README.rst | README.rst |
========
Usage
========
To use zuup::
   zuup --help
   usage: zuup [-h] [-D] [-d] [-w DELAY] [-e EXPIRATION] [-u USERNAME]
[-p PROJECTS] [-c CHANGES] [-l] [-r] [-s] [-j JOB]
optional arguments:
-h, --help show this help message and exit
-D Daemonize and exit if no more reviews
-d Daemonize
-w DELAY refresh delay
-e EXPIRATION review expiration in deamon mode
-u USERNAME Username
-p PROJECTS Projects
-c CHANGES changes
-l local changes
-r current repo changes
-s short output
-j JOB show log of a job of a change
| zuup | /zuup-1.0.7.tar.gz/zuup-1.0.7/doc/source/usage.rst | usage.rst |
import requests
import json
class Deadline:
def __init__(self, date, course, description, opportunity, meta):
self.date = date
self.course = course
self.description = description
self.opportunity = opportunity
self.meta = meta
class Lesson:
def __init__(self, start, end, course, location, teacher, meta):
self.start = start
self.end = end
self.course = course
self.location = location
self.teacher = teacher
self.meta = meta
class Meta:
def __init__(self, last_update, user):
self.last_update = last_update
self.user = user
class APIConnection:
def __init__(self, key):
self.base_url = 'https://app.zuydbot.cc/api/v2'
self.key = key
self.deadlines = None
self.lessons = None
self.test_connection()
def test_connection(self):
try:
r = requests.get(self.base_url, timeout=15)
except requests.exceptions.ReadTimeout:
            raise TimeoutError('Connection timed out.')
        if r.status_code != 200:
raise ConnectionError('Cannot reach API (HTTP {}).'.format(r.status_code))
def send_request(self, module):
try:
r = requests.get('{}/{}'.format(self.base_url, module), headers={'key': self.key}, timeout=15)
except requests.exceptions.ReadTimeout:
            raise TimeoutError('Connection timed out.')
        if r.status_code != 200:
raise ConnectionError('Cannot reach API (HTTP {}).'.format(r.status_code))
response = json.loads(r.content.decode('utf-8'))
        # Return the payload for the requested module ('deadlines' or 'lessons')
        # together with the response metadata.
        return response[module], response['meta']
def get_deadlines(self):
deadlines, meta = self.send_request('deadlines')
deadline_list = []
metadata = Meta(last_update=meta['last-update'], user=meta['user'])
for deadline in deadlines:
deadline_list.append(Deadline(date=deadline['date'], course=deadline['course'], meta=metadata,
description=deadline['description'], opportunity=deadline['opportunity']))
self.deadlines = deadline_list
def get_lessons(self):
lessons, meta = self.send_request('lessons')
lesson_list = []
metadata = Meta(last_update=meta['last-update'], user=meta['user'])
for lesson in lessons:
lesson_list.append(Lesson(start=lesson['start-time'], end=lesson['end-time'], course=lesson['course'],
location=lesson['location'], teacher=lesson['teacher'], meta=metadata))
        self.lessons = lesson_list
| zuydbot-api | /zuydbot_api-0.1-py3-none-any.whl/zuydbot_api/APIConnection.py | APIConnection.py |
# ZuzuVibhu
The module provides the zuzu package for Vibhu Agarwal.\
Zuzu is a unique language defined by Vibhu Agarwal himself.\
The language is in no way related to any other publicly understood standard language that is not specifically defined by Vibhu Agarwal.
Happy Go Zuzus!
## Installing the package
```
pip install zuzuvibhu
```
## Using the package
```
>>> import zuzuvibhu
>>> zuzuvibhu.get_zuzus()
```
Go to http://localhost:5000/ to get the response in HTML\
or you may visit http://localhost:5000/api to get the text in JSON format.
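
If you want to consume the response programmatically rather than in a browser,
a small script along the lines of the sketch below should work. It assumes the
server started by `get_zuzus()` is already running locally and that the `/api`
route returns JSON, as described above; the exact shape of that payload is not
documented here, so the snippet simply prints whatever comes back.

```python
import requests

# Assumes zuzuvibhu.get_zuzus() is already serving on localhost:5000,
# e.g. started in another terminal or process.
response = requests.get("http://localhost:5000/api")
response.raise_for_status()

data = response.json()  # payload structure depends on zuzuvibhu itself
print(data)
```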
| zuzuvibhu | /zuzuvibhu-1.0.4.tar.gz/zuzuvibhu-1.0.4/README.md | README.md |
# import module
from __future__ import print_function
from math import *
import argparse
import mechanize
import cookielib
import sys
import bs4
import requests
import os
import glob
import random
import time
reload(sys)
sys.setdefaultencoding('utf-8')
__VERSION__ = '0.1.3 (in development)'
__BOTNAME__ = 'zvBot' # default botname
__LICENSE__ = '''
MIT License
Copyright (c) 2018 Noval Wahyu Ramadhan <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
# lambda
sprt = lambda: logger('-'*arg.long_separator, sprt=True)
# super user
ADMIN = []
# blacklist user
BLACKLIST = []
# Command options
SINGLE_COMMAND = ['@quit', '@help']
NOT_SINGLE_COMMAND = [ '@calc', '@igstalk', '@img',
'@tr', '@igd', '@wiki',
'@sgb_quote', '@tanpakertas_quote', '@rasa_quote',
'@img_quote', '@kbbi',
'@lyrics' ]
COMMANDS = NOT_SINGLE_COMMAND + SINGLE_COMMAND
BLACKLIST_COMMAND = []
# helper
HELP_TEXT = [ 'commands:\n',
' - @help : show this help message.',
' - @kbbi <word> : search entries for a word/phrase in KBBI Online.',
' - @lyrics <song title> : look for the lyrics of a song',
' - @img <query> : look for images that are relevant to the query.',
' - @calc <value> : do mathematical calculations.',
' - @igd <url> : download Instagram photos from url.',
' - @sgb_quote <quote> : SGB quote maker.',
' - @rasa_quote <quote> : rasa untukmu quote maker.',
' - @tanpakertas_quote <quote> : tanpa kertas quote maker.',
' - @img_quote <quote> : IMG quote maker.',
' - @wiki <word> : search for word definitions in wikipedia.',
' - @tr <text> : translate any language into English.',
' - @igstalk <username> : View user profiles on Instagram.',
'<br>Example:\n',
' - @kbbi makan',
' - @img random',
' - @lyrics eminem venom',
' - @calc 1+2+3+4+5',
' - @sgb_quote write your quote here!',
' - @igd https://instagram.com/p/<code>',
' - @tr halo dunia',
' - @wiki wibu',
' - @wiki kpop' ]
def command(mess, name = ''):
me = mess[0]
if me == '@lyrics':
query = '{}'.format(' '.join(mess[1:]))
hdr = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1',
'Accept-Language': 'en-US,en;q=0.8',
'Connection': 'keep-alive'}
r = requests.get('https://search.azlyrics.com/search.php',
params = {'q': query,
'w': 'songs'},
headers = hdr)
url = bs4.BeautifulSoup(r.text, 'html.parser').find('td', {'class': 'text-left visitedlyr'})
if not url:
r = requests.get('https://www.lyricsmode.com/search.php',
params = {'search': query},
headers = hdr)
soup = bs4.BeautifulSoup(r.text, 'html.parser')
url = soup.find('a', {'class': 'lm-link lm-link--primary lm-link--highlight'})
if not url:
return 'lyrics can\'t be found'
r = requests.get('https://www.lyricsmode.com{}'.format(url.attrs['href']))
soup = bs4.BeautifulSoup(r.text, 'html.parser')
return '{0}\n\n{1}'.format(
' - '.join([i.text[1:] for i in soup.find('ul', {'class': 'breadcrumb'}).findAll('li')[-2:]])[:-7],
soup.find('p', {'class': 'ui-annotatable js-lyric-text-container'}).text[29:])
r = requests.get(url.a.attrs['href'])
soup = bs4.BeautifulSoup(r.text, 'html.parser')
return '{0}\n{1}'.format(
soup.title.text[:-22],
soup.findAll('div')[21].text)
elif me == '@kbbi':
url = 'https://kbbi.kemdikbud.go.id/entri/{}'.format(' '.join(mess[1:]))
raw = requests.get(url).text
if "Entri tidak ditemukan." in raw:
return 'entry not found: {}'.format(' '.join(mess[1:]))
arti = []
arti_contoh = []
isolasi = raw[raw.find('<h2>'):raw.find('<h4>')]
soup = bs4.BeautifulSoup(isolasi, 'html.parser')
entri = soup.find_all('ol') + soup.find_all('ul')
for tiap_entri in entri:
for tiap_arti in tiap_entri.find_all('li'):
kelas = tiap_arti.find(color="red").get_text().strip()
arti_lengkap = tiap_arti.get_text().strip()[len(kelas):]
if ':' in arti_lengkap:
arti_saja = arti_lengkap[:arti_lengkap.find(':')]
else:
arti_saja = arti_lengkap
if kelas:
hasil = '({0}) {1}'
else:
hasil = '{1}'
arti_contoh.append(hasil.format(kelas, arti_lengkap))
arti.append(hasil.format(kelas, arti_saja))
return '\n'.join(arti).replace('(n)', '( n )')
elif me == '@tr':
params = {
'hl':'id',
'sl':'auto',
'tl':'en',
'ie':'UTF-8',
'prev':'_m',
'q':' '.join(mess[1:])
}
url = 'https://translate.google.com/m'
r = requests.get(url, params=params)
soup = bs4.BeautifulSoup(r.text, 'html.parser')
return soup.find(class_='t0').text
elif me == '@wiki':
m = False
url = 'https://id.m.wikipedia.org/wiki/' + '_'.join(mess[1:])
r = requests.get(url)
soup = bs4.BeautifulSoup(r.text, 'html.parser')
res = '$'
temp = ''
if soup.find('p'):
if 'dapat mengacu kepada beberapa hal berikut:' in soup.find('p').text or 'bisa merujuk kepada' in soup.find('p').text:
temp += soup.find('p').text + '\n'
for i in soup.find_all('li'):
if 'privasi' in i.text.lower():
m = False
if m:
temp += '- ' + i.text + '\n'
if 'baca dalam' in i.text.lower():
m = True
else:
paragraph = 6 if arg.paragraph >= 6 else arg.paragraph
for i in soup.find_all('p')[:paragraph]:
if 'akurasi' in i.text.lower():
pass
else:
temp += i.text + '\n\n'
res += temp
res += '<br>read more: ' + r.url
if '$<br>' in res:
res = ' sorry, I can\'t find the definition of "%s"' % ' '.join(mess[1:])
return res[1:]
elif me == '@help':
res = 'Hello %s, ' % (' '.join([i.capitalize() for i in name.split()]))
res += 'you are admin now\n\n' if name in ADMIN else 'have a nice day\n\n'
for x in HELP_TEXT:
c = x.split()
if len(c) > 2:
if x.split()[1] in COMMANDS:
if name in ADMIN:
res += x + '\n'
elif x.split()[1] not in BLACKLIST_COMMAND:
res += x + '\n'
else:
res += x + '\n'
return res
# --------------- unknown ----------------- #
def updt(progress, total):
indi = '\x1b[32m#\x1b[0m' if arg.color else '#'
barLength, status = 25, '%s/%s' % (convertSize(progress), convertSize(total))
progress = float(progress) / float(total)
block = int(round(barLength * progress))
text = "\r{:<9}[{}] {} [{:.0f}%]".format(
'PROGRESS',
indi * block + "-" * (barLength - block),
status,
round(progress * 100, 0)
)
sys.stdout.write(text)
sys.stdout.flush()
def convertSize(n, format='%(value).1f %(symbol)s', symbols='customary'):
SYMBOLS = {
'customary': ('B', 'K', 'Mb', 'G', 'T', 'P', 'E', 'Z', 'Y'),
'customary_ext': ('byte', 'kilo', 'mega', 'giga', 'tera', 'peta', 'exa',
'zetta', 'iotta'),
'iec': ('Bi', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi'),
'iec_ext': ('byte', 'kibi', 'mebi', 'gibi', 'tebi', 'pebi', 'exbi',
'zebi', 'yobi'),
}
n = int(n)
if n < 0:
raise ValueError("n < 0")
symbols = SYMBOLS[symbols]
prefix = {}
for i, s in enumerate(symbols[1:]):
prefix[s] = 1 << (i + 1) * 10
for symbol in reversed(symbols[1:]):
if n >= prefix[symbol]:
value = float(n) / prefix[symbol]
return format % locals()
return format % dict(symbol=symbols[0], value=n)
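# A couple of illustrative values, assuming the default 'customary' symbol set:
#
#   convertSize(0)      # -> '0.0 B'
#   convertSize(10240)  # -> '10.0 K'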
def parse_url(url):
return url[8 if url.startswith('https://') else 7:].split('/')[0]
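# Quick sketch with a made-up URL:
#
#   parse_url('https://example.com/a/b')  # -> 'example.com'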
def get_file(url, name = 'zvBot.jpeg'):
logger('downloading file from %s' % parse_url(url))
r = requests.get(url, stream=True)
file_size = len(r.content)
downloaded = 0
with open(name, 'wb') as f:
for i in r.iter_content(1024):
if buffer:
updt(downloaded, file_size)
f.write(i)
f.flush()
downloaded += len(i)
print ('') # new line
return True
# -------------- starting bot ---------------- #
class start_bot:
def __init__(self, username, password):
self.url = 'https://mbasic.facebook.com'
self.username = username
self.password = password
self.image_numbers = self.get_last_images()
self.burl = None
# user config
self.config = {
'blacklist':{},
'last':{},
'limit':{},
'botname':{}
}
self.br = self.setup()
self.login()
def run_a_bot(self):
self.br.open(self.url + '/messages/read')
name = False
for i in self.br.links():
if name:
self.name = i.text.lower().split(' (')[0]
# added new user
if self.name not in self.config['last'].keys():
self.config['blacklist'][self.name] = False
self.config['limit'][self.name] = 0
self.config['botname'][self.name] = __BOTNAME__
self.config['last'][self.name] = 'unknow'
if not self.config['blacklist'][self.name]:
logger('choose chat from %s' % i.text)
if self.name not in BLACKLIST or self.config['blacklist'][self.name]:
self.burl = self.url + i.url
break
else:
logger('blacklist user detected, skipped\n%s' % '-'*arg.long_separator, 'WARNING')
self.config['blacklist'][self.name] = True
break
if 'Cari pesan' == i.text:
name = True
if self.burl:
for _ in range(arg.refresh):
shell = True
allow = True
not_igd_and_set = True
text = self.br.open(self.burl).read()
for x in self.br.links():
if arg.group_chat:
if x.text.lower()[-5:] == 'orang':
logger('group chat detected, skipped', 'WARNING')
sprt()
self.config['blacklist'][self.name] = True
break
else:
break
soup = bs4.BeautifulSoup(text, 'html.parser')
m = soup.find_all('span')
com = ['']
for num, i in enumerate(m):
if 'dilihat' in i.text.lower():
                        if m[num-3].text[:4].lower() == '@igd' and 'instagram' in m[num-3].text and len(m[num-3].text.split('/')) > 4:
logger('receive command: @igd')
not_igd_and_set = False
self.config['last'][self.name] = '@igd'
logger('make a requests')
ig_code = m[num-3].text.split('/')[4][1:]
                            logger('code: %s' % ig_code)
self.get_file_from_instagram(ig_code)
break
not_com = m[num-3].text.lower()
com = not_com.split()
break
if self.config['last'][self.name] and _ == 0 and not self.config['blacklist'][self.name]:
logger('last command: %s' % self.config['last'][self.name])
if len(com) == 1 and com[0] in NOT_SINGLE_COMMAND:
shell = False
try:
if self.config['limit'][self.name] == arg.limit and self.name not in ADMIN and not self.config['blacklist'][self.name] and com[0] in COMMANDS:
logger('user has exceeded the limit')
self.send('You have reached the usage limit')
self.config['blacklist'][self.name] = True
if com[0] in COMMANDS and com[0] != '@help':
self.config['limit'][self.name] += 1
if com[0] in BLACKLIST_COMMAND:
allow = False
if not self.config['blacklist'][self.name] and self.name not in ADMIN:
logger('receive command: %s' % com[0])
self.send('sorry, this command has been disabled by admin')
if self.name in ADMIN:
allow = True
# execute
if com[0] in COMMANDS and shell and allow:
if com[0] != '@igd' and not self.config['blacklist'][self.name]:
self.bcom = com[0]
self.config['last'][self.name] = com[0]
c_m = com[0]
logger('receive command: %s' % c_m)
if not_igd_and_set and com[0] != '@quit':
if com[0] in NOT_SINGLE_COMMAND or '_quote' in com[0]:
logger('value:%s' % not_com.replace(com[0],''))
logger('make a requests')
if com[0] == '@img':
self.send_image(get_file('https://source.unsplash.com/640x640/?' + not_com.replace(com[0],'')[1:]))
sprt()
elif com[0] == '@calc':
try:
i = ''
for x in not_com:
if x.isdigit() or x in ['/', '*', '+', '-', '%']:
i += x
res = eval(i)
self.send('%s\n\n= %s' % (not_com[6:],res))
except (NameError,SyntaxError):
self.send('invalid value: %s' % not_com[6:])
elif '_quote' in com[0]:
self.send_image(self.quote(' '.join(com[1:])))
sprt()
elif com[0] == '@igstalk':
self.ig_stalk(com[1])
else:
self.send(command(com, self.name))
elif com[0] == '@quit':
if self.name in ADMIN:
self.send('bot stopped, thank you for chatting with me ^^')
exit('stopped bot\n' + '-'*arg.long_separator)
else:
self.send('You are not an admin, access is denied')
except IndexError:
pass
# ------------- other tool ------------ #
def ig_stalk(self,username):
text = requests.get('https://insta-stalker.com/profile/%s' % username).text
soup = bs4.BeautifulSoup(text, 'html.parser')
try:
data = {'profile_url':soup.find(class_='profile-img').attrs['src'],
'bio':'',
'data':{'following':0, 'followers':0, 'posts':0}}
for num,i in enumerate(soup.find_all('p')[:-2]):
if 'http' not in i.text and num == 1:
break
data['bio'] += i.text + '\n\n'
if 'private' not in data['bio']:
for num,i in enumerate(soup.find_all('script')[8:][:9]):
if 'var' in i.text:
break
data['data'][data['data'].keys()[num]] = i.text[:-3].split('(')[-1]
self.send_image(get_file(data['profile_url']))
self.send('%s\nFollowing: %s\nFollowers: %s\nPosts: %s' % (data['bio'][:-1], data['data']['following'], data['data']['followers'], data['data']['posts']))
except AttributeError:
self.send('invalid username: %s' % username)
def quote(self, quote = 'hello world!'):
link = 'http://shiroyasha.tech/?tools='
if self.bcom == '@sgb_quote':
link = 'https://wirayudaaditya.site/quotes/?module='
xs = 'sgbquote'
elif self.bcom == '@tanpakertas_quote':
xs = 'tanpakertas_'
elif self.bcom == '@rasa_quote':
xs = 'rasauntukmu'
elif self.bcom == '@img_quote':
link = 'https://wirayudaaditya.site/quotes/?module='
xs = 'autoquotemaker'
self.br.open(link + xs)
self.br.select_form(nr=0)
self.br.form['quote'] = quote
if self.bcom in ('@sgb_quote','@tanpakertas_quote','@img_quote'):
self.br.form['copyright'] = self.name
res = self.br.submit().read()
soup = bs4.BeautifulSoup(res, 'html.parser')
if self.bcom in ('@img_quote', '@sgb_quote'):
open('zvBot.jpeg', 'wb').write(soup.find_all('a')[-1].img['src'].split(',')[1].decode('base64'))
else:
open('zvBot.jpeg', 'wb').write(soup.find_all('center')[1].img['src'].split(',')[1].decode('base64'))
return True
# --------------- other functions ------- #
def upload_file(self,name):
logger('uploading file')
r = requests.post('https://www.datafilehost.com/upload.php',
files={'upfile':open(name,'rb')} )
return str(bs4.BeautifulSoup(r.text,'html.parser').find('tr').input['value'])
def get_last_commands(self):
_ = False
self.br.open(self.url + '/messages/read')
for i in self.br.links():
if 'Lihat Pesan Sebelumnya' == i.text:
break
if _:
name = i.text.lower().split(' (')[0]
self.config['limit'][name] = 0
self.config['blacklist'][name] = False
self.config['botname'][name] = __BOTNAME__
self.config['last'][name] = 'unknow'
if arg.admin:
ADMIN.append(name)
if 'Cari pesan' == i.text:
_ = True
def get_last_images(self):
x = 1
for i in glob.glob(arg.dir_cache+'/image_*.jpeg'):
num = int(i.split('/')[-1].split('_')[1][:-5]) + 1
if num >= x:
x = num
return x
def get_file_from_instagram(self, code):
try:
r = requests.get('https://www.instagram.com/p/'+code, params={'__a': 1}).json()
media = r['graphql']['shortcode_media']
if media['is_video']:
self.send('sorry, i can\'t download other than images')
else:
if media.get('edge_sidecar_to_children', None):
self.send('downloading multiple images of this post')
for child_node in media['edge_sidecar_to_children']['edges']:
self.send_image(get_file(child_node['node']['display_url']), 'zvBot.jpeg')
else:
self.send('downloading single image')
self.send_image(get_file(media['display_url']), 'zvBot.jpeg')
sprt()
except (KeyError, ValueError):
self.send('invalid code: %s' % code)
# ----------- send command ------------- #
def send(self, temp):
n = True
if arg.botname and temp.split()[0] != 'download' or 'wait a minute' not in temp:
temp += ('\n\n- via {0} | limit {1}/{2}'.format(
self.config['botname'][self.name],
self.config['limit'][self.name], arg.limit))
for message in temp.split('<br>'):
logger('sending message: %s' % message)
self.br.select_form(nr=1)
self.br.form['body'] = message.capitalize()
self.br.submit()
logger('result: success')
if 'download' in message.lower() or 'wait a minute' in message.lower():
n = False
if 'example' in message.lower():
n = True
if n:
sprt()
def send_image(self, image, x = 'zvBot.jpeg'):
if '_quote' in self.bcom:
self.br.open(self.burl)
logger('send pictures to the recipient')
if arg.cache:
logger('picture name: image_%s.jpeg' % self.image_numbers)
self.br.select_form(nr=1)
self.br.open(self.br.click(type = 'submit', nr = 2))
self.br.select_form(nr=0)
self.br.form.add_file(open(x), 'text/plain', x, nr=0)
self.br.submit()
logger('result: success')
if image:
if arg.cache:
os.rename(str(x), 'image_%s.jpeg' % self.image_numbers)
if arg.up_file:
self.send('hd image: ' + self.upload_file('image_%s.jpeg' % self.image_numbers))
os.system('mv image_%s.jpeg %s' % (self.image_numbers, arg.dir_cache))
else:
os.remove(x)
self.image_numbers +=1
# ----------- Useless function --------- #
def search_(self):
data = {}
logger('search for the latest chat history\n', 'DEBUG')
self.br.open(self.url + '/messages/read')
xD = False
num = 1
for i in self.br.links():
if 'Lihat Pesan Sebelumnya' == i.text:
break
if xD:
print ('%s) %s' % (num, i.text.lower().split(' (')[0]))
data[num] = {'url': i.url, 'name': i.text.lower().split(' (')[0]}
num += 1
if 'Cari pesan' == i.text:
xD = True
return data
def select_(self):
data = self.search_()
final_ = []
n = []
user_ = raw_input('\nenter numbers [1-%s] : ' % len(data))
for x in user_.split(','):
if int(x) in range(len(data) + 1) and x not in n:
final_.append(data[int(x)])
n.append(x)
sprt()
logger('total selected : %s' % len(final_), 'DEBUG')
sprt()
return final_
def delete_(self):
res = self.select_()
for i in res:
logger('delete messages from %s' % i['name'])
self.br.open(self.url + i['url'])
self.br.select_form(nr=2)
self.br.open(self.br.click(type = 'submit', nr = 1))
self.br.open(self.url + self.br.find_link('Hapus').url)
logger('finished all')
# ----------- Browser Options ---------- #
def setup(self):
self.__ = False
xd = os.uname()
logger('build a virtual server (%s %s)' % (xd[0], xd[-1]), 'DEBUG')
br = mechanize.Browser()
self.cookie = cookielib.LWPCookieJar()
if arg.cookie and not arg.own_bot:
logger('use external cookies', 'DEBUG')
self.cookie.load(arg.cookie)
self.__ = True
br.set_handle_robots(False)
br.set_handle_equiv(True)
br.set_handle_referer(True)
br.set_handle_redirect(True)
br.set_cookiejar(self.cookie)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time = 5)
br.addheaders = [('user-agent', arg.ua)]
br.open(self.url)
return br
def login(self):
logger('make server configuration', 'DEBUG')
if not self.__ and arg.own_bot:
self.br.select_form(nr=0)
self.br.form['email'] = self.username
self.br.form['pass'] = self.password
self.br.submit()
self.br.select_form(nr = 0)
self.br.submit()
if 'login' not in self.br.geturl():
for i in self.br.links():
if 'Keluar' in i.text:
name = i.text.replace('Keluar (', '')[:-1]
logger('server is running (%s)' % name, 'DEBUG')
logger('Press Ctrl-C to quit.', 'DEBUG')
sprt()
if arg.info:
logger('settings\n')
for num,i in enumerate(arg.__dict__):
print ('arg.{0:<20} {1}'.format(i+':', arg.__dict__[i]))
print ('') # new line
sprt()
if arg.save_cookie and not arg.cookie:
res = name.replace(' ', '_') + '.cj'
self.cookie.save(res)
logger('save the cookie in %s' % res, 'DEBUG')
sprt()
if not os.path.isdir(arg.dir_cache):
logger('create new cache directory', 'DEBUG')
os.mkdir(arg.dir_cache)
sprt()
self.get_last_commands()
if not arg.delete_chat:
while True:
self.run_a_bot()
logger('refresh', 'DEBUG')
sprt()
else:
self.delete_()
sprt()
exit()
else:
logger('failed to build server', 'ERROR')
sprt()
def logger(mess, level='INFO', ex=False, sprt=False):
mess = mess.lower().encode('utf8')
code = {'INFO' : '38;5;2',
'DEBUG': '38;5;11',
'ERROR': 31,
'WARNING': 33,
'CRITICAL':41}
if arg.underline:
mess = mess.replace(arg.underline, '\x1b[%sm%s\x1b[0m' % ('4;32' if arg.color else '4' , arg.underline))
message = '{0:<9}{1}'.format(level + ' ' if not sprt else ('-'*9), mess[:-9] if sprt else mess)
print ('\r{1}{0}'.format(message.replace(level, '\x1b[%sm%s\x1b[0m' % (code[level], level)) if arg.color else message, time.strftime('%H:%M:%S ') if arg.time and not sprt else ''))
if arg.log:
if not os.path.isfile(arg.log):
open(arg.log, 'a').write('# create a daily report | %s v%s\n# %s\n' % (__BOTNAME__, __VERSION__, time.strftime('%c')))
with open(arg.log, 'a') as f:
if arg.underline:
message = message.replace('\x1b[{0}m{1}\x1b[0m'.format(
'4;32' if arg.color else '4' , arg.underline
),
arg.underline
)
f.write('\n{0}{1}{2}'.format(
time.strftime('%H:%M:%S ') if not sprt else '',
message.replace('-'*arg.long_separator, '-'*30),
'' if not ex else '\n')
)
if ex:
exit()
def main():
global __BOTNAME__, __LICENSE__, cookie, arg, user, pwd
parse = argparse.ArgumentParser(usage='python2 zvbot [--run] (--cookie PATH | --account USER:PASS) [options]', description='description:\n create a virtual server for Bot Messenger Facebook with a personal account', formatter_class=argparse.RawTextHelpFormatter, epilog='author:\n zevtyardt <[email protected]>\n ')
parse.add_argument('-r', '--run', dest='run', action='store_true', help='run the server')
value = parse.add_argument_group('value arguments')
value.add_argument('--account', metavar='USER:PASS', dest='own_bot', help='create your own bot account')
value.add_argument('--botname', metavar='NAME', dest='default_botname', help='rename your own bot, default %s' % __BOTNAME__)
value.add_argument('--blacklist', metavar='NAME', dest='add_blacklist_user', action='append', help='add a new blacklist user by name')
value.add_argument('--cookie', metavar='PATH', dest='cookie', help='use our own cookie')
value.add_argument('--dirname', metavar='DIRNAME', dest='dir_cache', action='append', help='name of directory is used to store images', default='cache_image')
value.add_argument('--ignore-cmd', metavar='COMMAND', dest='ignore_command', help='adding a prohibited command', choices=COMMANDS)
value.add_argument('--limit', metavar='INT', dest='limit', help='limit of request from the user, default 4', type=int, default=4)
value.add_argument('--logfile', metavar='PATH', dest='log', help='save all logs into the file')
value.add_argument('--long-sprt',metavar='INT',dest='long_separator', help='long separating each session, min 20 max 30', type=int, default=30, choices=range(20,31))
value.add_argument('--new-admin', metavar='NAME', dest='add_admin', action='append', help='add new admin by name')
value.add_argument('--paragraph', metavar='INT', dest='paragraph', help='paragraph number on wikipedia, max 6', type=int, default=2)
value.add_argument('--refresh', metavar='INT', dest='refresh', help='how many times the program refreshes the page', type=int, default=8)
value.add_argument('--underline', metavar='WORD', dest='underline', help='underline the specific word in all logs')
value.add_argument('--user-agent', metavar='UA', dest='ua', help='specify a custom user agent', default='Mozilla/5.0 (Linux; Android 7.0; 5060 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.109 Mobile Safari/537.36')
choice = parse.add_argument_group('choice arguments')
choice.add_argument('--all-admin', dest='admin', action='store_true', help='everyone can use admin command')
choice.add_argument('--clear-screen', dest='clear_screen', action='store_true', help='clean the screen before running the bot')
choice.add_argument('--color', dest='color', action='store_true', help='show colors in all logs')
choice.add_argument('--delete-chat', dest='delete_chat', action='store_true', help='delete the latest chat history')
choice.add_argument('--delete-logfile', dest='delete_logfile', action='store_true', help='delete old logs and create new logs')
choice.add_argument('--ignore-botname', dest='botname', action='store_false', help='don\'t add the bot name to the final result')
choice.add_argument('--ignore-cache', dest='cache', action='store_false', help='does not store all images from the sender\'s request')
choice.add_argument('--ignore-group', dest='group_chat', action='store_true', help='ignore existing chat groups')
choice.add_argument('-i', '--info', dest='info', action='store_true', help='showing any information')
choice.add_argument('-l', '--license', dest='license', action='store_true', help='print license and exit.')
choice.add_argument('-m', '--more', dest='commands', action='store_true', help='print all the available commands and exit')
choice.add_argument('--save-cookie', action='store_true', dest='save_cookie', help='save session cookies into the file')
choice.add_argument('--show-time', dest='time', action='store_true', help='show time in all logs')
choice.add_argument('-v', '--version', dest='version', action='store_true', help='print version information and exit')
choice.add_argument('-u', '--upload', dest='up_file', action='store_true', help='enable file upload. program will send hd image links')
arg = parse.parse_args()
if arg.version:
exit ('v%s' % __VERSION__)
if arg.license:
exit (__LICENSE__)
if arg.commands:
exit ('\n' + ('\n'.join(HELP_TEXT)).replace('<br>', '\n') + '\n')
if arg.default_botname:
__BOTNAME__ = arg.default_botname
if arg.ignore_command:
for i in arg.ignore_command:
if i.lower() != 'help':
cmd = '@'+ i.lower() if i[0] != '@' else i.lower()
BLACKLIST_COMMAND.append(cmd)
if arg.add_admin:
for i in arg.add_admin:
ADMIN.append(i.lower())
if arg.add_blacklist_user:
for i in arg.add_blacklist_user:
BLACKLIST.append(i.lower())
if arg.run and arg.cookie and not arg.own_bot or arg.run and not arg.cookie and arg.own_bot:
if arg.delete_logfile and arg.log:
if os.path.isfile(arg.log):
os.remove(arg.log)
if arg.clear_screen:
print('\x1bc')
try:
logger('Facebook Messenger bot | created by zevtyardt', 'DEBUG')
user, pwd = arg.own_bot.split(':') if arg.own_bot else ('', '')
start_bot(user, pwd)
except KeyboardInterrupt:
logger('user interrupt: stopped bot\n'+'-'*arg.long_separator, 'ERROR', ex=True)
except Exception as e:
logger('%s\n%s' % (e, '-'*arg.long_separator), 'CRITICAL', ex=True)
else:
print ('\n' + ( __BOTNAME__ + '\x1b[0m v' + __VERSION__ + '\n').center(77) )
parse.print_help()
if __name__ == '__main__':
main() | zvbot | /zvbot-0.1.3.tar.gz/zvbot-0.1.3/zvbot.py | zvbot.py |
[](https://pypi.org/project/zvdata/)
[](https://pypi.org/project/zvdata/)
[](https://pypi.org/project/zvdata/)
[](https://travis-ci.org/zvtvz/zvdata)
[](https://codecov.io/github/zvtvz/zvdata)
[](http://hits.dwyl.io/zvtvz/zvdata)
**Other languages: [english](README-en.md).**
zvdata is an extensible library for recording and analyzing data.
# How to use
This is the general-purpose library abstracted out of [zvt](https://github.com/zvtvz/zvt); it provides a convenient way to record, compute and visualize data.
# Contact
WeChat: foolcage | zvdata | /zvdata-1.2.3.tar.gz/zvdata-1.2.3/README.md | README.md |
import functools
import re
import uuid
import decimal
import zipfile
import os
def is_valid_uuid(val):
"""
Return true if the given value is a valid UUID.
Args:
val (str): a string which might be a UUID.
Returns:
bool: True if UUID
"""
try:
uuid.UUID(str(val))
return True
except ValueError:
return False
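# Quick sketch with made-up values:
#
#   is_valid_uuid('6cd8df1c-6f70-4a33-9a5a-7ed0c6a0b706')  # -> True
#   is_valid_uuid('not-a-uuid')                            # -> False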
def as_collection(value):
"""If the given value is not a collection of some type, return
the value wrapped in a list.
Args:
value (:obj:`mixed`):
Returns:
        :obj:`list` of :obj:`mixed`: The value wrapped in a list.
"""
if value is None:
return None
if isinstance(value, (set, list, tuple)):
return value
return [value]
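# A short sketch of the wrapping behaviour:
#
#   as_collection('a')         # -> ['a']
#   as_collection(['a', 'b'])  # -> ['a', 'b'] (collections pass through unchanged)
#   as_collection(None)        # -> None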
class ObjectView:
"""
Wraps a dictionary and provides an object based view.
"""
snake = re.compile(r'(?<!^)(?=[A-Z])')
def __init__(self, d):
d = dict([(self.snake.sub('_', k).lower(), v) for k, v in d.items()])
self.__dict__ = d
def as_id(value):
"""
If 'value' is an object, return the 'id' property, otherwise return
the value. This is useful for when you need an entity's unique Id
but the user passed in an instance of the entity.
Args:
        value (mixed): A string or an object with an 'id' property.
Returns:
str: The id property.
"""
return getattr(value, 'id', value)
def as_id_collection(value):
"""If the given value is not a collection of some type, return
the value wrapped in a list. Additionally entity instances
are resolved into their unique id.
Args:
value (:obj:`mixed`):
Returns:
list: A list of entity unique ids.
"""
if value is None:
return None
if isinstance(value, (set, list, tuple, dict)):
return [getattr(it, "id", it) for it in value]
return [getattr(value, "id", value)]
def memoize(func):
"""
Cache the result of the given function.
Args:
func (function): A function to wrap.
Returns:
function: a wrapped function
"""
cache = func.cache = {}
@functools.wraps(func)
def memoized_func(*args, **kwargs):
key = str(args) + str(kwargs)
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
return memoized_func
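# Hypothetical usage sketch; `slow_add` is illustrative only:
#
#   @memoize
#   def slow_add(a, b):
#       return a + b
#
#   slow_add(1, 2)  # computed and stored in slow_add.cache
#   slow_add(1, 2)  # the second call is answered from the cache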
def truncate(number, places):
"""
Truncate a float to the given number of places.
Args:
number (float): The number to truncate.
        places (int): The number of places to preserve.
Returns:
Decimal: The truncated decimal value.
"""
if not isinstance(places, int):
raise ValueError('Decimal places must be an integer.')
if places < 1:
raise ValueError('Decimal places must be at least 1.')
with decimal.localcontext() as context:
context.rounding = decimal.ROUND_DOWN
exponent = decimal.Decimal(str(10 ** - places))
return decimal.Decimal(str(number)).quantize(exponent)
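# Example values (truncation always rounds toward zero):
#
#   truncate(3.14159, 2)  # -> Decimal('3.14')
#   truncate(2.999, 1)    # -> Decimal('2.9')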
def round_all(items, precision=3):
"""
Round all items in the list.
Args:
items (list): A list of floats.
precision: (int): number of decimal places.
Returns:
list: A rounded list.
"""
return [round(i, precision) for i in items]
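# A one-line usage sketch:
#
#   round_all([1.23456, 2.5], precision=2)  # -> [1.23, 2.5]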
def zip_directory(src_dir, dst_file, zip_root_name=""):
"""
    A utility function for zipping a directory of files.
Args:
src_dir (str): The source directory.
        dst_file (str): The destination file.
        zip_root_name (str): An optional root directory to place files in the zip.
Returns:
str: The dst file.
"""
def zipdir(path, ziph, root_name):
for root, dirs, files in os.walk(path):
for file in files:
if file == ".DS_Store":
continue
zip_entry = os.path.join(root_name, root.replace(path, ""), file)
ziph.write(os.path.join(root, file), zip_entry)
src_dir = os.path.abspath(src_dir)
zipf = zipfile.ZipFile(dst_file, 'w', zipfile.ZIP_DEFLATED)
zipdir(src_dir + '/', zipf, zip_root_name)
zipf.close()
return dst_file | zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/util.py | util.py |
import os
import logging
import json
logger = logging.getLogger(__name__)
__all__ = [
'TrainingSetDownloader'
]
class TrainingSetDownloader:
"""
The TrainingSetDownloader class handles writing out the images labeled
for model training to local disk. The Assets are automatically sorted
into train and validation sets.
Multiple directory layouts are supported based on the Model type.
Examples:
# Label Detection Layout
        base_dir/flowers/set_train/daisy
        base_dir/flowers/set_train/rose
base_dir/flowers/set_validate/daisy
base_dir/flowers/set_validate/rose
# Object Detection Layout is a COCO compatible layout
base_dir/set_train/images/*
base_dir/set_train/annotations.json
base_dir/set_test/images/*
base_dir/set_test/annotations.json
"""
SET_TRAIN = "train"
"""Directory name for training images"""
SET_VALIDATION = "validate"
"""Directory name for test images"""
def __init__(self, app, model, style, dst_dir, validation_split=0.2):
"""
        Create a new TrainingSetDownloader.
Args:
app: (ZmlpApp): A ZmlpApp instance.
model: (Model): A Model or unique Model ID.
            style: (str): The output style: labels-standard, objects_coco, objects_keras
dst_dir (str): A destination directory to write the files into.
            validation_split (float): The fraction of labeled images to place in the
                validation set, e.g. 0.2 puts roughly one in five images there.
"""
self.app = app
self.model = app.models.get_model(model)
self.style = style
self.dst_dir = dst_dir
self.validation_split = validation_split
self.labels = {}
self.label_distrib = {}
self.query = {
'size': 64,
'_source': ['labels', 'files'],
'query': {
'nested': {
'path': 'labels',
'query': {
'bool': {
'must': [
{'term': {'labels.modelId': self.model.id}},
{'term': {'labels.scope': 'TRAIN'}}
]
}
}
}
}
}
os.makedirs(self.dst_dir, exist_ok=True)
def build(self, pool=None):
"""
        Downloads the files labeled for training a Model to local disk, using the
        style chosen at construction time (labels-standard, objects_coco or objects_keras).
        Args:
pool (multiprocessing.Pool): An optional Pool instance which can be used
to download files in parallel.
"""
if self.style == 'labels-standard':
self._build_labels_std_format(pool)
elif self.style == 'objects_coco':
self._build_objects_coco_format(pool)
elif self.style == 'objects_keras':
self._build_objects_keras_format(pool)
else:
            raise ValueError('{} not supported by the TrainingSetDownloader'.format(self.style))
def _build_labels_std_format(self, pool):
self._setup_labels_std_base_dir()
for num, asset in enumerate(self.app.assets.scroll_search(self.query, timeout='5m')):
prx = asset.get_thumbnail(0)
if not prx:
logger.warning('{} did not have a suitable thumbnail'.format(asset))
continue
ds_labels = self._get_labels(asset)
if not ds_labels:
logger.warning('{} did not have any labels'.format(asset))
continue
label = ds_labels[0].get('label')
if not label:
logger.warning('{} was not labeled.'.format(asset))
continue
dir_name = self._get_image_set_type(label)
dst_path = os.path.join(self.dst_dir, dir_name, label, prx.cache_id)
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
logger.info('Downloading to {}'.format(dst_path))
if pool:
pool.apply_async(self.app.assets.download_file, args=(prx, dst_path))
else:
self.app.assets.download_file(prx, dst_path)
def _build_objects_coco_format(self, pool=None):
"""
        Write labeled assets into a COCO object detection training structure and
        write the train/validation annotation files to disk.
        Args:
            pool (multiprocessing.Pool): An optional multiprocessing pool used to
                download files in parallel.
"""
self._setup_objects_coco_base_dir()
coco = CocoAnnotationFileBuilder()
for image_id, asset in enumerate(self.app.assets.scroll_search(self.query, timeout='5m')):
prx = asset.get_thumbnail(1)
if not prx:
logger.warning('{} did not have a suitable thumbnail'.format(asset))
continue
ds_labels = self._get_labels(asset)
if not ds_labels:
logger.warning('{} did not have any labels'.format(asset))
continue
for label in ds_labels:
set_type = self._get_image_set_type(label['label'])
dst_path = os.path.join(self.dst_dir, set_type, 'images', prx.cache_id)
if not os.path.exists(dst_path):
self._download_file(prx, dst_path, pool)
image = {
'file_name': dst_path,
'height': prx.attrs['height'],
'width': prx.attrs['width']
}
category = {
'supercategory': 'none',
'name': label['label']
}
bbox, area = self._zvi_to_cocos_bbox(prx, label['bbox'])
annotation = {
'bbox': bbox,
'segmentation': [],
'ignore': 0,
'area': area,
'iscrowd': 0
}
if set_type == self.SET_TRAIN:
coco.add_to_training_set(image, category, annotation)
else:
coco.add_to_validation_set(image, category, annotation)
# Write out the annotations files.
with open(os.path.join(self.dst_dir, self.SET_TRAIN, "annotations.json"), "w") as fp:
logger.debug("Writing training set annotations to {}".format(fp.name))
json.dump(coco.get_training_annotations(), fp)
with open(os.path.join(self.dst_dir, self.SET_VALIDATION, "annotations.json"), "w") as fp:
logger.debug("Writing test set annotations to {}".format(fp.name))
json.dump(coco.get_validation_annotations(), fp)
def _build_objects_keras_format(self, pool=None):
self._setup_objects_keras_base_dir()
fp_train = open(os.path.join(self.dst_dir, self.SET_TRAIN, "annotations.csv"), "w")
fp_test = open(os.path.join(self.dst_dir, self.SET_VALIDATION, "annotations.csv"), "w")
unique_labels = set()
try:
search = self.app.assets.scroll_search(self.query, timeout='5m')
for image_id, asset in enumerate(search):
prx = asset.get_thumbnail(1)
if not prx:
logger.warning('{} did not have a suitable thumbnail'.format(asset))
continue
ds_labels = self._get_labels(asset)
if not ds_labels:
logger.warning('{} did not have any labels'.format(asset))
continue
for label in ds_labels:
unique_labels.add(label['label'])
set_type = self._get_image_set_type(label['label'])
dst_path = os.path.join(self.dst_dir, set_type, 'images', prx.cache_id)
if not os.path.exists(dst_path):
self._download_file(prx, dst_path, pool)
line = [
dst_path
]
line.extend([str(point) for point in
self._zvi_to_keras_bbox(prx, label['bbox'])])
line.append(label['label'])
str_line = "{}\n".format(",".join(line))
if set_type == self.SET_TRAIN:
fp_train.write(str_line)
else:
fp_test.write(str_line)
finally:
fp_train.close()
fp_test.close()
with open(os.path.join(self.dst_dir, "classes.csv"), "w") as fp_classes:
for idx, cls in enumerate(sorted(unique_labels)):
fp_classes.write("{},{}\n".format(cls, idx))
def _zvi_to_keras_bbox(self, prx, bbox):
total_width = prx.attrs['width']
total_height = prx.attrs['height']
return [int(total_width * bbox[0]),
int(total_height * bbox[1]),
int(total_width * bbox[2]),
int(total_height * bbox[3])]
def _zvi_to_cocos_bbox(self, prx, bbox):
"""
        Converts a ZVI bbox to a COCO bbox. The COCO format is [x, y, width, height].
Args:
prx (StoredFile): A StoredFile containing a proxy image.
bbox (list): A ZVI bbox.
Returns:
            tuple: A COCO-style [x, y, width, height] bbox and its pixel area.
"""
total_width = prx.attrs['width']
total_height = prx.attrs['height']
pt = total_width * bbox[0], total_height * bbox[1]
new_bbox = [
int(pt[0]),
int(pt[1]),
int(abs(pt[0] - (total_width * bbox[2]))),
            int(abs(pt[1] - (total_height * bbox[3])))
]
        area = new_bbox[2] * new_bbox[3]
return new_bbox, area
def _download_file(self, prx, dst_path, pool=None):
if pool:
pool.apply_async(self.app.assets.download_file, args=(prx, dst_path))
else:
self.app.assets.download_file(prx, dst_path)
def _setup_labels_std_base_dir(self):
"""
Sets up a directory structure for storing files used to train a model..
The structure is basically:
train/<label>/<img file>
validate/<label>/<img file>
"""
self.labels = self.app.models.get_label_counts(self.model)
# This is layout #1, we need to add darknet layout for object detection.
dirs = (self.SET_TRAIN, self.SET_VALIDATION)
for set_name in dirs:
os.makedirs('{}/{}'.format(self.dst_dir, set_name), exist_ok=True)
for label in self.labels.keys():
os.makedirs(os.path.join(self.dst_dir, set_name, label), exist_ok=True)
logger.info('TrainingSetDownloader setup, using {} labels'.format(len(self.labels)))
def _setup_objects_coco_base_dir(self):
dirs = (self.SET_TRAIN, self.SET_VALIDATION)
for set_name in dirs:
os.makedirs(os.path.join(self.dst_dir, set_name, 'images'), exist_ok=True)
def _setup_objects_keras_base_dir(self):
dirs = (self.SET_TRAIN, self.SET_VALIDATION)
for set_name in dirs:
os.makedirs(os.path.join(self.dst_dir, set_name, 'images'), exist_ok=True)
def _get_image_set_type(self, label):
"""
Using the validation_split property, determine if the current label
would be in the training set or validation set.
Args:
label (str): The label name.
Returns:
str: Either 'validate' or 'train', depending on the validation_split property.
"""
# Everything is in the training set.
if self.validation_split <= 0.0:
return self.SET_TRAIN
ratio = int(1.0 / self.validation_split)
value = self.label_distrib.get(label, 0) + 1
self.label_distrib[label] = value
if value % ratio == 0:
return self.SET_VALIDATION
else:
return self.SET_TRAIN
def _get_labels(self, asset):
"""
Get the current model label for the given asset.
Args:
asset (Asset): The asset to check.
Returns:
list[dict]: The labels for training a model.
"""
ds_labels = asset.get_attr('labels')
if not ds_labels:
return []
result = []
for ds_label in ds_labels:
if ds_label.get('modelId') == self.model.id:
result.append(ds_label)
return result
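# A hedged usage sketch; the app handle, model name and output directory below are
# illustrative, not taken from this module (`app` is assumed to be a ZmlpApp):
#
#   model = app.models.get_model('my-classifier')   # hypothetical model name
#   dl = TrainingSetDownloader(app, model, 'labels-standard', '/tmp/training_set')
#   dl.build()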
class CocoAnnotationFileBuilder:
"""
CocoAnnotationFileBuilder manages building a COCO annotations file for both
a training set and test set.
"""
def __init__(self):
self.train_set = {
"output": {
"type": "instances",
"images": [],
"annotations": [],
"categories": []
},
"img_set": {},
"cat_set": {}
}
self.validation_set = {
"output": {
"type": "instances",
"images": [],
"annotations": [],
"categories": []
},
"img_set": {},
"cat_set": {}
}
def add_to_training_set(self, img, cat, annotation):
"""
Add the image, category and annotation to the training set.
Args:
img (dict): A COCO image dict.
cat (dict): A COCO category dict.
annotation: (dict): A COCO annotation dict.
"""
self._add_to_set(self.train_set, img, cat, annotation)
def add_to_validation_set(self, img, cat, annotation):
"""
Add the image, category and annotation to the test set.
Args:
img (dict): A COCO image dict.
cat (dict): A COCO category dict.
annotation: (dict): A COCO annotation dict.
"""
self._add_to_set(self.validation_set, img, cat, annotation)
def _add_to_set(self, dataset, img, cat, annotation):
"""
Add the image, category and annotation to the given set.
Args:
dataset (dict): The set we're building.
img (dict): A COCO image dict.
cat (dict): A COCO category dict.
annotation: (dict): A COCO annotation dict.
"""
img_idmap = dataset['img_set']
cat_idmap = dataset['cat_set']
output = dataset['output']
annots = output['annotations']
img['id'] = img_idmap.get(img['file_name'], len(img_idmap))
cat['id'] = cat_idmap.get(cat['name'], len(cat_idmap))
annotation['id'] = len(annots)
annotation['category_id'] = cat['id']
annotation['image_id'] = img['id']
if img['file_name'] not in img_idmap:
img_idmap[img['file_name']] = img['id']
output['images'].append(img)
if cat['name'] not in cat_idmap:
cat_idmap[cat['name']] = cat['id']
output['categories'].append(cat)
output['annotations'].append(annotation)
def get_training_annotations(self):
"""
Return a structure suitable for a COCO annotations file.
Returns:
dict: The training annoations.=
"""
return self.train_set['output']
def get_validation_annotations(self):
"""
Return a structure suitable for a COCO annotations file.
Returns:
            dict: The validation annotations.
"""
return self.validation_set['output'] | zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/training.py | training.py |
import copy
from .entity import VideoClip, Asset, ZmlpException
from .util import as_collection
__all__ = [
'AssetSearchScroller',
'VideoClipSearchScroller',
'AssetSearchResult',
'VideoClipSearchResult',
'AssetSearchCsvExporter',
'LabelConfidenceQuery',
'SingleLabelConfidenceQuery',
'SimilarityQuery',
'FaceSimilarityQuery',
'LabelConfidenceTermsAggregation',
'LabelConfidenceMetricsAggregation'
]
class SearchScroller:
"""
The SearchScroller can iterate over large amounts of data without incurring paging
overhead by utilizing a server side cursor. The cursor is held open for the specified
timeout time unless it is refreshed before the timeout occurs. In this sense, it's important
to complete whatever operation you're taking on each asset within the timeout time. For example
if your page size is 32 and your timeout is 1m, you have 1 minute to handles 32 assets. If that
is not enough time, consider increasing the timeout or lowering your page size.
"""
def __init__(self, klass, endpoint, app, search, timeout="1m", raw_response=False):
"""
Create a new AbstractSearchScroller instance.
Args:
app (ZmlpApp): A ZmlpApp instance.
search: (dict): The ES search
timeout (str): The maximum amount of time the ES scroll will be active unless it's
refreshed.
raw_response (bool): Yield the raw ES response rather than assets. The raw
response will contain the entire page, not individual assets.
"""
self.klass = klass
self.endpoint = endpoint
self.app = app
if search and getattr(search, "to_dict", None):
search = search.to_dict()
self.search = copy.deepcopy(search or {})
self.timeout = timeout
self.raw_response = raw_response
def batches_of(self, batch_size=50):
"""
A generator function capable of efficiently scrolling through
large numbers of assets, returning them in batches of
the given batch size.
Args:
batch_size (int): The size of the batch.
Returns:
generator: A generator that yields batches of Assets.
"""
batch = []
for asset in self.scroll():
batch.append(asset)
if len(batch) >= batch_size:
yield batch
batch = []
if batch:
yield batch
def scroll(self):
"""
A generator function capable of efficiently scrolling through large
results.
Examples:
for asset in AssetSearchScroller({"query": {"term": { "source.extension": "jpg"}}}):
do_something(asset)
Yields:
Asset: Assets that matched the search
"""
result = self.app.client.post(
"{}?scroll={}".format(self.endpoint, self.timeout), self.search)
scroll_id = result.get("_scroll_id")
if not scroll_id:
raise ZmlpException("No scroll ID returned with scroll search, has it timed out?")
try:
while True:
hits = result.get("hits")
if not hits:
return
if self.raw_response:
yield result
else:
for hit in hits['hits']:
yield self.klass.from_hit(hit)
scroll_id = result.get("_scroll_id")
if not scroll_id:
raise ZmlpException(
"No scroll ID returned with scroll search, has it timed out?")
result = self.app.client.post("api/v3/assets/_search/scroll", {
"scroll": self.timeout,
"scroll_id": scroll_id
})
if not result["hits"]["hits"]:
return
finally:
self.app.client.delete("api/v3/assets/_search/scroll", {
"scroll_id": scroll_id
})
def __iter__(self):
return self.scroll()
class AssetSearchScroller(SearchScroller):
"""
AssetSearchScroller handles scrolling through Assets.
"""
def __init__(self, app, search, timeout="1m", raw_response=False):
super(AssetSearchScroller, self).__init__(
Asset, 'api/v3/assets/_search', app, search, timeout, raw_response
)
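# A minimal scrolling sketch; `app` is assumed to be a ZmlpApp instance and the
# query and handler are illustrative:
#
#   search = {'query': {'term': {'source.extension': 'jpg'}}}
#   for asset in AssetSearchScroller(app, search):
#       do_something(asset)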
class VideoClipSearchScroller(SearchScroller):
"""
VideoClipSearchScroller handles scrolling through video clips.
"""
def __init__(self, app, search, timeout="1m", raw_response=False):
super(VideoClipSearchScroller, self).__init__(
VideoClip, 'api/v1/clips/_search', app, search, timeout, raw_response
)
class AssetSearchCsvExporter:
"""
    Export a search to a CSV file.
"""
def __init__(self, app, search):
self.app = app
self.search = search
def export(self, fields, path):
"""
Export the given fields to a csv file output path.
Args:
fields (list): An array of field names.
path (str): a file path.
Returns:
int: The number of assets exported.
"""
count = 0
scroller = AssetSearchScroller(self.app, self.search)
fields = as_collection(fields)
with open(str(path), "w") as fp:
for asset in scroller:
count += 1
line = ",".join(["'{}'".format(asset.get_attr(field)) for field in fields])
fp.write(f'{line}\n')
return count
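# An illustrative export; the attribute paths and output file are assumptions:
#
#   exporter = AssetSearchCsvExporter(app, {'query': {'match_all': {}}})
#   exporter.export(['source.path', 'source.filename'], '/tmp/assets.csv')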
class SearchResult:
"""
Stores a search result from ElasticSearch and provides some convenience methods
for accessing the data.
"""
def __init__(self, klass, endpoint, app, search):
"""
Create a new SearchResult.
Args:
klass (Class): The Class to wrap the search result.
endpoint (str): The endpoint to use for search.
app (ZmlpApp): A ZmlpApp instance.
search (dict): An ElasticSearch query.
"""
self.klass = klass
self.endpoint = endpoint
self.app = app
if search and getattr(search, "to_dict", None):
search = search.to_dict()
self.search = search
self.result = None
self._execute_search()
@property
def items(self):
"""
A list of assets returned by the query. This is not all of the matches,
just a single page of results.
Returns:
list: The list of assets for this page.
"""
hits = self.result.get("hits")
if not hits:
return []
return [self.klass.from_hit(hit) for hit in hits['hits']]
def batches_of(self, batch_size, max_assets=None):
"""
A generator function which returns batches of assets in the
given batch size. This method will optionally page through
N pages, yielding arrays of assets as it goes.
This method is preferred to scrolling for Assets when
multiple pages of Assets need to be processed.
Args:
batch_size (int): The size of the batch.
max_assets (int): The max number of assets to return, max is 10k
Returns:
generator: A generator that yields batches of Assets.
"""
# The maximum we can page through is 10k
asset_countdown = max_assets or 10000
batch = []
while True:
            assets = self.items
if not assets:
break
for asset in assets:
batch.append(asset)
asset_countdown -= 1
if asset_countdown <= 0:
break
if len(batch) >= batch_size:
yield batch
batch = []
if asset_countdown <= 0:
break
self.search['from'] = self.search.get('from', 0) + len(assets)
self._execute_search()
if batch:
yield batch
def aggregation(self, name):
"""
Return an aggregation dict with the given name.
Args:
name (str): The agg name
Returns:
dict: the agg dict or None if no agg exists.
"""
aggs = self.result.get("aggregations")
if not aggs:
return None
if "#" in name:
key = [name]
else:
key = [k for k in
self.result.get("aggregations", {}) if k.endswith("#{}".format(name))]
if len(key) > 1:
raise ValueError(
"Aggs with the same name must be qualified by type (pick 1): {}".format(key))
elif not key:
return None
try:
return aggs[key[0]]
except KeyError:
return None
def aggregations(self):
"""
Return a dictionary of all aggregations.
Returns:
dict: A dict of aggregations keyed on name.
"""
return self.result.get("aggregations", {})
@property
def size(self):
"""
        The number of assets in this page. See "total_size" for the total number of assets matched.
Returns:
int: The number of assets in this page.
"""
return len(self.result["hits"]["hits"])
@property
def total_size(self):
"""
The total number of assets matched by the query.
Returns:
long: The total number of assets matched.
"""
return self.result["hits"]["total"]["value"]
@property
def raw_response(self):
"""
The raw ES response.
Returns:
(dict) The raw SearchResponse returned by ElasticSearch
"""
return self.result
def next_page(self):
"""
Return an AssetSearchResult containing the next page.
Returns:
AssetSearchResult: The next page
"""
search = copy.deepcopy(self.search or {})
        search['from'] = search.get('from', 0) + len(self.result['hits']['hits'])
return SearchResult(self.klass, self.endpoint, self.app, search)
def _execute_search(self):
self.result = self.app.client.post(self.endpoint, self.search)
def __iter__(self):
return iter(self.items)
def __getitem__(self, item):
return self.items[item]
class AssetSearchResult(SearchResult):
"""
    The AssetSearchResult subclass handles paging through an Asset search result.
"""
def __init__(self, app, search):
super(AssetSearchResult, self).__init__(
Asset, 'api/v3/assets/_search', app, search
)
@property
def assets(self):
return self.items
class VideoClipSearchResult(SearchResult):
"""
    The VideoClipSearchResult subclass handles paging through a VideoClip search result.
"""
def __init__(self, app, search):
super(VideoClipSearchResult, self).__init__(
VideoClip, 'api/v1/clips/_search', app, search
)
@property
def clips(self):
return self.items
class LabelConfidenceTermsAggregation:
"""
Convenience class for making a simple terms aggregation on an array of predictions
"""
def __init__(self, namespace):
self.field = "analysis.{}.predictions".format(namespace)
def for_json(self):
return {
"nested": {
"path": self.field
},
"aggs": {
"names": {
"terms": {
"field": self.field + ".label",
"size": 1000,
"order": {"_count": "desc"}
}
}
}
}
class LabelConfidenceMetricsAggregation(object):
def __init__(self, namespace, agg_type="stats"):
"""
Create a new LabelConfidenceMetricsAggregation
Args:
namespace (str): The analysis namespace. (ex: zvi-label-detection)
agg_type (str): A type of metrics agg to perform.
stats, extended_stats,
"""
self.field = "analysis.{}.predictions".format(namespace)
self.agg_type = agg_type
def for_json(self):
return {
"nested": {
"path": self.field
},
"aggs": {
"labels": {
"terms": {
"field": self.field + ".label",
"size": 1000,
"order": {"_count": "desc"}
},
"aggs": {
"stats": {
self.agg_type: {
"field": self.field + ".score"
}
}
}
}
}
}
class LabelConfidenceQuery(object):
"""
A helper class for building a label confidence score query. This query must point
at label confidence structure: For example: analysis.zvi.label-detection.
References:
"labels": [
{"label": "dog", "score": 0.97 },
{"label": "fox", "score": 0.63 }
]
"""
def __init__(self, namespace, labels, min_score=0.1, max_score=1.0):
"""
Create a new LabelConfidenceScoreQuery.
Args:
namespace (str): The analysis namespace with predictions. (ex: zvi-label-detection)
labels (list): A list of labels to filter.
            min_score (float): The minimum label score, defaults to 0.1.
                Note that 0.0 allows everything.
            max_score (float): The maximum score, defaults to 1.0, which is the highest.
"""
self.namespace = namespace
self.field = "analysis.{}.predictions".format(namespace)
self.labels = as_collection(labels)
self.score = [min_score, max_score]
def for_json(self):
return {
"bool": {
"filter": [
{
"terms": {
self.field + ".label": self.labels
}
}
],
"must": [
{
"nested": {
"path": self.field,
"query": {
"function_score": {
"boost_mode": "sum",
"field_value_factor": {
"field": self.field + ".score",
"missing": 0
},
"query": {
"bool": {
"filter": [
{
"terms": {
self.field + ".label": self.labels
}
},
{
"range": {
self.field + ".score": {
"gte": self.score[0],
"lte": self.score[1]
}
}
}
]
}
}
}
}
}
}
]
}
}
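# Sketch of embedding the query in a search body; the namespace and labels are
# illustrative. As with SimilarityQuery, instances exposing for_json() are expected
# to be embeddable directly inside an ES query dict:
#
#   search = {
#       'query': LabelConfidenceQuery('zvi-label-detection', ['dog', 'cat'], min_score=0.5)
#   }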
class SingleLabelConfidenceQuery(object):
"""
A helper class for building a label confidence score query. This query must point
at label confidence structure: For example: analysis.zvi.label-detection.
References:
"labels": [
{"label": "dog", "score": 0.97 },
{"label": "fox", "score": 0.63 }
]
"""
def __init__(self, namespace, labels, min_score=0.1, max_score=1.0):
"""
Create a new SingleLabelConfidenceScoreQuery.
Args:
namespace (str): The analysis namespace with predictions. (ex: zvi-label-detection)
labels (list): A list of labels to filter.
            min_score (float): The minimum label score, defaults to 0.1.
                Note that 0.0 allows everything.
            max_score (float): The maximum score, defaults to 1.0, which is the highest.
"""
self.namespace = namespace
self.field = "analysis.{}".format(namespace)
self.labels = as_collection(labels)
self.score = [min_score, max_score]
def for_json(self):
return {
"bool": {
"filter": [
{
"terms": {
self.field + ".label": self.labels
}
}
],
"must": [
{
"function_score": {
"query": {
"bool": {
"must": [
{
"terms": {
self.field + ".label": self.labels
}
},
{
"range": {
self.field + ".score": {
"gte": self.score[0],
"lte": self.score[1]
}
}
}
]
}
},
"boost": "5",
"boost_mode": "sum",
"field_value_factor": {
"field": self.field + ".score",
"missing": 0
}
}
}
]
}
}
class SimilarityQuery:
"""
A helper class for building a similarity search. You can embed this class anywhere
in a ES query dict, for example:
References:
{
"query": {
"bool": {
"must": [
SimilarityQuery(hash_string)
]
}
}
}
"""
def __init__(self, hashes, min_score=0.75, boost=1.0,
field="analysis.zvi-image-similarity.simhash"):
self.field = field
self.hashes = []
self.min_score = min_score
self.boost = boost
self.add_hash(hashes)
def add_hash(self, hashes):
"""
Add a new hash to the search.
Args:
hashes (mixed): A similarity hash string or an asset.
Returns:
SimilarityQuery: this instance of SimilarityQuery
"""
for simhash in as_collection(hashes) or []:
if isinstance(simhash, Asset):
self.hashes.append(simhash.get_attr(self.field))
elif isinstance(simhash, VideoClip):
if simhash.simhash:
self.hashes.append(simhash.simhash)
else:
self.hashes.append(simhash)
return self
def add_asset(self, asset):
"""
See add_hash which handles both hashes and Assets.
"""
return self.add_hash(asset)
def for_json(self):
return {
"script_score": {
"query": {
"match_all": {}
},
"script": {
"source": "similarity",
"lang": "zorroa-similarity",
"params": {
"minScore": self.min_score,
"field": self.field,
"hashes": self.hashes
}
},
"boost": self.boost,
"min_score": self.min_score
}
}
def __add__(self, simhash):
self.add_hash(simhash)
return self
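# Usage sketch mirroring the class docstring above; the hash value is a placeholder:
#
#   simquery = SimilarityQuery('<simhash string>', min_score=0.80)
#   search = {'query': {'bool': {'must': [simquery]}}}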
class VideoClipSimilarityQuery(SimilarityQuery):
def __init__(self, hashes, min_score=0.75, boost=1.0):
super(VideoClipSimilarityQuery, self).__init__(
hashes, min_score, boost, 'clip.simhash')
class FaceSimilarityQuery:
"""
Performs a face similarity search.
"""
def __init__(self, faces, min_score=0.90, boost=1.0,
field="analysis.zvi-face-detection.predictions.simhash"):
"""
Create a new FaceSimilarityQuery.
Args:
faces (list): A prediction with a 'simhash' property or a simhash itself.
min_score (float): The minimum score.
            boost (float): A boost value which weights this query higher than others.
            field (str): An optional field to make the comparison with. Defaults to the
                ZVI face detection simhash field.
"""
hashes = []
for face in as_collection(faces):
if isinstance(face, str):
hashes.append(face)
else:
hashes.append(face['simhash'])
self.simquery = SimilarityQuery(
hashes,
min_score,
boost,
field)
def for_json(self):
return self.simquery.for_json() | zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/search.py | search.py |
import base64
import binascii
import datetime
import decimal
import json
import logging
import os
import random
import sys
import time
from io import IOBase
from urllib.parse import urljoin
import jwt
import requests
from .entity.exception import ZmlpException
logger = logging.getLogger(__name__)
DEFAULT_SERVER = 'https://api.zvi.zorroa.com'
class ZmlpClient(object):
"""
ZmlpClient is used to communicate to a ZMLP API server.
"""
def __init__(self, apikey, server, **kwargs):
"""
Create a new ZmlpClient instance.
Args:
apikey: An API key in any supported form. (dict, base64 string, or open file handle)
            server: The url of the server to connect to. Defaults to https://api.zvi.zorroa.com
project_id: An optional project UUID for API keys with access to multiple projects.
max_retries: Maximum number of retries to make if the API server
is down, 0 for unlimited.
"""
self.apikey = self.__load_apikey(apikey)
self.server = server
self.project_id = kwargs.get('project_id', os.environ.get("ZMLP_PROJECT"))
self.max_retries = kwargs.get('max_retries', 3)
self.verify = True
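    # A minimal connection sketch; the key file, server URL and endpoint below are
    # illustrative:
    #
    #   client = ZmlpClient(open('apikey.json'), 'https://api.zvi.zorroa.com')
    #   rsp = client.post('api/v3/assets/_search', {'query': {'match_all': {}}})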
def stream(self, url, dst):
"""
Stream the given URL path to local dst file path.
Args:
url (str): The URL to stream
dst (str): The destination file path
"""
try:
with open(dst, 'wb') as handle:
response = requests.get(self.get_url(url), verify=self.verify,
headers=self.headers(), stream=True)
if not response.ok:
raise ZmlpClientException(
"Failed to stream asset: %s, %s" % (url, response))
for block in response.iter_content(1024):
handle.write(block)
return dst
except requests.exceptions.ConnectionError as e:
raise ZmlpConnectionException(e)
def stream_text(self, url):
"""
Stream the given URL.
Args:
url (str): The URL to stream
Yields:
generator (str): A generator of the lines making up the textual
URL.
"""
try:
response = requests.get(self.get_url(url), verify=self.verify,
headers=self.headers(), stream=True)
if not response.ok:
raise ZmlpClientException(
"Failed to stream text: %s" % response)
for line in response.iter_lines(decode_unicode=True):
yield line
except requests.exceptions.ConnectionError as e:
raise ZmlpConnectionException(e)
def send_file(self, path, file_path):
"""
Sends a file via request body
Args:
path (path): The URI fragment for the request.
file_path (str): The path to the file to send.
Returns:
dict: A dictionary which can be used to fetch the file.
"""
with open(file_path, 'rb') as f:
return self.__handle_rsp(requests.post(
self.get_url(path), headers=self.headers(content_type=""),
data=f), True)
def upload_file(self, path, file, body={}, json_rsp=True):
"""
Upload a single file and a request to the given endpoint path.
Args:
path (str): The URL to upload to.
file (str): The file path to upload.
body (dict): A request body
json_rsp (bool): Set to true if the result returned is JSON
Returns:
dict: The response body of the request.
"""
try:
post_files = [("file", (os.path.basename(file), open(file, 'rb')))]
if body is not None:
post_files.append(
["body", (None, to_json(body), 'application/json')])
return self.__handle_rsp(requests.post(
self.get_url(path), headers=self.headers(content_type=""),
files=post_files), json_rsp)
except requests.exceptions.ConnectionError as e:
raise ZmlpConnectionException(e)
def upload_files(self, path, files, body, json_rsp=True):
"""
        Upload an array of files and a request body to the given endpoint path.
Args:
path (str): The URL to upload to
files (list of str): The file paths to upload
body (dict): A request body
json_rsp (bool): Set to true if the result returned is JSON
Returns:
dict: The response body of the request.
"""
try:
post_files = []
for f in files:
if isinstance(f, IOBase):
post_files.append(
("files", (os.path.basename(f.name), f)))
else:
post_files.append(
("files", (os.path.basename(f), open(f, 'rb'))))
if body is not None:
post_files.append(
("body", ("", to_json(body),
'application/json')))
return self.__handle_rsp(requests.post(
self.get_url(path), headers=self.headers(content_type=""),
verify=self.verify, files=post_files), json_rsp)
except requests.exceptions.ConnectionError as e:
raise ZmlpConnectionException(e)
def get(self, path, body=None, is_json=True):
"""
Performs a get request.
Args:
path (str): An archivist URI path.
body (dict): The request body which will be serialized to json.
is_json (bool): Set to true to specify a JSON return value
Returns:
object: The http response object or an object deserialized from the
response json if the ``json`` argument is true.
Raises:
Exception: An error occurred making the request or parsing the
JSON response
"""
return self._make_request('get', path, body, is_json)
def post(self, path, body=None, is_json=True):
"""
Performs a post request.
Args:
path (str): An archivist URI path.
body (object): The request body which will be serialized to json.
is_json (bool): Set to true to specify a JSON return value
Returns:
object: The http response object or an object deserialized from the
response json if the ``json`` argument is true.
Raises:
Exception: An error occurred making the request or parsing the
JSON response
"""
return self._make_request('post', path, body, is_json)
def put(self, path, body=None, is_json=True):
"""
Performs a put request.
Args:
path (str): An archivist URI path.
body (object): The request body which will be serialized to json.
is_json (bool): Set to true to specify a JSON return value
Returns:
object: The http response object or an object deserialized from the
response json if the ``json`` argument is true.
Raises:
Exception: An error occurred making the request or parsing the
JSON response
"""
return self._make_request('put', path, body, is_json)
def delete(self, path, body=None, is_json=True):
"""
Performs a delete request.
Args:
path (str): An archivist URI path.
body (object): The request body which will be serialized to json.
is_json (bool): Set to true to specify a JSON return value
Returns:
object: The http response object or an object deserialized from
the response json if the ``json`` argument is true.
Raises:
Exception: An error occurred making the request or parsing the
JSON response
"""
return self._make_request('delete', path, body, is_json)
def iter_paged_results(self, url, req, limit, cls):
"""
Handles paging through the results of the standard _search
endpoints on the backend.
Args:
url (str): the URL to POST a search to
req (object): the search request body
limit (int): the maximum items to return, None for no limit.
cls (type): the class to wrap each result in
Yields:
Generator
"""
left_to_return = limit or sys.maxsize
page = 0
req["page"] = {}
while True:
if left_to_return < 1:
break
page += 1
req["page"]["size"] = min(100, left_to_return)
req["page"]["from"] = (page - 1) * req["page"]["size"]
rsp = self.post(url, req)
if not rsp.get("list"):
break
for f in rsp["list"]:
yield cls(f)
left_to_return -= 1
# Used to break before pulling new batch
if rsp.get("break"):
break
def _make_request(self, method, path, body=None, is_json=True):
request_function = getattr(requests, method)
if body is not None:
data = to_json(body)
else:
data = body
# Making the request is wrapped in its own try/catch so it's easier
# to catch any and all socket and http exceptions that can possibly be
# thrown. Once that happens, __handle_rsp is called which may throw
# application level exceptions.
rsp = None
tries = 0
url = self.get_url(path, body)
while True:
try:
rsp = request_function(url, data=data, headers=self.headers(),
verify=self.verify)
break
except Exception as e:
# Some form of connection error, wait until archivist comes
# back.
tries += 1
if 0 < self.max_retries <= tries:
raise e
wait = random.randint(1, random.randint(1, 60))
# Switched to stderr in case no logger is setup, still want
# to see messages.
msg = "Communicating to ZMLP (%s) timed out %d times, " \
"waiting ... %d seconds, error=%s\n"
sys.stderr.write(msg % (url, tries, wait, e))
time.sleep(wait)
return self.__handle_rsp(rsp, is_json)
def __handle_rsp(self, rsp, is_json):
if rsp.status_code != 200:
self.__raise_exception(rsp)
if is_json and len(rsp.content):
rsp_val = rsp.json()
if logger.getEffectiveLevel() == logging.DEBUG:
logger.debug(
"rsp: status: %d body: '%s'" % (rsp.status_code, rsp_val))
return rsp_val
return rsp
def __raise_exception(self, rsp):
data = {}
try:
data.update(rsp.json())
except Exception as e:
# The result is not json.
data["message"] = "Your HTTP request was invalid '%s', response not " \
"JSON formatted. %s" % (rsp.status_code, e)
data["status"] = rsp.status_code
# If the status code can't be found, then ZmlpRequestException is returned.
ex_class = translate_exception(rsp.status_code)
raise ex_class(data)
def get_url(self, path, body=None):
"""
Returns the full URL including the configured server part.
"""
url = urljoin(self.server, path)
if logger.getEffectiveLevel() == logging.DEBUG:
logger.debug("url: '%s' path: '%s' body: '%s'" % (url, path, body))
return url
def headers(self, content_type="application/json"):
"""
Generate the request headers.
Args:
content_type(str): The content-type for the request. Defaults to
'application/json'
Returns:
dict: An http header struct.
"""
header = {'Authorization': "Bearer {}".format(self.__sign_request())}
if content_type:
header['Content-Type'] = content_type
if logger.getEffectiveLevel() == logging.DEBUG:
logger.debug("headers: %s" % header)
return header
def __load_apikey(self, apikey):
key_data = None
if not apikey:
return key_data
elif hasattr(apikey, 'read'):
key_data = json.load(apikey)
elif isinstance(apikey, dict):
key_data = apikey
elif isinstance(apikey, (str, bytes)):
try:
key_data = json.loads(base64.b64decode(apikey))
except binascii.Error:
raise ValueError("Invalid base64 encoded API key.")
return key_data
def __sign_request(self):
if not self.apikey:
raise RuntimeError("Unable to make request, no ApiKey has been specified.")
claims = {
'aud': self.server,
'exp': datetime.datetime.utcnow() + datetime.timedelta(seconds=60),
'accessKey': self.apikey["accessKey"],
}
if os.environ.get("ZMLP_TASK_ID"):
claims['taskId'] = os.environ.get("ZMLP_TASK_ID")
claims['jobId'] = os.environ.get("ZMLP_JOB_ID")
if self.project_id:
claims["projectId"] = self.project_id
return jwt.encode(claims, self.apikey['secretKey'], algorithm='HS512')
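# ---------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the library): how the HTTP
# helpers defined above are typically combined. `client` is assumed to be an
# already-constructed instance of the client class above; the endpoint paths
# and response fields are placeholders rather than guaranteed API contracts.
def _example_client_usage(client):
    # Simple GET; the parsed JSON body is returned as a dict.
    status = client.get('/monitor/health')
    # POST with a body; the body is serialized with to_json() by _make_request.
    page = client.post('/api/v3/assets/_search', {'query': {'match_all': {}}})
    # Page through a search endpoint, wrapping each row in a class that takes
    # a single dict argument. `dict` is used here only to keep the sketch
    # self-contained.
    first_ten = list(client.iter_paged_results(
        '/api/v3/assets/_search', {'query': {'match_all': {}}}, 10, dict))
    return status, page, first_ten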
class SearchResult(object):
"""
A utility class for wrapping various search result formats
that come back from the ZMLP servers.
"""
def __init__(self, data, clazz):
"""
Create a new SearchResult instance.
Note that it's possible to both iterate and index a SearchResult
like a list.
Args:
data (dict): A search response body from the ZMLP servers.
clazz (mixed): A class to wrap each item in the response body.
"""
self.items = [clazz(item) for item in data["list"]]
self.offset = data["page"]["from"]
self.size = len(data["list"])
self.total = data["page"]["totalCount"]
def __iter__(self):
return iter(self.items)
def __getitem__(self, idx):
return self.items[idx]
def to_json(obj, indent=None):
"""
Convert the given object to a JSON string using
the ZmlpJsonEncoder.
Args:
obj (mixed): any json serializable python object.
indent (int): The indentation level for the json, or None for compact.
Returns:
str: The serialized object
"""
val = json.dumps(obj, cls=ZmlpJsonEncoder, indent=indent)
if logger.getEffectiveLevel() == logging.DEBUG:
logger.debug("json: %s" % val)
return val
class ZmlpJsonEncoder(json.JSONEncoder):
"""
JSON encoder with ZMLP-specific serialization defaults.
"""
def default(self, obj):
if hasattr(obj, 'for_json'):
return obj.for_json()
elif isinstance(obj, (set, frozenset)):
return list(obj)
elif isinstance(obj, datetime.datetime):
return obj.isoformat()
elif isinstance(obj, datetime.date):
return obj.isoformat()
elif isinstance(obj, datetime.time):
return obj.isoformat()
elif isinstance(obj, decimal.Decimal):
return float(obj)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
class ZmlpClientException(ZmlpException):
"""The base exception class for all ZmlpClient related Exceptions."""
pass
class ZmlpRequestException(ZmlpClientException):
"""
The base exception class for all exceptions thrown from zmlp.
"""
def __init__(self, data):
super(ZmlpRequestException, self).__init__(
data.get("message", "Unknown request exception"))
self.__data = data
@property
def type(self):
return self.__data["exception"]
@property
def cause(self):
return self.__data["cause"]
@property
def endpoint(self):
return self.__data["path"]
@property
def status(self):
return self.__data["status"]
def __str__(self):
return "<ZmlpRequestException msg=%s>" % self.__data["message"]
class ZmlpConnectionException(ZmlpClientException):
"""
This exception is thrown if the client encounters a connectivity issue
with the ZMLP API servers.
"""
pass
class ZmlpWriteException(ZmlpRequestException):
"""
This exception is thrown when ZMLP fails a write operation.
"""
def __init__(self, data):
super(ZmlpWriteException, self).__init__(data)
class ZmlpSecurityException(ZmlpRequestException):
"""
This exception is thrown if Zmlp fails a security check on the request.
"""
def __init__(self, data):
super(ZmlpSecurityException, self).__init__(data)
class ZmlpNotFoundException(ZmlpRequestException):
"""
This exception is thrown if ZMLP fails a read operation because
a piece of named data cannot be found.
"""
def __init__(self, data):
super(ZmlpNotFoundException, self).__init__(data)
class ZmlpDuplicateException(ZmlpWriteException):
"""
This exception is thrown if ZMLP fails a write operation because
the newly created element would be a duplicate.
"""
def __init__(self, data):
super(ZmlpDuplicateException, self).__init__(data)
class ZmlpInvalidRequestException(ZmlpRequestException):
"""
This exception is thrown if the request sent to Zmlp is invalid in
some way, similar to an IllegalArgumentException.
"""
def __init__(self, data):
super(ZmlpInvalidRequestException, self).__init__(data)
"""
A map of HTTP response codes to local exception types.
"""
EXCEPTION_MAP = {
404: ZmlpNotFoundException,
409: ZmlpDuplicateException,
500: ZmlpInvalidRequestException,
400: ZmlpInvalidRequestException,
401: ZmlpSecurityException,
403: ZmlpSecurityException
}
def translate_exception(status_code):
"""
Translate the HTTP status code into one of the exceptions.
Args:
status_code (int): the HTTP status code
Returns:
Exception: the exception to throw for the given status code
"""
return EXCEPTION_MAP.get(status_code, ZmlpRequestException)
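# Minimal sketch (illustrative only): how translate_exception and EXCEPTION_MAP
# are applied by __raise_exception above. Unmapped status codes fall back to
# the generic ZmlpRequestException; the error body below is fabricated.
def _example_translate_exception():
    assert translate_exception(404) is ZmlpNotFoundException
    assert translate_exception(401) is ZmlpSecurityException
    assert translate_exception(418) is ZmlpRequestException   # unmapped code
    try:
        # Raise the same way __raise_exception does, from an error body dict.
        raise translate_exception(409)({'message': 'duplicate asset', 'status': 409})
    except ZmlpDuplicateException as e:
        return str(e)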
import json
import logging
import os
from ..client import to_json
from ..util import as_collection
__all__ = [
'Asset',
'FileImport',
'FileUpload',
'StoredFile',
'FileTypes'
]
logger = logging.getLogger(__name__)
class DocumentMixin(object):
"""
A Mixin class which provides easy access to a deeply nested dictionary.
"""
def __init__(self):
self.document = {}
def set_attr(self, attr, value):
"""Set the value of an attribute.
Args:
attr (str): The attribute name in dot notation format.
ex: 'foo.bar'
value (:obj:`object`): value: The value for the particular
attribute. Can be any json serializable type.
"""
self.__set_attr(attr, value)
def del_attr(self, attr):
"""
Delete the attribute from the document. If the attribute does not exist
or is protected by a manual field edit then return false. Otherwise,
delete the attribute and return true.
Args:
attr (str): The attribute name.
Returns:
bool: True if the attribute was deleted.
"""
doc = self.document
parts = attr.split(".")
for k in parts[0:-1]:
if not isinstance(doc, dict) or k not in doc:
return False
doc = doc.get(k)
attr_name = parts[-1]
try:
del doc[attr_name]
return not self.attr_exists(attr)
except KeyError:
return False
def get_attr(self, attr, default=None):
"""Get the given attribute to the specified value.
Args:
attr (str): The attribute name in dot notation format.
ex: 'foo.bar'
default (:obj:`mixed`) The default value if no attr exists.
Returns:
mixed: The value of the attribute.
"""
doc = self.document
parts = attr.split(".")
for k in parts:
if not isinstance(doc, dict) or k not in doc:
return default
doc = doc.get(k)
return doc
def attr_exists(self, attr):
"""
Return true if the given attribute exists.
Args:
attr (str): The name of the attribute to check.
Returns:
bool: true if the attr exists.
"""
doc = self.document
parts = attr.split(".")
for k in parts[0:len(parts) - 1]:
if k not in doc:
return False
doc = doc.get(k)
return parts[-1] in doc
def add_analysis(self, name, val):
"""Add an analysis structure to the document.
Args:
name (str): The name of the analysis
val (mixed): the value/result of the analysis.
"""
if not name:
raise ValueError("Analysis requires a unique name")
attr = "analysis.%s" % name
if val is None:
self.set_attr(attr, None)
else:
self.set_attr(attr, json.loads(to_json(val)))
def get_analysis(self, namespace):
"""
Return the analysis data stored under the given name.
Args:
namespace (str): The model namespace / pipeline module name.
Returns:
dict: An arbitrary dictionary containing predictions, content, etc.
"""
name = getattr(namespace, "namespace", "analysis.{}".format(namespace))
return self.get_attr(name)
def get_predicted_labels(self, namespace, min_score=None):
"""
Get all predictions made by the given label prediction module. If no
label predictions are present, returns None.
Args:
namespace (str): The analysis namespace, example 'zvi-label-detection'.
min_score (float): Filter results by a minimum score.
Returns:
list: A list of dictionaries containing the predictions
"""
name = getattr(namespace, "namespace", "analysis.{}".format(namespace))
predictions = self.get_attr(f'{name}.predictions')
if not predictions:
return None
if min_score:
return [pred for pred in predictions if pred['score'] >= min_score]
else:
return predictions
def get_predicted_label(self, namespace, label):
"""
Get a prediction made by the given label prediction module. If no
label predictions are present, returns None.
Args:
namespace (str): The model / module name that created the prediction.
label (mixed): A label name or integer index of a prediction.
Returns:
dict: a prediction dict with a label, score, etc.
"""
preds = self.get_predicted_labels(namespace)
if not preds:
return None
if isinstance(label, str):
preds = [pred for pred in preds if pred['label'] == label]
label = 0
try:
return preds[label]
except IndexError:
return None
def extend_list_attr(self, attr, items):
"""
Adds the given items to the given attr. The attr must be a list or set.
Args:
attr (str): The name of the attribute
items (:obj:`list` of :obj:`mixed`): A list of new elements.
"""
items = as_collection(items)
all_items = self.get_attr(attr)
if all_items is None:
all_items = set()
self.set_attr(attr, all_items)
try:
all_items.update(items)
except AttributeError:
all_items.extend(items)
def __set_attr(self, attr, value):
"""
Handles setting an attribute value.
Args:
attr (str): The attribute name in dot notation format. ex: 'foo.bar'
value (mixed): The value for the particular attribute.
Can be any json serializable type.
"""
doc = self.document
parts = attr.split(".")
for k in parts[0:len(parts) - 1]:
if k not in doc:
doc[k] = {}
doc = doc[k]
if isinstance(value, dict):
doc[parts[-1]] = value
else:
try:
doc[parts[-1]] = value.for_json()
except AttributeError:
doc[parts[-1]] = value
def __setitem__(self, field, value):
self.set_attr(field, value)
def __getitem__(self, field):
return self.get_attr(field)
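# Usage sketch (illustrative only): the dot-notation helpers above all operate
# on the nested `document` dict. DocumentMixin is instantiated directly here to
# keep the sketch self-contained; in practice these methods are used on Asset.
def _example_document_mixin():
    doc = DocumentMixin()
    doc.set_attr('source.path', 'gs://bucket/images/cat.jpg')
    doc['source.filesize'] = 53214            # __setitem__ delegates to set_attr
    assert doc.get_attr('source.path') == 'gs://bucket/images/cat.jpg'
    assert doc.get_attr('source.missing', default='n/a') == 'n/a'
    assert doc.attr_exists('source.filesize')
    doc.extend_list_attr('keywords', ['cat', 'pet'])
    doc.del_attr('source.filesize')
    return doc.document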
class FileImport(object):
"""
An FileImport is used to import a new file and metadata into ZMLP.
"""
def __init__(self, uri, custom=None, page=None, label=None, tmp=None):
"""
Construct an FileImport instance which can point to a remote URI.
Args:
uri (str): a URI locator to the file asset.
custom (dict): Values for custom metadata fields.
page (int): The specific page to import if any.
label (Label): An optional Label which will add the file to
a Model training set.
tmp (dict): A dict of temp attrs that are removed after processing.
"""
super(FileImport, self).__init__()
self.uri = uri
self.custom = custom or {}
self.page = page
self.label = label
self.tmp = tmp
def for_json(self):
"""Returns a dictionary suitable for JSON encoding.
The ZpsJsonEncoder will call this method automatically.
Returns:
:obj:`dict`: A JSON serializable version of this Document.
"""
return {
"uri": self.uri,
"custom": self.custom,
"page": self.page,
"label": self.label,
"tmp": self.tmp
}
def __setitem__(self, field, value):
self.custom[field] = value
def __getitem__(self, field):
return self.custom[field]
class FileUpload(FileImport):
"""
FileUpload instances point to a local file that will be uploaded for analysis.
"""
def __init__(self, path, custom=None, page=None, label=None):
"""
Create a new FileUpload instance.
Args:
path (str): A path to a file, the file must exist.
custom (dict): Values for pre-created custom metadata fields.
page (int): The specific page to import if any.
label (Label): An optional Label which will add the file to
a Model training set.
"""
super(FileUpload, self).__init__(
os.path.normpath(os.path.abspath(path)), custom, page, label)
if not os.path.exists(path):
raise ValueError('The path "{}" does not exist'.format(path))
def for_json(self):
"""Returns a dictionary suitable for JSON encoding.
The ZpsJsonEncoder will call this method automatically.
Returns:
:obj:`dict`: A JSON serializable version of this Document.
"""
return {
"uri": self.uri,
"page": self.page,
"label": self.label,
"custom": self.custom
}
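# Usage sketch (illustrative only): building import records. The URI and custom
# fields are placeholders. FileUpload requires the local path to exist, so this
# sketch constructs only a FileImport; a local file would be wrapped the same
# way with FileUpload('/path/to/file.jpg', ...) and passed to
# app.assets.batch_upload_files([...]).
def _example_file_import():
    imp = FileImport('gs://bucket/images/dog.jpg',
                     custom={'department': 'marketing'},
                     page=1)
    imp['shot_id'] = 'A-12'                   # __setitem__ writes custom fields
    return imp.for_json()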
class Asset(DocumentMixin):
"""
An Asset represents a single processed file. Assets start out
in the 'CREATED' state, which indicates they've been created but not yet processed.
Once an asset has been processed and augmented with files created by various
analysis modules, the Asset will move into the 'ANALYZED' state.
"""
def __init__(self, data):
super(Asset, self).__init__()
if not data:
raise ValueError("Error creating Asset instance, Assets must have an id.")
self.id = data.get("id")
self.document = data.get("document", {})
self.score = data.get("score", 0)
self.inner_hits = data.get("inner_hits", [])
@staticmethod
def from_hit(hit):
"""
Converts an ElasticSearch hit into an Asset.
Args:
hit (dict): A raw ES document
Returns:
Asset: The Asset.
"""
return Asset({
'id': hit['_id'],
'score': hit.get('_score', 0),
'document': hit.get('_source', {}),
'inner_hits': hit.get('inner_hits', [])})
@property
def uri(self):
"""
The URI of the asset.
Returns:
str: The URI of the data.
"""
return self.get_attr("source.path")
@property
def extension(self):
"""
The file extension of the asset, lower cased.
Returns:
str: The file extension
"""
return self.get_attr("source.extension").lower()
def add_file(self, stored_file):
"""
Adds the StoredFile record to the asset's list of associated files.
Args:
stored_file (StoredFile): A file that has been stored in ZMLP
Returns:
bool: True if the file was added to the list, False if it was a duplicate.
"""
# Ensure the file doesn't already exist in the metadata
if not self.get_files(id=stored_file.id):
files = self.get_attr("files") or []
files.append(stored_file._data)
self.set_attr("files", files)
return True
return False
def get_files(self, name=None, category=None, mimetype=None, extension=None,
id=None, attrs=None, attr_keys=None, sort_func=None):
"""
Return all stored files associated with this asset. Optionally
filter the results.
Args:
name (str): The associated file's name.
category (str): The associated file's category, eg proxy, backup, etc.
mimetype (str): The mimetype must start with this string.
extension (str): The file name must have the given extension.
id (str): The file id, or list of ids, to match.
attrs (dict): The file must have all of the given attributes.
attr_keys (list): A list of attribute keys that must be present.
sort_func (func): A lambda function for sorting the result.
Returns:
list of StoredFile: A list of ZMLP file records.
"""
result = []
files = self.get_attr("files") or []
for fs in files:
match = True
if id and not any((item for item in as_collection(id)
if fs["id"] == item)):
match = False
if name and not any((item for item in as_collection(name)
if fs["name"] == item)):
match = False
if category and not any((item for item in as_collection(category)
if fs["category"] == item)):
match = False
if mimetype and not any((item for item in as_collection(mimetype)
if fs["mimetype"].startswith(item))):
match = False
if extension and not any((item for item in as_collection(extension)
if fs["name"].endswith("." + item))):
match = False
file_attrs = fs.get("attrs", {})
if attr_keys:
if not any(key in file_attrs for key in as_collection(attr_keys)):
match = False
if attrs:
for k, v in attrs.items():
if file_attrs.get(k) != v:
match = False
if match:
result.append(StoredFile(fs))
if sort_func:
result = sorted(result, key=sort_func)
return result
def get_thumbnail(self, level):
"""
Return a thumbnail StoredFile record for the Asset. The level
corresponds to the size of the thumbnail: 0 for the smallest, up
to N for the largest. Levels 0, 1, and 2 are smaller than
the source media; level 3 or above (if they exist) will
be full resolution or higher images used for OCR purposes.
To download the thumbnail call app.assets.download_file(stored_file)
Args:
level (int): The size level, 0 for smallest up to N.
Returns:
StoredFile: A StoredFile instance or None if no image proxies exist.
"""
files = self.get_files(mimetype="image/", category="proxy",
sort_func=lambda f: f.attrs.get('width', 0))
if not files:
return None
if level >= len(files):
level = -1
return files[level]
def get_inner_hits(self, name):
"""
Return any inner hits from a collapse query.
Args:
name (str): The inner hit name.
Returns:
list[Asset]: A list of Assets.
"""
try:
return [Asset.from_hit(hit) for hit in self.inner_hits[name]['hits']['hits']]
except KeyError:
return []
def for_json(self):
"""Returns a dictionary suitable for JSON encoding.
The ZpsJsonEncoder will call this method automatically.
Returns:
:obj:`dict`: A JSON serializable version of this Document.
"""
return {
"id": self.id,
"uri": self.get_attr("source.path"),
"document": self.document,
"page": self.get_attr("media.pageNumber"),
}
def __str__(self):
return "<Asset id='{}'/>".format(self.id)
def __repr__(self):
return "<Asset id='{}' at {}/>".format(self.id, hex(id(self)))
def __hash__(self):
return hash(self.id)
def __eq__(self, other):
if not getattr(other, "id"):
return False
return other.id == self.id
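# Usage sketch (illustrative only): reading attributes and proxy files from an
# Asset built from a raw dict. The metadata layout mirrors the accessors above;
# real Assets come back from searches or app.assets.get_asset().
def _example_asset_usage():
    asset = Asset({
        'id': 'abc123',
        'document': {
            'source': {'path': 'gs://bucket/dog.jpg', 'extension': 'JPG'},
            'files': [{
                'id': 'assets/abc123/proxy/image_450x360.jpg',
                'name': 'image_450x360.jpg',
                'category': 'proxy',
                'mimetype': 'image/jpeg',
                'size': 12345,
                'attrs': {'width': 450, 'height': 360}
            }]
        }
    })
    assert asset.extension == 'jpg'
    proxies = asset.get_files(category='proxy', mimetype='image/')
    thumbnail = asset.get_thumbnail(0)        # smallest proxy, or None
    return asset.uri, proxies, thumbnail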
class StoredFile(object):
"""
The StoredFile class represents a supporting file that has been stored in ZVI.
"""
def __init__(self, data):
self._data = data
@property
def id(self):
"""
The unique ID of the file.
"""
return self._data['id']
@property
def name(self):
"""
The file name.
"""
return self._data['name']
@property
def category(self):
"""
The file category.
"""
return self._data['category']
@property
def attrs(self):
"""
Arbitrary attributes.
"""
return self._data['attrs']
@property
def mimetype(self):
"""
The file mimetype.
"""
return self._data['mimetype']
@property
def size(self):
"""
The size of the file.
"""
return self._data['size']
@property
def cache_id(self):
"""
A string suitable for on-disk caching/filenames. Replaces
all slashes in id with underscores.
"""
return self.id.replace("/", "_")
def __str__(self):
return "<StoredFile {}>".format(self.id)
def __eq__(self, other):
return getattr(other, 'id', None) == self.id
def __hash__(self):
return hash(self.id)
def for_json(self):
"""Return a JSON serialized copy.
Returns:
:obj:`dict`: A json serializable dict.
"""
serializable_dict = {}
attrs = self._data.keys()
for attr in attrs:
if getattr(self, attr, None) is not None:
serializable_dict[attr] = getattr(self, attr)
return serializable_dict
class FileTypes:
"""
A class for storing the supported file types.
"""
videos = frozenset(['mov', 'mp4', 'mpg', 'mpeg', 'm4v', 'webm', 'ogv', 'ogg', 'mxf', 'avi'])
"""A set of supported video file formats."""
images = frozenset(["bmp", "cin", "dpx", "gif", "jpg",
"jpeg", "exr", "png", "psd", "rla", "tif", "tiff",
"dcm"])
"""A set of supported image file formats."""
documents = frozenset(['pdf', 'doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'vsd', 'vsdx'])
"""A set of supported document file formats."""
all = frozenset(videos.union(images).union(documents))
"""A set of all supported file formats."""
@classmethod
def resolve(cls, file_types):
"""
Resolve a list of file extensions or types (images, documents, videos) to
a supported list of extensions.
Args:
file_types (list): A list of file extensions (without the dot) and/or type names.
Returns:
list: The valid list of extensions from the given list
"""
file_types = as_collection(file_types)
if not file_types:
return cls.all
result = set()
for file_type in file_types:
if file_type in cls.all:
result.add(file_type)
else:
exts = getattr(cls, file_type, None)
if exts:
result.update(exts)
return sorted(list(result))
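# Minimal sketch (illustrative only): FileTypes.resolve accepts a mix of general
# categories and explicit extensions and returns a sorted list of extensions.
def _example_file_types():
    exts = FileTypes.resolve(['images', 'pdf'])
    assert 'jpg' in exts and 'pdf' in exts and 'mp4' not in exts
    return exts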
from datetime import datetime
from .base import BaseEntity
from ..util import ObjectView
__all__ = [
'Job',
'Task',
'TaskError'
]
class Job(BaseEntity):
"""
A Job represents a backend data process. Jobs are made up of Tasks
which are scheduled to execute on Analyst data processing nodes.
"""
def __init__(self, data):
super(Job, self).__init__(data)
@property
def name(self):
"""The name of the Job"""
return self._data['name']
@property
def state(self):
"""The state of the Job"""
return self._data['state']
@property
def paused(self):
"""True if the Job is paused."""
return self._data['paused']
@property
def priority(self):
"""The priority of the Job"""
return self._data['priority']
@property
def time_started(self):
"""The datetime the job got the first analyst."""
if self._data['timeStarted'] == -1:
return None
else:
return datetime.fromtimestamp(self._data['timeStarted'] / 1000.0)
@property
def time_stopped(self):
"""The datetime the job finished."""
if self._data['timeStopped'] == -1:
return None
else:
return datetime.fromtimestamp(self._data['timeStopped'] / 1000.0)
@property
def asset_counts(self):
"""Asset counts for the Job"""
return ObjectView(self._data['assetCounts'])
@property
def task_counts(self):
"""Task counts for the Job"""
return ObjectView(self._data['taskCounts'])
@property
def time_modified(self):
"""The date/time the Job was modified."""
return datetime.fromtimestamp(self._data['timeUpdated'] / 1000.0)
class Task(BaseEntity):
"""
Jobs contain Tasks and each Task handles the processing for 1 or more files/assets.
"""
def __init__(self, data):
super(Task, self).__init__(data)
@property
def job_id(self):
"""The Job Id"""
return self._data['jobId']
@property
def name(self):
"""The name of the Task"""
return self._data['name']
@property
def state(self):
"""The name of the Task"""
return self._data['state']
@property
def time_started(self):
"""The datetime the job got the first analyst."""
if self._data['timeStarted'] == -1:
return None
else:
return datetime.fromtimestamp(self._data['timeStarted'] / 1000.0)
@property
def time_stopped(self):
"""The datetime the job finished."""
if self._data['timeStopped'] == -1:
return None
else:
return datetime.fromtimestamp(self._data['timeStopped'] / 1000.0)
@property
def time_pinged(self):
"""The datetime the running task sent a watch dog ping."""
if self._data['timePing'] == -1:
return None
else:
return datetime.fromtimestamp(self._data['timePing'] / 1000.0)
@property
def time_modified(self):
"""The date/time the Job was modified."""
return self.time_pinged
@property
def asset_counts(self):
return ObjectView(self._data['assetCounts'])
class TaskError:
"""
A TaskError contains information regarding a failed Task or Asset.
"""
def __init__(self, data):
self._data = data
@property
def id(self):
"""ID of the TaskError"""
return self._data['id']
@property
def task_id(self):
"""UUID of the Task that encountered an error."""
return self._data['taskId']
@property
def job_id(self):
"""UUID of the Job that encountered an error."""
return self._data['jobId']
@property
def datasource_id(self):
"""UUID of the DataSource that encountered an error."""
return self._data['dataSourceId']
@property
def asset_id(self):
"""ID of the Asset that encountered an error."""
return self._data['assetId']
@property
def path(self):
"""File path or URI that was being processed."""
return self._data['path']
@property
def message(self):
"""Error message from the exception that generated the error."""
return self._data['message']
@property
def processor(self):
"""Processor in which the error occurred."""
return self._data['processor']
@property
def fatal(self):
"""True if the error was fatal and the Asset was not processed."""
return self._data['fatal']
@property
def phase(self):
"""Phase at which the error occurred: generate, execute, teardown."""
return self._data['phase']
@property
def time_created(self):
"""The date/time the entity was created."""
return datetime.fromtimestamp(self._data['timeCreated'] / 1000.0)
@property
def stack_trace(self):
"""Full stack trace from the error, if any."""
return self._data['stackTrace']
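# Minimal sketch (illustrative only): the entity wrappers above expose the raw
# server payload through read-only properties. The payload below is fabricated
# to show the epoch-millisecond handling (-1 means not started/stopped yet) and
# assumes BaseEntity simply stores the dict it is given.
def _example_job_wrapper():
    job = Job({
        'id': 'job-1',
        'name': 'import gs://bucket',
        'state': 'InProgress',
        'paused': False,
        'priority': 100,
        'timeStarted': 1609459200000,
        'timeStopped': -1,
        'timeUpdated': 1609459260000,
        'assetCounts': {'assetCreatedCount': 10},
        'taskCounts': {'tasksTotal': 2}
    })
    assert job.time_stopped is None            # -1 sentinel maps to None
    return job.name, job.state, job.time_started, job.task_counts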
from enum import Enum
from .base import BaseEntity
from ..util import as_id
__all__ = [
'Model',
'ModelType',
'Label',
'LabelScope',
'ModelTypeInfo'
]
class ModelType(Enum):
"""
Types of models that can be Trained.
"""
ZVI_KNN_CLASSIFIER = 0
"""A KMeans clustering model for quickly clustering assets into general groups."""
ZVI_LABEL_DETECTION = 1
"""Retrain the ResNet50 convolutional neural network with your own labels."""
ZVI_FACE_RECOGNITION = 2
"""Face Recognition model using a KNN classifier."""
GCP_LABEL_DETECTION = 4
"""Train a Google AutoML vision model."""
TF2_IMAGE_CLASSIFIER = 5
"""Provide your own custom Tensorflow2/Keras model"""
PYTORCH_IMAGE_CLASSIFIER = 6
"""Provide your own custom Pytorch model"""
class LabelScope(Enum):
"""
Types of label scopes
"""
TRAIN = 1
"""The label marks the Asset as part of the Training set."""
TEST = 2
"""The label marks the Asset as part of the Test set."""
class Model(BaseEntity):
def __init__(self, data):
super(Model, self).__init__(data)
@property
def name(self):
"""The name of the Model"""
return self._data['name']
@property
def module_name(self):
"""The name of the Pipeline Module"""
return self._data['moduleName']
@property
def namespace(self):
"""The name of the Pipeline Module"""
return 'analysis.{}'.format(self._data['moduleName'])
@property
def type(self):
"""The type of model"""
return ModelType[self._data['type']]
@property
def file_id(self):
"""The file ID of the trained model"""
return self._data['fileId']
@property
def ready(self):
"""
True if the model is fully trained and ready to use.
Adding new labels will set ready to false.
"""
return self._data['ready']
def make_label(self, label, bbox=None, simhash=None, scope=None):
"""
Make an instance of a Label which can be used to label assets.
Args:
label (str): The label name.
bbox (list[float]): An optional bounding box.
simhash (str): An associated simhash, if any.
scope (LabelScope): The scope of the image, can be TEST or TRAIN.
Defaults to TRAIN.
Returns:
Label: The new label.
"""
return Label(self, label, bbox=bbox, simhash=simhash, scope=scope)
def make_label_from_prediction(self, label, prediction, scope=None):
"""
Make a label from a prediction. This will copy the bbox
and simhash from the prediction, if any.
Args:
label (str): A name for the prediction.
prediction (dict): A prediction from an analysis namespace.
scope (LabelScope): The scope of the image, can be TEST or TRAIN.
Defaults to TRAIN.
Returns:
Label: A new label
"""
return Label(self, label,
bbox=prediction.get('bbox'),
simhash=prediction.get('simhash'),
scope=scope)
def get_label_search(self, scope=None):
"""
Return a search that can be used to query all assets
with labels.
Args:
scope (LabelScope): An optional label scope to filter by.
Returns:
dict: A search to pass to an asset search.
"""
search = {
'size': 64,
'sort': [
'_doc'
],
'_source': ['labels', 'files'],
'query': {
'nested': {
'path': 'labels',
'query': {
'bool': {
'must': [
{'term': {'labels.modelId': self.id}}
]
}
}
}
}
}
if scope:
must = search['query']['nested']['query']['bool']['must']
must.append({'term': {'labels.scope': scope.name}})
return search
def get_confusion_matrix_search(self, min_score=0.0, max_score=1.0, test_set_only=True):
"""
Returns a search query with aggregations that can be used to create a confusion
matrix.
Args:
min_score (float): Minimum confidence score to return results for.
max_score (float): Maximum confidence score to return results for.
test_set_only (bool): If True only assets with TEST labels will be evaluated.
Returns:
dict: A search to pass to an asset search.
"""
prediction_term_map = {
ModelType.ZVI_KNN_CLASSIFIER: f'{self.namespace}.label',
ModelType.ZVI_FACE_RECOGNITION: f'{self.namespace}.predictions.label'
}
score_map = {ModelType.ZVI_KNN_CLASSIFIER: f'{self.namespace}.score',
ModelType.ZVI_LABEL_DETECTION: f'{self.namespace}.score',
ModelType.ZVI_FACE_RECOGNITION: f'{self.namespace}.predictions.score'}
if self.type not in prediction_term_map:
raise TypeError(f'Cannot create a confusion matrix search for {self.type} models.')
search_query = {
"size": 0,
"query": {
"bool": {
"filter": [
{"range": {score_map[self.type]: {"gte": min_score, "lte": max_score}}}
]
}
},
"aggs": {
"nested_labels": {
"nested": {
"path": "labels"
},
"aggs": {
"model_train_labels": {
"filter": {
"bool": {
"must": [
{"term": {"labels.modelId": self.id}}
]
}
},
"aggs": {
"labels": {
"terms": {"field": "labels.label"},
"aggs": {
"predictions_by_label": {
"reverse_nested": {},
"aggs": {
"predictions": {
"terms": {
"field": prediction_term_map[self.type]
}
}
}
}
}
}
}
}
}
}
}
}
if test_set_only:
(search_query
['aggs']
['nested_labels']
['aggs']
['model_train_labels']
['filter']
['bool']
['must'].append({"term": {"labels.scope": "TEST"}}))
return search_query
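# Usage sketch (illustrative only): the labeling helpers on Model. The raw dict
# mirrors the properties above and assumes BaseEntity exposes it (including the
# id); real Model instances come back from the ModelApp methods in the app layer.
def _example_model_labels():
    model = Model({
        'id': 'model-1',
        'name': 'animals',
        'moduleName': 'animals',
        'type': 'ZVI_KNN_CLASSIFIER',
        'fileId': 'models/model-1/model.zip',
        'ready': False
    })
    train_label = model.make_label('cat')                    # defaults to TRAIN scope
    test_label = model.make_label('cat', scope=LabelScope.TEST)
    test_search = model.get_label_search(scope=LabelScope.TEST)
    return train_label.for_json(), test_label.for_json(), test_search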
class ModelTypeInfo:
"""
Additional properties related to each ModelType.
"""
def __init__(self, data):
self._data = data
@property
def name(self):
"""The name of the model type."""
return self._data['name']
@property
def description(self):
"""The description of the model type."""
return self._data['description']
@property
def objective(self):
"""The objective of the model, LABEL_DETECTION, FACE_RECOGNITION, etc"""
return self._data['objective']
@property
def provider(self):
"""The company that maintains the structure and algorithm for the model."""
return self._data['provider']
@property
def min_concepts(self):
"""The minimum number of unique concepts a model must have before it can be trained."""
return self._data['minConcepts']
@property
def min_examples(self):
"""
The minimum number of examples per concept a
model must have before it can be trained.
"""
return self._data['minExamples']
class Label:
"""
A Label that can be added to an Asset either at import time
or once the Asset has been imported.
"""
def __init__(self, model, label, bbox=None, simhash=None, scope=None):
"""
Create a new label.
Args:
model: (Model): The model the label is for.
label (str): The label itself.
bbox (list): An optional list of floats for a bounding box.
simhash (str): An optional similarity hash.
scope (LabelScope): The scope of the image, can be TEST or TRAIN.
Defaults to TRAIN.
"""
self.model_id = as_id(model)
self.label = label
self.bbox = bbox
self.simhash = simhash
self.scope = scope or LabelScope.TRAIN
def for_json(self):
"""Returns a dictionary suitable for JSON encoding.
The ZpsJsonEncoder will call this method automatically.
Returns:
:obj:`dict`: A JSON serializable version of this Document.
"""
return {
'modelId': self.model_id,
'label': self.label,
'bbox': self.bbox,
'simhash': self.simhash,
'scope': self.scope.name
}
from ..entity.asset import StoredFile
from ..util import as_id, as_collection
__all__ = [
'TimelineBuilder',
'VideoClip'
]
class VideoClip:
"""
Clips represent a prediction for a section of video.
"""
def __init__(self, data):
self._data = data
@property
def id(self):
"""The Asset id the clip is associated with."""
return self._data['id']
@property
def asset_id(self):
"""The Asset id the clip is associated with."""
return self._data['assetId']
@property
def timeline(self):
"""The name of the timeline, this is the same as the pipeline module."""
return self._data['timeline']
@property
def track(self):
"""The track name"""
return self._data['track']
@property
def content(self):
"""The content of the clip. This is the prediction"""
return self._data['content']
@property
def length(self):
"""The length of the clip"""
return self._data['length']
@property
def start(self):
"""The start time of the clip"""
return self._data['start']
@property
def stop(self):
"""The stop time of the clip"""
return self._data['stop']
@property
def score(self):
"""The prediction score"""
return self._data['score']
@property
def simhash(self):
"""A similarity hash, if any"""
return self._data.get('simhash')
@property
def files(self):
"""The array of associated files."""
return [StoredFile(f) for f in self._data.get('files', [])]
@staticmethod
def from_hit(hit):
"""
Converts an ElasticSearch hit into a VideoClip.
Args:
hit (dict): A raw ES document
Returns:
Asset: The Clip.
"""
data = {
'id': hit['_id'],
}
data.update(hit.get('_source', {}).get('clip', {}))
return VideoClip(data)
def __len__(self):
return self.length
def __str__(self):
return "<VideoClip id='{}'/>".format(self.id)
def __repr__(self):
return "<VideoClip id='{}' at {}/>".format(self.id, hex(id(self)))
def __eq__(self, other):
return other.id == self.id
def __hash__(self):
return hash(self.id)
class TimelineBuilder:
"""
The TimelineBuilder class is used for batch creation of video clips. Clips within a track
can be overlapping. Duplicate clips are automatically compacted to the highest score.
"""
def __init__(self, asset, name):
"""
Create a new timeline instance.
Args:
asset (mixed): The Asset, or its unique Id, the timeline belongs to.
name (str): The name of the Timeline.
"""
self.asset = as_id(asset)
self.name = name
self.tracks = {}
def add_clip(self, track_name, start, stop, content, score=1, tags=None):
"""
Add a clip to the timeline.
Args:
track_name (str): The Track name.
start (float): The starting time.
stop (float): The end time.
content (str): The content.
score: (float): The score if any.
tags: (list): A list of tags that describes the content.
Returns:
(dict): A clip entry.
"""
if stop < start:
raise ValueError("The stop time cannot be smaller than the start time.")
track = self.tracks.get(track_name)
if not track:
track = {'name': track_name, 'clips': []}
self.tracks[track_name] = track
clip = {
"start": start,
"stop": stop,
"content": [c.replace("\n", " ").strip() for c in as_collection(content)],
"score": score,
"tags": as_collection(tags)
}
track['clips'].append(clip)
return clip
def for_json(self):
return {
'name': self.name,
'assetId': self.asset,
'tracks': [track for track in self.tracks.values() if track['clips']]
}
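# Usage sketch (illustrative only): building a timeline of clips for an asset.
# The asset id, timeline name, and contents are placeholders. The finished
# builder is normally submitted through the clip app (VideoClipApp.create_clips
# in the app layer); the `app.clips` attribute name used below is an assumption.
def _example_timeline_builder():
    timeline = TimelineBuilder('abc123', 'zvi-label-detection')
    timeline.add_clip('dog', start=0.0, stop=3.5, content='dog', score=0.92)
    timeline.add_clip('dog', start=10.0, stop=12.0, content='dog running',
                      score=0.81, tags=['dog', 'running'])
    # Submitting would look roughly like: app.clips.create_clips(timeline)
    return timeline.for_json()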
import logging
import os
import tempfile
from ..entity import Model, Job, ModelTypeInfo, AnalysisModule
from ..training import TrainingSetDownloader
from ..util import as_collection, as_id, zip_directory
logger = logging.getLogger(__name__)
__all__ = [
'ModelApp'
]
class ModelApp:
"""
Methods for manipulating models.
"""
def __init__(self, app):
self.app = app
def create_model(self, name, type):
"""
Create and return a new model.
Args:
name (str): The name of the model.
type (ModelType): The type of Model, see the ModelType class.
Returns:
Model: The new model.
"""
body = {
"name": name,
"type": type.name
}
return Model(self.app.client.post("/api/v3/models", body))
def get_model(self, id):
"""
Get a Model by Id
Args:
id (str): The model id.
Returns:
Model: The model.
"""
return Model(self.app.client.get("/api/v3/models/{}".format(as_id(id))))
def find_one_model(self, id=None, name=None, type=None):
"""
Find a single Model based on various properties.
Args:
id (str): The ID or list of Ids.
name (str): The model name or list of names.
type (str): The model type or list of types.
Returns:
Model: the matching Model.
"""
body = {
'names': as_collection(name),
'ids': as_collection(id),
'types': as_collection(type)
}
return Model(self.app.client.post("/api/v3/models/_find_one", body))
def find_models(self, id=None, name=None, type=None, limit=None, sort=None):
"""
Find a single Model based on various properties.
Args:
id (str): The ID or list of Ids.
name (str): The model name or list of names.
type (str): The model type or list of types.
limit (int): Limit results to the given size.
sort (list): An array of properties to sort by. Example: ["name:asc"]
Returns:
generator: A generator which will return matching Models when iterated.
"""
body = {
'names': as_collection(name),
'ids': as_collection(id),
'types': as_collection(type),
'sort': sort
}
return self.app.client.iter_paged_results('/api/v3/models/_search', body, limit, Model)
def train_model(self, model, deploy=False, **kwargs):
"""
Train the given Model by kicking off a model training job.
Args:
model (Model): The Model instance or a unique Model id.
deploy (bool): Deploy the model on your production data immediately after training.
**kwargs (kwargs): Model training arguments, which differ based on the model type.
Returns:
Job: A model training job.
"""
model_id = as_id(model)
body = {
'deploy': deploy,
'args': dict(kwargs)
}
return Job(self.app.client.post('/api/v3/models/{}/_train'.format(model_id), body))
def deploy_model(self, model, search=None, file_types=None):
"""
Apply the model to the given search.
Args:
model (Model): A Model instance or a model unique Id.
search (dict): An arbitrary asset search, defaults to using the
deployment search associated with the model
file_types (list): An optional file type filter, can be a combination of
"images", "documents", and "videos"
Returns:
Job: The Job that is hosting the reprocess task.
"""
mid = as_id(model)
body = {
"search": search,
"fileTypes": file_types,
"jobId": os.environ.get("ZMLP_JOB_ID")
}
return Job(self.app.client.post(f'/api/v3/models/{mid}/_deploy', body))
def upload_trained_model(self, model, model_path, labels):
"""
Uploads a Tensorflow2/Keras model. For the 'model_path' arg you can either
pass the path to a Tensorflow saved model or a trained model instance itself.
Args:
model (Model): The Model or the unique Model ID.
model_path (mixed): The path to the model directory or a Tensorflow model instance.
labels (list): The list of labels.
Returns:
AnalysisModule: The AnalysisModule configured to use the model.
"""
if not labels:
raise ValueError("Uploading a model requires an array of labels")
# check to see if its a keras model and save to a temp dir.
if getattr(model_path, 'save', None):
tmp_path = tempfile.mkdtemp()
model_path.save(tmp_path)
model_path = tmp_path
with open(model_path + '/labels.txt', 'w') as fp:
for label in labels:
fp.write(f'{label}\n')
model_file = tempfile.mkstemp(prefix="model_", suffix=".zip")[1]
zip_file_path = zip_directory(model_path, model_file)
mid = as_id(model)
return AnalysisModule(self.app.client.send_file(
f'/api/v3/models/{mid}/_upload', zip_file_path))
def get_label_counts(self, model):
"""
Get a dictionary of the labels and how many times they occur.
Args:
model (Model): The Model or its unique Id.
Returns:
dict: a dictionary of label name to occurrence count.
"""
return self.app.client.get('/api/v3/models/{}/_label_counts'.format(as_id(model)))
def rename_label(self, model, old_label, new_label):
"""
Rename the given label to a new label name. The new label may already exist.
Args:
model (Model): The Model or its unique Id.
old_label (str): The old label name.
new_label (str): The new label name.
Returns:
dict: a dictionary containing the number of assets updated.
"""
body = {
"label": old_label,
"newLabel": new_label
}
return self.app.client.put('/api/v3/models/{}/labels'.format(as_id(model)), body)
def delete_label(self, model, label):
"""
Removes the label from all Assets.
Args:
model (Model): The Model or its unique Id.
label (str): The label name to remove.
Returns:
dict: a dictionary containing the number of assets updated.
"""
body = {
"label": label
}
return self.app.client.delete('/api/v3/models/{}/labels'.format(as_id(model)), body)
def download_labeled_images(self, model, style, dst_dir, validation_split=0.2):
"""
Get a TrainingSetDownloader instance which can be used to download all the
labeled images for a Model to local disk.
Args:
model (Model): The Model or its unique ID.
style (str): The structure style to build: labels_std, objects_keras, objects_coco
dst_dir (str): The destination dir to write the Assets into.
validation_split (float): The ratio of training images to validation images.
Defaults to 0.2.
"""
return TrainingSetDownloader(self.app, model, style, dst_dir, validation_split)
def get_model_type_info(self, model_type):
"""
Get additional properties concerning a specific model type.
Args:
model_type (ModelType): The model type Enum or name.
Returns:
ModelTypeInfo: Additional properties related to a model type.
"""
type_name = getattr(model_type, 'name', str(model_type))
return ModelTypeInfo(self.app.client.get(f'/api/v3/models/_types/{type_name}'))
def get_all_model_type_info(self):
"""
Get all available ModelTypeInfo options.
Returns:
list: A list of ModelTypeInfo
"""
return [ModelTypeInfo(info) for info in self.app.client.get('/api/v3/models/_types')]
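# Workflow sketch (illustrative only): a typical model lifecycle using the
# ModelApp methods above. `app` is assumed to be an already-constructed ZMLP
# app object that exposes this class as `app.models` (attribute name assumed);
# the model name is a placeholder.
def _example_model_workflow(app):
    from ..entity import ModelType   # assumed to be exported like the names above
    model = app.models.create_model('animals', ModelType.ZVI_KNN_CLASSIFIER)
    # ... label some assets (see app.assets.update_labels), then inspect and train:
    counts = app.models.get_label_counts(model)
    train_job = app.models.train_model(model, deploy=True)
    return model, counts, train_job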
from ..entity import DataSource, Job
from ..util import is_valid_uuid, as_collection
class DataSourceApp(object):
def __init__(self, app):
self.app = app
def create_datasource(self, name, uri, modules=None, file_types=None, credentials=None):
"""
Create a new DataSource.
Args:
name (str): The name of the data source.
uri (str): The URI where the data can be found.
modules (list): A list of AnalysisModules names to apply to the data.
file_types (list of str): a list of file extensions or general types like
'images', 'videos', 'documents'. Defaults to all file types.
credentials (list of str): A list of pre-created credentials blob names.
Returns:
DataSource: The created DataSource
"""
url = '/api/v1/data-sources'
body = {
'name': name,
'uri': uri,
'credentials': as_collection(credentials),
'fileTypes': file_types,
'modules': as_collection(modules)
}
return DataSource(self.app.client.post(url, body=body))
def get_datasource(self, name):
"""
Finds a DataSource by name or unique Id.
Args:
name (str): The unique name or unique ID.
Returns:
DataSource: The DataSource
"""
url = '/api/v1/data-sources/_findOne'
if is_valid_uuid(name):
body = {"ids": [name]}
else:
body = {"names": [name]}
return DataSource(self.app.client.post(url, body=body))
def import_files(self, ds, batch_size=25):
"""
Import all assets found at the given DataSource. If the
DataSource has already been imported then only new files will be
imported. New modules assigned to the datasource will
also be applied to existing assets as well as new assets.
Args:
ds (DataSource): A DataSource object or the name of a data source.
batch_size (int): The number of Assets per batch. Must be at least 20.
Returns:
Job: Return the Job responsible for processing the files.
"""
body = {
"batchSize": batch_size
}
url = '/api/v1/data-sources/{}/_import'.format(ds.id)
return Job(self.app.client.post(url, body))
def delete_datasource(self, ds, remove_assets=False):
"""
Delete the given datasource. If remove_assets is true, then all
assets that were imported with a datasource are removed as well. This
cannot be undone.
Args:
ds (DataSource): A DataSource object or the name of a data source.
remove_assets (bool): Set to true if Assets should be deleted as well.
Returns:
dict: Status object
"""
body = {
'deleteAssets': remove_assets
}
url = '/api/v1/data-sources/{}'.format(ds.id)
return self.app.client.delete(url, body)
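# Workflow sketch (illustrative only): registering a bucket as a DataSource and
# kicking off an import with the methods above. `app` is assumed to be an
# already-constructed ZMLP app object exposing this class as `app.datasource`
# (attribute name assumed); the bucket URI and module name are placeholders.
def _example_datasource_workflow(app):
    ds = app.datasource.create_datasource(
        'marketing-images',
        'gs://my-bucket/marketing',
        modules=['zvi-label-detection'],
        file_types=['images'])
    import_job = app.datasource.import_files(ds, batch_size=50)
    return ds, import_job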
import io
import os
import requests
from collections import namedtuple
from ..entity import Asset, StoredFile, FileUpload, FileTypes, Job, VideoClip
from ..search import AssetSearchResult, AssetSearchScroller, SimilarityQuery, SearchScroller
from ..util import as_collection, as_id_collection, as_id
class AssetApp(object):
def __init__(self, app):
self.app = app
def batch_import_files(self, files, modules=None):
"""
Import a list of FileImport instances.
Args:
files (list of FileImport): The list of files to import as Assets.
modules (list): A list of Pipeline Modules to apply to the data.
Notes:
Example return value:
{
"bulkResponse" : {
"took" : 15,
"errors" : false,
"items" : [ {
"create" : {
"_index" : "yvqg1901zmu5bw9q",
"_type" : "_doc",
"_id" : "dd0KZtqyec48n1q1fniqVMV5yllhRRGx",
"_version" : 1,
"result" : "created",
"forced_refresh" : true,
"_shards" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1,
"status" : 201
}
} ]
},
"failed" : [ ],
"created" : [ "dd0KZtqyec48n1q1fniqVMV5yllhRRGx" ],
"jobId" : "ba310246-1f87-1ece-b67c-be3f79a80d11"
}
Returns:
dict: A dictionary containing an ES bulk response, failed files,
and created asset ids.
"""
body = {
"assets": files,
"modules": modules
}
return self.app.client.post("/api/v3/assets/_batch_create", body)
def batch_upload_files(self, files, modules=None):
"""
Batch upload a list of files and return a structure which contains
an ES bulk response object, a list of failed file paths, a list of created
asset Ids, and a processing jobId.
Args:
files (list of FileUpload):
modules (list): A list of Pipeline Modules to apply to the data.
Notes:
Example return value:
{
"bulkResponse" : {
"took" : 15,
"errors" : false,
"items" : [ {
"create" : {
"_index" : "yvqg1901zmu5bw9q",
"_type" : "_doc",
"_id" : "dd0KZtqyec48n1q1fniqVMV5yllhRRGx",
"_version" : 1,
"result" : "created",
"forced_refresh" : true,
"_shards" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1,
"status" : 201
}
} ]
},
"failed" : [ ],
"created" : [ "dd0KZtqyec48n1q1fniqVMV5yllhRRGx" ],
"jobId" : "ba310246-1f87-1ece-b67c-be3f79a80d11"
}
Returns:
dict: A dictionary containing an ES bulk response, failed files,
and created asset ids.
"""
files = as_collection(files)
file_paths = [f.uri for f in files]
body = {
"assets": files,
"modules": modules
}
return self.app.client.upload_files("/api/v3/assets/_batch_upload",
file_paths, body)
def batch_upload_directory(self, path, file_types=None,
batch_size=50, modules=None, callback=None):
"""
Recursively upload all files in the given directory path.
This method takes an optional callback function which takes two
arguments, files and response. This callback is called for
each batch of files submitted.
Examples:
def batch_callback(files, response):
print("--processed files--")
for path in files:
print(path)
print("--zvi response--")
pprint.pprint(response)
app.assets.batch_upload_directory("/home", file_types=['images'],
callback=batch_callback)
Args:
path (str): A file path to a directory.
file_types (list): a list of file extensions and/or
categories(documents, images, videos)
batch_size (int) The number of files to upload per batch.
modules (list): An array of modules to apply to the files.
callback (func): A function to call for every batch
Returns:
dict: A dictionary containing batch operation counters.
"""
batch = []
totals = {
"file_count": 0,
"file_size": 0,
"batch_count": 0,
}
def process_batch():
totals['batch_count'] += 1
totals['file_count'] += len(batch)
totals['file_size'] += sum([os.path.getsize(f) for f in batch])
rsp = self.batch_upload_files(
[FileUpload(f) for f in batch], modules)
if callback:
callback(batch.copy(), rsp)
batch.clear()
file_types = FileTypes.resolve(file_types)
for root, dirs, files in os.walk(path):
for fname in files:
if fname.startswith("."):
continue
_, ext = os.path.splitext(fname)
if not ext:
continue
if ext[1:].lower() not in file_types:
continue
batch.append(os.path.abspath(os.path.join(root, fname)))
if len(batch) >= batch_size:
process_batch()
if batch:
process_batch()
return totals
def delete_asset(self, asset):
"""
Delete the given asset.
Args:
asset (mixed): unique Id or Asset instance.
Returns:
bool: True if the asset was deleted.
"""
asset_id = as_id(asset)
return self.app.client.delete("/api/v3/assets/{}".format(asset_id))['success']
def batch_delete_assets(self, assets):
"""
Batch delete the given list of Assets or asset ids.
Args:
assets (list): A list of Assets or unique asset ids.
Returns:
dict: A dictionary containing deleted and errored asset Ids.
"""
body = {
"assetIds": as_id_collection(assets)
}
return self.app.client.delete("/api/v3/assets/_batch_delete", body)
def search(self, search=None, fetch_source=True):
"""
Perform an asset search using the ElasticSearch query DSL.
See Also:
For search/query format.
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/search-request-body.html
Args:
search (dict): The ElasticSearch search to execute.
fetch_source: (bool): If true, the full JSON document for each asset is returned.
Returns:
AssetSearchResult - an AssetSearchResult instance.
"""
if not fetch_source:
search['_source'] = False
return AssetSearchResult(self.app, search)
def scroll_search(self, search=None, timeout="1m"):
"""
Perform an asset scrolled search using the ElasticSearch query DSL.
See Also:
For search/query format.
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/search-request-body.html
Args:
search (dict): The ElasticSearch search to execute
timeout (str): The scroll timeout. Defaults to 1 minute.
Returns:
AssetSearchScroll - an AssetSearchScroller instance which is a generator
by nature.
"""
return AssetSearchScroller(self.app, search, timeout)
def reprocess_search(self, search, modules):
"""
Reprocess the given search with the supplied modules.
Args:
search (dict): An ElasticSearch search.
modules (list): A list of module names to apply.
Returns:
dict: Contains a Job and the number of assets to be processed.
"""
body = {
"search": search,
"modules": modules
}
rsp = self.app.client.post("/api/v3/assets/_search/reprocess", body)
return ReprocessSearchResponse(rsp["assetCount"], Job(rsp["job"]))
def scroll_search_clips(self, asset, search=None, timeout="1m"):
"""
Scroll through clips for given asset using the ElasticSearch query DSL.
Args:
asset (Asset): The asset or unique AssetId.
search (dict): The ElasticSearch search to execute
timeout (str): The scroll timeout. Defaults to 1 minute.
Returns:
SearchScroller a clip scroller instance for generating VideoClips.
"""
asset_id = as_id(asset)
return SearchScroller(
VideoClip, f'/api/v3/assets/{asset_id}/clips/_search', self.app, search, timeout
)
def reprocess_assets(self, assets, modules):
"""
Reprocess the given array of assets with the given modules.
Args:
assets (list): A list of Assets or asset unique Ids.
modules (list): A list of Pipeline module names or ides.
Returns:
Job: The job responsible for processing the assets.
"""
asset_ids = [getattr(asset, "id", asset) for asset in as_collection(assets)]
body = {
"search": {
"query": {
"terms": {
"_id": asset_ids
}
}
},
"modules": as_collection(modules)
}
return self.app.client.post("/api/v3/assets/_search/reprocess", body)
def get_asset(self, id):
"""
Return the asset with the given unique Id.
Args:
id (str): The unique ID of the asset.
Returns:
Asset: The Asset
"""
return Asset(self.app.client.get("/api/v3/assets/{}".format(id)))
def update_labels(self, assets, add_labels=None, remove_labels=None):
"""
Update the Labels on the given array of assets.
Args:
assets (mixed): An Asset, asset ID, or a list of either type.
add_labels (list[Label]): A Label or list of Label to add.
remove_labels (list[Label]): A Label or list of Label to remove.
Returns:
dict: A request status dict
"""
ids = as_id_collection(assets)
body = {}
if add_labels:
body['add'] = dict([(a, as_collection(add_labels)) for a in ids])
if remove_labels:
body['remove'] = dict([(a, as_collection(remove_labels)) for a in ids])
if not body:
raise ValueError("Must pass at least and add_labels or remove_labels argument")
return self.app.client.put("/api/v3/assets/_batch_update_labels", body)
def update_custom_fields(self, asset, values):
"""
Set the values of custom metadata fields.
Args:
asset (Asset): The asset or unique Asset id.
values (dict): A dictionary of values.
Returns:
dict: A status dictionary with failures or success
"""
body = {
"update": {
as_id(asset): values
}
}
return self.app.client.put("/api/v3/assets/_batch_update_custom_fields", body)
def batch_update_custom_fields(self, update):
"""
Set the values of custom metadata fields.
Examples:
{
"asset-id1": {"shoe": "nike"},
"asset-id2": {"country": "New Zealand"}
}
Args:
update (dict): A dict of dicts which describes the custom field values to set, keyed by asset id.
Returns:
dict: A status dictionary with failures or success
"""
body = {
'update': update
}
return self.app.client.put('/api/v3/assets/_batch_update_custom_fields', body)
def download_file(self, stored_file, dst_file=None):
"""
Download given file and store results in memory, or optionally
a destination file. The stored_file ID can be specified as
either a string like "assets/<id>/proxy/image_450x360.jpg"
or a StoredFile instance can be used.
Args:
stored_file (mixed): The StoredFile instance or its ID.
dst_file (str): An optional destination file path.
Returns:
io.BytesIO: An in-memory buffer containing the binary data, or,
if a destination path was provided, the size of the
written file.
"""
if isinstance(stored_file, str):
path = stored_file
elif isinstance(stored_file, StoredFile):
path = stored_file.id
else:
raise ValueError("stored_file must be a string or StoredFile instance")
rsp = self.app.client.get("/api/v3/files/_stream/{}".format(path), is_json=False)
if dst_file:
with open(dst_file, 'wb') as fp:
fp.write(rsp.content)
return os.path.getsize(dst_file)
else:
return io.BytesIO(rsp.content)
def stream_file(self, stored_file, chunk_size=1024):
"""
Streams a file by iteratively returning chunks of the file using a generator. This
can be useful when developing web applications and a full download of the file
before continuing is not necessary.
Args:
stored_file (mixed): The StoredFile instance or its ID.
chunk_size (int): The byte sizes of each requesting chunk. Defaults to 1024.
Yields:
generator (File-like Object): Content of the file.
"""
if isinstance(stored_file, str):
path = stored_file
elif isinstance(stored_file, StoredFile):
path = stored_file.id
else:
raise ValueError("stored_file must be a string or StoredFile instance")
url = self.app.client.get_url('/api/v3/files/_stream/{}'.format(path))
response = requests.get(url, verify=self.app.client.verify,
headers=self.app.client.headers(), stream=True)
for block in response.iter_content(chunk_size):
yield block
def get_sim_hashes(self, images):
"""
Return a similarity hash for the given array of images.
Args:
images (mixed): Can be an file handle (opened with 'rb'), or
path to a file.
Returns:
list of str: A list of similarity hashes.
"""
return self.app.client.upload_files("/ml/v1/sim-hash",
as_collection(images), body=None)
def get_sim_query(self, images, min_score=0.75):
"""
Analyze the given image files and return a SimilarityQuery which
can be used in a search.
Args:
images (mixed): Can be an file handle (opened with 'rb'), or
path to a file.
min_score (float): A float between 0 and 1; the higher the value,
the more similar the results. Defaults to 0.75.
Returns:
SimilarityQuery: A configured SimilarityQuery
"""
return SimilarityQuery(self.get_sim_hashes(images), min_score)
"""
A named tuple to define a ReprocessSearchResponse
"""
ReprocessSearchResponse = namedtuple('ReprocessSearchResponse', ["asset_count", "job"])
| zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/app/asset_app.py | asset_app.py
from ..util import as_id, as_id_collection
from ..search import VideoClipSearchResult, VideoClipSearchScroller
from ..entity import VideoClip
class VideoClipApp:
"""
    An App instance for managing VideoClips. A VideoClip is a labeled time range
    (timeline, track, start, stop) on a video asset.
"""
def __init__(self, app):
self.app = app
def create_clip(self, asset, timeline, track, start, stop, content):
"""
Create a new clip. If a clip with the same metadata already exists it will
simply be replaced.
Args:
asset (Asset): The asset or its unique Id.
timeline (str): The timeline name for the clip.
track (str): The track name for the clip.
start (float): The starting point for the clip in seconds.
stop (float): The ending point for the clip in seconds.
content (str): The content of the clip.
Returns:
Clip: The clip that was created.
"""
body = {
"assetId": as_id(asset),
"timeline": timeline,
"track": track,
"start": start,
"stop": stop,
"content": content
}
return VideoClip(self.app.client.post('/api/v1/clips', body))
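    # A hypothetical usage sketch ("app.clips", the timeline/track names and the
    # times are illustrative values only):
    #
    #   clip = app.clips.create_clip(asset, timeline="zvi-object-detection",
    #                                track="dog", start=1.5, stop=3.0, content="dog")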
def create_clips(self, timeline):
"""
Batch create clips using a TimelineBuilder.
Args:
timeline: (TimelineBuilder): A timeline builder.
Returns:
dict: A status dictionary
"""
return self.app.client.post('/api/v1/clips/_timeline', timeline)
def get_webvtt(self,
asset,
dst_file=None,
timeline=None,
track=None,
content=None):
"""
Get all clip data as a WebVTT file and filter by specified options.
Args:
asset (Asset): The asset or unique Id.
timeline: (str): A timeline name or collection of timeline names.
track: (str): A track name or collection of track names.
content (str): A content string to match.
dst_file (mixed): An optional writable file handle or path to file.
Returns:
mixed: The text of the webvtt or the size of the written file.
"""
body = {
'assetId': as_id(asset),
'timelines': as_id_collection(timeline),
'tracks': as_id_collection(track),
'content': as_id_collection(content)
}
rsp = self.app.client.post('/api/v1/clips/_webvtt', body=body, is_json=False)
return self.__handle_webvtt(rsp, dst_file)
def scroll_search(self, search=None, timeout="1m"):
"""
Perform a VideoClip scrolled search using the ElasticSearch query DSL.
See Also:
For search/query format.
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/search-request-body.html
Args:
search (dict): The ElasticSearch search to execute
timeout (str): The scroll timeout. Defaults to 1 minute.
Returns:
            VideoClipSearchScroller - a VideoClipSearchScroller instance which can be used as
a generator for paging results.
"""
return VideoClipSearchScroller(self.app, search, timeout)
def search(self, search=None):
"""
        Perform a VideoClip search using the ElasticSearch query DSL.
See Also:
For search/query format.
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/search-request-body.html
Args:
search (dict): The ElasticSearch search to execute.
Returns:
VideoClipSearchResult - A VideoClipSearchResult instance.
"""
return VideoClipSearchResult(self.app, search)
def get_clip(self, id):
"""
Get a VideoClip by unique Id.
Args:
id (str): The VideoClip or its unique Id.
Returns:
VideoClip: The clip with the given Id.
"""
        return VideoClip(self.app.client.get(f'/api/v1/clips/{id}'))
def __handle_webvtt(self, rsp, dst_file):
"""
Handle a webvtt file response.
Args:
rsp (Response): A response from requests.
dst_file (mixed): An optional file path or file handle.
Returns:
(mixed): Return the content itself or the content size if written to file.
"""
if dst_file:
if isinstance(dst_file, str):
with open(dst_file, 'w') as fp:
fp.write(rsp.content.decode())
return len(rsp.content)
else:
dst_file.write(rsp.content.decode())
return len(rsp.content)
else:
            return rsp.content.decode()
| zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/app/clip_app.py | clip_app.py
import logging
from ..entity import AnalysisModule
from ..util import as_collection, as_id
logger = logging.getLogger(__name__)
__all__ = [
'AnalysisModuleApp'
]
class AnalysisModuleApp:
"""
App class for querying Analysis Modules
"""
def __init__(self, app):
self.app = app
def get_analysis_module(self, id):
"""
Get an AnalysisModule by Id.
Args:
            id (str): The AnalysisModule ID or an AnalysisModule instance.
Returns:
AnalysisModule: The matching AnalysisModule
"""
return AnalysisModule(self.app.client.get('/api/v1/pipeline-mods/{}'.format(as_id(id))))
def find_one_analysis_module(self, id=None, name=None, type=None, category=None, provider=None):
"""
Find a single AnalysisModule based on various properties.
Args:
id (str): The ID or list of Ids.
name (str): The model name or list of names.
            type: (str): An AnalysisModule type or collection of types to filter on.
            category (str): The category of the AnalysisModule.
            provider (str): The provider of the AnalysisModule.
Returns:
AnalysisModule: The matching AnalysisModule.
"""
body = {
'names': as_collection(name),
'ids': as_collection(id),
'types': as_collection(type),
'categories': as_collection(category),
'providers': as_collection(provider)
}
return AnalysisModule(self.app.client.post('/api/v1/pipeline-mods/_find_one', body))
def find_analysis_modules(self, keywords=None, id=None, name=None, type=None,
category=None, provider=None, limit=None, sort=None):
"""
Search for AnalysisModule.
Args:
            keywords (str): Keywords that match various fields on an AnalysisModule.
            id (str): An ID or collection of IDs to filter on.
            name (str): A name or collection of names to filter on.
            type: (str): An AnalysisModule type or collection of types to filter on.
            category (str): The category or collection of category names.
            provider (str): The provider or collection of provider names.
limit: (int) Limit the number of results.
sort: (list): A sort array, example: ["time_created:desc"]
Returns:
generator: A generator which will return matching AnalysisModules when iterated.
"""
body = {
            'keywords': str(keywords) if keywords is not None else None,
'names': as_collection(name),
'ids': as_collection(id),
'types': as_collection(type),
'categories': as_collection(category),
'providers': as_collection(provider),
'sort': sort
}
return self.app.client.iter_paged_results(
            '/api/v1/pipeline-mods/_search', body, limit, AnalysisModule)
| zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/app/analysis_app.py | analysis_app.py
from ..entity import Job, Task, TaskError
from ..util import as_collection, as_id_collection, as_id
class JobApp:
"""
An App instance for managing Jobs. Jobs are containers for async processes
such as data import or training.
"""
def __init__(self, app):
self.app = app
def get_job(self, id):
"""
Get a Job by its unique Id.
Args:
id (str): The Job id or Job object.
Returns:
Job: The Job
"""
return Job(self.app.client.get('/api/v1/jobs/{}'.format(as_id(id))))
def refresh_job(self, job):
"""
Refreshes the internals of the given job.
Args:
job (Job): The job to refresh.
"""
job._data = self.app.client.get('/api/v1/jobs/{}'.format(job.id))
def find_jobs(self, id=None, state=None, name=None, limit=None, sort=None):
"""
Find jobs matching the given criteria.
Args:
id (mixed): A job ID or IDs to filter on.
state (mixed): A Job state or list of states to filter on.
name (mixed): A Job name or list of names to filter on.
limit (int): The maximum number of jobs to return, None is no limit.
sort (list): A list of sort ordering phrases, like ["name:d", "time_created:a"]
Returns:
generator: A generator which will return matching jobs when iterated.
"""
body = {
'ids': as_collection(id),
'states': as_collection(state),
'names': as_collection(name),
'sort': sort
}
return self.app.client.iter_paged_results('/api/v1/jobs/_search', body, limit, Job)
def find_one_job(self, id=None, state=None, name=None):
"""
Find single Job matching the given criteria. Raises exception if more
than one result is found.
Args:
id (mixed): A job ID or IDs to filter on.
state (mixed): A Job state or list of states to filter on.
name (mixed): A Job name or list of names to filter on.
Returns:
Job: The job.
"""
body = {
'ids': as_collection(id),
'states': as_collection(state),
'names': as_collection(name)
}
return Job(self.app.client.post('/api/v1/jobs/_findOne', body))
def find_task_errors(self, query=None, job=None, task=None,
asset=None, path=None, processor=None, limit=None, sort=None):
"""
Find TaskErrors based on the supplied criterion.
Args:
query (str): keyword query to match various error properties.
job (mixed): A single Job, job id or list of either type.
task (mixed): A single Task, task id or list of either type.
asset (mixed): A single Asset, asset id or list of either type.
            path (mixed): A file path or list of file paths.
processor (mixed): A processor name or list of processors.
limit (int): Limit the number of results or None for all results.
sort (list): A list of sort ordering phrases, like ["name:d", "time_created:a"]
Returns:
generator: A generator which returns results when iterated.
"""
body = {
'keywords': query,
'jobIds': as_id_collection(job),
'taskIds': as_id_collection(task),
'assetIds': as_id_collection(asset),
'paths': as_collection(path),
'processor': as_collection(processor),
'sort': sort
}
return self.app.client.iter_paged_results(
'/api/v1/taskerrors/_search', body, limit, TaskError)
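    # A hypothetical usage sketch ("job_app" is an illustrative variable name):
    #
    #   for error in job_app.find_task_errors(job=job, limit=100):
    #       print(error)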
def pause_job(self, job):
"""
Pause scheduling for the given Job. Pausing a job simply removes the
job from scheduler consideration. All existing tasks will continue to run
and Analysts will move to new jobs as tasks complete.
Args:
job (Job): The Job to pause
Returns:
bool: True if the job was actually paused.
"""
# Resolve the job if we need to.
if isinstance(job, str):
job = self.get_job(job)
if self.app.client.put('/api/v1/jobs/{}'.format(job.id), job._data)['success']:
job._data['paused'] = True
return True
return False
def resume_job(self, job):
"""
Resume scheduling for the given Job.
Args:
job (Job): The Job to resume
Returns:
            bool: True if the job was actually resumed.
"""
if isinstance(job, str):
job = self.get_job(job)
if self.app.client.put('/api/v1/jobs/{}'.format(job.id), job._data)['success']:
job._data['paused'] = False
return True
return False
def cancel_job(self, job):
"""
Cancel the given Job. Canceling a job immediately kills all running Tasks
and removes the job from scheduler consideration.
Args:
job (Job): The Job to cancel, or the job's unique Id.
Returns:
bool: True if the job was actually canceled, False if the job was already cancelled.
"""
if isinstance(job, str):
job = self.get_job(job)
if self.app.client.put('/api/v1/jobs/{}/_cancel'.format(job.id)).get('success'):
self.refresh_job(job)
return True
return False
def restart_job(self, job):
"""
Restart a canceled job.
Args:
job (Job): The Job to restart
Returns:
            bool: True if the job was actually restarted, False if the job was not cancelled.
"""
if isinstance(job, str):
job = self.get_job(job)
if self.app.client.put('/api/v1/jobs/{}/_restart'.format(job.id)).get('success'):
self.refresh_job(job)
return True
return False
def retry_all_failed_tasks(self, job):
"""
Retry all failed Tasks in the Job.
Args:
job (Job): The Job with failed tasks.
Returns:
            bool: True if some of the failed tasks were restarted.
"""
if isinstance(job, str):
job = self.get_job(job)
if self.app.client.put(
'/api/v1/jobs/{}/_retryAllFailures'.format(job.id)).get('success'):
self.refresh_job(job)
return True
return False
def find_tasks(self, job=None, id=None, name=None, state=None, limit=None, sort=None):
"""
Find Tasks matching the given criteria.
Args:
job: (mixed): A single Job, job id or list of either type.
id (mixed): A single Task, task id or list of either type.
name (mixed): A task name or list of tasks names.
            state (mixed): A task state or list of task states.
limit (int): Limit the number of results, None for no limit.
sort (list): A list of sort ordering phrases, like ["name:d", "time_created:a"]
Returns:
generator: A Generator that returns matching Tasks when iterated.
"""
body = {
'ids': as_collection(id),
'states': as_collection(state),
'names': as_collection(name),
'jobIds': as_id_collection(job),
'sort': sort
}
return self.app.client.iter_paged_results('/api/v1/tasks/_search', body, limit, Task)
def find_one_task(self, job=None, id=None, name=None, state=None):
"""
Find a single task matching the criterion.
Args:
job: (mixed): A single Job, job id or list of either type.
id (mixed): A single Task, task id or list of either type.
name (mixed): A task name or list of tasks names.
            state (mixed): A task state or list of task states.
Returns:
            Task: A single matching task.
"""
body = {
'ids': as_collection(id),
'states': as_collection(state),
'names': as_collection(name),
'jobIds': as_id_collection(job)
}
res = Task(self.app.client.post('/api/v1/tasks/_findOne', body))
return res
def get_task(self, task):
"""
Get a Task by its unique id.
Args:
task (str): The Task or task id.
Returns:
Task: The Task
"""
return Task(self.app.client.get('/api/v1/tasks/{}'.format(as_id(task))))
def refresh_task(self, task):
"""
        Refreshes the internals of the given task.
Args:
task (Task): The Task
"""
task._data = self.app.client.get('/api/v1/tasks/{}'.format(task.id))
def skip_task(self, task):
"""
        Skip the given task. A skipped task will not run.
Args:
task (str): The Task or task id.
Returns:
bool: True if the Task changed to the Skipped state.
"""
if isinstance(task, str):
task = self.get_task(task)
if self.app.client.put('/api/v1/tasks/{}/_skip'.format(task.id))['success']:
self.refresh_task(task)
return True
return False
def retry_task(self, task):
"""
Retry the given task. Retried tasks are set back to the waiting state.
Args:
task (str): The Task or task id.
Returns:
bool: True if the Task changed to the Waiting state.
"""
if isinstance(task, str):
task = self.get_task(task)
if self.app.client.put('/api/v1/tasks/{}/_retry'.format(task.id))['success']:
self.refresh_task(task)
return True
return False
def get_task_script(self, task):
"""
Return the given task's ZPS script.
Args:
task: (str): The Task or task id.
Returns:
dict: The script in dictionary form.
"""
return self.app.client.get('/api/v1/tasks/{}/_script'.format(as_id(task)))
def download_task_log(self, task, dst_path):
"""
Download the task log file to the given file path.
Args:
task: (str): The Task or task id.
dst_path (str): The path to the destination file.
Returns:
            The result of streaming the log file to the given destination path.
"""
return self.app.client.stream('/api/v1/tasks/{}/_log'.format(as_id(task)), dst_path)
def iterate_task_log(self, task):
"""
Return a generator that can be used to iterate a task log file.
Args:
task: (str): The Task or task id.
Returns:
generator: A generator which yields each line of a log file.
"""
        return self.app.client.stream_text('/api/v1/tasks/{}/_log'.format(as_id(task)))
| zvi-client | /zvi-client-1.1.3.tar.gz/zvi-client-1.1.3/pylib/zmlp/app/job_app.py | job_app.py
MIT License
Copyright (c) 2018-2022 Olexa Bilaniuk
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| zvit | /zvit-0.0.11+8ef0d2e7db5e6bb76f8186088141a78692691e7b.tar.gz/zvit-0.0.11/LICENSE.md | LICENSE.md |
import os, re, sys, subprocess, time
from . import git
#
# Public Version.
#
# This is the master declaration of the version number for this project.
#
# We will obey PEP 440 (https://www.python.org/dev/peps/pep-0440/) here. PEP440
# recommends the pattern
# [N!]N(.N)*[{a|b|rc}N][.postN][.devN]
# We shall standardize on the ultracompact form
# [N!]N(.N)*[{a|b|rc}N][-N][.devN]
# which has a well-defined normalization.
#
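# For example (an illustrative value, not this project's version), the string
# "1!1.2.3rc4-5.dev6" is a valid ultracompact form: epoch 1, release 1.2.3,
# pre-release rc4, post release 5 and dev release 6.
#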
verPublic = "0.0.11"
#
# Information computed from the public version.
#
regexMatch = re.match(r"""(?:
(?:(?P<epoch>[0-9]+)!)? # epoch
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
(?P<preL>a|b|rc)
(?P<preN>[0-9]+)
)?
(?P<post> # post release
(?:-(?P<postN>[0-9]+))
)?
(?P<dev> # dev release
(?:\.dev(?P<devN>[0-9]+))
)?
)""", verPublic, re.X)
assert regexMatch
verEpoch = regexMatch.group("epoch") or ""
verRelease = regexMatch.group("release")
verPreRel = regexMatch.group("pre") or ""
verPostRel = regexMatch.group("post") or ""
verDevRel = regexMatch.group("dev") or ""
verNormal = verRelease+verPreRel+verPostRel+verDevRel
verIsRel = bool(not verPreRel and not verDevRel)
#
# Local Version.
#
# Uses POSIX time (Nominal build time as seconds since the Epoch) as obtained
# either from the environment variable SOURCE_DATE_EPOCH or the wallclock time.
# Also converts POSIX timestamp to ISO 8601.
#
verVCS = git.getGitVer()
verClean = bool((not verVCS) or (git.isGitClean()))
posixTime = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
iso8601Time= time.strftime("%Y%m%dT%H%M%SZ", time.gmtime(posixTime))
verLocal = verPublic+"+"+iso8601Time
if verVCS:
verLocal += "."+verVCS
if not verClean:
verLocal += ".dirty"
#
# SemVer Version.
#
# Obeys Semantic Versioning 2.0.0, found at
# https://semver.org/spec/v2.0.0.html
#
verSemVer = ".".join((verRelease+".0.0").split(".")[:3])
identifiers= []
if verPreRel: identifiers.append(verPreRel)
if verDevRel: identifiers.append(verDevRel[1:])
if identifiers:
verSemVer += "-" + ".".join(identifiers)
metadata = []
if regexMatch.group("postN"):
metadata.append("post")
metadata.append(regexMatch.group("postN"))
metadata.append("buildtime")
metadata.append(iso8601Time)
if verVCS:
metadata.append("git")
metadata.append(verVCS)
if not verClean:
metadata.append("dirty")
if metadata:
verSemVer += "+" + ".".join(metadata)
#
# Version utilities
#
def synthesizeVersionPy():
templatePath = os.path.join(git.getSrcRoot(),
"scripts",
"version.py.in")
with open(templatePath, "r") as f:
        return f.read().format(**globals())
| zvit | /zvit-0.0.11+8ef0d2e7db5e6bb76f8186088141a78692691e7b.tar.gz/zvit-0.0.11/scripts/versioning.py | versioning.py
#
# Imports
#
import os, subprocess
# Useful constants
EMPTYTREE_SHA1 = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"
ORIGINAL_ENV = os.environ.copy()
C_ENV = os.environ.copy()
C_ENV['LANGUAGE'] = C_ENV['LANG'] = C_ENV['LC_ALL'] = "C"
SCRIPT_PATH = os.path.abspath(os.path.dirname(__file__))
SRCROOT_PATH = None
GIT_VER = None
GIT_CLEAN = None
#
# Utility functions
#
def invoke(command,
cwd = SCRIPT_PATH,
env = C_ENV,
stdin = subprocess.DEVNULL,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE,
**kwargs):
return subprocess.Popen(
command,
stdin = stdin,
stdout = stdout,
stderr = stderr,
cwd = cwd,
env = env,
**kwargs
)
def getSrcRoot():
#
# Return the cached value if we know it.
#
global SRCROOT_PATH
if SRCROOT_PATH is not None:
return SRCROOT_PATH
#
# Our initial guess is `dirname(dirname(__file__))`.
#
root = os.path.dirname(SCRIPT_PATH)
try:
inv = invoke(["git", "rev-parse", "--show-toplevel"],
universal_newlines = True,)
streamOut, streamErr = inv.communicate()
if inv.returncode == 0:
root = streamOut[:-1]
except FileNotFoundError as err:
pass
finally:
SRCROOT_PATH = root
return root
def getGitVer():
#
# Return the cached value if we know it.
#
global GIT_VER
if GIT_VER is not None:
return GIT_VER
try:
gitVer = ""
inv = invoke(["git", "rev-parse", "HEAD"],
universal_newlines = True,)
streamOut, streamErr = inv.communicate()
if inv.returncode == 0 or inv.returncode == 128:
gitVer = streamOut[:-1]
except FileNotFoundError as err:
pass
finally:
if gitVer == "HEAD":
GIT_VER = EMPTYTREE_SHA1
else:
GIT_VER = gitVer
return GIT_VER
def isGitClean():
#
# Return the cached value if we know it.
#
global GIT_CLEAN
if GIT_CLEAN is not None:
return GIT_CLEAN
try:
gitVer = None
inv_nc = invoke(["git", "diff", "--quiet"],
stdout = subprocess.DEVNULL,
stderr = subprocess.DEVNULL,)
inv_c = invoke(["git", "diff", "--quiet", "--cached"],
stdout = subprocess.DEVNULL,
stderr = subprocess.DEVNULL,)
inv_nc = inv_nc.wait()
inv_c = inv_c .wait()
GIT_CLEAN = (inv_nc == 0) and (inv_c == 0)
except FileNotFoundError as err:
#
# If we don't have access to Git, assume it's a tarball, in which case
# it's always clean.
#
GIT_CLEAN = True
    return GIT_CLEAN
| zvit | /zvit-0.0.11+8ef0d2e7db5e6bb76f8186088141a78692691e7b.tar.gz/zvit-0.0.11/scripts/git.py | git.py
[](https://github.com/zvtvz/zvt-ccxt)
[](https://pypi.org/project/zvt-ccxt/)
[](https://pypi.org/project/zvt-ccxt/)
[](https://pypi.org/project/zvt-ccxt/)
[](https://travis-ci.org/zvtvz/zvt-ccxt)
[](http://hits.dwyl.io/zvtvz/zvt-ccxt)
## How to use
### 1.1 install
```
pip install zvt-ccxt
pip show zvt-ccxt
```
Make sure you use the latest version:
```
pip install --upgrade zvt-ccxt
```
### 1.2 use in zvt way
```
In [1]: from zvt_ccxt.domain import *
In [2]: Coin
Out[2]: zvt_ccxt.domain.coin_meta.Coin
In [3]: Coin.record_data()
Coin registered recorders:{'ccxt': <class 'zvt_ccxt.recorders.coin_recorder.CoinMetaRecorder'>}
2020-07-17 23:26:38,730 INFO MainThread init_markets for binance success
2020-07-17 23:26:40,941 INFO MainThread init_markets for huobipro success
In [4]: Coin.query_data()
Out[4]:
id entity_id timestamp entity_type exchange code name
0 coin_binance_BTC/USDT coin_binance_BTC/USDT None coin binance BTC/USDT BTC/USDT
1 coin_binance_ETH/USDT coin_binance_ETH/USDT None coin binance ETH/USDT ETH/USDT
2 coin_binance_EOS/USDT coin_binance_EOS/USDT None coin binance EOS/USDT EOS/USDT
3 coin_huobipro_BTC/USDT coin_huobipro_BTC/USDT None coin huobipro BTC/USDT BTC/USDT
4 coin_huobipro_ETH/USDT coin_huobipro_ETH/USDT None coin huobipro ETH/USDT ETH/USDT
5 coin_huobipro_EOS/USDT coin_huobipro_EOS/USDT None coin huobipro EOS/USDT EOS/USDT
In [2]: Coin1dKdata.record_data()
In [4]: Coin1dKdata.query_data(codes=['BTC/USDT'])
Out[4]:
id entity_id timestamp provider code name level open close high low volume turnover
0 coin_binance_BTC/USDT_2017-10-22 coin_binance_BTC/USDT 2017-10-22 ccxt BTC/USDT BTC/USDT 1d 6003.27 5950.02 6060.00 5720.03 1362.092216 None
1 coin_binance_BTC/USDT_2017-10-23 coin_binance_BTC/USDT 2017-10-23 ccxt BTC/USDT BTC/USDT 1d 5975.00 5915.93 6080.00 5621.03 1812.557715 None
2 coin_binance_BTC/USDT_2017-10-24 coin_binance_BTC/USDT 2017-10-24 ccxt BTC/USDT BTC/USDT 1d 5909.47 5477.03 5925.00 5450.00 2580.418767 None
3 coin_binance_BTC/USDT_2017-10-25 coin_binance_BTC/USDT 2017-10-25 ccxt BTC/USDT BTC/USDT 1d 5506.92 5689.99 5704.96 5286.98 2282.813205 None
4 coin_binance_BTC/USDT_2017-10-26 coin_binance_BTC/USDT 2017-10-26 ccxt BTC/USDT BTC/USDT 1d 5670.10 5861.77 5939.99 5650.00 1972.965882 None
.. ... ... ... ... ... ... ... ... ... ... ... ... ...
995 coin_binance_BTC/USDT_2020-07-13 coin_binance_BTC/USDT 2020-07-13 ccxt BTC/USDT BTC/USDT 1d 9303.31 9242.62 9343.82 9200.89 42740.069115 None
996 coin_binance_BTC/USDT_2020-07-14 coin_binance_BTC/USDT 2020-07-14 ccxt BTC/USDT BTC/USDT 1d 9242.61 9255.85 9279.54 9113.00 45772.552509 None
997 coin_binance_BTC/USDT_2020-07-15 coin_binance_BTC/USDT 2020-07-15 ccxt BTC/USDT BTC/USDT 1d 9255.85 9197.60 9276.49 9160.57 39053.579665 None
998 coin_binance_BTC/USDT_2020-07-16 coin_binance_BTC/USDT 2020-07-16 ccxt BTC/USDT BTC/USDT 1d 9197.60 9133.72 9226.15 9047.25 43375.571191 None
999 coin_binance_BTC/USDT_2020-07-17 coin_binance_BTC/USDT 2020-07-17 ccxt BTC/USDT BTC/USDT 1d 9133.72 9157.72 9186.83 9089.81 21075.560207 None
[1000 rows x 13 columns]
```
## 💌 Buy the author a coffee
If you find this project helpful, you can buy the author a coffee.
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/alipay-cn.png" width="25%" alt="Alipay">
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat-cn.png" width="25%" alt="Wechat">
## 🤝 Contact
Personal WeChat: foolcage (add with the code word: zvt-ccxt)
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat.jpeg" width="25%" alt="Wechat">
------
WeChat official account:
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/gongzhonghao.jpg" width="25%" alt="Wechat">
Zhihu column:
https://zhuanlan.zhihu.com/automoney
## Thanks
<p><a href=https://www.jetbrains.com/?from=zvt><img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/jetbrains.png" width="25%" alt="jetbrains"></a></p>
| zvt-ccxt | /zvt-ccxt-0.0.6.tar.gz/zvt-ccxt-0.0.6/README.md | README.md
import pandas as pd
from zvt.contract.api import df_to_db
from zvt.contract.recorder import Recorder
from zvt_ccxt.accounts import CCXTAccount
from zvt_ccxt.domain import Coin
from zvt_ccxt.settings import COIN_EXCHANGES, COIN_PAIRS
class CoinMetaRecorder(Recorder):
provider = 'ccxt'
data_schema = Coin
def __init__(self, batch_size=10, force_update=False, sleeping_time=10, exchanges=COIN_EXCHANGES) -> None:
super().__init__(batch_size, force_update, sleeping_time)
self.exchanges = exchanges
def run(self):
for exchange_str in self.exchanges:
exchange = CCXTAccount.get_ccxt_exchange(exchange_str)
try:
markets = exchange.fetch_markets()
df = pd.DataFrame()
                # markets is sometimes a dict keyed by symbol, sometimes a list
markets_type = type(markets)
if markets_type != dict and markets_type != list:
self.logger.exception("unknown return markets type {}".format(markets_type))
return
aa = []
for market in markets:
if markets_type == dict:
name = market
code = market
if markets_type == list:
code = market['symbol']
name = market['symbol']
if name not in COIN_PAIRS:
continue
aa.append(market)
security_item = {
'id': '{}_{}_{}'.format('coin', exchange_str, code),
'entity_id': '{}_{}_{}'.format('coin', exchange_str, code),
'exchange': exchange_str,
'entity_type': 'coin',
'code': code,
'name': name
}
df = df.append(security_item, ignore_index=True)
                # Store the list of coin pairs for this exchange
if not df.empty:
df_to_db(df=df, data_schema=self.data_schema, provider=self.provider, force_update=True)
self.logger.info("init_markets for {} success".format(exchange_str))
except Exception as e:
self.logger.exception(f"init_markets for {exchange_str} failed", e)
__all__ = ["CoinMetaRecorder"]
if __name__ == '__main__':
    CoinMetaRecorder().run()
| zvt-ccxt | /zvt-ccxt-0.0.6.tar.gz/zvt-ccxt-0.0.6/zvt_ccxt/recorders/coin_recorder.py | coin_recorder.py
import argparse
from zvt import init_log
from zvt.api import get_kdata_schema, generate_kdata_id
from zvt.contract import IntervalLevel
from zvt.contract.recorder import FixedCycleDataRecorder
from zvt.utils.time_utils import to_pd_timestamp
from zvt_ccxt.accounts import CCXTAccount
from zvt_ccxt.domain import Coin, CoinTickCommon
from zvt_ccxt.settings import COIN_EXCHANGES, COIN_PAIRS
class CoinTickRecorder(FixedCycleDataRecorder):
provider = 'ccxt'
entity_provider = 'ccxt'
entity_schema = Coin
    # Only needed to register this recorder to the data_schema
data_schema = CoinTickCommon
def __init__(self,
exchanges=['binance'],
entity_ids=None,
codes=None,
batch_size=10,
force_update=True,
sleeping_time=10,
default_size=2000,
real_time=True,
fix_duplicate_way='ignore',
start_timestamp=None,
end_timestamp=None,
kdata_use_begin_time=False,
close_hour=None,
close_minute=None,
level=IntervalLevel.LEVEL_TICK,
one_day_trading_minutes=24 * 60) -> None:
self.data_schema = get_kdata_schema(entity_type='coin', level=level)
super().__init__('coin', exchanges, entity_ids, codes, batch_size, force_update, sleeping_time,
default_size, real_time, fix_duplicate_way, start_timestamp, end_timestamp, close_hour,
close_minute, IntervalLevel.LEVEL_TICK, kdata_use_begin_time, one_day_trading_minutes)
def generate_domain_id(self, entity, original_data):
return generate_kdata_id(entity_id=entity.id, timestamp=original_data['timestamp'], level=self.level)
def record(self, entity, start, end, size, timestamps):
if size < 20:
size = 20
ccxt_exchange = CCXTAccount.get_ccxt_exchange(entity.exchange)
if ccxt_exchange.has['fetchTrades']:
limit = CCXTAccount.get_tick_limit(entity.exchange)
limit = min(size, limit)
kdata_list = []
trades = ccxt_exchange.fetch_trades(entity.code, limit=limit)
for trade in trades:
kdata_json = {
'name': entity.name,
'provider': 'ccxt',
# 'id': trade['id'],
'level': 'tick',
'order': trade['order'],
'timestamp': to_pd_timestamp(trade['timestamp']),
'price': trade['price'],
'volume': trade['amount'],
'direction': trade['side'],
'order_type': trade['type'],
'turnover': trade['price'] * trade['amount']
}
kdata_list.append(kdata_json)
return kdata_list
else:
self.logger.warning("exchange:{} not support fetchOHLCV".format(entity.exchange))
__all__ = ["CoinTickRecorder"]
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--exchanges', help='exchanges', default='binance', nargs='+',
choices=[item for item in COIN_EXCHANGES])
parser.add_argument('--codes', help='codes', default='EOS/USDT', nargs='+',
choices=[item for item in COIN_PAIRS])
args = parser.parse_args()
init_log('coin_tick_kdata.log')
    CoinTickRecorder(codes=['EOS/USDT']).run()
| zvt-ccxt | /zvt-ccxt-0.0.6.tar.gz/zvt-ccxt-0.0.6/zvt_ccxt/recorders/coin_tick_recorder.py | coin_tick_recorder.py
import argparse
from zvt import init_log
from zvt.api import generate_kdata_id, get_kdata_schema
from zvt.contract import IntervalLevel
from zvt.contract.recorder import FixedCycleDataRecorder
from zvt.utils.time_utils import to_pd_timestamp
from zvt.utils.time_utils import to_time_str
from zvt_ccxt.accounts import CCXTAccount
from zvt_ccxt.domain import Coin, CoinKdataCommon
from zvt_ccxt.recorders import to_ccxt_trading_level
from zvt_ccxt.settings import COIN_EXCHANGES, COIN_PAIRS
class CoinKdataRecorder(FixedCycleDataRecorder):
provider = 'ccxt'
entity_provider = 'ccxt'
entity_schema = Coin
    # Only needed to register this recorder to the data_schema
data_schema = CoinKdataCommon
def __init__(self,
exchanges=['binance'],
entity_ids=None,
codes=None,
batch_size=10,
force_update=True,
sleeping_time=10,
default_size=2000,
real_time=False,
fix_duplicate_way='ignore',
start_timestamp=None,
end_timestamp=None,
level=IntervalLevel.LEVEL_1DAY,
kdata_use_begin_time=True,
close_hour=None,
close_minute=None,
one_day_trading_minutes=24 * 60) -> None:
self.data_schema = get_kdata_schema(entity_type='coin', level=level)
self.ccxt_trading_level = to_ccxt_trading_level(level)
super().__init__('coin', exchanges, entity_ids, codes, batch_size, force_update, sleeping_time,
default_size, real_time, fix_duplicate_way, start_timestamp, close_hour, close_minute,
end_timestamp, level, kdata_use_begin_time, one_day_trading_minutes)
def generate_domain_id(self, entity, original_data):
return generate_kdata_id(entity_id=entity.id, timestamp=original_data['timestamp'], level=self.level)
def record(self, entity, start, end, size, timestamps):
start_timestamp = to_time_str(start)
ccxt_exchange = CCXTAccount.get_ccxt_exchange(entity.exchange)
if ccxt_exchange.has['fetchOHLCV']:
limit = CCXTAccount.get_kdata_limit(entity.exchange)
limit = min(size, limit)
kdata_list = []
if CCXTAccount.exchange_conf[entity.exchange]['support_since']:
kdatas = ccxt_exchange.fetch_ohlcv(entity.code,
timeframe=self.ccxt_trading_level,
since=start_timestamp)
else:
kdatas = ccxt_exchange.fetch_ohlcv(entity.code,
timeframe=self.ccxt_trading_level,
limit=limit)
for kdata in kdatas:
current_timestamp = kdata[0]
if self.level == IntervalLevel.LEVEL_1DAY:
current_timestamp = to_time_str(current_timestamp)
kdata_json = {
'timestamp': to_pd_timestamp(current_timestamp),
'open': kdata[1],
'high': kdata[2],
'low': kdata[3],
'close': kdata[4],
'volume': kdata[5],
'name': entity.name,
'provider': 'ccxt',
'level': self.level.value
}
kdata_list.append(kdata_json)
return kdata_list
else:
self.logger.warning("exchange:{} not support fetchOHLCV".format(entity.exchange))
__all__ = ["CoinKdataRecorder"]
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--level', help='trading level', default='1m', choices=[item.value for item in IntervalLevel])
parser.add_argument('--exchanges', help='exchanges', default='binance', nargs='+',
choices=[item for item in COIN_EXCHANGES])
parser.add_argument('--codes', help='codes', default='EOS/USDT', nargs='+',
choices=[item for item in COIN_PAIRS])
args = parser.parse_args()
level = IntervalLevel(args.level)
exchanges = args.exchanges
if type(exchanges) != list:
exchanges = [exchanges]
codes = args.codes
if type(codes) != list:
codes = [codes]
init_log(
'coin_{}_{}_{}_kdata.log'.format('-'.join(exchanges), '-'.join(codes).replace('/', ''), args.level))
    CoinKdataRecorder(exchanges=exchanges, codes=codes, level=level, real_time=True).run()
| zvt-ccxt | /zvt-ccxt-0.0.6.tar.gz/zvt-ccxt-0.0.6/zvt_ccxt/recorders/coin_kdata_recorder.py | coin_kdata_recorder.py
[](https://github.com/zvtvz/zvt)
[](https://pypi.org/project/zvt/)
[](https://pypi.org/project/zvt/)
[](https://pypi.org/project/zvt/)
[](https://github.com/zvtvz/zvt/actions/workflows/build.yml)
[](https://github.com/zvtvz/zvt/actions/workflows/package.yaml)
[](https://zvt.readthedocs.io/en/latest/?badge=latest)
[](https://codecov.io/github/zvtvz/zvt)
[](https://pepy.tech/project/zvt)
**Read this in other languages: [中文](README-cn.md).**
**Read the docs:[https://zvt.readthedocs.io/en/latest/](https://zvt.readthedocs.io/en/latest/)**
### Install
```
python3 -m pip install -U zvt
```
### Main ui
After the installation is complete, enter zvt on the command line
```shell
zvt
```
open [http://127.0.0.1:8050/](http://127.0.0.1:8050/)
> The examples shown here rely on the data, factor and trader modules; please read the [docs](https://zvt.readthedocs.io/en/latest/)
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/zvt-factor.png'/></p>
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/zvt-trader.png'/></p>
> The core concepts of the system are visual, and the names in the UI correspond to them one-to-one, so the interface stays uniform and extensible.

> You can write and run a strategy in your favorite IDE, and then view its related targets, factors, signals and performance on the UI.
### Behold, the power of zvt:
```
>>> from zvt.domain import Stock, Stock1dHfqKdata
>>> from zvt.ml import MaStockMLMachine
>>> Stock.record_data(provider="em")
>>> entity_ids = ["stock_sz_000001", "stock_sz_000338", "stock_sh_601318"]
>>> Stock1dHfqKdata.record_data(provider="em", entity_ids=entity_ids, sleeping_time=1)
>>> machine = MaStockMLMachine(entity_ids=["stock_sz_000001"], data_provider="em")
>>> machine.train()
>>> machine.predict()
>>> machine.draw_result(entity_id="stock_sz_000001")
```
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/pred_close.png'/></p>
> The few lines of code above have done data capture, persistence, incremental updating, machine learning, prediction, and displayed the results.

> Once you are familiar with the core concepts of the system, you can apply them to any tradable target in the market.
### Data
#### China stock
```
>>> from zvt.domain import *
>>> Stock.record_data(provider="em")
>>> df = Stock.query_data(provider="em", index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
000001 stock_sz_000001 stock_sz_000001 1991-04-03 stock sz 000001 平安银行 1991-04-03 None
000002 stock_sz_000002 stock_sz_000002 1991-01-29 stock sz 000002 万 科A 1991-01-29 None
000004 stock_sz_000004 stock_sz_000004 1990-12-01 stock sz 000004 国华网安 1990-12-01 None
000005 stock_sz_000005 stock_sz_000005 1990-12-10 stock sz 000005 世纪星源 1990-12-10 None
000006 stock_sz_000006 stock_sz_000006 1992-04-27 stock sz 000006 深振业A 1992-04-27 None
... ... ... ... ... ... ... ... ... ...
605507 stock_sh_605507 stock_sh_605507 2021-08-02 stock sh 605507 国邦医药 2021-08-02 None
605577 stock_sh_605577 stock_sh_605577 2021-08-24 stock sh 605577 龙版传媒 2021-08-24 None
605580 stock_sh_605580 stock_sh_605580 2021-08-19 stock sh 605580 恒盛能源 2021-08-19 None
605588 stock_sh_605588 stock_sh_605588 2021-08-12 stock sh 605588 冠石科技 2021-08-12 None
605589 stock_sh_605589 stock_sh_605589 2021-08-10 stock sh 605589 圣泉集团 2021-08-10 None
[4136 rows x 9 columns]
```
#### USA stock
```
>>> Stockus.record_data()
>>> df = Stockus.query_data(index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
A stockus_nyse_A stockus_nyse_A NaT stockus nyse A 安捷伦 None None
AA stockus_nyse_AA stockus_nyse_AA NaT stockus nyse AA 美国铝业 None None
AAC stockus_nyse_AAC stockus_nyse_AAC NaT stockus nyse AAC Ares Acquisition Corp-A None None
AACG stockus_nasdaq_AACG stockus_nasdaq_AACG NaT stockus nasdaq AACG ATA Creativity Global ADR None None
AACG stockus_nyse_AACG stockus_nyse_AACG NaT stockus nyse AACG ATA Creativity Global ADR None None
... ... ... ... ... ... ... ... ... ...
ZWRK stockus_nasdaq_ZWRK stockus_nasdaq_ZWRK NaT stockus nasdaq ZWRK Z-Work Acquisition Corp-A None None
ZY stockus_nasdaq_ZY stockus_nasdaq_ZY NaT stockus nasdaq ZY Zymergen Inc None None
ZYME stockus_nyse_ZYME stockus_nyse_ZYME NaT stockus nyse ZYME Zymeworks Inc None None
ZYNE stockus_nasdaq_ZYNE stockus_nasdaq_ZYNE NaT stockus nasdaq ZYNE Zynerba Pharmaceuticals Inc None None
ZYXI stockus_nasdaq_ZYXI stockus_nasdaq_ZYXI NaT stockus nasdaq ZYXI Zynex Inc None None
[5826 rows x 9 columns]
>>> Stockus.query_data(code='AAPL')
id entity_id timestamp entity_type exchange code name list_date end_date
0 stockus_nasdaq_AAPL stockus_nasdaq_AAPL None stockus nasdaq AAPL 苹果 None None
```
#### Hong Kong stock
```
>>> Stockhk.record_data()
>>> df = Stockhk.query_data(index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
00001 stockhk_hk_00001 stockhk_hk_00001 NaT stockhk hk 00001 长和 None None
00002 stockhk_hk_00002 stockhk_hk_00002 NaT stockhk hk 00002 中电控股 None None
00003 stockhk_hk_00003 stockhk_hk_00003 NaT stockhk hk 00003 香港中华煤气 None None
00004 stockhk_hk_00004 stockhk_hk_00004 NaT stockhk hk 00004 九龙仓集团 None None
00005 stockhk_hk_00005 stockhk_hk_00005 NaT stockhk hk 00005 汇丰控股 None None
... ... ... ... ... ... ... ... ... ...
09996 stockhk_hk_09996 stockhk_hk_09996 NaT stockhk hk 09996 沛嘉医疗-B None None
09997 stockhk_hk_09997 stockhk_hk_09997 NaT stockhk hk 09997 康基医疗 None None
09998 stockhk_hk_09998 stockhk_hk_09998 NaT stockhk hk 09998 光荣控股 None None
09999 stockhk_hk_09999 stockhk_hk_09999 NaT stockhk hk 09999 网易-S None None
80737 stockhk_hk_80737 stockhk_hk_80737 NaT stockhk hk 80737 湾区发展-R None None
[2597 rows x 9 columns]
>>> df[df.code=='00700']
id entity_id timestamp entity_type exchange code name list_date end_date
2112 stockhk_hk_00700 stockhk_hk_00700 None stockhk hk 00700 腾讯控股 None None
```
#### And more
```
>>> from zvt.contract import *
>>> zvt_context.tradable_schema_map
{'stockus': zvt.domain.meta.stockus_meta.Stockus,
'stockhk': zvt.domain.meta.stockhk_meta.Stockhk,
'index': zvt.domain.meta.index_meta.Index,
'etf': zvt.domain.meta.etf_meta.Etf,
'stock': zvt.domain.meta.stock_meta.Stock,
'block': zvt.domain.meta.block_meta.Block,
'fund': zvt.domain.meta.fund_meta.Fund}
```
The key is the tradable entity type, and the value is its schema. The system provides unified **record (record_data)** and **query (query_data)** methods for every schema.
```
>>> Index.record_data()
>>> df=Index.query_data(filters=[Index.category=='scope', Index.exchange=='sh'])
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date publisher category base_point
0 index_sh_000001 index_sh_000001 1990-12-19 index sh 000001 上证指数 1991-07-15 None csindex scope 100.00
1 index_sh_000002 index_sh_000002 1990-12-19 index sh 000002 A股指数 1992-02-21 None csindex scope 100.00
2 index_sh_000003 index_sh_000003 1992-02-21 index sh 000003 B股指数 1992-08-17 None csindex scope 100.00
3 index_sh_000010 index_sh_000010 2002-06-28 index sh 000010 上证180 2002-07-01 None csindex scope 3299.06
4 index_sh_000016 index_sh_000016 2003-12-31 index sh 000016 上证50 2004-01-02 None csindex scope 1000.00
.. ... ... ... ... ... ... ... ... ... ... ... ...
25 index_sh_000020 index_sh_000020 2007-12-28 index sh 000020 中型综指 2008-05-12 None csindex scope 1000.00
26 index_sh_000090 index_sh_000090 2009-12-31 index sh 000090 上证流通 2010-12-02 None csindex scope 1000.00
27 index_sh_930903 index_sh_930903 2012-12-31 index sh 930903 中证A股 2016-10-18 None csindex scope 1000.00
28 index_sh_000688 index_sh_000688 2019-12-31 index sh 000688 科创50 2020-07-23 None csindex scope 1000.00
29 index_sh_931643 index_sh_931643 2019-12-31 index sh 931643 科创创业50 2021-06-01 None csindex scope 1000.00
[30 rows x 12 columns]
```
### EntityEvent
We have tradable entities, and then events about them.
#### Market quotes
The quote schema of a TradableEntity follows this naming rule:
```
{entity_schema}{level}{adjust_type}Kdata
```
* entity_schema
TradableEntity class, e.g. Stock, Stockus.
* level
```
>>> for level in IntervalLevel:
print(level.value)
```
* adjust type
```
>>> for adjust_type in AdjustType:
print(adjust_type.value)
```
> Note: In order to be compatible with historical data, qfq (pre-adjusted) is an exception: its {adjust_type} is left empty
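For example, composing the pieces above gives the schema class names used in the snippets below (a quick sketch; all of these classes live in zvt.domain):
```
>>> from zvt.domain import Stock1dKdata, Stock1dHfqKdata, Stockus1dKdata
>>> # Stock + 1d + (qfq, left empty) -> Stock1dKdata
>>> # Stock + 1d + hfq -> Stock1dHfqKdata
>>> # Stockus + 1d + (qfq, left empty) -> Stockus1dKdata
```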
qfq
```
>>> Stock1dKdata.record_data(code='000338', provider='em')
>>> df = Stock1dKdata.query_data(code='000338', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stock_sz_000338_2007-04-30 stock_sz_000338 2007-04-30 None 000338 潍柴动力 1d 2.33 2.00 2.40 1.87 207375.0 1.365189e+09 3.2472 0.1182
1 stock_sz_000338_2007-05-08 stock_sz_000338 2007-05-08 None 000338 潍柴动力 1d 2.11 1.94 2.20 1.87 86299.0 5.563198e+08 -0.0300 0.0492
2 stock_sz_000338_2007-05-09 stock_sz_000338 2007-05-09 None 000338 潍柴动力 1d 1.90 1.81 1.94 1.66 93823.0 5.782065e+08 -0.0670 0.0535
3 stock_sz_000338_2007-05-10 stock_sz_000338 2007-05-10 None 000338 潍柴动力 1d 1.78 1.85 1.98 1.75 47720.0 2.999226e+08 0.0221 0.0272
4 stock_sz_000338_2007-05-11 stock_sz_000338 2007-05-11 None 000338 潍柴动力 1d 1.81 1.73 1.81 1.66 39273.0 2.373126e+08 -0.0649 0.0224
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3426 stock_sz_000338_2021-08-27 stock_sz_000338 2021-08-27 None 000338 潍柴动力 1d 19.39 20.30 20.30 19.25 1688497.0 3.370241e+09 0.0601 0.0398
3427 stock_sz_000338_2021-08-30 stock_sz_000338 2021-08-30 None 000338 潍柴动力 1d 20.30 20.09 20.31 19.78 1187601.0 2.377957e+09 -0.0103 0.0280
3428 stock_sz_000338_2021-08-31 stock_sz_000338 2021-08-31 None 000338 潍柴动力 1d 20.20 20.07 20.63 19.70 1143985.0 2.295195e+09 -0.0010 0.0270
3429 stock_sz_000338_2021-09-01 stock_sz_000338 2021-09-01 None 000338 潍柴动力 1d 19.98 19.68 19.98 19.15 1218697.0 2.383841e+09 -0.0194 0.0287
3430 stock_sz_000338_2021-09-02 stock_sz_000338 2021-09-02 None 000338 潍柴动力 1d 19.71 19.85 19.97 19.24 1023545.0 2.012006e+09 0.0086 0.0241
[3431 rows x 15 columns]
>>> Stockus1dKdata.record_data(code='AAPL', provider='em')
>>> df = Stockus1dKdata.query_data(code='AAPL', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stockus_nasdaq_AAPL_1984-09-07 stockus_nasdaq_AAPL 1984-09-07 None AAPL 苹果 1d -5.59 -5.59 -5.58 -5.59 2981600.0 0.000000e+00 0.0000 0.0002
1 stockus_nasdaq_AAPL_1984-09-10 stockus_nasdaq_AAPL 1984-09-10 None AAPL 苹果 1d -5.59 -5.59 -5.58 -5.59 2346400.0 0.000000e+00 0.0000 0.0001
2 stockus_nasdaq_AAPL_1984-09-11 stockus_nasdaq_AAPL 1984-09-11 None AAPL 苹果 1d -5.58 -5.58 -5.58 -5.58 5444000.0 0.000000e+00 0.0018 0.0003
3 stockus_nasdaq_AAPL_1984-09-12 stockus_nasdaq_AAPL 1984-09-12 None AAPL 苹果 1d -5.58 -5.59 -5.58 -5.59 4773600.0 0.000000e+00 -0.0018 0.0003
4 stockus_nasdaq_AAPL_1984-09-13 stockus_nasdaq_AAPL 1984-09-13 None AAPL 苹果 1d -5.58 -5.58 -5.58 -5.58 7429600.0 0.000000e+00 0.0018 0.0004
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
8765 stockus_nasdaq_AAPL_2021-08-27 stockus_nasdaq_AAPL 2021-08-27 None AAPL 苹果 1d 147.48 148.60 148.75 146.83 55802388.0 8.265452e+09 0.0072 0.0034
8766 stockus_nasdaq_AAPL_2021-08-30 stockus_nasdaq_AAPL 2021-08-30 None AAPL 苹果 1d 149.00 153.12 153.49 148.61 90956723.0 1.383762e+10 0.0304 0.0055
8767 stockus_nasdaq_AAPL_2021-08-31 stockus_nasdaq_AAPL 2021-08-31 None AAPL 苹果 1d 152.66 151.83 152.80 151.29 86453117.0 1.314255e+10 -0.0084 0.0052
8768 stockus_nasdaq_AAPL_2021-09-01 stockus_nasdaq_AAPL 2021-09-01 None AAPL 苹果 1d 152.83 152.51 154.98 152.34 80313711.0 1.235321e+10 0.0045 0.0049
8769 stockus_nasdaq_AAPL_2021-09-02 stockus_nasdaq_AAPL 2021-09-02 None AAPL 苹果 1d 153.87 153.65 154.72 152.40 71171317.0 1.093251e+10 0.0075 0.0043
[8770 rows x 15 columns]
```
hfq
```
>>> Stock1dHfqKdata.record_data(code='000338', provider='em')
>>> df = Stock1dHfqKdata.query_data(code='000338', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stock_sz_000338_2007-04-30 stock_sz_000338 2007-04-30 None 000338 潍柴动力 1d 70.00 64.93 71.00 62.88 207375.0 1.365189e+09 2.1720 0.1182
1 stock_sz_000338_2007-05-08 stock_sz_000338 2007-05-08 None 000338 潍柴动力 1d 66.60 64.00 68.00 62.88 86299.0 5.563198e+08 -0.0143 0.0492
2 stock_sz_000338_2007-05-09 stock_sz_000338 2007-05-09 None 000338 潍柴动力 1d 63.32 62.00 63.88 59.60 93823.0 5.782065e+08 -0.0313 0.0535
3 stock_sz_000338_2007-05-10 stock_sz_000338 2007-05-10 None 000338 潍柴动力 1d 61.50 62.49 64.48 61.01 47720.0 2.999226e+08 0.0079 0.0272
4 stock_sz_000338_2007-05-11 stock_sz_000338 2007-05-11 None 000338 潍柴动力 1d 61.90 60.65 61.90 59.70 39273.0 2.373126e+08 -0.0294 0.0224
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3426 stock_sz_000338_2021-08-27 stock_sz_000338 2021-08-27 None 000338 潍柴动力 1d 331.97 345.95 345.95 329.82 1688497.0 3.370241e+09 0.0540 0.0398
3427 stock_sz_000338_2021-08-30 stock_sz_000338 2021-08-30 None 000338 潍柴动力 1d 345.95 342.72 346.10 337.96 1187601.0 2.377957e+09 -0.0093 0.0280
3428 stock_sz_000338_2021-08-31 stock_sz_000338 2021-08-31 None 000338 潍柴动力 1d 344.41 342.41 351.02 336.73 1143985.0 2.295195e+09 -0.0009 0.0270
3429 stock_sz_000338_2021-09-01 stock_sz_000338 2021-09-01 None 000338 潍柴动力 1d 341.03 336.42 341.03 328.28 1218697.0 2.383841e+09 -0.0175 0.0287
3430 stock_sz_000338_2021-09-02 stock_sz_000338 2021-09-02 None 000338 潍柴动力 1d 336.88 339.03 340.88 329.67 1023545.0 2.012006e+09 0.0078 0.0241
[3431 rows x 15 columns]
```
#### Finance factor
```
>>> FinanceFactor.record_data(code='000338')
>>> FinanceFactor.query_data(code='000338',columns=FinanceFactor.important_cols(),index='timestamp')
basic_eps total_op_income net_profit op_income_growth_yoy net_profit_growth_yoy roe rota gross_profit_margin net_margin timestamp
timestamp
2002-12-31 NaN 1.962000e+07 2.471000e+06 NaN NaN NaN NaN 0.2068 0.1259 2002-12-31
2003-12-31 1.27 3.574000e+09 2.739000e+08 181.2022 109.8778 0.7729 0.1783 0.2551 0.0766 2003-12-31
2004-12-31 1.75 6.188000e+09 5.369000e+08 0.7313 0.9598 0.3245 0.1474 0.2489 0.0868 2004-12-31
2005-12-31 0.93 5.283000e+09 3.065000e+08 -0.1463 -0.4291 0.1327 0.0603 0.2252 0.0583 2005-12-31
2006-03-31 0.33 1.859000e+09 1.079000e+08 NaN NaN NaN NaN NaN 0.0598 2006-03-31
... ... ... ... ... ... ... ... ... ... ...
2020-08-28 0.59 9.449000e+10 4.680000e+09 0.0400 -0.1148 0.0983 0.0229 0.1958 0.0603 2020-08-28
2020-10-31 0.90 1.474000e+11 7.106000e+09 0.1632 0.0067 0.1502 0.0347 0.1949 0.0590 2020-10-31
2021-03-31 1.16 1.975000e+11 9.207000e+09 0.1327 0.0112 0.1919 0.0444 0.1931 0.0571 2021-03-31
2021-04-30 0.42 6.547000e+10 3.344000e+09 0.6788 0.6197 0.0622 0.0158 0.1916 0.0667 2021-04-30
2021-08-31 0.80 1.264000e+11 6.432000e+09 0.3375 0.3742 0.1125 0.0287 0.1884 0.0653 2021-08-31
[66 rows x 10 columns]
```
#### Three financial tables
```
>>> BalanceSheet.record_data(code='000338')
>>> IncomeStatement.record_data(code='000338')
>>> CashFlowStatement.record_data(code='000338')
```
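These three schemas follow the same query pattern as any other schema; a minimal sketch (000338 is just the code used throughout this README):
```
>>> df = BalanceSheet.query_data(code='000338')
>>> df = IncomeStatement.query_data(code='000338')
>>> df = CashFlowStatement.query_data(code='000338')
```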
#### And more
```
>>> zvt_context.schemas
[zvt.domain.dividend_financing.DividendFinancing,
zvt.domain.dividend_financing.DividendDetail,
zvt.domain.dividend_financing.SpoDetail...]
```
All schemas are registered in zvt_context.schemas; a **schema** is a table, i.e. a data structure.
The fields and their meanings can be checked in the following ways:
* help
Type the schema name followed by a dot and press Tab to show its fields, or call .help():
```
>>> FinanceFactor.help()
```
* source code
Schemas are defined in [domain](https://github.com/zvtvz/zvt/tree/master/zvt/domain)
From the above examples, you should now know the unified way of recording data:
> Schema.record_data(provider='your provider',codes='the codes')
Note the optional parameter provider, which represents the data provider.
A schema can have multiple providers, which is the cornerstone of system stability.
Check the provider has been implemented:
```
>>> Stock.provider_map_recorder
{'joinquant': zvt.recorders.joinquant.meta.jq_stock_meta_recorder.JqChinaStockRecorder,
'exchange': zvt.recorders.exchange.exchange_stock_meta_recorder.ExchangeStockMetaRecorder,
'em': zvt.recorders.em.meta.em_stock_meta_recorder.EMStockRecorder,
'eastmoney': zvt.recorders.eastmoney.meta.eastmoney_stock_meta_recorder.EastmoneyChinaStockListRecorder}
```
You can use any provider to get the data; the first one is used by default.
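A quick sketch of the difference:
```
>>> Stock.record_data() # uses the first registered provider
>>> Stock.record_data(provider='em') # picks a provider explicitly
```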
One more example: recording stock sector (block) data:
```
>>> Block.provider_map_recorder
{'eastmoney': zvt.recorders.eastmoney.meta.eastmoney_block_meta_recorder.EastmoneyChinaBlockRecorder,
'sina': zvt.recorders.sina.meta.sina_block_recorder.SinaBlockRecorder}
>>> Block.record_data(provider='sina')
Block registered recorders:{'eastmoney': <class 'zvt.recorders.eastmoney.meta.china_stock_category_recorder.EastmoneyChinaBlockRecorder'>, 'sina': <class 'zvt.recorders.sina.meta.sina_china_stock_category_recorder.SinaChinaBlockRecorder'>}
2020-03-04 23:56:48,931 INFO MainThread finish record sina blocks:industry
2020-03-04 23:56:49,450 INFO MainThread finish record sina blocks:concept
```
Learn more about record_data
* The parameters code (single) and codes (multiple) specify the stock codes to be recorded (see the sketch below)
* The whole market is recorded if neither code nor codes is set
* This method stores the data locally and only does incremental updates
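A minimal sketch of recording only a few codes (the codes and sleeping_time values here are just illustrative):
```
>>> Stock1dHfqKdata.record_data(provider='em', codes=['000338', '601318'], sleeping_time=1)
```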
For scheduled recording, refer to the [data runner](https://github.com/zvtvz/zvt/blob/master/examples/data_runner) examples.
#### Market-wide stock selection
After recording the data of the whole market, you can quickly query the required data locally.
An example: the top 20 stocks with roe>8% and revenue growth>8% in the 2018 annual report
```
>>> df=FinanceFactor.query_data(filters=[FinanceFactor.roe>0.08,FinanceFactor.report_period=='year',FinanceFactor.op_income_growth_yoy>0.08],start_timestamp='2019-01-01',order=FinanceFactor.roe.desc(),limit=20,columns=["code"]+FinanceFactor.important_cols(),index='code')
code basic_eps total_op_income net_profit op_income_growth_yoy net_profit_growth_yoy roe rota gross_profit_margin net_margin timestamp
code
000048 000048 2.7350 4.919000e+09 1.101000e+09 0.4311 1.5168 0.7035 0.1988 0.5243 0.2355 2020-04-30
000912 000912 0.3500 4.405000e+09 3.516000e+08 0.1796 1.2363 4.7847 0.0539 0.2175 0.0795 2019-03-20
002207 002207 0.2200 3.021000e+08 5.189000e+07 0.1600 1.1526 1.1175 0.1182 0.1565 0.1718 2020-04-27
002234 002234 5.3300 3.276000e+09 1.610000e+09 0.8023 3.2295 0.8361 0.5469 0.5968 0.4913 2020-04-21
002458 002458 3.7900 3.584000e+09 2.176000e+09 1.4326 4.9973 0.8318 0.6754 0.6537 0.6080 2020-02-20
... ... ... ... ... ... ... ... ... ... ... ...
600701 600701 -3.6858 7.830000e+08 -3.814000e+09 1.3579 -0.0325 1.9498 -0.7012 0.4173 -4.9293 2020-04-29
600747 600747 -1.5600 3.467000e+08 -2.290000e+09 2.1489 -0.4633 3.1922 -1.5886 0.0378 -6.6093 2020-06-30
600793 600793 1.6568 1.293000e+09 1.745000e+08 0.1164 0.8868 0.7490 0.0486 0.1622 0.1350 2019-04-30
600870 600870 0.0087 3.096000e+07 4.554000e+06 0.7773 1.3702 0.7458 0.0724 0.2688 0.1675 2019-03-30
688169 688169 15.6600 4.205000e+09 7.829000e+08 0.3781 1.5452 0.7172 0.4832 0.3612 0.1862 2020-04-28
[20 rows x 11 columns]
```
So, you should be able to answer the following three questions now:
* What data is there?
* How to record data?
* How to query data?
For more advanced usage and extended data, please refer to the data section in the detailed document.
### Write strategy
Now we can write strategies based on TradableEntity and EntityEvent.
So-called strategy backtesting is nothing but repeating the following process:
#### At a certain time, find the targets matching the conditions, buy and sell them, and check the performance.
There are two modes for writing a strategy:
* solo (free style)
At a certain time, compute the conditions from the events, then buy and sell.
* formal
The two-dimensional-index, multi-entity calculation model.
#### A too simple, sometimes naive person (solo)
Well, this strategy really is too simple, sometimes naive, as ours are most of the time.
> When the report comes out, I look at the report.
> If the institution increases its position by more than 5%, I will buy it, and if the institution reduces its position by more than 50%, I will sell it.
Show you the code:
```
# -*- coding: utf-8 -*-
import pandas as pd
from zvt.api import get_recent_report_date
from zvt.contract import ActorType, AdjustType
from zvt.domain import StockActorSummary, Stock1dKdata
from zvt.trader import StockTrader
from zvt.utils import pd_is_not_null, is_same_date, to_pd_timestamp
class FollowIITrader(StockTrader):
finish_date = None
def on_time(self, timestamp: pd.Timestamp):
recent_report_date = to_pd_timestamp(get_recent_report_date(timestamp))
if self.finish_date and is_same_date(recent_report_date, self.finish_date):
return
filters = [StockActorSummary.actor_type == ActorType.raised_fund.value,
StockActorSummary.report_date == recent_report_date]
if self.entity_ids:
filters = filters + [StockActorSummary.entity_id.in_(self.entity_ids)]
df = StockActorSummary.query_data(filters=filters)
if pd_is_not_null(df):
self.logger.info(f'{df}')
self.finish_date = recent_report_date
long_df = df[df['change_ratio'] > 0.05]
short_df = df[df['change_ratio'] < -0.5]
try:
self.trade_the_targets(due_timestamp=timestamp, happen_timestamp=timestamp,
long_selected=set(long_df['entity_id'].to_list()),
short_selected=set(short_df['entity_id'].to_list()))
except Exception as e:
self.logger.error(e)
if __name__ == '__main__':
entity_id = 'stock_sh_600519'
Stock1dKdata.record_data(entity_id=entity_id, provider='em')
StockActorSummary.record_data(entity_id=entity_id, provider='em')
FollowIITrader(start_timestamp='2002-01-01', end_timestamp='2021-01-01', entity_ids=[entity_id],
provider='em', adjust_type=AdjustType.qfq, profit_threshold=None).run()
```
So, writing a strategy is not that complicated.
Just use your imagination and find the relations between prices and events.
Then refresh [http://127.0.0.1:8050/](http://127.0.0.1:8050/) and check the performance of your strategy.
More examples are in [Strategy examples](https://github.com/zvtvz/zvt/tree/master/examples/trader)
#### Be serious (formal)
Simple calculations can be done through query_data.
Now it's time to introduce the two-dimensional index, multi-entity calculation model.
Take technical factors as an example to illustrate the **calculation process**:
```
In [7]: from zvt.factors.technical_factor import *
In [8]: factor = BullFactor(codes=['000338','601318'],start_timestamp='2019-01-01',end_timestamp='2019-06-10', transformer=MacdTransformer())
```
### data_df
**two-dimensional index** DataFrame read from the schema by query_data.
```
In [11]: factor.data_df
Out[11]:
level high id entity_id open low timestamp close
entity_id timestamp
stock_sh_601318 2019-01-02 1d 54.91 stock_sh_601318_2019-01-02 stock_sh_601318 54.78 53.70 2019-01-02 53.94
2019-01-03 1d 55.06 stock_sh_601318_2019-01-03 stock_sh_601318 53.91 53.82 2019-01-03 54.42
2019-01-04 1d 55.71 stock_sh_601318_2019-01-04 stock_sh_601318 54.03 53.98 2019-01-04 55.31
2019-01-07 1d 55.88 stock_sh_601318_2019-01-07 stock_sh_601318 55.80 54.64 2019-01-07 55.03
2019-01-08 1d 54.83 stock_sh_601318_2019-01-08 stock_sh_601318 54.79 53.96 2019-01-08 54.54
... ... ... ... ... ... ... ... ...
stock_sz_000338 2019-06-03 1d 11.04 stock_sz_000338_2019-06-03 stock_sz_000338 10.93 10.74 2019-06-03 10.81
2019-06-04 1d 10.85 stock_sz_000338_2019-06-04 stock_sz_000338 10.84 10.57 2019-06-04 10.73
2019-06-05 1d 10.92 stock_sz_000338_2019-06-05 stock_sz_000338 10.87 10.59 2019-06-05 10.59
2019-06-06 1d 10.71 stock_sz_000338_2019-06-06 stock_sz_000338 10.59 10.49 2019-06-06 10.65
2019-06-10 1d 11.05 stock_sz_000338_2019-06-10 stock_sz_000338 10.73 10.71 2019-06-10 11.02
[208 rows x 8 columns]
```
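Because data_df carries an (entity_id, timestamp) MultiIndex, each entity's sub-series can be pulled out with ordinary pandas indexing; a minimal sketch, assuming the factor object created above:
```
factor.data_df.loc['stock_sz_000338']          # all rows for one entity
factor.data_df.loc['stock_sz_000338'].tail(3)  # its last three bars
```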
### factor_df
A **two-dimensional index** DataFrame computed from data_df by a [transformer](https://github.com/zvtvz/zvt/blob/master/zvt/factors/factor.py#L18), e.g. MacdTransformer.
```
In [12]: factor.factor_df
Out[12]:
level high id entity_id open low timestamp close diff dea macd
entity_id timestamp
stock_sh_601318 2019-01-02 1d 54.91 stock_sh_601318_2019-01-02 stock_sh_601318 54.78 53.70 2019-01-02 53.94 NaN NaN NaN
2019-01-03 1d 55.06 stock_sh_601318_2019-01-03 stock_sh_601318 53.91 53.82 2019-01-03 54.42 NaN NaN NaN
2019-01-04 1d 55.71 stock_sh_601318_2019-01-04 stock_sh_601318 54.03 53.98 2019-01-04 55.31 NaN NaN NaN
2019-01-07 1d 55.88 stock_sh_601318_2019-01-07 stock_sh_601318 55.80 54.64 2019-01-07 55.03 NaN NaN NaN
2019-01-08 1d 54.83 stock_sh_601318_2019-01-08 stock_sh_601318 54.79 53.96 2019-01-08 54.54 NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ...
stock_sz_000338 2019-06-03 1d 11.04 stock_sz_000338_2019-06-03 stock_sz_000338 10.93 10.74 2019-06-03 10.81 -0.121336 -0.145444 0.048215
2019-06-04 1d 10.85 stock_sz_000338_2019-06-04 stock_sz_000338 10.84 10.57 2019-06-04 10.73 -0.133829 -0.143121 0.018583
2019-06-05 1d 10.92 stock_sz_000338_2019-06-05 stock_sz_000338 10.87 10.59 2019-06-05 10.59 -0.153260 -0.145149 -0.016223
2019-06-06 1d 10.71 stock_sz_000338_2019-06-06 stock_sz_000338 10.59 10.49 2019-06-06 10.65 -0.161951 -0.148509 -0.026884
2019-06-10 1d 11.05 stock_sz_000338_2019-06-10 stock_sz_000338 10.73 10.71 2019-06-10 11.02 -0.137399 -0.146287 0.017776
[208 rows x 11 columns]
```
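To make the transformer idea concrete, here is an illustrative per-entity MACD computation in plain pandas. The 12/26/9 windows and the `(diff - dea) * 2` scaling are common conventions assumed here for illustration; they are not necessarily what zvt's MacdTransformer uses internally.
```
def add_macd(df, fast=12, slow=26, signal=9):
    # textbook MACD on the close column of one entity
    close = df['close']
    diff = close.ewm(span=fast, adjust=False).mean() - close.ewm(span=slow, adjust=False).mean()
    dea = diff.ewm(span=signal, adjust=False).mean()
    out = df.copy()
    out['diff'], out['dea'], out['macd'] = diff, dea, (diff - dea) * 2
    return out

# apply it per entity over the two-dimensional index
factor_like_df = factor.data_df.groupby(level='entity_id', group_keys=False).apply(add_macd)
```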
### result_df
A **two-dimensional index** DataFrame computed from factor_df and/or data_df; it is the input consumed by TargetSelector.
See, e.g., [macd](https://github.com/zvtvz/zvt/blob/master/zvt/factors/technical_factor.py#L56).
```
In [14]: factor.result_df
Out[14]:
filter_result
entity_id timestamp
stock_sh_601318 2019-01-02 False
2019-01-03 False
2019-01-04 False
2019-01-07 False
2019-01-08 False
... ...
stock_sz_000338 2019-06-03 False
2019-06-04 False
2019-06-05 False
2019-06-06 False
2019-06-10 False
[208 rows x 1 columns]
```
The format of result_df is as follows:
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/result_df.png'/></p>
filter_result is True or False; score_result ranges from 0 to 1.
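A filter-style result can be consumed directly with boolean indexing, and a score-style result can be reduced to a filter with a plain threshold; a minimal pandas sketch (the 0.8 cut-off is an arbitrary illustration, not part of zvt):
```
selected = factor.result_df[factor.result_df['filter_result']]   # rows flagged True
# for a score-style column, a threshold gives the same shape:
# filter_like = score_df['score_result'] >= 0.8
```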
Combining the stock picker and backtesting, the whole process is as follows:
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/flow.png'/></p>
## Env settings (optional)
```
>>> from zvt import *
>>> zvt_env
{'zvt_home': '/Users/foolcage/zvt-home',
'data_path': '/Users/foolcage/zvt-home/data',
'tmp_path': '/Users/foolcage/zvt-home/tmp',
'ui_path': '/Users/foolcage/zvt-home/ui',
'log_path': '/Users/foolcage/zvt-home/logs'}
>>> zvt_config
```
* jq_username JoinQuant data username
* jq_password JoinQuant data password
* smtp_host mail server host
* smtp_port mail server port
* email_username SMTP email account
* email_password SMTP email password
* wechat_app_id
* wechat_app_secrect
```
>>> init_config(current_config=zvt_config, jq_username='xxx', jq_password='yyy')
```
> Other settings can be configured the same way: init_config(current_config=zvt_config, **kv)
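For example, the mail settings listed above can be filled in with the same call (all values here are placeholders):
```
>>> init_config(current_config=zvt_config, smtp_host='smtp.example.com', smtp_port=465,
... email_username='[email protected]', email_password='your-password')
```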
### History data (optional)
Baidu Netdisk: https://pan.baidu.com/s/1kHAxGSxx8r5IBHe5I7MAmQ (extraction code: yb6c)
Google Drive: https://drive.google.com/drive/folders/17Bxijq-PHJYrLDpyvFAm5P6QyhKL-ahn?usp=sharing
It contains daily/weekly backward-adjusted (hfq) kdata, stock valuations, funds and their holdings, financial data, and more.
Unzip the downloaded data into the data_path of your environment (all db files go directly in this directory; there is no hierarchical structure).
Data updates are incremental; downloading the historical data just saves time, and it is also possible to record everything yourself.
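After the db files are restored, any schema can be brought up to date incrementally with the usual record_data call, e.g.:
```
>>> Stock1dKdata.record_data(code='000338', provider='em')
```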
#### Joinquant (optional)
Data can be recorded from more than one provider, which makes the system more robust.
https://www.joinquant.com/default/index/sdk?channelId=953cbf5d1b8683f81f0c40c9d4265c0d
> To add other providers, see the [Data extension tutorial](https://zvtvz.github.io/zvt/#/data_extending)
## Development
### Clone
```
git clone https://github.com/zvtvz/zvt.git
```
Set up a virtual env (python>=3.6) and install the requirements:
```
pip3 install -r requirements.txt
pip3 install pytest
```
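If the virtual env itself still needs to be created, one common way (not zvt-specific; any tool such as virtualenv or conda works) is:
```
python3 -m venv .venv
source .venv/bin/activate
```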
### Tests
```shell
pytest ./tests
```
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/pytest.jpg'/></p>
Most of the features are demonstrated in the tests.
## Contribution
[code of conduct](https://github.com/zvtvz/zvt/blob/master/code_of_conduct.md)
1. Pass all unit tests; if it is a new feature, please add a unit test for it
2. Comply with the development specifications
3. If necessary, update the corresponding documentation
Developers are also very welcome to provide more examples for zvt, and work together to improve the documentation.
## Buy me a coffee
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/alipay-cn.png" width="25%" alt="Alipay">
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat-cn.png" width="25%" alt="Wechat">
## Contact
WeChat: foolcage
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat.jpeg" width="25%" alt="Wechat">
------
WeChat official account:
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/gongzhonghao.jpg" width="25%" alt="Wechat">
zhihu:
https://zhuanlan.zhihu.com/automoney
## Thanks
<p><a href=https://www.jetbrains.com/?from=zvt><img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/jetbrains.png" width="25%" alt="jetbrains"></a></p>
| zvt | /zvt-0.10.4.tar.gz/zvt-0.10.4/README.md | README.md |
[](https://github.com/zvtvz/zvt)
[](https://pypi.org/project/zvt/)
[](https://pypi.org/project/zvt/)
[](https://pypi.org/project/zvt/)
[](https://github.com/zvtvz/zvt/actions/workflows/build.yml)
[](https://github.com/zvtvz/zvt/actions/workflows/package.yaml)
[](https://zvt.readthedocs.io/en/latest/?badge=latest)
[](https://codecov.io/github/zvtvz/zvt)
[](https://pepy.tech/project/zvt)
**Read this in other languages: [English](README.md).**
**详细文档:[https://zvt.readthedocs.io/en/latest/](https://zvt.readthedocs.io/en/latest/)**
## 市场模型
ZVT 将市场抽象为如下的模型:
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/view.png'/></p>
* TradableEntity (交易标的)
* ActorEntity (市场参与者)
* EntityEvent (交易标的 和 市场参与者 发生的事件)
## 快速开始
### 安装
```
python3 -m pip install -U zvt
```
### 使用展示
#### 主界面
安装完成后,在命令行下输入 zvt
```shell
zvt
```
打开 [http://127.0.0.1:8050/](http://127.0.0.1:8050/)
> 这里展示的例子依赖后面的下载历史数据,数据更新请参考后面文档
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/zvt-factor.png'/></p>
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/zvt-trader.png'/></p>
> 系统的核心概念是可视化的,界面的名称与其一一对应,因此也是统一可扩展的。
> 你可以在你喜欢的ide里编写和运行策略,然后运行界面查看其相关的标的,因子,信号和净值展示。
#### 见证奇迹的时刻
```
>>> from zvt.domain import Stock, Stock1dHfqKdata
>>> from zvt.ml import MaStockMLMachine
>>> Stock.record_data(provider="em")
>>> entity_ids = ["stock_sz_000001", "stock_sz_000338", "stock_sh_601318"]
>>> Stock1dHfqKdata.record_data(provider="em", entity_ids=entity_ids, sleeping_time=1)
>>> machine = MaStockMLMachine(entity_ids=["stock_sz_000001"], data_provider="em")
>>> machine.train()
>>> machine.predict()
>>> machine.draw_result(entity_id="stock_sz_000001")
```
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/pred_close.png'/></p>
> 以上几行代码实现了:数据的抓取,持久化,增量更新,机器学习,预测,展示结果。
> 熟悉系统的核心概念后,可以应用到市场中的任何标的。
### 核心概念
```
>>> from zvt.domain import *
```
### TradableEntity (交易标的)
#### A股交易标的
```
>>> Stock.record_data()
>>> df = Stock.query_data(index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
000001 stock_sz_000001 stock_sz_000001 1991-04-03 stock sz 000001 平安银行 1991-04-03 None
000002 stock_sz_000002 stock_sz_000002 1991-01-29 stock sz 000002 万 科A 1991-01-29 None
000004 stock_sz_000004 stock_sz_000004 1990-12-01 stock sz 000004 国华网安 1990-12-01 None
000005 stock_sz_000005 stock_sz_000005 1990-12-10 stock sz 000005 世纪星源 1990-12-10 None
000006 stock_sz_000006 stock_sz_000006 1992-04-27 stock sz 000006 深振业A 1992-04-27 None
... ... ... ... ... ... ... ... ... ...
605507 stock_sh_605507 stock_sh_605507 2021-08-02 stock sh 605507 国邦医药 2021-08-02 None
605577 stock_sh_605577 stock_sh_605577 2021-08-24 stock sh 605577 龙版传媒 2021-08-24 None
605580 stock_sh_605580 stock_sh_605580 2021-08-19 stock sh 605580 恒盛能源 2021-08-19 None
605588 stock_sh_605588 stock_sh_605588 2021-08-12 stock sh 605588 冠石科技 2021-08-12 None
605589 stock_sh_605589 stock_sh_605589 2021-08-10 stock sh 605589 圣泉集团 2021-08-10 None
[4136 rows x 9 columns]
```
#### 美股交易标的
```
>>> Stockus.record_data()
>>> df = Stockus.query_data(index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
A stockus_nyse_A stockus_nyse_A NaT stockus nyse A 安捷伦 None None
AA stockus_nyse_AA stockus_nyse_AA NaT stockus nyse AA 美国铝业 None None
AAC stockus_nyse_AAC stockus_nyse_AAC NaT stockus nyse AAC Ares Acquisition Corp-A None None
AACG stockus_nasdaq_AACG stockus_nasdaq_AACG NaT stockus nasdaq AACG ATA Creativity Global ADR None None
AACG stockus_nyse_AACG stockus_nyse_AACG NaT stockus nyse AACG ATA Creativity Global ADR None None
... ... ... ... ... ... ... ... ... ...
ZWRK stockus_nasdaq_ZWRK stockus_nasdaq_ZWRK NaT stockus nasdaq ZWRK Z-Work Acquisition Corp-A None None
ZY stockus_nasdaq_ZY stockus_nasdaq_ZY NaT stockus nasdaq ZY Zymergen Inc None None
ZYME stockus_nyse_ZYME stockus_nyse_ZYME NaT stockus nyse ZYME Zymeworks Inc None None
ZYNE stockus_nasdaq_ZYNE stockus_nasdaq_ZYNE NaT stockus nasdaq ZYNE Zynerba Pharmaceuticals Inc None None
ZYXI stockus_nasdaq_ZYXI stockus_nasdaq_ZYXI NaT stockus nasdaq ZYXI Zynex Inc None None
[5826 rows x 9 columns]
>>> Stockus.query_data(code='AAPL')
id entity_id timestamp entity_type exchange code name list_date end_date
0 stockus_nasdaq_AAPL stockus_nasdaq_AAPL None stockus nasdaq AAPL 苹果 None None
```
#### 港股交易标的
```
>>> Stockhk.record_data()
>>> df = Stockhk.query_data(index='code')
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date
code
00001 stockhk_hk_00001 stockhk_hk_00001 NaT stockhk hk 00001 长和 None None
00002 stockhk_hk_00002 stockhk_hk_00002 NaT stockhk hk 00002 中电控股 None None
00003 stockhk_hk_00003 stockhk_hk_00003 NaT stockhk hk 00003 香港中华煤气 None None
00004 stockhk_hk_00004 stockhk_hk_00004 NaT stockhk hk 00004 九龙仓集团 None None
00005 stockhk_hk_00005 stockhk_hk_00005 NaT stockhk hk 00005 汇丰控股 None None
... ... ... ... ... ... ... ... ... ...
09996 stockhk_hk_09996 stockhk_hk_09996 NaT stockhk hk 09996 沛嘉医疗-B None None
09997 stockhk_hk_09997 stockhk_hk_09997 NaT stockhk hk 09997 康基医疗 None None
09998 stockhk_hk_09998 stockhk_hk_09998 NaT stockhk hk 09998 光荣控股 None None
09999 stockhk_hk_09999 stockhk_hk_09999 NaT stockhk hk 09999 网易-S None None
80737 stockhk_hk_80737 stockhk_hk_80737 NaT stockhk hk 80737 湾区发展-R None None
[2597 rows x 9 columns]
>>> df[df.code=='00700']
id entity_id timestamp entity_type exchange code name list_date end_date
2112 stockhk_hk_00700 stockhk_hk_00700 None stockhk hk 00700 腾讯控股 None None
```
#### 还有更多
```
>>> from zvt.contract import *
>>> zvt_context.tradable_schema_map
{'stockus': zvt.domain.meta.stockus_meta.Stockus,
'stockhk': zvt.domain.meta.stockhk_meta.Stockhk,
'index': zvt.domain.meta.index_meta.Index,
'etf': zvt.domain.meta.etf_meta.Etf,
'stock': zvt.domain.meta.stock_meta.Stock,
'block': zvt.domain.meta.block_meta.Block,
'fund': zvt.domain.meta.fund_meta.Fund}
```
其中key为交易标的的类型,value为其schema,系统为schema提供了统一的 **记录(record_data)** 和 **查询(query_data)** 方法。
```
>>> Index.record_data()
>>> df=Index.query_data(filters=[Index.category=='scope',Index.exchange=='sh'])
>>> print(df)
id entity_id timestamp entity_type exchange code name list_date end_date publisher category base_point
0 index_sh_000001 index_sh_000001 1990-12-19 index sh 000001 上证指数 1991-07-15 None csindex scope 100.00
1 index_sh_000002 index_sh_000002 1990-12-19 index sh 000002 A股指数 1992-02-21 None csindex scope 100.00
2 index_sh_000003 index_sh_000003 1992-02-21 index sh 000003 B股指数 1992-08-17 None csindex scope 100.00
3 index_sh_000010 index_sh_000010 2002-06-28 index sh 000010 上证180 2002-07-01 None csindex scope 3299.06
4 index_sh_000016 index_sh_000016 2003-12-31 index sh 000016 上证50 2004-01-02 None csindex scope 1000.00
.. ... ... ... ... ... ... ... ... ... ... ... ...
25 index_sh_000020 index_sh_000020 2007-12-28 index sh 000020 中型综指 2008-05-12 None csindex scope 1000.00
26 index_sh_000090 index_sh_000090 2009-12-31 index sh 000090 上证流通 2010-12-02 None csindex scope 1000.00
27 index_sh_930903 index_sh_930903 2012-12-31 index sh 930903 中证A股 2016-10-18 None csindex scope 1000.00
28 index_sh_000688 index_sh_000688 2019-12-31 index sh 000688 科创50 2020-07-23 None csindex scope 1000.00
29 index_sh_931643 index_sh_931643 2019-12-31 index sh 931643 科创创业50 2021-06-01 None csindex scope 1000.00
[30 rows x 12 columns]
```
### EntityEvent (交易标的 发生的事件)
有了交易标的,才有交易标的 发生的事。
#### 行情数据
交易标的 **行情schema** 遵从如下的规则:
```
{entity_shema}{level}{adjust_type}Kdata
```
* entity_schema
就是前面说的TradableEntity,比如Stock,Stockus等。
* level
```
>>> for level in IntervalLevel:
print(level.value)
```
* adjust type
```
>>> for adjust_type in AdjustType:
print(adjust_type.value)
```
> 注意: 为了兼容历史数据,前复权是个例外,{adjust_type}不填
前复权
```
>>> Stock1dKdata.record_data(code='000338', provider='em')
>>> df = Stock1dKdata.query_data(code='000338', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stock_sz_000338_2007-04-30 stock_sz_000338 2007-04-30 None 000338 潍柴动力 1d 2.33 2.00 2.40 1.87 207375.0 1.365189e+09 3.2472 0.1182
1 stock_sz_000338_2007-05-08 stock_sz_000338 2007-05-08 None 000338 潍柴动力 1d 2.11 1.94 2.20 1.87 86299.0 5.563198e+08 -0.0300 0.0492
2 stock_sz_000338_2007-05-09 stock_sz_000338 2007-05-09 None 000338 潍柴动力 1d 1.90 1.81 1.94 1.66 93823.0 5.782065e+08 -0.0670 0.0535
3 stock_sz_000338_2007-05-10 stock_sz_000338 2007-05-10 None 000338 潍柴动力 1d 1.78 1.85 1.98 1.75 47720.0 2.999226e+08 0.0221 0.0272
4 stock_sz_000338_2007-05-11 stock_sz_000338 2007-05-11 None 000338 潍柴动力 1d 1.81 1.73 1.81 1.66 39273.0 2.373126e+08 -0.0649 0.0224
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3426 stock_sz_000338_2021-08-27 stock_sz_000338 2021-08-27 None 000338 潍柴动力 1d 19.39 20.30 20.30 19.25 1688497.0 3.370241e+09 0.0601 0.0398
3427 stock_sz_000338_2021-08-30 stock_sz_000338 2021-08-30 None 000338 潍柴动力 1d 20.30 20.09 20.31 19.78 1187601.0 2.377957e+09 -0.0103 0.0280
3428 stock_sz_000338_2021-08-31 stock_sz_000338 2021-08-31 None 000338 潍柴动力 1d 20.20 20.07 20.63 19.70 1143985.0 2.295195e+09 -0.0010 0.0270
3429 stock_sz_000338_2021-09-01 stock_sz_000338 2021-09-01 None 000338 潍柴动力 1d 19.98 19.68 19.98 19.15 1218697.0 2.383841e+09 -0.0194 0.0287
3430 stock_sz_000338_2021-09-02 stock_sz_000338 2021-09-02 None 000338 潍柴动力 1d 19.71 19.85 19.97 19.24 1023545.0 2.012006e+09 0.0086 0.0241
[3431 rows x 15 columns]
>>> Stockus1dKdata.record_data(code='AAPL', provider='em')
>>> df = Stockus1dKdata.query_data(code='AAPL', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stockus_nasdaq_AAPL_1984-09-07 stockus_nasdaq_AAPL 1984-09-07 None AAPL 苹果 1d -5.59 -5.59 -5.58 -5.59 2981600.0 0.000000e+00 0.0000 0.0002
1 stockus_nasdaq_AAPL_1984-09-10 stockus_nasdaq_AAPL 1984-09-10 None AAPL 苹果 1d -5.59 -5.59 -5.58 -5.59 2346400.0 0.000000e+00 0.0000 0.0001
2 stockus_nasdaq_AAPL_1984-09-11 stockus_nasdaq_AAPL 1984-09-11 None AAPL 苹果 1d -5.58 -5.58 -5.58 -5.58 5444000.0 0.000000e+00 0.0018 0.0003
3 stockus_nasdaq_AAPL_1984-09-12 stockus_nasdaq_AAPL 1984-09-12 None AAPL 苹果 1d -5.58 -5.59 -5.58 -5.59 4773600.0 0.000000e+00 -0.0018 0.0003
4 stockus_nasdaq_AAPL_1984-09-13 stockus_nasdaq_AAPL 1984-09-13 None AAPL 苹果 1d -5.58 -5.58 -5.58 -5.58 7429600.0 0.000000e+00 0.0018 0.0004
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
8765 stockus_nasdaq_AAPL_2021-08-27 stockus_nasdaq_AAPL 2021-08-27 None AAPL 苹果 1d 147.48 148.60 148.75 146.83 55802388.0 8.265452e+09 0.0072 0.0034
8766 stockus_nasdaq_AAPL_2021-08-30 stockus_nasdaq_AAPL 2021-08-30 None AAPL 苹果 1d 149.00 153.12 153.49 148.61 90956723.0 1.383762e+10 0.0304 0.0055
8767 stockus_nasdaq_AAPL_2021-08-31 stockus_nasdaq_AAPL 2021-08-31 None AAPL 苹果 1d 152.66 151.83 152.80 151.29 86453117.0 1.314255e+10 -0.0084 0.0052
8768 stockus_nasdaq_AAPL_2021-09-01 stockus_nasdaq_AAPL 2021-09-01 None AAPL 苹果 1d 152.83 152.51 154.98 152.34 80313711.0 1.235321e+10 0.0045 0.0049
8769 stockus_nasdaq_AAPL_2021-09-02 stockus_nasdaq_AAPL 2021-09-02 None AAPL 苹果 1d 153.87 153.65 154.72 152.40 71171317.0 1.093251e+10 0.0075 0.0043
[8770 rows x 15 columns]
```
后复权
```
>>> Stock1dHfqKdata.record_data(code='000338', provider='em')
>>> df = Stock1dHfqKdata.query_data(code='000338', provider='em')
>>> print(df)
id entity_id timestamp provider code name level open close high low volume turnover change_pct turnover_rate
0 stock_sz_000338_2007-04-30 stock_sz_000338 2007-04-30 None 000338 潍柴动力 1d 70.00 64.93 71.00 62.88 207375.0 1.365189e+09 2.1720 0.1182
1 stock_sz_000338_2007-05-08 stock_sz_000338 2007-05-08 None 000338 潍柴动力 1d 66.60 64.00 68.00 62.88 86299.0 5.563198e+08 -0.0143 0.0492
2 stock_sz_000338_2007-05-09 stock_sz_000338 2007-05-09 None 000338 潍柴动力 1d 63.32 62.00 63.88 59.60 93823.0 5.782065e+08 -0.0313 0.0535
3 stock_sz_000338_2007-05-10 stock_sz_000338 2007-05-10 None 000338 潍柴动力 1d 61.50 62.49 64.48 61.01 47720.0 2.999226e+08 0.0079 0.0272
4 stock_sz_000338_2007-05-11 stock_sz_000338 2007-05-11 None 000338 潍柴动力 1d 61.90 60.65 61.90 59.70 39273.0 2.373126e+08 -0.0294 0.0224
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3426 stock_sz_000338_2021-08-27 stock_sz_000338 2021-08-27 None 000338 潍柴动力 1d 331.97 345.95 345.95 329.82 1688497.0 3.370241e+09 0.0540 0.0398
3427 stock_sz_000338_2021-08-30 stock_sz_000338 2021-08-30 None 000338 潍柴动力 1d 345.95 342.72 346.10 337.96 1187601.0 2.377957e+09 -0.0093 0.0280
3428 stock_sz_000338_2021-08-31 stock_sz_000338 2021-08-31 None 000338 潍柴动力 1d 344.41 342.41 351.02 336.73 1143985.0 2.295195e+09 -0.0009 0.0270
3429 stock_sz_000338_2021-09-01 stock_sz_000338 2021-09-01 None 000338 潍柴动力 1d 341.03 336.42 341.03 328.28 1218697.0 2.383841e+09 -0.0175 0.0287
3430 stock_sz_000338_2021-09-02 stock_sz_000338 2021-09-02 None 000338 潍柴动力 1d 336.88 339.03 340.88 329.67 1023545.0 2.012006e+09 0.0078 0.0241
[3431 rows x 15 columns]
```
#### 财务因子
```
>>> FinanceFactor.record_data(code='000338')
>>> FinanceFactor.query_data(code='000338',columns=FinanceFactor.important_cols(),index='timestamp')
basic_eps total_op_income net_profit op_income_growth_yoy net_profit_growth_yoy roe rota gross_profit_margin net_margin timestamp
timestamp
2002-12-31 NaN 1.962000e+07 2.471000e+06 NaN NaN NaN NaN 0.2068 0.1259 2002-12-31
2003-12-31 1.27 3.574000e+09 2.739000e+08 181.2022 109.8778 0.7729 0.1783 0.2551 0.0766 2003-12-31
2004-12-31 1.75 6.188000e+09 5.369000e+08 0.7313 0.9598 0.3245 0.1474 0.2489 0.0868 2004-12-31
2005-12-31 0.93 5.283000e+09 3.065000e+08 -0.1463 -0.4291 0.1327 0.0603 0.2252 0.0583 2005-12-31
2006-03-31 0.33 1.859000e+09 1.079000e+08 NaN NaN NaN NaN NaN 0.0598 2006-03-31
... ... ... ... ... ... ... ... ... ... ...
2020-08-28 0.59 9.449000e+10 4.680000e+09 0.0400 -0.1148 0.0983 0.0229 0.1958 0.0603 2020-08-28
2020-10-31 0.90 1.474000e+11 7.106000e+09 0.1632 0.0067 0.1502 0.0347 0.1949 0.0590 2020-10-31
2021-03-31 1.16 1.975000e+11 9.207000e+09 0.1327 0.0112 0.1919 0.0444 0.1931 0.0571 2021-03-31
2021-04-30 0.42 6.547000e+10 3.344000e+09 0.6788 0.6197 0.0622 0.0158 0.1916 0.0667 2021-04-30
2021-08-31 0.80 1.264000e+11 6.432000e+09 0.3375 0.3742 0.1125 0.0287 0.1884 0.0653 2021-08-31
[66 rows x 10 columns]
```
#### 财务三张表
```
#资产负债表
>>> BalanceSheet.record_data(code='000338')
#利润表
>>> IncomeStatement.record_data(code='000338')
#现金流量表
>>> CashFlowStatement.record_data(code='000338')
```
#### 还有更多
```
>>> zvt_context.schemas
[zvt.domain.dividend_financing.DividendFinancing,
zvt.domain.dividend_financing.DividendDetail,
zvt.domain.dividend_financing.SpoDetail...]
```
zvt_context.schemas为系统支持的schema,schema即表结构,即数据,其字段含义的查看方式如下:
* help
输入schema.按tab提示其包含的字段,或者.help()
```
>>> FinanceFactor.help()
```
* 源码
[domain](https://github.com/zvtvz/zvt/tree/master/zvt/domain)里的文件为schema的定义,查看相应字段的注释即可。
通过以上的例子,你应该掌握了统一的记录数据的方法:
> Schema.record_data(provider='your provider',codes='the codes')
注意可选参数provider,其代表数据提供商,一个schema可以有多个provider,这是系统稳定的基石。
查看**已实现**的provider
```
>>> Stock.provider_map_recorder
{'joinquant': zvt.recorders.joinquant.meta.jq_stock_meta_recorder.JqChinaStockRecorder,
'exchange': zvt.recorders.exchange.exchange_stock_meta_recorder.ExchangeStockMetaRecorder,
'em': zvt.recorders.em.meta.em_stock_meta_recorder.EMStockRecorder,
'eastmoney': zvt.recorders.eastmoney.meta.eastmoney_stock_meta_recorder.EastmoneyChinaStockListRecorder}
```
你可以使用任意一个provider来获取数据,默认使用第一个。
再举个例子,股票板块数据获取:
```
>>> Block.provider_map_recorder
{'eastmoney': zvt.recorders.eastmoney.meta.eastmoney_block_meta_recorder.EastmoneyChinaBlockRecorder,
'sina': zvt.recorders.sina.meta.sina_block_recorder.SinaBlockRecorder}
>>> Block.record_data(provider='sina')
Block registered recorders:{'eastmoney': <class 'zvt.recorders.eastmoney.meta.china_stock_category_recorder.EastmoneyChinaBlockRecorder'>, 'sina': <class 'zvt.recorders.sina.meta.sina_china_stock_category_recorder.SinaChinaBlockRecorder'>}
2020-03-04 23:56:48,931 INFO MainThread finish record sina blocks:industry
2020-03-04 23:56:49,450 INFO MainThread finish record sina blocks:concept
```
再多了解一点record_data:
* 参数code[单个],codes[多个]代表需要抓取的股票代码
* 不传入code,codes则是全市场抓取
* 该方法会把数据存储到本地并只做增量更新
定时任务的方式更新可参考[定时更新](https://github.com/zvtvz/zvt/blob/master/examples/data_runner)
#### 全市场选股
查询数据使用的是query_data方法,把全市场的数据记录下来后,就可以在本地快速查询需要的数据了。
一个例子:2018年年报 roe>8% 营收增长>8% 的前20个股
```
>>> df=FinanceFactor.query_data(filters=[FinanceFactor.roe>0.08,FinanceFactor.report_period=='year',FinanceFactor.op_income_growth_yoy>0.08],start_timestamp='2019-01-01',order=FinanceFactor.roe.desc(),limit=20,columns=["code"]+FinanceFactor.important_cols(),index='code')
code basic_eps total_op_income net_profit op_income_growth_yoy net_profit_growth_yoy roe rota gross_profit_margin net_margin timestamp
code
000048 000048 2.7350 4.919000e+09 1.101000e+09 0.4311 1.5168 0.7035 0.1988 0.5243 0.2355 2020-04-30
000912 000912 0.3500 4.405000e+09 3.516000e+08 0.1796 1.2363 4.7847 0.0539 0.2175 0.0795 2019-03-20
002207 002207 0.2200 3.021000e+08 5.189000e+07 0.1600 1.1526 1.1175 0.1182 0.1565 0.1718 2020-04-27
002234 002234 5.3300 3.276000e+09 1.610000e+09 0.8023 3.2295 0.8361 0.5469 0.5968 0.4913 2020-04-21
002458 002458 3.7900 3.584000e+09 2.176000e+09 1.4326 4.9973 0.8318 0.6754 0.6537 0.6080 2020-02-20
... ... ... ... ... ... ... ... ... ... ... ...
600701 600701 -3.6858 7.830000e+08 -3.814000e+09 1.3579 -0.0325 1.9498 -0.7012 0.4173 -4.9293 2020-04-29
600747 600747 -1.5600 3.467000e+08 -2.290000e+09 2.1489 -0.4633 3.1922 -1.5886 0.0378 -6.6093 2020-06-30
600793 600793 1.6568 1.293000e+09 1.745000e+08 0.1164 0.8868 0.7490 0.0486 0.1622 0.1350 2019-04-30
600870 600870 0.0087 3.096000e+07 4.554000e+06 0.7773 1.3702 0.7458 0.0724 0.2688 0.1675 2019-03-30
688169 688169 15.6600 4.205000e+09 7.829000e+08 0.3781 1.5452 0.7172 0.4832 0.3612 0.1862 2020-04-28
[20 rows x 11 columns]
```
以上,你应该会回答如下的三个问题了:
* 有什么数据?
* 如何记录数据?
* 如何查询数据?
更高级的用法以及扩展数据,可以参考详细文档里的数据部分。
### 写个策略
有了 **交易标的** 和 **交易标的发生的事**,就可以写策略了。
所谓策略回测,无非就是,重复以下过程:
#### 在某时间点,找到符合条件的标的,对其进行买卖,看其表现。
系统支持两种模式:
* solo (随意的)
在 某个时间 根据发生的事件 计算条件 并买卖
* formal (正式的)
系统设计的二维索引多标的计算模型
#### 一个很随便的人(solo)
嗯,这个策略真的很随便,就像我们大部分时间做的那样。
> 报表出来的时,我看一下报表,机构加仓超过5%我就买入,机构减仓超过50%我就卖出。
代码如下:
```
# -*- coding: utf-8 -*-
import pandas as pd

from zvt.api import get_recent_report_date
from zvt.contract import ActorType, AdjustType
from zvt.domain import StockActorSummary, Stock1dKdata
from zvt.trader import StockTrader
from zvt.utils import pd_is_not_null, is_same_date, to_pd_timestamp


class FollowIITrader(StockTrader):
    finish_date = None

    def on_time(self, timestamp: pd.Timestamp):
        recent_report_date = to_pd_timestamp(get_recent_report_date(timestamp))
        if self.finish_date and is_same_date(recent_report_date, self.finish_date):
            return
        filters = [StockActorSummary.actor_type == ActorType.raised_fund.value,
                   StockActorSummary.report_date == recent_report_date]

        if self.entity_ids:
            filters = filters + [StockActorSummary.entity_id.in_(self.entity_ids)]

        df = StockActorSummary.query_data(filters=filters)
        if pd_is_not_null(df):
            self.logger.info(f'{df}')
            self.finish_date = recent_report_date

            long_df = df[df['change_ratio'] > 0.05]
            short_df = df[df['change_ratio'] < -0.5]
            try:
                self.trade_the_targets(due_timestamp=timestamp, happen_timestamp=timestamp,
                                       long_selected=set(long_df['entity_id'].to_list()),
                                       short_selected=set(short_df['entity_id'].to_list()))
            except Exception as e:
                self.logger.error(e)


if __name__ == '__main__':
    entity_id = 'stock_sh_600519'
    Stock1dKdata.record_data(entity_id=entity_id, provider='em')
    StockActorSummary.record_data(entity_id=entity_id, provider='em')
    FollowIITrader(start_timestamp='2002-01-01', end_timestamp='2021-01-01', entity_ids=[entity_id],
                   provider='em', adjust_type=AdjustType.qfq, profit_threshold=None).run()
```
所以,写一个策略其实还是很简单的嘛。
你可以发挥想象力,社保重仓买买买,外资重仓买买买,董事长跟小姨子跑了卖卖卖......
然后,刷新一下[http://127.0.0.1:8050/](http://127.0.0.1:8050/),看你运行策略的performance
更多可参考[策略例子](https://github.com/zvtvz/zvt/tree/master/examples/trader)
#### 严肃一点(formal)
简单的计算可以通过query_data来完成,这里说的是系统设计的二维索引多标的计算模型。
下面以技术因子为例对**计算流程**进行说明:
```
In [7]: from zvt.factors.technical_factor import *
In [8]: factor = BullFactor(codes=['000338','601318'],start_timestamp='2019-01-01',end_timestamp='2019-06-10', transformer=MacdTransformer())
```
### data_df
data_df为factor的原始数据,即通过query_data从数据库读取到的数据,为一个**二维索引**DataFrame
```
In [11]: factor.data_df
Out[11]:
level high id entity_id open low timestamp close
entity_id timestamp
stock_sh_601318 2019-01-02 1d 54.91 stock_sh_601318_2019-01-02 stock_sh_601318 54.78 53.70 2019-01-02 53.94
2019-01-03 1d 55.06 stock_sh_601318_2019-01-03 stock_sh_601318 53.91 53.82 2019-01-03 54.42
2019-01-04 1d 55.71 stock_sh_601318_2019-01-04 stock_sh_601318 54.03 53.98 2019-01-04 55.31
2019-01-07 1d 55.88 stock_sh_601318_2019-01-07 stock_sh_601318 55.80 54.64 2019-01-07 55.03
2019-01-08 1d 54.83 stock_sh_601318_2019-01-08 stock_sh_601318 54.79 53.96 2019-01-08 54.54
... ... ... ... ... ... ... ... ...
stock_sz_000338 2019-06-03 1d 11.04 stock_sz_000338_2019-06-03 stock_sz_000338 10.93 10.74 2019-06-03 10.81
2019-06-04 1d 10.85 stock_sz_000338_2019-06-04 stock_sz_000338 10.84 10.57 2019-06-04 10.73
2019-06-05 1d 10.92 stock_sz_000338_2019-06-05 stock_sz_000338 10.87 10.59 2019-06-05 10.59
2019-06-06 1d 10.71 stock_sz_000338_2019-06-06 stock_sz_000338 10.59 10.49 2019-06-06 10.65
2019-06-10 1d 11.05 stock_sz_000338_2019-06-10 stock_sz_000338 10.73 10.71 2019-06-10 11.02
[208 rows x 8 columns]
```
### factor_df
factor_df为transformer对data_df进行计算后得到的数据,设计因子即对[transformer](https://github.com/zvtvz/zvt/blob/master/zvt/factors/factor.py#L18)进行扩展,例子中用的是MacdTransformer()。
```
In [12]: factor.factor_df
Out[12]:
level high id entity_id open low timestamp close diff dea macd
entity_id timestamp
stock_sh_601318 2019-01-02 1d 54.91 stock_sh_601318_2019-01-02 stock_sh_601318 54.78 53.70 2019-01-02 53.94 NaN NaN NaN
2019-01-03 1d 55.06 stock_sh_601318_2019-01-03 stock_sh_601318 53.91 53.82 2019-01-03 54.42 NaN NaN NaN
2019-01-04 1d 55.71 stock_sh_601318_2019-01-04 stock_sh_601318 54.03 53.98 2019-01-04 55.31 NaN NaN NaN
2019-01-07 1d 55.88 stock_sh_601318_2019-01-07 stock_sh_601318 55.80 54.64 2019-01-07 55.03 NaN NaN NaN
2019-01-08 1d 54.83 stock_sh_601318_2019-01-08 stock_sh_601318 54.79 53.96 2019-01-08 54.54 NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ...
stock_sz_000338 2019-06-03 1d 11.04 stock_sz_000338_2019-06-03 stock_sz_000338 10.93 10.74 2019-06-03 10.81 -0.121336 -0.145444 0.048215
2019-06-04 1d 10.85 stock_sz_000338_2019-06-04 stock_sz_000338 10.84 10.57 2019-06-04 10.73 -0.133829 -0.143121 0.018583
2019-06-05 1d 10.92 stock_sz_000338_2019-06-05 stock_sz_000338 10.87 10.59 2019-06-05 10.59 -0.153260 -0.145149 -0.016223
2019-06-06 1d 10.71 stock_sz_000338_2019-06-06 stock_sz_000338 10.59 10.49 2019-06-06 10.65 -0.161951 -0.148509 -0.026884
2019-06-10 1d 11.05 stock_sz_000338_2019-06-10 stock_sz_000338 10.73 10.71 2019-06-10 11.02 -0.137399 -0.146287 0.017776
[208 rows x 11 columns]
```
### result_df
result_df为可用于选股器的**二维索引**DataFrame,通过对data_df或factor_df计算来实现。
该例子在计算macd之后,利用factor_df,黄白线在0轴上为True,否则为False,[具体代码](https://github.com/zvtvz/zvt/blob/master/zvt/factors/technical_factor.py#L56)
```
In [14]: factor.result_df
Out[14]:
score
entity_id timestamp
stock_sh_601318 2019-01-02 False
2019-01-03 False
2019-01-04 False
2019-01-07 False
2019-01-08 False
... ...
stock_sz_000338 2019-06-03 False
2019-06-04 False
2019-06-05 False
2019-06-06 False
2019-06-10 False
[208 rows x 1 columns]
```
result_df的格式如下:
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/result_df.png'/></p>
filter_result 为 True 或 False, score_result 取值为 0 到 1。
结合选股器和回测,整个流程如下:
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/flow.png'/></p>
## 环境设置(可选)
```
>>> from zvt import *
>>> zvt_env
{'zvt_home': '/Users/foolcage/zvt-home',
'data_path': '/Users/foolcage/zvt-home/data',
'tmp_path': '/Users/foolcage/zvt-home/tmp',
'ui_path': '/Users/foolcage/zvt-home/ui',
'log_path': '/Users/foolcage/zvt-home/logs'}
>>> zvt_config
```
* jq_username 聚宽数据用户名
* jq_password 聚宽数据密码
* smtp_host 邮件服务器host
* smtp_port 邮件服务器端口
* email_username smtp邮箱账户
* email_password smtp邮箱密码
* wechat_app_id
* wechat_app_secrect
```
>>> init_config(current_config=zvt_config, jq_username='xxx', jq_password='yyy')
```
> 通用的配置方式为: init_config(current_config=zvt_config, **kv)
### 下载历史数据(可选)
百度网盘: https://pan.baidu.com/s/1kHAxGSxx8r5IBHe5I7MAmQ 提取码: yb6c
google drive: https://drive.google.com/drive/folders/17Bxijq-PHJYrLDpyvFAm5P6QyhKL-ahn?usp=sharing
里面包含joinquant的日/周线后复权数据,个股估值,基金及其持仓数据,eastmoney的财务等数据。
把下载的数据解压到正式环境的data_path(所有db文件放到该目录下,没有层级结构)
数据的更新是增量的,下载历史数据只是为了节省时间,全部自己更新也是可以的。
#### 注册聚宽(可选)
项目数据支持多provider,在数据schema一致性的基础上,可根据需要进行选择和扩展,目前支持新浪,东财,交易所等免费数据。
#### 数据的设计上是让provider来适配schema,而不是反过来,这样即使某provider不可用了,换一个即可,不会影响整个系统的使用。
但免费数据的缺点是显而易见的:不稳定,爬取清洗数据耗时耗力,维护代价巨大,且随时可能不可用。
个人建议:如果只是学习研究,可以使用免费数据;如果是真正有意投身量化,还是选一家可靠的数据提供商。
项目支持聚宽的数据,可戳以下链接申请使用(目前可免费使用一年)
https://www.joinquant.com/default/index/sdk?channelId=953cbf5d1b8683f81f0c40c9d4265c0d
> 项目中大部分的免费数据目前都是比较稳定的,且做过严格测试,特别是东财的数据,可放心使用
> 添加其他数据提供商, 请参考[数据扩展教程](https://zvtvz.github.io/zvt/#/data_extending)
## 开发
### clone代码
```
git clone https://github.com/zvtvz/zvt.git
```
设置项目的virtual env(python>=3.6),安装依赖
```
pip3 install -r requirements.txt
pip3 install pytest
```
### 测试案例
pycharm导入工程(推荐,你也可以使用其他ide),然后pytest跑测试案例
<p align="center"><img src='https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/pytest.jpg'/></p>
大部分功能使用都可以从tests里面参考
## 贡献
期待能有更多的开发者参与到 zvt 的开发中来,我会保证尽快 Reivew PR 并且及时回复。但提交 PR 请确保
先看一下[1分钟代码规范](https://github.com/zvtvz/zvt/blob/master/code_of_conduct.md)
1. 通过所有单元测试,如若是新功能,请为其新增单元测试
2. 遵守开发规范
3. 如若需要,请更新相对应的文档
也非常欢迎开发者能为 zvt 提供更多的示例,共同来完善文档。
## 请作者喝杯咖啡
如果你觉得项目对你有帮助,可以请作者喝杯咖啡
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/alipay-cn.png" width="25%" alt="Alipay">
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat-cn.png" width="25%" alt="Wechat">
## 联系方式
加微信进群:foolcage 添加暗号:zvt
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/wechat.jpeg" width="25%" alt="Wechat">
------
微信公众号:
<img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/gongzhonghao.jpg" width="25%" alt="Wechat">
知乎专栏:
https://zhuanlan.zhihu.com/automoney
## Thanks
<p><a href=https://www.jetbrains.com/?from=zvt><img src="https://raw.githubusercontent.com/zvtvz/zvt/master/docs/imgs/jetbrains.png" width="25%" alt="jetbrains"></a></p>
| zvt | /zvt-0.10.4.tar.gz/zvt-0.10.4/README-cn.md | README-cn.md |
<div align="left">
<h1>ZvukoGram API <img src="https://zvukogram.com/design/img/dispic/zvuklogo.png" width=30 height=30></h1>
<p align="left" >
<a href="https://pypi.org/project/zvukogram/">
<img src="https://img.shields.io/pypi/v/zvukogram?style=flat-square" alt="PyPI">
</a>
<a href="https://pypi.org/project/zvukogram/">
<img src="https://img.shields.io/pypi/dm/zvukogram?style=flat-square" alt="PyPI">
</a>
</p>
</div>
A simple, yet powerful library for [ZvukoGram API](https://zvukogram.com/node/api/)
## Usage
With ``ZvukoGram API`` you can fully access the ZvukoGram API.
## Documentation
Official docs can be found on the [API's webpage](https://zvukogram.com/node/api/)
## Installation
```bash
pip install zvukogram
```
## Requirements
- ``Python 3.7+``
- ``aiohttp``
- ``pydantic``
## Features
- ``Asynchronous``
- ``Exception handling``
- ``Pydantic return model``
- ``LightWeight``
## Basic example
```python
import asyncio

from zvukogram import ZvukoGram, ZvukoGramError

api = ZvukoGram('token', 'email')


async def main():
    try:
        voices = await api.get_voices()
        print(voices['Русский'].pop().voice)
    except ZvukoGramError as exc:
        print(exc)

    generation = await api.tts(
        voice='Бот Максим',
        text='Привет!',
    )
    print(generation.file)
    audio = await generation.download()

    generation = await api.tts_long(
        voice='Бот Максим',
        text='Более длинный текст!',
    )
    while not generation.file:
        await asyncio.sleep(1)
        generation = await api.check_progress(generation.id)

    print(generation.file)


asyncio.run(main())
```
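The `download()` coroutine above is assumed to return the raw audio bytes (an assumption — check the official API docs); if so, the result could be written to disk from inside `main()` like this:
```python
# hypothetical follow-up inside main(): persist the generated audio to disk
with open('output.mp3', 'wb') as f:
    f.write(audio)
```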
Developed by Nikita Minaev (c) 2023
| zvukogram | /zvukogram-1.0.1.tar.gz/zvukogram-1.0.1/README.md | README.md |
import numpy
from sklearn.preprocessing import scale
class PAPR:
splits = dict() #创建一个新的字典
splits[2] = [0, float('inf')] #float(‘inf’)正无穷
splits[3] = [-0.43, 0.43, float('inf')]
splits[4] = [-0.67, 0, 0.67, float('inf')]
splits[5] = [-0.84, -0.25, 0.25, 0.84, float('inf')]
splits[6] = [-0.97, -0.43, 0, 0.43, 0.97, float('inf')]
splits[7] = [-1.07, -0.57, -0.18, 0.18, 0.57, 1.07, float('inf')]
splits[8] = [-1.15, -0.67, -0.32, 0, 0.32, 0.67, 1.15, float('inf')]
splits[9] = [-1.22, -0.76, -0.43, -0.14, 0.14, 0.43, 0.76, 1.22, float('inf')]
splits[10] = [-1.28, -0.84, -0.52, -0.25, 0, 0.25, 0.52, 0.84, 1.28, float('inf')]
def __init__(self,m):
self.m=m
'''
功能:利用PAPR_RW算法进行异常分数的计算
输入:data,从CSV文件读取的一维数据
输出:data_matrix,二维矩阵,每行为一个子序列
运行示例:
from PAPR import *
import numpy as np
import matplotlib.pyplot as plt
m=250 #设置子序列长度,每个数据集不一样
P=PAPR(m)
#从CSV文件读取数据,不带标签
data = np.genfromtxt("chfdb_1.csv", delimiter=',')
scores,scores_index=P.PAPR(data)
#可视化标注异常
t=scores_index[0] #得分最低的序列的下标,该序列被认为是异常
x0=range(0,data.__len__())
plt.plot(x0,data)
x1=range(t*m,t*m+m-1)
plt.plot(x1,data[t*m:t*m+m-1],color="red")
plt.show()
'''
def papr(self,data):
m=self.m #划分的序列长度
# 将数据划分成多个子序列
data_matrix = self.split_data(data)
# 对子序列分别进行归一化处理
data_matrix = scale(data_matrix, axis=1)
# 计算子序列的高斯径向基函数的宽度参数
widths = self.find_best_w(data_matrix)
matrix = self.cal_matrix(data_matrix, 6)
sim_matrix = self.cal_similarity(matrix=matrix, wc=0.3, wd=0.4, wr=0.3, length=widths.__len__(), widths=widths)
scores = self.random_walk(sim_matrix, error=0.05)
scores_index=numpy.argsort(scores) #异常分数从低到高排序
return scores,scores_index
'''
输入:data,一维数据
功能:将读取到的一维数据按照传入的子序列长度进行划分
输出:data_matrix,二维矩阵,每行为一个子序列
'''
def split_data(self,data):
index=self.m
length=data.__len__()
data_matix=list()
sequence=list()
i=0
while i<length:
sequence=data[i:i+index]
# print(sequence)
i=i+index
data_matix.append(sequence)
return data_matix
'''
值空间划分以及PAPR指标的计算
'''
def cal_matrix(self,data, k):
points = self.splits[k]
new_data = list()
for item in data:
tmp_points = list()
for i in range(k):
tmp_points.append(list())
for p in item:
for w in range(k):
if p < points[w]:
tmp_points[w].append(p)
break
tmp_matrix = numpy.zeros((k, 3)) #生成一个K行3列的全0矩阵,用于记录PAPR方法得出的Mi = [di, ci, ri]
for w in range(k):
tmp_matrix[w, 0] = len(tmp_points[w]) #记录子值空间的点个数
if tmp_matrix[w, 0] != 0:
tmp_matrix[w, 1] = numpy.mean(tmp_points[w]) #记录子值空间的点均值
tmp_matrix[w, 2] = numpy.var(tmp_points[w]) #记录子值空间的方差
new_data.append(tmp_matrix)
return numpy.array(new_data)
'''
计算相似度矩阵
#length是子序列的数量,width是计算Scij和Srij使所用到的δ
'''
def cal_similarity(self,matrix, length, wd, wc, wr, widths):
index = range(length)
sim_matrix = numpy.zeros((length, length)) #生成一个length行length列的全0矩阵
for r in index:
for c in index:
sd = self.cal_d_sim(matrix[r, :, 0], matrix[c, :, 0])
sc = self.cal_rc_sim(matrix[r, :, 1], matrix[c, :, 1], widths[r])
sr = self.cal_rc_sim(matrix[r, :, 2], matrix[c, :, 2], widths[r])
sim_matrix[r, c] = wd*sd + wc*sc + wr*sr
return sim_matrix
'''
函数功能:计算记录两点数量的向量di和dj的相似度Sdij
'''
def cal_d_sim(self,one, two):
#m是子序列one的总长度,m=∑(k=1..q)dik
m = numpy.sum(one)
#length是记录子序列特征的Mi=[di,ci,ri]的长度,即子值空间的划分数目
length = len(one)
s = 0
for l in range(length):
s += min(one[l], two[l])
return 1.0 * s / m
'''
函数功能:计算Scij和Srij,两个计算公式相同
w即δ,高斯径向基函数的半径,通过信息熵的方法可以计算出每个数据集的该值
'''
def cal_rc_sim(self,one, two, w=0.005):
return numpy.exp(-1.0 * numpy.linalg.norm(one - two, ord=2) / numpy.power(w, 2))
'''
RW模型,最终会得到一个概率分布矩阵,即异常得分
'''
def random_walk(self,sim_matrix, error=0.1):
rows, cols = sim_matrix.shape
s_matrix = numpy.zeros((rows, cols))
for r in range(rows):
totSim = 0.0
for c in range(cols):
totSim += sim_matrix[r, c]
for c in range(cols):
s_matrix[r, c] = 1.0*sim_matrix[r, c] / totSim
damping_factor = 0.1
ct = numpy.array([1.0/rows]*rows)
recursive_err = error+1
times = 0
while recursive_err > error and times < 100:
ct1 = damping_factor/rows + numpy.dot(s_matrix.T, ct)
recursive_err = numpy.linalg.norm(ct-ct1, ord=1)
times += 1
ct = ct1[:]
return ct
'''
函数功能:计算数据集的δ,高斯径向基函数的半径,通过信息熵的方法计算
'''
def find_best_w(self,data_matrix):
alist, blist = numpy.zeros(data_matrix.__len__()), numpy.zeros(data_matrix.__len__())
r_index = range(data_matrix.__len__())
gama = (5**0.5-1)/2
coe = (2**0.5)/3
for i in r_index:
min_dist, max_dist = float('inf'), -float('inf')
for j in r_index:
if i == j:
continue
dist = numpy.linalg.norm(data_matrix[i]-data_matrix[j], ord=2) #求二范数
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
alist[i], blist[i] = coe*min_dist, coe*max_dist
left, right = cal_sig(alist, blist, gama)
ent_left = cal_entropy(left)
ent_right = cal_entropy(right)
epison = 1
times = 0
while numpy.linalg.norm(alist-blist) < 1 and times < 20:
if ent_left < ent_right:
blist, right = right.copy(), left.copy()
ent_right = ent_left
left = alist + (1-gama)*(blist-alist)
ent_left = cal_entropy(left)
else:
alist, left = left.copy(), right.copy()
ent_left = ent_right
right = alist + gama*(blist-alist)
ent_right = cal_entropy(right)
times += 1
if ent_left < ent_right:
return left
else:
return right
def cal_sig(alist, blist, gama):
length = len(alist)
index = range(length)
left, right = numpy.zeros(length), numpy.zeros(length)
for i in index:
left[i] = alist[i] + (1-gama)*(blist[i]-alist[i])
right[i] = alist[i] + gama*(blist[i]-alist[i])
return left, right
'''
计算信信息熵
'''
def cal_entropy(list):
total = sum(list)
list /= total
log_list = numpy.log(list)
return -numpy.dot(list, log_list) | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/PAPR.py | PAPR.py |
import numpy as np
from sklearn.neighbors import NearestNeighbors
from math import *
'''
说明:RDOS算法,根据K近邻集合、逆近邻集合以及共享近邻集合进行对象的核密度估计,从而得出异常分数
传入参数:k-近邻的个数,h-高斯核的宽度参数
'''
class RDOS:
#初始化,传入参数并设置默认值
def __init__(self,n_outliers=1,n_neighbors=5,h=2):
self.n_outliers=n_outliers
self.n_neighbors=n_neighbors
self.h=h
'''
RDOS
输入:data-数据集,n-返回的异常个数(默认为1),n_neighbors-近邻个数,h-高斯核函数的宽度参数
输出:返回异常的分数以及按异常分数排序好的下标,同时会根据预设的异常个数输出其下标及得分
运行示例:
import pandas as pd
from RDOS import *
#如果是带有Lable的数据集则需要先去除Label列
data=pd.read_csv("hbk.csv",sep=',')
#data=data.drop('Label', axis=1)
data=np.array(data)
print(data.shape)
#调用RDOS算法,传入预设的异常个数
rdos=RDOS(n_outliers=10)
RDOS_index,RDOS_score=rdos.rdos(data)
'''
def rdos(self,data):
n_outliers = self.n_outliers
n_neighbors=self.n_neighbors
h=self.h
n=data.shape[0] #n是数据集的样例数量
d=data.shape[1] #d是数据集的维度,即属性个数
#规范输入的参数
if n_neighbors>=n or n_neighbors<1:
print('n_neighbors input must be less than number of observations and greater than 0')
exit()
outliers=list()
#存储每个数据对象的近邻下标
Sknn= list()
Srnn= list()
Ssnn = list()
S= list()
P= list()
#计算Sknn
for X in data:
Sknn_temp = self.KNN(data, [X], return_distance=False)
Sknn_temp = np.squeeze(Sknn_temp)
Sknn.append(Sknn_temp[1:])
S.append(list(Sknn_temp[1:])) # X的所有近邻集合
#计算Srnn
for i in range(n):
Srnn_temp = list() # 记录每个数据对象的rnn
for item in Sknn[i]:
item_neighbors = Sknn[item]
# 如果X的近邻的k近邻集合中也包含X,说明该近邻是X的逆近邻
if i in item_neighbors:
Srnn_temp.append(item)
Srnn.append(Srnn_temp)
S[i].extend(Srnn_temp) # X的所有近邻集合
#计算Ssnn
for i in range(n):
Ssnn_temp = list()
for j in Sknn[i]:
kneighbor_rnn = Srnn[j] # k近邻的逆近邻集合
Ssnn_temp.extend(kneighbor_rnn)
Ssnn_temp = list(set(Ssnn_temp)) # 去重
if i in Ssnn_temp:
Ssnn_temp.remove(i) # 删除X本身下标
Ssnn.append(Ssnn_temp) # X的共享近邻集合
S[i].extend(Ssnn_temp) # X的所有近邻集合
S[i] = list(set(S[i])) # 去重
P.append(self.getKelnelDensity(data, i, S[i]))#计算论文中的P值
'''
#计算每个数据对象的近邻集合
for i in range(n):
Sknn_temp=self.KNN(data,[data[i]],return_distance=False)
Sknn_temp = np.squeeze(Sknn_temp)
print("Sknn:",Sknn_temp[1:])
Sknn.append(Sknn_temp[1:]) #需要除去其本身,例:[[11 29 7 26 24]]→[29 7 26 24]
Srnn.append(self.RNN(data,[data[i]],return_distance=False)) #例:[29 24]
Ssnn_temp=list()
for j in Sknn[i]:
kneighbor_rnn = self.RNN(data, [data[j]], return_distance=False) #k近邻的逆近邻集合
Ssnn_temp.extend(kneighbor_rnn)
Ssnn_temp = list(set(Ssnn_temp)) # 去重
if i in Ssnn_temp:
Ssnn_temp.remove(i) # 删除X本身下标
Ssnn.append(Ssnn_temp) #X的共享近邻集合
S.append(list(set(Ssnn_temp))) #X的所有近邻集合
'''
#print("S:",S[i]) #打印
#计算异常得分RDOS
RDOS_score=list()
for i in range(n):
S_RDOS=0
for j in S[i]: #计算近邻集合的RDOS总分数
S_RDOS=S_RDOS+P[j]
RDOS_score.append(S_RDOS/(len(S[i])*P[i]))
RDOS_index= np.argsort(RDOS_score) #对异常分数进行排序,从低到高,返回的是数组的索引
return RDOS_score,RDOS_index[::-1] #返回异常的得分及其下标(下标由得分从高到低排序)
'''
找出数据集X中每个对象的的k近邻并返回序号(当k>1时,会包括其本身)
X可以是一个点或者一组数据,data是所有数据
return_distance=True时会同时返回距离
'''
def KNN(self,data,X,return_distance=False):
neigh = NearestNeighbors(n_neighbors=self.n_neighbors)
neigh.fit(data)
return neigh.kneighbors(X, return_distance=return_distance)
'''
找出X的k逆近邻集合并返回序号
X是一个数据对象,data是所有数据
return_distance=True时会同时返回距离
def RNN(self, data, X, return_distance=False):
neigh = NearestNeighbors(self.n_neighbors)
neigh.fit(data)
X_neighbors=neigh.kneighbors(X, return_distance=return_distance)
X_Srnn=list() #存储逆近邻的下标集合
# 遍历X的近邻集合寻找其逆近邻集合,item为近邻的序号
index = X_neighbors[0, 1:]
X_index = X_neighbors[0, 0] #X的下标
#近邻的下标
for item in index:
item_neighbors = neigh.kneighbors([data[item]], return_distance=False) #寻找近邻的k近邻集合
# 如果X的近邻的k近邻集合中也包含X,说明该近邻是X的逆近邻
if X_index in item_neighbors:
X_Srnn.append(item)
return np.array(X_Srnn)
'''
'''
计算核密度
输入:data-数据集,X_index-数据对象的下标,S近邻集合
输出:论文中的P
'''
def getKelnelDensity(self,data,X_index,S):
h=self.h #高斯核函数参数
d=data.shape[1] #数据的属性个数
S_X=list(S)
S_X.append(X_index)
X_guassian =0
for i in S_X:
X_guassian+=(1/((2*pi)**(d/2)))*exp(-(np.linalg.norm(data[i]-data[X_index]))/(2*h**2))
S_len=S.__len__()
P=1/(S_len+1)*(1/h**d)*X_guassian
return P | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/RDOS.py | RDOS.py |
import numpy as np
from zw_outliersdetec.__sax_via_window import *
'''
函数功能:计算欧式距离
'''
def euclidean(a, b):
"""Compute a Euclidean distance value."""
return np.sqrt(np.sum((a-b)**2))
'''
类功能:实现HOT_SAX算法,找出异常时间序列
'''
class HOTSAX:
#初始化,num_discords-预设输出异常的数量
def __init__(self,num_discords=2):
self.num_discords=num_discords
'''
功能:利用HOT-SAX算法找出异常时间序列的位置信息
输入:series-数据集,win-size-自行设置的窗口大小(可理解为序列长度,默认为100),其他参数默认
输出:根据设定的异常个数输出异常的开始位置以及对应的分数,表示从该位置开始的长度为win-size的序列被认为是异常
运行示例:
import numpy as np
from HOTSAX import *
#ECG数据,不带标签,只有一列值
data = np.genfromtxt("ECG0606_1.csv", delimiter=',')
hs=HOTSAX(2)
discords,win_size =hs.find_discords_hotsax(data)
print(discords,win_size)
'''
def hotsax(self,series, win_size=100, a_size=3,
paa_size=3, z_threshold=0.01):
"""HOT-SAX-driven discords discovery."""
discords = list()
globalRegistry = set()
while (len(discords) < self.num_discords):
bestDiscord =self.find_best_discord_hotsax(series, win_size, a_size,
paa_size, z_threshold,
globalRegistry)
if -1 == bestDiscord[0]:
break
discords.append(bestDiscord)
mark_start = bestDiscord[0] - win_size
if 0 > mark_start:
mark_start = 0
mark_end = bestDiscord[0] + win_size
'''if len(series) < mark_end:
mark_end = len(series)'''
for i in range(mark_start, mark_end):
globalRegistry.add(i)
return discords,win_size #返回设定异常个数的异常开始位置和窗口大小
def find_best_discord_hotsax(self,series, win_size, a_size, paa_size,
znorm_threshold, globalRegistry): # noqa: C901
"""Find the best discord with hotsax."""
"""[1.0] get the sax data first"""
sax_none = sax_via_window(series, win_size, a_size, paa_size, "none", 0.01)
"""[2.0] build the 'magic' array"""
magic_array = list()
for k, v in sax_none.items():
magic_array.append((k, len(v)))
"""[2.1] sort it desc by the key"""
m_arr = sorted(magic_array, key=lambda tup: tup[1])
"""[3.0] define the key vars"""
bestSoFarPosition = -1
bestSoFarDistance = 0.
distanceCalls = 0
visit_array = np.zeros(len(series), dtype=int)
"""[4.0] and we are off iterating over the magic array entries"""
for entry in m_arr:
"""[5.0] some moar of teh vars"""
curr_word = entry[0]
occurrences = sax_none[curr_word]
"""[6.0] jumping around by the same word occurrences makes it easier to
nail down the possibly small distance value -- so we can be efficient
and all that..."""
for curr_pos in occurrences:
if curr_pos in globalRegistry:
continue
"""[7.0] we don't want an overlapping subsequence"""
mark_start = curr_pos - win_size
mark_end = curr_pos + win_size
visit_set = set(range(mark_start, mark_end))
"""[8.0] here is our subsequence in question"""
cur_seq = znorm(series[curr_pos:(curr_pos + win_size)],
znorm_threshold)
"""[9.0] let's see what is NN distance"""
nn_dist = np.inf
do_random_search = 1
"""[10.0] ordered by occurrences search first"""
for next_pos in occurrences:
"""[11.0] skip bad pos"""
if next_pos in visit_set:
continue
else:
visit_set.add(next_pos)
"""[12.0] distance we compute"""
dist = euclidean(cur_seq, znorm(series[next_pos:(
next_pos+win_size)], znorm_threshold))
distanceCalls += 1
"""[13.0] keep the books up-to-date"""
if dist < nn_dist:
nn_dist = dist
if dist < bestSoFarDistance:
do_random_search = 0
break
"""[13.0] if not broken above,
we shall proceed with random search"""
if do_random_search:
"""[14.0] build that random visit order array"""
curr_idx = 0
for i in range(0, (len(series) - win_size)):
if not(i in visit_set):
visit_array[curr_idx] = i
curr_idx += 1
it_order = np.random.permutation(visit_array[0:curr_idx])
curr_idx -= 1
"""[15.0] and go random"""
while curr_idx >= 0:
rand_pos = it_order[curr_idx]
curr_idx -= 1
dist = euclidean(cur_seq, znorm(series[rand_pos:(
rand_pos + win_size)], znorm_threshold))
distanceCalls += 1
"""[16.0] keep the books up-to-date again"""
if dist < nn_dist:
nn_dist = dist
if dist < bestSoFarDistance:
nn_dist = dist
break
"""[17.0] and BIGGER books"""
if (nn_dist > bestSoFarDistance) and (nn_dist < np.inf):
bestSoFarDistance = nn_dist
bestSoFarPosition = curr_pos
return (bestSoFarPosition, bestSoFarDistance) | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/HOTSAX.py | HOTSAX.py |
from __future__ import division
import numpy as np
from warnings import warn
from sklearn.utils.fixes import euler_gamma
from scipy.sparse import issparse
import numbers
from sklearn.externals import six
from sklearn.tree import ExtraTreeRegressor
from sklearn.utils import check_random_state, check_array
from sklearn.utils.validation import check_is_fitted
from sklearn.base import OutlierMixin
from sklearn.ensemble.bagging import BaseBagging
__all__ = ["iForest"]
INTEGER_TYPES = (numbers.Integral, np.integer)
class iForest(BaseBagging, OutlierMixin):
"""Isolation Forest Algorithm
Return the anomaly score of each sample using the IsolationForest algorithm
The IsolationForest 'isolates' observations by randomly selecting a feature
and then randomly selecting a split value between the maximum and minimum
values of the selected feature.
Since recursive partitioning can be represented by a tree structure, the
number of splittings required to isolate a sample is equivalent to the path
length from the root node to the terminating node.
This path length, averaged over a forest of such random trees, is a
measure of normality and our decision function.
Random partitioning produces noticeably shorter paths for anomalies.
Hence, when a forest of random trees collectively produce shorter path
lengths for particular samples, they are highly likely to be anomalies.
Read more in the :ref:`User Guide <isolation_forest>`.
.. versionadded:: 0.18
Parameters
----------
n_estimators : int, optional (default=100)
The number of base estimators in the ensemble.
max_samples : int or float, optional (default="auto")
The number of samples to draw from X to train each base estimator.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples.
- If "auto", then `max_samples=min(256, n_samples)`.
If max_samples is larger than the number of samples provided,
all samples will be used for all trees (no sampling).
contamination : float in (0., 0.5), optional (default=0.1)
The amount of contamination of the data set, i.e. the proportion
of outliers in the data set. Used when fitting to define the threshold
on the decision function. If 'auto', the decision function threshold is
determined as in the original paper.
.. versionchanged:: 0.20
The default value of ``contamination`` will change from 0.1 in 0.20
to ``'auto'`` in 0.22.
max_features : int or float, optional (default=1.0)
The number of features to draw from X to train each base estimator.
- If int, then draw `max_features` features.
- If float, then draw `max_features * X.shape[1]` features.
bootstrap : boolean, optional (default=False)
If True, individual trees are fit on random subsets of the training
data sampled with replacement. If False, sampling without replacement
is performed.
n_jobs : int or None, optional (default=None)
The number of jobs to run in parallel for both `fit` and `predict`.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
behaviour : str, default='old'
Behaviour of the ``decision_function`` which can be either 'old' or
'new'. Passing ``behaviour='new'`` makes the ``decision_function``
change to match other anomaly detection algorithm API which will be
the default behaviour in the future. As explained in details in the
``offset_`` attribute documentation, the ``decision_function`` becomes
dependent on the contamination parameter, in such a way that 0 becomes
its natural threshold to detect outliers.
.. versionadded:: 0.20
``behaviour`` is added in 0.20 for back-compatibility purpose.
.. deprecated:: 0.20
``behaviour='old'`` is deprecated in 0.20 and will not be possible
in 0.22.
.. deprecated:: 0.22
``behaviour`` parameter will be deprecated in 0.22 and removed in
0.24.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
verbose : int, optional (default=0)
Controls the verbosity of the tree building process.
Attributes
----------
estimators_ : list of DecisionTreeClassifier
The collection of fitted sub-estimators.
estimators_samples_ : list of arrays
The subset of drawn samples (i.e., the in-bag samples) for each base
estimator.
max_samples_ : integer
The actual number of samples
offset_ : float
Offset used to define the decision function from the raw scores.
We have the relation: ``decision_function = score_samples - offset_``.
Assuming behaviour == 'new', ``offset_`` is defined as follows.
When the contamination parameter is set to "auto", the offset is equal
to -0.5 as the scores of inliers are close to 0 and the scores of
outliers are close to -1. When a contamination parameter different
than "auto" is provided, the offset is defined in such a way we obtain
the expected number of outliers (samples with decision function < 0)
in training.
Assuming the behaviour parameter is set to 'old', we always have
``offset_ = -0.5``, making the decision function independent from the
contamination parameter.
References
----------
.. [1] Liu, Fei Tony, Ting, Kai Ming and Zhou, Zhi-Hua. "Isolation forest."
Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on.
.. [2] Liu, Fei Tony, Ting, Kai Ming and Zhou, Zhi-Hua. "Isolation-based
anomaly detection." ACM Transactions on Knowledge Discovery from
Data (TKDD) 6.1 (2012): 3.
"""
def __init__(self,
n_estimators=100,
max_samples="auto",
contamination="legacy",
max_features=1.,
bootstrap=False,
n_jobs=None,
behaviour='old',
random_state=None,
verbose=0):
super(iForest, self).__init__(
base_estimator=ExtraTreeRegressor(
max_features=1,
splitter='random',
random_state=random_state),
# here above max_features has no links with self.max_features
bootstrap=bootstrap,
bootstrap_features=False,
n_estimators=n_estimators,
max_samples=max_samples,
max_features=max_features,
n_jobs=n_jobs,
random_state=random_state,
verbose=verbose)
self.behaviour = behaviour
self.contamination = contamination
'''
功能:利用IForest算法来计算异常分数
输入:data_train-从CSV文件读取的数据,不含标签
输出:返回每个数据对象的预测标签all_pred,-1则被认定为异常,1则是正常数据
运行示例:
import pandas as pd
from iForest import *
data_train = pd.read_csv('hbk.csv', sep=',')
# 选取特征,不使用标签,如带有标签需除去
#data_train=data_train.drop('Label', axis=1)
#print(data_train.columns)
#n_estimators是隔离树的数量
ift = iForest(n_estimators=100,
behaviour="new",
contamination="auto",
n_jobs=1, # 使用全部cpu
# verbose=2,
)
#调用IForest算法预测数据对象的Label
Label,Index=ift.IForest(data_train)
print(Label)
'''
def iforest(self,data_train):
# 训练
self.fit(data_train)
shape = data_train.shape[0]
batch = 10 ** 6
X_cols = data_train.columns
all_pred_lable = []
all_pred_score = []
for i in range(int(shape / batch + 1)):
start = i * batch
end = (i + 1) * batch
test = data_train[X_cols][start:end]
# 预测
pred_label, pred_score = self.predict(test)
all_pred_lable.extend(pred_label)
all_pred_score.extend(pred_score)
return all_pred_lable, np.argsort(all_pred_score) # 返回阈值限定后的标签和异常分数从小到大排序的数组下标
#data_train.to_csv('outliers.csv', columns=["pred", ], header=False)
def _set_oob_score(self, X, y):
raise NotImplementedError("OOB score not supported by iforest")
def fit(self, X, y=None, sample_weight=None):
"""Fit estimator.
Parameters
----------
X : array-like or sparse matrix, shape (n_samples, n_features)
The input samples. Use ``dtype=np.float32`` for maximum
efficiency. Sparse matrices are also supported, use sparse
``csc_matrix`` for maximum efficiency.
sample_weight : array-like, shape = [n_samples] or None
Sample weights. If None, then samples are equally weighted.
y : Ignored
not used, present for API consistency by convention.
Returns
-------
self : object
"""
if self.contamination == "legacy":
warn('default contamination parameter 0.1 will change '
'in version 0.22 to "auto". This will change the '
'predict method behavior.',
FutureWarning)
self._contamination = 0.1
else:
self._contamination = self.contamination
if self.behaviour == 'old':
warn('behaviour="old" is deprecated and will be removed '
'in version 0.22. Please use behaviour="new", which '
'makes the decision_function change to match '
'other anomaly detection algorithm API.',
FutureWarning)
X = check_array(X, accept_sparse=['csc'])
if issparse(X):
#判断x是否为sparse类型
# Pre-sort indices to avoid that each individual tree of the
# ensemble sorts the indices.
X.sort_indices()
rnd = check_random_state(self.random_state)
y = rnd.uniform(size=X.shape[0])
# ensure that max_sample is in [1, n_samples]:
#保证采样集的大小规范
n_samples = X.shape[0]
if isinstance(self.max_samples, six.string_types):
if self.max_samples == 'auto':
max_samples = min(256, n_samples)
else:
raise ValueError('max_samples (%s) is not supported.'
'Valid choices are: "auto", int or'
'float' % self.max_samples)
elif isinstance(self.max_samples, INTEGER_TYPES):
if self.max_samples > n_samples:
warn("max_samples (%s) is greater than the "
"total number of samples (%s). max_samples "
"will be set to n_samples for estimation."
% (self.max_samples, n_samples))
max_samples = n_samples
else:
max_samples = self.max_samples
else: # float
if not (0. < self.max_samples <= 1.):
raise ValueError("max_samples must be in (0, 1], got %r"
% self.max_samples)
max_samples = int(self.max_samples * X.shape[0])
self.max_samples_ = max_samples
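# Height limit for each isolation tree: ceil(log2(max_samples)) is roughly the
# average height of a binary search tree over max_samples points, so growing
# deeper than that adds little isolation information (Liu et al., 2012).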
max_depth = int(np.ceil(np.log2(max(max_samples, 2))))
super(iForest, self)._fit(X, y, max_samples,
max_depth=max_depth,
sample_weight=None)
if self.behaviour == 'old':
# in this case, decision_function = 0.5 + self.score_samples(X):
if self._contamination == "auto":
raise ValueError("contamination parameter cannot be set to "
"'auto' when behaviour == 'old'.")
self.offset_ = -0.5
self._threshold_ = np.percentile(self.decision_function(X),
100. * self._contamination)
return self
# else, self.behaviour == 'new':
if self._contamination == "auto":
# 0.5 plays a special role as described in the original paper.
# we take the opposite as we consider the opposite of their score.
self.offset_ = -0.5
return self
# else, define offset_ wrt contamination parameter, so that the
# threshold_ attribute is implicitly 0 and is not needed anymore:
# np.percentile computes the requested percentile of the score array
self.offset_ = np.percentile(self.score_samples(X),
100. * self._contamination)
return self
def predict(self, X):
# Use the anomaly score to predict whether a sample is an outlier
"""Predict if a particular sample is an outlier or not.
Parameters
----------
X : array-like or sparse matrix, shape (n_samples, n_features)
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
is_inlier : array, shape (n_samples,)
For each observation, tells whether or not (+1 or -1) it should
be considered as an inlier according to the fitted model.
"""
check_is_fitted(self, ["offset_"])
X = check_array(X, accept_sparse='csr')
is_inlier = np.ones(X.shape[0], dtype=int) # start with every sample marked as an inlier (+1)
threshold = self.threshold_ if self.behaviour == 'old' else 0
is_inlier[self.decision_function(X) < threshold] = -1 # samples whose score falls below the threshold are anomalies and are marked -1
return is_inlier, self.decision_function(X)
def decision_function(self, X):
# Return the anomaly score of X shifted by the offset; predict() compares it against the threshold
"""Average anomaly score of X of the base classifiers.
The anomaly score of an input sample is computed as
the mean anomaly score of the trees in the forest.
The measure of normality of an observation given a tree is the depth
of the leaf containing this observation, which is equivalent to
the number of splittings required to isolate this point. In case of
several observations n_left in the leaf, the average path length of
a n_left samples isolation tree is added.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
scores : array, shape (n_samples,)
The anomaly score of the input samples.
The lower, the more abnormal. Negative scores represent outliers,
positive scores represent inliers.
"""
# We subtract self.offset_ to make 0 be the threshold value for being
# an outlier:
return self.score_samples(X) - self.offset_
def score_samples(self, X):
# Compute the anomaly score following the algorithm in the original paper
"""Opposite of the anomaly score defined in the original paper.
The anomaly score of an input sample is computed as
the mean anomaly score of the trees in the forest.
The measure of normality of an observation given a tree is the depth
of the leaf containing this observation, which is equivalent to
the number of splittings required to isolate this point. In case of
several observations n_left in the leaf, the average path length of
a n_left samples isolation tree is added.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
scores : array, shape (n_samples,)
The anomaly score of the input samples.
The lower, the more abnormal.
"""
# code structure from ForestClassifier/predict_proba
check_is_fitted(self, ["estimators_"])
# Check that the input data is valid
X = check_array(X, accept_sparse='csr')
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
# number of samples in the input data set
n_samples = X.shape[0]
n_samples_leaf = np.zeros((n_samples, self.n_estimators), order="f")
# zero matrix with n_samples rows and n_estimators columns; order="f" stores it column-major (Fortran order) rather than row-major (C order)
depths = np.zeros((n_samples, self.n_estimators), order="f")
if self._max_features == X.shape[1]:
subsample_features = False
else:
subsample_features = True
# enumerate() gives us the index and the value at the same time;
# zip() pairs the corresponding elements of its iterables into tuples, here each fitted tree with its feature subset
for i, (tree, features) in enumerate(zip(self.estimators_,
self.estimators_features_)):
# restrict the data set to the features used by this tree
if subsample_features:
X_subset = X[:, features]
else:
X_subset = X
# tree.apply(X) returns the index of the leaf each sample ends up in
leaves_index = tree.apply(X_subset)
node_indicator = tree.decision_path(X_subset) # decision_path(X) returns the decision path of each sample
n_samples_leaf[:, i] = tree.tree_.n_node_samples[leaves_index] # number of training samples in the leaf reached by each sample
depths[:, i] = np.ravel(node_indicator.sum(axis=1)) # path length of each sample in this tree
depths[:, i] -= 1
# depths now holds each sample's traversal path length in every tree
depths += _average_path_length(n_samples_leaf)
# compute the anomaly score as defined in the paper
scores = 2 ** (-depths.mean(axis=1) / _average_path_length(
self.max_samples_))
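# This is the score from the paper, s(x, psi) = 2 ** (-E[h(x)] / c(psi)):
# depths.mean(axis=1) estimates E[h(x)] and _average_path_length(self.max_samples_) is c(psi).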
# Take the opposite of the scores as bigger is better (here less abnormal); the returned value is negative and lower values are more abnormal
return -scores
@property
def threshold_(self):
if self.behaviour != 'old':
raise AttributeError("threshold_ attribute does not exist when "
"behaviour != 'old'")
warn("threshold_ attribute is deprecated in 0.20 and will"
" be removed in 0.22.", DeprecationWarning)
return self._threshold_
# corresponds to the c(φ) function in the paper
def _average_path_length(n_samples_leaf):
""" The average path length in a n_samples iTree, which is equal to
the average path length of an unsuccessful BST search since the
latter has the same structure as an isolation tree.
Parameters
----------
n_samples_leaf : array-like, shape (n_samples, n_estimators), or int.
The number of training samples in each test sample leaf, for
each estimators.
Returns
-------
average_path_length : array, same shape as n_samples_leaf
"""
# isinstance() checks whether an object is of a given type and returns a boolean
if isinstance(n_samples_leaf, INTEGER_TYPES):
if n_samples_leaf <= 1:
return 1.
else:
return 2. * (np.log(n_samples_leaf - 1.) + euler_gamma) - 2. * (
n_samples_leaf - 1.) / n_samples_leaf
else:
n_samples_leaf_shape = n_samples_leaf.shape
n_samples_leaf = n_samples_leaf.reshape((1, -1))
average_path_length = np.zeros(n_samples_leaf.shape)
mask = (n_samples_leaf <= 1)
not_mask = np.logical_not(mask)
average_path_length[mask] = 1.
average_path_length[not_mask] = 2. * (
np.log(n_samples_leaf[not_mask] - 1.) + euler_gamma) - 2. * (
n_samples_leaf[not_mask] - 1.) / n_samples_leaf[not_mask]
return average_path_length.reshape(n_samples_leaf_shape) | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/iForest.py | iForest.py |
import numpy as np
import math
from operator import add
from zw_outliersdetec import __dimension_reduction as dim_red
class FastVOA:
# t is the number of random hyperplane projections
def __init__(self,t):
self.t=t
#Algorithm 3 FirstMomentEstimator(L; t; n)
def __first_moment_estimator(self,projected, t, n):
f1 = [0] * n
for i in range(0, t):
cl = [0] * n
cr = [0] * n
li = projected[i]
for j in range(0, n):
idx = li[j][0]
cl[idx] = j # originally: cl[idx] = j - 1
cr[idx] = n - 1 - cl[idx]
for j in range(0, n):
f1[j] += cl[j] * cr[j]
return list(map(lambda x: x * ((2 * math.pi) / (t * (n - 1) * (n - 2))), f1))
#Algorithm 4 FrobeniusNorm(L; t; n)
def __frobenius_norm(self,projected, t, n):
f2 = [0] * n
sl = np.random.choice([-1, 1], size=(n,), p=None)
sr = np.random.choice([-1, 1], size=(n,), p=None)
for i in range(0, t):
amsl = [0] * n
amsr = [0] * n
li = projected[i]
for j in range(1, n):
idx1 = li[j][0]
idx2 = li[j - 1][0]
amsl[idx1] = amsl[idx2] + sl[idx2]
for j in range(n - 2, -1, -1):
idx1 = li[j][0]
idx2 = li[j + 1][0]
amsr[idx1] = amsr[idx2] + sr[idx2]
for j in range(0, n):
f2[j] += amsl[j] * amsr[j]
return f2
#Algorithm 1 FastVOA(S; t; s1; s2)
'''
Purpose: compute the angle-based outlier score of every data object.
Input: train - data without labels, n - number of data objects, t - number of random hyperplane projections, s1, s2 - repetition parameters.
Output: the angle-variance score of every data object (scores) and the indices sorted by score (scores_index).
Example:
import pandas as pd
from FastVOA import *
# Read the data; labels are not needed
data = pd.read_csv('isolet.csv', sep=',')
ytrain = data.iloc[:, -1]
train = data.drop('Label', axis=1)
# Project the data onto random hyperplanes
DIMENSION = 600
t = DIMENSION
n = train.shape[0]
print(n)
# Run FastVOA to compute the angle-based scores
fv = FastVOA(t)
scores, scores_index = fv.fastvoa(train, n, t, 1, 1)
'''
def fastvoa(self,train, n, t, s1, s2):
projected = dim_red.random_projection(train, t)
f1 = self.__first_moment_estimator(projected, t, n)
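# FirstMomentEstimator (f1) approximates the first moment of the angle distribution seen
# from each point; the loops below use the random-sign FrobeniusNorm sketch to estimate the
# second moment, and var = f2 - f1**2 (computed further down) is the variance-of-angles score.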
y = []
for i in range(0, s2):
s = [0] * n
for j in range(0, s1):
result = list(map(lambda x: x ** 2, self.__frobenius_norm(projected, t, n)))
s = list(map(add, s, result))
s = list(map(lambda x: x / s1, s)) # s has length n (because result has length n)
y.append(s) # each s has length n, so y ends up with s2 rows and n columns
y = list(map(list, zip(*y))) # transpose y so that each row corresponds to one data object
f2 = []
for i in range(0, n):
f2.append(np.average(y[i])) # average over the s2 repetitions
var = [0] * n
for i in range(0, n):
f2[i] = (4 * (math.pi ** 2) / (t * (t - 1) * (n - 1) * (n - 2))) * f2[i] - (2 * math.pi * f1[i]) / (t - 1)
var[i] = f2[i] - (f1[i] ** 2)
# sort the scores
scores = var
scores_index = np.argsort(scores) # indices sorted by angle variance in ascending order
return scores, scores_index[::-1] # return the anomaly scores and the indices reversed (from high to low variance) | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/FastVOA.py | FastVOA.py
import numpy
from interval import *
from math import *
class IntervalSets:
# m is the subsequence length; it can be set by the caller and defaults to 100
def __init__(self,m=100):
self.m=m
'''
Purpose: compute the structural anomaly score of every subsequence with the IntervalSets method.
Input: data - one-dimensional data read from a CSV file, without labels.
Output: the structural anomaly score of every subsequence and the indices sorted by score.
Example:
import numpy as np
import matplotlib.pyplot as plt
from IntervalSets import *
m = 400 # subsequence length; 400 works best for the ECG data set, 250 for chfdb
# create the detector
IS = IntervalSets(m)
# read the data from a CSV file
data = np.genfromtxt('ECG108_2.csv', delimiter=',')
# run IntervalSets to find anomalies
SAS, SAS_index = IS.intervalsets(data)
print(SAS) # print the structural anomaly scores SAS
print("The most likely anomalous subsequence index is:", SAS_index[len(SAS_index)-1])
# visualise and highlight the anomaly
x0 = range(0, data.__len__())
plt.plot(x0, data)
t = SAS_index[len(SAS_index)-1]
x1 = range(t*m, t*m+m-1)
plt.plot(x1, data[t*m:t*m+m-1], color="red")
plt.text(100, 1, "m=400", size=15, alpha=0.8)
plt.show()
'''
def intervalsets(self,data):
data_matrix = self.split_data(data) # split the data into subsequences
n = len(data_matrix) # number of subsequences
m = self.m # subsequence length
Sp=self.cal_Sp(data_matrix)
SA=self.cal_SA(data_matrix)
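# Score construction (directly from the loops below): S_i = sum_j (0.5*SA_ij + 0.5*Sp_ij)
# aggregates each subsequence's similarity to all others, and SAS_i = (1/n) * sum_j (S_i - S_j)^2,
# so subsequences whose total similarity deviates most from the rest get the highest score.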
# compute the structural anomaly score of every subsequence
S = list() # per-subsequence similarity sums
SAS = list() # final anomaly scores, one per subsequence
for i in range(n):
S_i=0
for j in range(n):
S_i=S_i+(0.5*SA[i][j]+0.5*Sp[i][j])
S.append(S_i)
for i in range(n):
SAS_i =0
for j in range(n):
SAS_i=SAS_i+(S[i]-S[j])**2
SAS.append(SAS_i/n)
# sort the scores
SAS_index = numpy.argsort(SAS)
return SAS,SAS_index
'''
Input: data_matrix - two-dimensional data, one subsequence per row.
Purpose: helper method that computes the probability similarity Sp between subsequences; boundary is the relative width of the interval borders.
Output: Sp - a two-dimensional matrix whose row i holds the Sp_ij values of subsequence i against every other subsequence.
'''
def cal_Sp(self,data_matrix,boundary=0.2):
Sp=list()
Amin = list() # minimum value of each subsequence
Amax = list() # maximum value of each subsequence
Pmin = list() # probability mass near the lower border of each subsequence
Pmax = list() # probability mass near the upper border of each subsequence
n = len(data_matrix) # number of subsequences
widths=self.find_best_w(data_matrix)
# determine the value interval of each subsequence
for i in range(n):
Amin.append(min(data_matrix[i])) # subsequence minimum
Amax.append(max(data_matrix[i])) # subsequence maximum
# probability that a point falls inside the border intervals
for i in range(n):
count_min=0
count_max = 0
for item in data_matrix[i]:
if item>=Amin[i] and item<=Amin[i]+boundary*(Amax[i]-Amin[i]):
count_min=count_min+1
if item>=Amax[i]-boundary*(Amax[i]-Amin[i]) and item<=Amax[i]:
count_max = count_max + 1
Pmin.append(count_min/self.m)
Pmax.append(count_max/self.m)
# compute Sp from the border point-distribution probabilities
for i in range(n):
Sp_i=list()
for j in range(n):
if i==j:
Sp_i.append(1)
else:
p=exp(-((Pmin[i]-Pmin[j])**2+(Pmax[i]-Pmax[j])**2)/widths[i]**2)
#p=numpy.exp(-1.0 * numpy.linalg.norm(one - two, ord=2) / numpy.power(w, 2))
Sp_i.append(p)
Sp.append(Sp_i)
return Sp
'''
Input: data_matrix - two-dimensional data, one subsequence per row.
Purpose: helper method that computes the interval (amplitude) similarity SA between subsequences.
Output: SA - a two-dimensional matrix whose row i holds the SA_ij values of subsequence i against every other subsequence.
'''
def cal_SA(self, data_matrix):
Amin = list() # minimum value of each subsequence
Amax = list() # maximum value of each subsequence
SA = list() # interval similarities
n = len(data_matrix) # number of subsequences
for i in range(n):
Amin.append(min(data_matrix[i])) # subsequence minimum
Amax.append(max(data_matrix[i])) # subsequence maximum
for i in range(n):
SA_i=list()
A_i=Interval(Amin[i],Amax[i])
#print(A_i)
for j in range(n):
A_j=Interval(Amin[j],Amax[j])
if not A_i.overlaps(A_j): # case 1: the intervals do not intersect
SA_i.append(0)
else: # case 2: the intervals intersect
A_ij = A_i.join(A_j) # merge the two intervals
a=((A_i.upper_bound-A_i.lower_bound)+(A_j.upper_bound-A_j.lower_bound)-(A_ij.upper_bound-A_ij.lower_bound))/(A_ij.upper_bound-A_ij.lower_bound)
SA_i.append(a)
SA.append(SA_i)
return SA
'''
Input: data - one-dimensional data.
Purpose: split the one-dimensional data into subsequences of the configured length.
Output: data_matrix - a two-dimensional matrix with one subsequence per row.
'''
def split_data(self, data):
index = self.m
length = data.__len__()
data_matix = list()
sequence = list()
i = 0
while i < length:
sequence = data[i:i + index]
# print(sequence)
i = i + index
data_matix.append(sequence)
return data_matix
'''
Purpose: compute δ, the radius (width) of the Gaussian radial basis function, via an
information-entropy based search.
Returns an array with one width per subsequence.
'''
def find_best_w(self, data_matrix):
alist, blist = numpy.zeros(data_matrix.__len__()), numpy.zeros(data_matrix.__len__())
r_index = range(data_matrix.__len__())
gama = (5 ** 0.5 - 1) / 2
coe = (2 ** 0.5) / 3
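# gama = (sqrt(5)-1)/2 is the golden-section ratio; the loop below runs a golden-section
# style search between the per-subsequence width bounds (coe*min_dist, coe*max_dist),
# keeping whichever candidate width vector yields the lower entropy.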
for i in r_index:
min_dist, max_dist = float('inf'), -float('inf')
for j in r_index:
if i == j:
continue
dist = numpy.linalg.norm(data_matrix[i] - data_matrix[j], ord=2) # Euclidean (L2) distance between the two subsequences
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
alist[i], blist[i] = coe * min_dist, coe * max_dist
left, right = cal_sig(alist, blist, gama)
ent_left = cal_entropy(left)
ent_right = cal_entropy(right)
epison = 1
times = 0
while numpy.linalg.norm(alist - blist) < 1 and times < 20:
if ent_left < ent_right:
blist, right = right.copy(), left.copy()
ent_right = ent_left
left = alist + (1 - gama) * (blist - alist)
ent_left = cal_entropy(left)
else:
alist, left = left.copy(), right.copy()
ent_left = ent_right
right = alist + gama * (blist - alist)
ent_right = cal_entropy(right)
times += 1
if ent_left < ent_right:
return left
else:
return right
def cal_sig(alist, blist, gama):
length = len(alist)
index = range(length)
left, right = numpy.zeros(length), numpy.zeros(length)
for i in index:
left[i] = alist[i] + (1 - gama) * (blist[i] - alist[i])
right[i] = alist[i] + gama * (blist[i] - alist[i])
return left, right
'''
Compute the information entropy of a list of widths.
'''
def cal_entropy(list):
total = sum(list)
list /= total
log_list = numpy.log(list)
return -numpy.dot(list, log_list) | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/IntervalSets.py | IntervalSets.py |
import numpy as np
from collections import defaultdict
from zw_outliersdetec.__paa import *
'''
SAX (Symbolic Aggregate approXimation) conversion via a sliding window.
'''
def sax_via_window(series, win_size, paa_size, alphabet_size=3,
nr_strategy='exact', z_threshold=0.01):
"""Simple via window conversion implementation."""
cuts = cuts_for_asize(alphabet_size)
sax = defaultdict(list)
prev_word = ''
for i in range(0, len(series) - win_size):
sub_section = series[i:(i+win_size)]
zn = znorm(sub_section, z_threshold)
paa_rep = paa(zn, paa_size)
curr_word = ts_to_string(paa_rep, cuts)
if '' != prev_word:
if 'exact' == nr_strategy and prev_word == curr_word:
continue
elif 'mindist' == nr_strategy and\
is_mindist_zero(prev_word, curr_word):
continue
prev_word = curr_word
sax[curr_word].append(i)
return sax
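# Minimal usage sketch (the series and parameters here are made up for illustration):
# import numpy as np
# series = np.sin(np.linspace(0, 20, 1000))
# words = sax_via_window(series, win_size=100, paa_size=8, alphabet_size=3)
# # 'words' maps each SAX word to the list of window start offsets where it occurs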
'''
Z-score normalization.
'''
def znorm(series, znorm_threshold=0.01):
"""Znorm implementation."""
sd = np.std(series)
if (sd < znorm_threshold):
return series
mean = np.mean(series)
return (series - mean) / sd
def cuts_for_asize(a_size):
"""Generate a set of alphabet cuts for its size."""
""" Typically, we generate cuts in R as follows:
get_cuts_for_num <- function(num) {
cuts = c(-Inf)
for (i in 1:(num-1)) {
cuts = c(cuts, qnorm(i * 1/num))
}
cuts
}
get_cuts_for_num(3) """
options = {
2: np.array([-np.inf, 0.00]),
3: np.array([-np.inf, -0.4307273, 0.4307273]),
4: np.array([-np.inf, -0.6744898, 0, 0.6744898]),
5: np.array([-np.inf, -0.841621233572914, -0.2533471031358,
0.2533471031358, 0.841621233572914]),
6: np.array([-np.inf, -0.967421566101701, -0.430727299295457, 0,
0.430727299295457, 0.967421566101701]),
7: np.array([-np.inf, -1.06757052387814, -0.565948821932863,
-0.180012369792705, 0.180012369792705, 0.565948821932863,
1.06757052387814]),
8: np.array([-np.inf, -1.15034938037601, -0.674489750196082,
-0.318639363964375, 0, 0.318639363964375,
0.674489750196082, 1.15034938037601]),
9: np.array([-np.inf, -1.22064034884735, -0.764709673786387,
-0.430727299295457, -0.139710298881862, 0.139710298881862,
0.430727299295457, 0.764709673786387, 1.22064034884735]),
10: np.array([-np.inf, -1.2815515655446, -0.841621233572914,
-0.524400512708041, -0.2533471031358, 0, 0.2533471031358,
0.524400512708041, 0.841621233572914, 1.2815515655446]),
11: np.array([-np.inf, -1.33517773611894, -0.908457868537385,
-0.604585346583237, -0.348755695517045,
-0.114185294321428, 0.114185294321428, 0.348755695517045,
0.604585346583237, 0.908457868537385, 1.33517773611894]),
12: np.array([-np.inf, -1.38299412710064, -0.967421566101701,
-0.674489750196082, -0.430727299295457,
-0.210428394247925, 0, 0.210428394247925,
0.430727299295457, 0.674489750196082, 0.967421566101701,
1.38299412710064]),
13: np.array([-np.inf, -1.42607687227285, -1.0200762327862,
-0.736315917376129, -0.502402223373355,
-0.293381232121193, -0.0965586152896391,
0.0965586152896394, 0.293381232121194, 0.502402223373355,
0.73631591737613, 1.0200762327862, 1.42607687227285]),
14: np.array([-np.inf, -1.46523379268552, -1.06757052387814,
-0.791638607743375, -0.565948821932863, -0.36610635680057,
-0.180012369792705, 0, 0.180012369792705,
0.36610635680057, 0.565948821932863, 0.791638607743375,
1.06757052387814, 1.46523379268552]),
15: np.array([-np.inf, -1.50108594604402, -1.11077161663679,
-0.841621233572914, -0.622925723210088,
-0.430727299295457, -0.2533471031358, -0.0836517339071291,
0.0836517339071291, 0.2533471031358, 0.430727299295457,
0.622925723210088, 0.841621233572914, 1.11077161663679,
1.50108594604402]),
16: np.array([-np.inf, -1.53412054435255, -1.15034938037601,
-0.887146559018876, -0.674489750196082,
-0.488776411114669, -0.318639363964375,
-0.157310684610171, 0, 0.157310684610171,
0.318639363964375, 0.488776411114669, 0.674489750196082,
0.887146559018876, 1.15034938037601, 1.53412054435255]),
17: np.array([-np.inf, -1.5647264713618, -1.18683143275582,
-0.928899491647271, -0.721522283982343,
-0.541395085129088, -0.377391943828554,
-0.223007830940367, -0.0737912738082727,
0.0737912738082727, 0.223007830940367, 0.377391943828554,
0.541395085129088, 0.721522283982343, 0.928899491647271,
1.18683143275582, 1.5647264713618]),
18: np.array([-np.inf, -1.59321881802305, -1.22064034884735,
-0.967421566101701, -0.764709673786387,
-0.589455797849779, -0.430727299295457,
-0.282216147062508, -0.139710298881862, 0,
0.139710298881862, 0.282216147062508, 0.430727299295457,
0.589455797849779, 0.764709673786387, 0.967421566101701,
1.22064034884735, 1.59321881802305]),
19: np.array([-np.inf, -1.61985625863827, -1.25211952026522,
-1.00314796766253, -0.8045963803603, -0.633640000779701,
-0.47950565333095, -0.336038140371823, -0.199201324789267,
-0.0660118123758407, 0.0660118123758406,
0.199201324789267, 0.336038140371823, 0.47950565333095,
0.633640000779701, 0.8045963803603, 1.00314796766253,
1.25211952026522, 1.61985625863827]),
20: np.array([-np.inf, -1.64485362695147, -1.2815515655446,
-1.03643338949379, -0.841621233572914, -0.674489750196082,
-0.524400512708041, -0.385320466407568, -0.2533471031358,
-0.125661346855074, 0, 0.125661346855074, 0.2533471031358,
0.385320466407568, 0.524400512708041, 0.674489750196082,
0.841621233572914, 1.03643338949379, 1.2815515655446,
1.64485362695147]),
}
return options[a_size]
def ts_to_string(series, cuts):
"""A straightforward num-to-string conversion."""
a_size = len(cuts)
sax = list()
for i in range(0, len(series)):
num = series[i]
# if the number is below 0, start from the bottom, otherwise from the top
if(num >= 0):
j = a_size - 1
while ((j > 0) and (cuts[j] >= num)):
j = j - 1
sax.append(idx2letter(j))
else:
j = 1
while (j < a_size and cuts[j] <= num):
j = j + 1
sax.append(idx2letter(j-1))
return ''.join(sax)
def idx2letter(idx):
"""Convert a numerical index to a char."""
if 0 <= idx < 20:
return chr(97 + idx)
else:
raise ValueError('A wrong idx value supplied.')
def is_mindist_zero(a, b):
"""Check mindist."""
if len(a) != len(b):
return 0
else:
for i in range(0, len(b)):
if abs(ord(a[i]) - ord(b[i])) > 1:
return 0
return 1 | zw-outliersdetec | /zw_outliersdetec-0.0.1.tar.gz/zw_outliersdetec-0.0.1/zw_outliersdetec/__sax_via_window.py | __sax_via_window.py |
__docformat__ = 'reStructuredText'
from zope.i18nmessageid import MessageFactory
_ = MessageFactory('zw.mail.incoming')
from zope import schema
from zope.component import adapter, queryUtility
from zope.component.interface import provideInterface
from zope.component.zcml import handler
from zope.configuration.exceptions import ConfigurationError
from zope.configuration.fields import Path, Tokens
from zope.interface import Interface
from zope.app.appsetup.bootstrap import getInformationFromEvent
from zope.app.appsetup.interfaces import IDatabaseOpenedWithRootEvent
from zw.mail.incoming.interfaces import IInbox
from zw.mail.incoming.inbox import MaildirInbox
from zw.mail.incoming.processor import IncomingMailProcessor
class IIncomingMailProcessorDirective(Interface):
"""This directive register an event on IDataBaseOpenedWithRoot
to launch an incoming mail processor.
"""
name = schema.TextLine(
title = _( u'label-IIncomingMailProcessorDirective.name',
u"Name" ),
description = _( u'help-IIncomingMailProcessorDirective.name',
u"Specifies the name of the mail processor." ),
default = u"Incoming Mail",
required = False )
pollingInterval = schema.Int(
title = _( u"Polling Interval" ),
description = _( u"How often the mail sources are checked for "
u"new messages (in milliseconds)" ),
default = 5000 )
sources = Tokens(
title = _( u"Sources" ),
description = _( u"Iterable of names of IInbox utilities." ),
required = True,
value_type = schema.TextLine(
title = _( u"Inbox utility name" )
)
)
def incomingMailProcessor(_context, sources, pollingInterval = 5000,
name = u"Incoming Mail" ):
@adapter(IDatabaseOpenedWithRootEvent)
def createIncomingMailProcessor(event):
db, conn, root, root_folder = getInformationFromEvent(event)
inboxes = []
for name in sources:
inbox = queryUtility(IInbox, name)
if inbox is None:
raise ConfigurationError("Inbox %r is not defined." % name)
inboxes.append(inbox)
thread = IncomingMailProcessor(root_folder, pollingInterval, inboxes)
thread.start()
_context.action(
discriminator = None,
callable = handler,
args = ('registerHandler',
createIncomingMailProcessor, (IDatabaseOpenedWithRootEvent,),
u'', _context.info),
)
_context.action(
discriminator = None,
callable = provideInterface,
args = ('', IDatabaseOpenedWithRootEvent)
)
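# Hypothetical ZCML usage; the actual directive names depend on how this package's
# meta.zcml (not shown here) wires these handlers up:
# <mail:maildirInbox name="support" path="var/inbox/Maildir" />
# <mail:incomingMailProcessor name="Incoming Mail" pollingInterval="5000" sources="support" />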
class IInboxDirective(Interface):
"""A generic directive registering an inbox.
"""
name = schema.TextLine(
title = _( u'label-IInboxDirective.name',
u"Name" ),
description = _( u'help-IInboxDirective.name',
u"Specifies the Inbox name of the utility." ),
required = True )
class IMaildirInboxDirective(IInboxDirective):
"""Registers a new maildir inbox.
"""
path = Path(
title = _( u'label-IMaildirInboxDirective.path',
u"Maildir Path" ),
description = _( u'help-IMaildirInboxDirective.path',
u"Defines the path to the inbox maildir directory." ),
required = True )
def maildirInbox(_context, name, path):
_context.action(
discriminator = ('utility', IInbox, name),
callable = handler,
args = ('registerUtility',
MaildirInbox(path), IInbox, name)
) | zw.mail.incoming | /zw.mail.incoming-0.1.2.3.tar.gz/zw.mail.incoming-0.1.2.3/src/zw/mail/incoming/zcml.py | zcml.py |
__docformat__ = 'reStructuredText'
from zope.i18nmessageid import MessageFactory
_ = MessageFactory('zw.mail.incoming')
from zope import schema
from zope.interface import Attribute, Interface
from z3c.schema.email import RFC822MailAddress
class IInbox(Interface):
"""An inbox provides a very simple interface for our needs."""
def pop():
"""Return an email.message.Message converted message and remove it.
"""
def __iter__():
"""Iterate through all messages.
"""
def next():
"""Return an email.message.Message converted message.
"""
def delete(msg):
"""Delete msg from inbox.
"""
class IMaildirInbox(IInbox):
"""An inbox that receives its messages by an Maildir folder.
"""
queuePath = schema.TextLine(
title = _( u"Queue Path" ),
description = _( u"Pathname of the Maildir directory." ) )
class IIMAPInbox(IInbox):
"""An inbox that receives its message via an IMAP connection.
"""
class IIncomingMailProcessor(Interface):
"""A mail queue processor that raise IIncomingMailEvent on new messages.
"""
pollingInterval = schema.Int(
title = _( u"Polling Interval" ),
description = _( u"How often the mail sources are checked for "
u"new messages (in milliseconds)" ),
default = 5000 )
sources = schema.FrozenSet(
title = _( u"Sources" ),
description = _( u"Iterable of inbox utilities." ),
required = True,
value_type = schema.Object(
title = _( u"Inbox source" ),
schema = IInbox
)
)
class IIncomingEmailEvent(Interface):
"""A new mail arrived.
"""
message = Attribute(u"""The new email.message message.""")
inbox = schema.Object(
title = _( u"The inbox" ),
description = _( u"The mail folder the message is contained in" ),
schema = IInbox )
root = Attribute(u"""The root object""")
class IIncomingEmailFailureEvent(IIncomingEmailEvent):
"""A new mail arrived with a failure.
"""
failures = schema.List(
title = _( u"Failure addresses" ),
description = _( u"Extracted list of failure addresses." ),
value_type = RFC822MailAddress(
title = u"Failure address" ),
)
delivery_report = Attribute(u"""The delivery report as email.message.Message.""") | zw.mail.incoming | /zw.mail.incoming-0.1.2.3.tar.gz/zw.mail.incoming-0.1.2.3/src/zw/mail/incoming/interfaces.py | interfaces.py |
__docformat__ = 'reStructuredText'
from zope.i18nmessageid import MessageFactory
_ = MessageFactory('zw.mail.incoming')
import atexit
from time import sleep
from threading import Thread
import logging
import transaction
from zope.component import getUtility
from zope.component.interfaces import ComponentLookupError
from zope.event import notify
from zope.interface import implements
from mailman.Bouncers.BouncerAPI import ScanMessages
from zw.mail.incoming.events import NewEmailEvent, NewEmailFailureEvent
from zw.mail.incoming.interfaces import IIncomingMailProcessor, IInbox
class IncomingMailProcessor(Thread):
implements(IIncomingMailProcessor)
log = logging.getLogger("IncomingMailProcessorThread")
__stopped = False
def __init__(self, root, interval, inboxes):
Thread.__init__(self)
self.context = root
self.pollingInterval = interval
self.sources = tuple(inboxes)
def run(self, forever=True):
atexit.register(self.stop)
while not self.__stopped:
for box in self.sources:
msg = None
try:
msg = box.next()
failures = ScanMessages(None, msg)
if failures:
notify( NewEmailFailureEvent( msg, box, failures, self.context ) )
else:
notify( NewEmailEvent( msg, box, self.context ) )
except StopIteration:
# That's fine.
pass
except:
# Catch any other exception so that this thread survives.
if msg is None:
self.log.error(
"Cannot access next message from inbox '%r'.",
box )
else:
self.log.error(
"Cannot process message '%s' from inbox '%r'.",
msg['Message-Id'], box )
else:
self.log.info(
"Message '%s' from inbox '%r' processed.",
msg['Message-Id'], box )
transaction.commit()
else:
if forever:
sleep(self.pollingInterval/1000.)
if not forever:
break
def stop(self):
self.__stopped = True | zw.mail.incoming | /zw.mail.incoming-0.1.2.3.tar.gz/zw.mail.incoming-0.1.2.3/src/zw/mail/incoming/processor.py | processor.py |
import re
import email.Iterators
def _c(pattern):
return re.compile(pattern, re.IGNORECASE)
# This is a list of tuples of the form
#
# (start cre, end cre, address cre)
#
# where `cre' means compiled regular expression, start is the line just before
# the bouncing address block, end is the line just after the bouncing address
# block, and address cre is the regexp that will recognize the addresses. It
# must have a group called `addr' which will contain exactly and only the
# address that bounced.
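# For example, with the Smail pattern below: 'failed addresses follow:' switches the scanner
# into collecting mode, every subsequent line matching '\s*(?P<addr>\S+@\S+)' contributes a
# bounced address, and 'message text follows:' stops the scan.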
PATTERNS = [
# sdm.de
(_c('here is your list of failed recipients'),
_c('here is your returned mail'),
_c(r'<(?P<addr>[^>]*)>')),
# sz-sb.de, corridor.com, nfg.nl
(_c('the following addresses had'),
_c('transcript of session follows'),
_c(r'<(?P<fulladdr>[^>]*)>|\(expanded from: <?(?P<addr>[^>)]*)>?\)')),
# robanal.demon.co.uk
(_c('this message was created automatically by mail delivery software'),
_c('original message follows'),
_c('rcpt to:\s*<(?P<addr>[^>]*)>')),
# s1.com (InterScan E-Mail VirusWall NT ???)
(_c('message from interscan e-mail viruswall nt'),
_c('end of message'),
_c('rcpt to:\s*<(?P<addr>[^>]*)>')),
# Smail
(_c('failed addresses follow:'),
_c('message text follows:'),
_c(r'\s*(?P<addr>\S+@\S+)')),
# newmail.ru
(_c('This is the machine generated message from mail service.'),
_c('--- Below the next line is a copy of the message.'),
_c('<(?P<addr>[^>]*)>')),
# turbosport.com runs something called `MDaemon 3.5.2' ???
(_c('The following addresses did NOT receive a copy of your message:'),
_c('--- Session Transcript ---'),
_c('[>]\s*(?P<addr>.*)$')),
# usa.net
(_c('Intended recipient:\s*(?P<addr>.*)$'),
_c('--------RETURNED MAIL FOLLOWS--------'),
_c('Intended recipient:\s*(?P<addr>.*)$')),
# hotpop.com
(_c('Undeliverable Address:\s*(?P<addr>.*)$'),
_c('Original message attached'),
_c('Undeliverable Address:\s*(?P<addr>.*)$')),
# Another demon.co.uk format
(_c('This message was created automatically by mail delivery'),
_c('^---- START OF RETURNED MESSAGE ----'),
_c("addressed to '(?P<addr>[^']*)'")),
# Prodigy.net full mailbox
(_c("User's mailbox is full:"),
_c('Unable to deliver mail.'),
_c("User's mailbox is full:\s*<(?P<addr>[^>]*)>")),
# Microsoft SMTPSVC
(_c('The email below could not be delivered to the following user:'),
_c('Old message:'),
_c('<(?P<addr>[^>]*)>')),
# Yahoo on behalf of other domains like sbcglobal.net
(_c('Unable to deliver message to the following address\(es\)\.'),
_c('--- Original message follows\.'),
_c('<(?P<addr>[^>]*)>:')),
# googlemail.com
(_c('Delivery to the following recipient failed'),
_c('----- Original message -----'),
_c('^\s*(?P<addr>[^\s@]+@[^\s@]+)\s*$')),
# kundenserver.de
(_c('A message that you sent could not be delivered'),
_c('^---'),
_c('<(?P<addr>[^>]*)>')),
# another kundenserver.de
(_c('A message that you sent could not be delivered'),
_c('^---'),
_c('^(?P<addr>[^\s@]+@[^\s@:]+):')),
# thehartford.com
(_c('Delivery to the following recipients failed'),
# this one may or may not have the original message, but there's nothing
# unique to stop on, so stop on the first line of at least 3 characters
# that doesn't start with 'D' (to not stop immediately) and has no '@'.
_c('^[^D][^@]{2,}$'),
_c('^\s*(?P<addr>[^\s@]+@[^\s@]+)\s*$')),
# and another thehartfod.com/hartfordlife.com
(_c('^Your message\s*$'),
_c('^because:'),
_c('^\s*(?P<addr>[^\s@]+@[^\s@]+)\s*$')),
# kviv.be (InterScan NT)
(_c('^Unable to deliver message to'),
_c(r'\*+\s+End of message\s+\*+'),
_c('<(?P<addr>[^>]*)>')),
# earthlink.net supported domains
(_c('^Sorry, unable to deliver your message to'),
_c('^A copy of the original message'),
_c('\s*(?P<addr>[^\s@]+@[^\s@]+)\s+')),
# ademe.fr
(_c('^A message could not be delivered to:'),
_c('^Subject:'),
_c('^\s*(?P<addr>[^\s@]+@[^\s@]+)\s*$')),
# andrew.ac.jp
(_c('^Invalid final delivery userid:'),
_c('^Original message follows.'),
_c('\s*(?P<addr>[^\s@]+@[^\s@]+)\s*$')),
# [email protected]
(_c('------ Failed Recipients ------'),
_c('-------- Returned Mail --------'),
_c('<(?P<addr>[^>]*)>')),
# cynergycom.net
(_c('A message that you sent could not be delivered'),
_c('^---'),
_c('(?P<addr>[^\s@]+@[^\s@)]+)')),
# LSMTP for Windows
(_c('^--> Error description:\s*$'),
_c('^Error-End:'),
_c('^Error-for:\s+(?P<addr>[^\s@]+@[^\s@]+)')),
# Qmail with a tri-language intro beginning in spanish
(_c('Your message could not be delivered'),
_c('^-'),
_c('<(?P<addr>[^>]*)>:')),
# socgen.com
(_c('Your message could not be delivered to'),
_c('^\s*$'),
_c('(?P<addr>[^\s@]+@[^\s@]+)')),
# dadoservice.it
(_c('Your message has encountered delivery problems'),
_c('Your message reads'),
_c('addressed to\s*(?P<addr>[^\s@]+@[^\s@)]+)')),
# gomaps.com
(_c('Did not reach the following recipient'),
_c('^\s*$'),
_c('\s(?P<addr>[^\s@]+@[^\s@]+)')),
# EYOU MTA SYSTEM
(_c('This is the deliver program at'),
_c('^-'),
_c('^(?P<addr>[^\s@]+@[^\s@<>]+)')),
# A non-standard qmail at ieo.it
(_c('this is the email server at'),
_c('^-'),
_c('\s(?P<addr>[^\s@]+@[^\s@]+)[\s,]')),
# pla.net.py (MDaemon.PRO ?)
(_c('- no such user here'),
_c('There is no user'),
_c('^(?P<addr>[^\s@]+@[^\s@]+)\s')),
# Next one goes here...
]
def process(msg, patterns=None):
if patterns is None:
patterns = PATTERNS
# simple state machine
# 0 = nothing seen yet
# 1 = intro seen
addrs = {}
# MAS: This is a mess. The outer loop used to be over the message
# so we only looped through the message once. Looping through the
# message for each set of patterns is obviously way more work, but
# if we don't do it, problems arise because scre from the wrong
# pattern set matches first and then acre doesn't match. The
# alternative is to split things into separate modules, but then
# we process the message multiple times anyway.
for scre, ecre, acre in patterns:
state = 0
for line in email.Iterators.body_line_iterator(msg):
if state == 0:
if scre.search(line):
state = 1
if state == 1:
mo = acre.search(line)
if mo:
addr = mo.group('addr')
if addr:
addrs[mo.group('addr')] = 1
elif ecre.search(line):
break
if addrs:
break
return addrs.keys() | zw.mail.incoming | /zw.mail.incoming-0.1.2.3.tar.gz/zw.mail.incoming-0.1.2.3/src/mailman/Bouncers/SimpleMatch.py | SimpleMatch.py |
from cStringIO import StringIO
from email.Iterators import typed_subpart_iterator
from email.Utils import parseaddr
from mailman.Bouncers.BouncerAPI import Stop
def check(msg):
# Iterate over each message/delivery-status subpart
addrs = []
for part in typed_subpart_iterator(msg, 'message', 'delivery-status'):
if not part.is_multipart():
# Huh?
continue
# Each message/delivery-status contains a list of Message objects
# which are the header blocks. Iterate over those too.
for msgblock in part.get_payload():
# We try to dig out the Original-Recipient (which is optional) and
# Final-Recipient (which is mandatory, but may not exactly match
# an address on our list). Some MTA's also use X-Actual-Recipient
# as a synonym for Original-Recipient, but some apparently use
# that for other purposes :(
#
# Also grok out Action so we can do something with that too.
action = msgblock.get('action', '').lower()
# Some MTAs have been observed that put comments on the action.
if action.startswith('delayed'):
return Stop
if not action.startswith('fail'):
# Some non-permanent failure, so ignore this block
continue
params = []
foundp = False
for header in ('original-recipient', 'final-recipient'):
for k, v in msgblock.get_params([], header):
if k.lower() == 'rfc822':
foundp = True
else:
params.append(k)
if foundp:
# Note that params should already be unquoted.
addrs.extend(params)
break
else:
# MAS: This is a kludge, but SMTP-GATEWAY01.intra.home.dk
# has a final-recipient with an angle-addr and no
# address-type parameter at all. Non-compliant, but ...
for param in params:
if param.startswith('<') and param.endswith('>'):
addrs.append(param[1:-1])
# Uniquify
rtnaddrs = {}
for a in addrs:
if a is not None:
realname, a = parseaddr(a)
rtnaddrs[a] = True
return rtnaddrs.keys()
def process(msg):
# A DSN has been seen wrapped with a "legal disclaimer" by an outgoing MTA
# in a multipart/mixed outer part.
if msg.is_multipart() and msg.get_content_subtype() == 'mixed':
msg = msg.get_payload()[0]
# The above will suffice if the original message 'parts' were wrapped with
# the disclaimer added, but the original DSN can be wrapped as a
# message/rfc822 part. We need to test that too.
if msg.is_multipart() and msg.get_content_type() == 'message/rfc822':
msg = msg.get_payload()[0]
# The report-type parameter should be "delivery-status", but it seems that
# some DSN generating MTAs don't include this on the Content-Type: header,
# so let's relax the test a bit.
if not msg.is_multipart() or msg.get_content_subtype() != 'report':
return None
return check(msg) | zw.mail.incoming | /zw.mail.incoming-0.1.2.3.tar.gz/zw.mail.incoming-0.1.2.3/src/mailman/Bouncers/DSN.py | DSN.py |
__docformat__ = 'reStructuredText'
from zw.schema.i18n import MessageFactory as _
import sys
import zope.schema
import zope.schema.interfaces
import zope.component
import zope.interface
from zope.interface.interfaces import IInterface
from zope.dottedname.resolve import resolve
from zope.app.intid.interfaces import IIntIds
from zw.schema.reference.interfaces import IReference
class Reference(zope.schema.Field):
"""A field to an persistent object referencable by IntId.
"""
zope.interface.implements(IReference)
_schemata = None
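# Typical use (IPerson is a hypothetical schema): Reference(title=u"Owner", schemata=IPerson)
# stores the referenced object's integer id via the IIntIds utility and resolves it back on access.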
def __init__(self, *args, **kw):
schemata = kw.pop('schemata', None)
if type(schemata) not in (tuple, list,):
schemata = (schemata,)
schema_list = []
for schema in schemata:
if IInterface.providedBy(schema):
schema_list.append(schema)
elif isinstance(schema, str):
# have dotted names
#module = kw.get('module', sys._getframe(1).f_globals['__name__'])
raise NotImplementedError
schema_list.append(schema)
elif schema is None:
continue
else:
raise zope.schema.interfaces.WrongType
if schema_list:
self._schemata = tuple(schema_list)
super(Reference, self).__init__(*args, **kw)
def _validate(self, value):
super(Reference, self)._validate(value)
if self._schemata is not None:
schema_provided = False
for iface in self._schemata:
if iface.providedBy(value):
schema_provided = True
if not schema_provided:
raise zope.schema.interfaces.SchemaNotProvided
intids = zope.component.getUtility(IIntIds, context=value)
intids.getId(value)
def get(self, object):
id = super(Reference, self).get(object)
intids = zope.component.getUtility(IIntIds, context=object)
return intids.queryObject(id)
def set(self, object, value):
intids = zope.component.getUtility(IIntIds, context=object)
id = intids.getId(value)
super(Reference, self).set(object, id) | zw.schema | /zw.schema-0.3.0b2.1.tar.gz/zw.schema-0.3.0b2.1/src/zw/schema/reference/field.py | field.py |
===========
ColorWidget
===========
The widget can render an input field with a color preview::
>>> from zope.interface.verify import verifyClass
>>> from z3c.form.interfaces import IWidget
>>> from zw.widget.color.widget import ColorWidget
The ColorWidget is a widget::
>>> verifyClass(IWidget, ColorWidget)
True
The widget can render an input field only by adapting a request::
>>> from z3c.form.testing import TestRequest
>>> request = TestRequest()
>>> widget = ColorWidget(request)
Such a field provides IWidget::
>>> IWidget.providedBy(widget)
True
We also need to register the template for at least the widget and
request::
>>> import os.path
>>> import zope.interface
>>> from zope.publisher.interfaces.browser import IDefaultBrowserLayer
>>> from zope.pagetemplate.interfaces import IPageTemplate
>>> import zw.widget.color
>>> import z3c.form.widget
>>> template = os.path.join(os.path.dirname(zw.widget.color.__file__),
... 'color_input.pt')
>>> factory = z3c.form.widget.WidgetTemplateFactory(template)
>>> zope.component.provideAdapter(factory,
... (zope.interface.Interface, IDefaultBrowserLayer, None, None, None),
... IPageTemplate, name='input')
If we render the widget we get the HTML::
>>> print widget.render()
<input type="text" class="color-widget" value="" />
Adding some more attributes to the widget will make it display more::
>>> widget.id = 'id'
>>> widget.name = 'name'
>>> widget.value = u'value'
>>> print widget.render()
<span id="" class="color-widget color-sample"
style="background-color: #value;">
</span>
<input type="text" id="id" name="name" class="color-widget"
value="value" />
| zw.widget | /zw.widget-0.1.6.2.tar.gz/zw.widget-0.1.6.2/src/zw/widget/color/README.txt | README.txt |
===========
EmailWidget
===========
The widget can render an ordinary input field::
>>> from zope.interface.verify import verifyClass
>>> from z3c.form.interfaces import IWidget, INPUT_MODE, DISPLAY_MODE
>>> from zw.widget.email.widget import EmailWidget
The EmailWidget is a widget::
>>> verifyClass(IWidget, EmailWidget)
True
The widget can render an input field only by adapting a request::
>>> from z3c.form.testing import TestRequest
>>> request = TestRequest()
>>> widget = EmailWidget(request)
Such a field provides IWidget::
>>> IWidget.providedBy(widget)
True
We also need to register the template for at least the widget and
request::
>>> import os.path
>>> import zope.interface
>>> from zope.publisher.interfaces.browser import IDefaultBrowserLayer
>>> from zope.pagetemplate.interfaces import IPageTemplate
>>> import zw.widget.email
>>> import z3c.form.widget
>>> template = os.path.join(os.path.dirname(zw.widget.email.__file__),
... 'email_input.pt')
>>> factory = z3c.form.widget.WidgetTemplateFactory(template)
>>> zope.component.provideAdapter(factory,
... (zope.interface.Interface, IDefaultBrowserLayer, None, None, None),
... IPageTemplate, name='input')
If we render the widget we get the HTML::
>>> print widget.render()
<input type="text" class="email-widget" value="" />
Adding some more attributes to the widget will make it display more::
>>> widget.id = 'id'
>>> widget.name = 'name'
>>> widget.value = u'[email protected]'
>>> print widget.render()
<input type="text" id="id" name="name" class="email-widget"
value="[email protected]" />
More interesting is the display view::
>>> widget.mode = DISPLAY_MODE
>>> template = os.path.join(os.path.dirname(zw.widget.email.__file__),
... 'email_display.pt')
>>> factory = z3c.form.widget.WidgetTemplateFactory(template)
>>> zope.component.provideAdapter(factory,
... (zope.interface.Interface, IDefaultBrowserLayer, None, None, None),
... IPageTemplate, name='display')
>>> print widget.render()
<span id="id" class="email-widget">
<a href="mailto:[email protected]">
[email protected]
</a>
</span>
But if we are not authenticated, it should be obscured::
>>> widget.obscured = True
>>> print widget.render()
<span id="id" class="email-widget">
[email protected]
</span>
| zw.widget | /zw.widget-0.1.6.2.tar.gz/zw.widget-0.1.6.2/src/zw/widget/email/README.txt | README.txt |
__docformat__ = 'reStructuredText'
from zw.widget.i18n import MessageFactory as _
import zope.schema.interfaces
from zope.component import adapter
from zope.interface import implementsOnly, implementer
from z3c.form.browser.textarea import TextAreaWidget
from z3c.form.interfaces import IFormLayer, IFieldWidget
from z3c.form.widget import FieldWidget
from zw.schema.richtext.interfaces import IRichText
from zw.widget.tiny.interfaces import ITinyWidget
try:
from zc import resourcelibrary
haveResourceLibrary = True
except ImportError:
haveResourceLibrary = False
OPT_PREFIX = "mce_"
OPT_PREFIX_LEN = len(OPT_PREFIX)
MCE_LANGS=[]
import glob
import os
# initialize the language files
for langFile in glob.glob(
os.path.join(os.path.dirname(__file__), 'tiny_mace', 'langs') + '/??.js'):
MCE_LANGS.append(os.path.basename(langFile)[:2])
class TinyWidget(TextAreaWidget):
"""TinyMCE widget implementation.
"""
implementsOnly(ITinyWidget)
klass = u'tiny-widget'
value = u''
tiny_js = u""
rows = 10
cols = 60
mce_theme = "advanced"
mce_theme_advanced_buttons1 = "bold,italic,underline,separator,strikethrough,justifyleft,justifycenter,justifyright, justifyfull,bullist,numlist,undo,redo,link,unlink"
mce_theme_advanced_buttons2 = ""
mce_theme_advanced_buttons3 = ""
mce_theme_advanced_toolbar_location = "top"
mce_theme_advanced_toolbar_align = "left"
mce_theme_advanced_statusbar_location = "bottom"
mce_extended_valid_elements = "a[name|href|target|title|onclick],img[class|src|border=0|alt|title|hspace|vspace|width|height|align|onmouseover|onmouseout|name],hr[class|width|size|noshade],font[face|size|color|style],span[class|align|style]"
def update(self):
super(TinyWidget, self).update()
if haveResourceLibrary:
resourcelibrary.need('tiny_mce')
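# The loop below turns every attribute whose name starts with "mce_" into a TinyMCE init
# option of the same name (minus the prefix); boolean values are emitted as bare true/false,
# everything else as a quoted string.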
mceOptions = []
for k in dir(self):
if k.startswith(OPT_PREFIX):
v = getattr(self, k, None)
v = v==True and 'true' or v==False and 'false' or v
if v in ['true','false']:
mceOptions.append('%s : %s' % (k[OPT_PREFIX_LEN:],v))
elif v is not None:
mceOptions.append('%s : "%s"' % (k[OPT_PREFIX_LEN:],v))
mceOptions = ', '.join(mceOptions)
if mceOptions:
mceOptions += ', '
if self.request.locale.id.language in MCE_LANGS:
mceOptions += ('language : "%s", ' % \
self.request.locale.id.language)
self.tiny_js = u"""
tinyMCE.init({
mode : "exact", %(options)s
elements : "%(id)s"
}
);
""" % { "id": self.id,
"options": mceOptions }
@adapter(IRichText, IFormLayer)
@implementer(IFieldWidget)
def TinyFieldWidget(field, request):
"""IFieldWidget factory for TinyWidget.
"""
return FieldWidget(field, TinyWidget(request)) | zw.widget | /zw.widget-0.1.6.2.tar.gz/zw.widget-0.1.6.2/src/zw/widget/tiny/widget.py | widget.py |
==========
TinyWidget
==========
The widget can render an HTML text input field based on the TinyMCE
JavaScript content editor from Moxiecode Systems
(http://tinymce.moxiecode.com).
>>> from zope.interface.verify import verifyClass
>>> from zope.app.form.interfaces import IInputWidget
>>> from z3c.form.interfaces import IWidget
>>> from zw.widget.tiny.widget import TinyWidget
The TinyWidget is a widget:
>>> verifyClass(IWidget, TinyWidget)
True
The widget can render a textarea field only by adapting a request:
>>> from z3c.form.testing import TestRequest
>>> request = TestRequest()
>>> widget = TinyWidget(request)
Such a field provides IWidget:
>>> IWidget.providedBy(widget)
True
We also need to register the template for at least the widget and
request:
>>> import os.path
>>> import zope.interface
>>> from zope.publisher.interfaces.browser import IDefaultBrowserLayer
>>> from zope.pagetemplate.interfaces import IPageTemplate
>>> import zw.widget.tiny
>>> import z3c.form.widget
>>> template = os.path.join(os.path.dirname(zw.widget.tiny.__file__),
... 'tiny_input.pt')
>>> factory = z3c.form.widget.WidgetTemplateFactory(template)
>>> zope.component.provideAdapter(factory,
... (zope.interface.Interface, IDefaultBrowserLayer, None, None, None),
... IPageTemplate, name='input')
If we render the widget we get the HTML:
>>> print widget.render()
<textarea class="tiny-widget" cols="60" rows="10"></textarea>
Adding some more attributes to the widget will make it display more:
>>> widget.id = 'id'
>>> widget.name = 'name'
>>> widget.value = u'value'
>>> print widget.render()
<textarea id="id" name="name" class="tiny-widget" cols="60"
rows="10">value</textarea>
TODO: Testing for ECMAScript code...
| zw.widget | /zw.widget-0.1.6.2.tar.gz/zw.widget-0.1.6.2/src/zw/widget/tiny/README.txt | README.txt |
===========
LinesWidget
===========
The widget can render an HTML text input field, which collects list
items line by line.
>>> from zope.interface.verify import verifyClass
>>> from z3c.form.interfaces import IWidget
>>> from zw.widget.lines.widget import LinesWidget
The LinesWidget is a widget:
>>> verifyClass(IWidget, LinesWidget)
True
The widget can render a textarea field only by adapting a request:
>>> from z3c.form.testing import TestRequest
>>> request = TestRequest()
>>> widget = LinesWidget(request)
Such a field provides IWidget:
>>> IWidget.providedBy(widget)
True
We also need to register the template for at least the widget and
request:
>>> import os.path
>>> import zope.interface
>>> from zope.publisher.interfaces.browser import IDefaultBrowserLayer
>>> from zope.pagetemplate.interfaces import IPageTemplate
>>> import zw.widget.lines
>>> import z3c.form.widget
>>> template = os.path.join(os.path.dirname(zw.widget.lines.__file__),
... 'lines_input.pt')
>>> factory = z3c.form.widget.WidgetTemplateFactory(template)
>>> zope.component.provideAdapter(factory,
... (zope.interface.Interface, IDefaultBrowserLayer, None, None, None),
... IPageTemplate, name='input')
If we render the widget we get the HTML:
>>> print widget.render()
<textarea class="lines-widget"></textarea>
Adding some more attributes to the widget will make it display more:
>>> widget.id = 'id'
>>> widget.name = 'name'
>>> widget.value = u'value'
>>> print widget.render()
<textarea id="id" name="name" class="lines-widget">value</textarea>
| zw.widget | /zw.widget-0.1.6.2.tar.gz/zw.widget-0.1.6.2/src/zw/widget/lines/README.txt | README.txt |
import urllib3
import zware_api.interfaces as interfaces
from .const import DEVICE_DATABASE
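# Z-Wave command class ids (decimal) mapped to their interface wrappers:
# 76 = Door Lock Logging (0x4C), 78 = Schedule Entry Lock (0x4E), 98 = Door Lock (0x62),
# 99 = User Code (0x63), 113 = Alarm/Notification (0x71), 128 = Battery (0x80).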
COMMAND_CLASSES = {
"76": interfaces.zwLockLogging,
"78": interfaces.zwScheduleEntry,
"98": interfaces.zwLock,
"99": interfaces.zwUserCode,
"113": interfaces.zwAlarm,
"128": interfaces.zwBattery,
}
class zwClient(object):
"""Representation of a Z-Wave network client."""
CMD_ADD_NODE = 2
CMD_DELETE_NODE = 3
def __init__(self, zware_object, host, user, password):
"""Initialize a z-wave client."""
urllib3.disable_warnings()
self.zware = zware_object
self.ipAddress = host
self.username = user
self.password = password
self.nodes = list()
def login(self):
"""Connect to the server"""
board_ip = 'https://' + self.ipAddress + '/'
r = self.zware.zw_init(board_ip, self.username, self.password)
v = r.findall('./version')[0]
return v.get('app_major') + '.' + v.get('app_minor')
def get_node_list(self, active=False):
"""Get nodes in the z-wave network."""
if active:
nodes_list = list()
nodes = self.zware.zw_api('zwnet_get_node_list')
nodes = nodes.findall('./zwnet/zwnode')
for node in nodes:
node_obj = zwNode(self.zware, node.get('id'), node.get('property'), node.get('vid'),
node.get('pid'), node.get('type'), node.get('category'),
node.get('alive'), node.get('sec'))
nodes_list.append(node_obj)
self.nodes = nodes_list
return self.nodes
def add_node(self):
"""Activate adding mode in a Z-Wave network."""
self.zware.zw_add_remove(self.CMD_ADD_NODE)
self.zware.zw_net_comp(self.CMD_ADD_NODE)
def remove_node(self):
"""Activate exclusion mode in a Z-Wave network."""
self.zware.zw_add_remove(self.CMD_DELETE_NODE)
self.zware.zw_net_comp(self.CMD_DELETE_NODE)
def cancel_command(self):
"""Cancel the last sent Z-Wave command."""
self.zware.zw_abort()
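# Minimal usage sketch (host and credentials are placeholders):
# from zware_api.zware import ZWareApi
# client = zwClient(ZWareApi(), "192.168.1.10", "admin", "secret")
# print(client.login()) # e.g. "1.2"
# for node in client.get_node_list(active=True):
#     print(node.id, node.get_readable_manufacturer_model())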
class zwNode:
"""Representation of a Z-Wave Node."""
def __init__(self, zware, id, property, manufacturer_id, product_id, product_type,
device_category, alive_state, is_secure):
"""Initialize a z-wave node."""
self.zware = zware
# Properties of a z-wave node.
self.id = id
self.property = property
self.manufacturer_id = manufacturer_id
self.product_id = product_id
self.product_type = product_type
self.device_category = device_category
self.alive_state = alive_state
self.is_secure = (int(is_secure) == 1)
self.endpoints = list()
self.name = None
self.location = None
def get_name_and_location(self, active=False):
"""Get the current name and location of a node."""
if active:
endpoints = self.zware.zw_api('zwnode_get_ep_list', 'noded=' + self.id)
endpoints = endpoints.findall('./zwnode/zwep')
self.name = endpoints[0].get('name', '').replace("%20", " ")
self.location = endpoints[0].get('loc', '').replace("%20", " ")
return self.name, self.location
def set_node_name_and_location(self, name, location):
"""Set the name and location of a node."""
if len(self.endpoints) == 0:
self.get_endpoints(active=True)
self.zware.zw_nameloc(self.endpoints[0].id, name, location)
self.name = name
self.location = location
def get_readable_manufacturer_model(self):
"""Return a tupple with human-readable device manufacturer and model"""
return (DEVICE_DATABASE.get(self.manufacturer_id, {}).get("name"),
DEVICE_DATABASE.get(self.manufacturer_id, {}).get(
"product",{}).get(self.product_id, {}).get(self.product_type))
def send_nif(self):
"""Send a node information frame to the node."""
self.zware.zw_api('zwnet_send_nif', 'noded=' + self.id)
self.zware.zw_net_wait()
def update(self):
"""Update the node status in the zwave network."""
self.zware.zw_api('zwnode_update', 'noded=' + self.id)
self.zware.zw_net_wait()
def get_endpoints(self, active=False):
"""Get endpoints in a z-wave node."""
if active:
ep_list = list()
endpoints = self.zware.zw_api('zwnode_get_ep_list', 'noded=' + self.id)
endpoints = endpoints.findall('./zwnode/zwep')
for ep in endpoints:
ep_obj = zwEndpoint(self.zware, ep.get('desc'), ep.get('generic'), ep.get('specific'),
ep.get('name'), ep.get('loc'), ep.get('zwplus_ver'),
ep.get('role_type'), ep.get('node_type'), ep.get('instr_icon'),
ep.get('usr_icon'))
ep_list.append(ep_obj)
self.endpoints = ep_list
return self.endpoints
class zwEndpoint:
"""Representation of a Z-Wave Endpoint."""
def __init__(self, zware, id, generic, specific, name, location, version, role_type,
node_type, instr_icon, user_icon):
"""Initialize a z-wave endpoint."""
self.zware = zware
# Properties of a z-wave endpoint.
self.id = id
self.generic = generic
self.specific = specific
self.name = name
self.location = location
self.zw_plus_version = version
self.role_type = role_type
self.node_type = node_type
self.installer_icon = instr_icon
self.user_icon = user_icon
self.interfaces = list()
def get_interfaces(self, active=False):
"""Get all the interfaces of an endpoint."""
if active:
if_list = list()
itfs = self.zware.zw_api('zwep_get_if_list', 'epd=' + self.id)
itfs = itfs.findall('./zwep/zwif')
for itf in itfs:
type_id = itf.get('id')
if_obj = COMMAND_CLASSES.get(type_id,
interfaces.zwInterface)(self.zware, itf.get('desc'), type_id,
itf.get('name'), itf.get('ver'),
itf.get('real_ver'),
itf.get('sec'), itf.get('unsec'))
if_list.append(if_obj)
self.interfaces = if_list
return self.interfaces | zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/objects.py | objects.py |
import requests
import xml.etree.ElementTree as ET
class ZWareApi:
"""The ZWare web API."""
zware_session = None
zware_url = ""
def zw_api(self, uri, parm=''):
r = self.zware_session.post(self.zware_url + uri, data=parm, verify=False)
assert r.status_code == 200, "Unexpected response from Z-Ware API: {}".format(r.status_code)
try:
x = ET.fromstring(r.text)
except:
return r.text
e = x.find('./error')
assert e is None, e.text
return x
"""Network operations"""
def zw_net_wait(self):
while int(self.zw_api('zwnet_get_operation').find('./zwnet/operation').get('op')):
pass
def zw_net_comp(self, op):
while op != int(self.zw_api('zwnet_get_operation').find('./zwnet/operation').get('prev_op')):
pass
def zw_net_op_sts(self, op):
while op != int(self.zw_api('zwnet_get_operation').find('./zwnet/operation').get('op_sts')):
pass
def zw_net_get_grant_keys(self):
grant_key = self.zw_api('zwnet_add_s2_get_req_keys').find('./zwnet/security').get('req_key')
return grant_key
def zw_net_add_s2_get_dsk(self):
dsk = self.zw_api('zwnet_add_s2_get_dsk').find('./zwnet/security').get('dsk')
return dsk
def zw_net_set_grant_keys(self, grant_key):
return self.zw_api('zwnet_add_s2_set_grant_keys', 'granted_keys=' + grant_key)
def zw_net_provisioning_list_add(self, dsk, boot_mode, grant_keys, interval, device_name,
device_location, application_version, sub_version, vendor,
product_id, product_type, status, generic_class, specific_class,
installer_icon, uuid_format, uuid):
provisioning_list_string = 'dsk=' + dsk
if device_name != "":
provisioning_list_string = provisioning_list_string + '&name=' + device_name
if device_location != "":
provisioning_list_string = provisioning_list_string + '&loc=' + device_location
if generic_class != "":
provisioning_list_string = provisioning_list_string + '&ptype_generic=' + generic_class
if specific_class != "":
provisioning_list_string = provisioning_list_string + '&ptype_specific=' + specific_class
if installer_icon != "":
provisioning_list_string = provisioning_list_string + '&ptype_icon=' + installer_icon
if vendor != "":
provisioning_list_string = provisioning_list_string + '&pid_manufacturer_id=' + vendor
if product_type != "":
provisioning_list_string = provisioning_list_string + '&pid_product_type=' + product_type
if product_id != "":
provisioning_list_string = provisioning_list_string + '&pid_product_id=' + product_id
if application_version != "":
provisioning_list_string = provisioning_list_string + '&pid_app_version=' + application_version
if sub_version != "":
provisioning_list_string = provisioning_list_string + '&pid_app_sub_version=' + sub_version
if interval != "":
provisioning_list_string = provisioning_list_string + '&interval=' + interval
if uuid_format != "":
provisioning_list_string = provisioning_list_string + '&uuid_format=' + uuid_format
if uuid != "":
provisioning_list_string = provisioning_list_string + '&uuid_data=' + uuid
if status != "":
provisioning_list_string = provisioning_list_string + '&pl_status=' + status
if grant_keys != "":
provisioning_list_string = provisioning_list_string + '&grant_keys=' + grant_keys
if boot_mode != "":
provisioning_list_string = provisioning_list_string + '&boot_mode=' + boot_mode
return self.zw_api('zwnet_provisioning_list_add', provisioning_list_string)
def zw_net_provisioning_list_list_get(self):
devices_info = self.zw_api('zwnet_provisioning_list_list_get').findall('./zwnet/pl_list/pl_device_info')
return devices_info
def zw_net_provisioning_list_remove(self, dsk):
result = self.zw_api('zwnet_provisioning_list_remove', 'dsk=' + dsk)
return result
def zw_net_provisioning_list_remove_all(self):
result = self.zw_api('zwnet_provisioning_list_remove_all')
return result
def zw_net_set_dsk(self, dsk):
return self.zw_api('zwnet_add_s2_accept', 'accept=1&value=' + dsk)
def zw_init(self, url='https://127.0.0.1/', user='test_user', pswd='test_password', get_version=True):
self.zware_session = requests.session()
self.zware_url = url
self.zware_session.headers.update({'Content-Type': 'application/x-www-form-urlencoded'}) # apache requires this
self.zw_api('register/login.php', 'usrname=' + user + '&passwd=' + pswd)
self.zware_url += 'cgi/zcgi/networks//'
if get_version:
return self.zw_api('zw_version')
else:
return
def zw_add_remove(self, cmd):
return self.zw_api('zwnet_add', 'cmd=' + str(cmd))
def zw_abort(self):
return self.zw_api('zwnet_abort', '')
def zw_nameloc(self, epd, name, location):
return self.zw_api('zwep_nameloc', 'cmd=1&epd=' + epd + '&name=' + name + '&loc=' + location)
""" Interfaces """
def zwif_api(self, dev, ifd, cmd=1, arg=''):
return self.zw_api('zwif_' + dev, 'cmd=' + str(cmd) + '&ifd=' + str(ifd) + arg)
def zwif_api_ret(self, dev, ifd, cmd=1, arg=''):
r = self.zwif_api(dev, ifd, cmd, arg)
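        # cmd values 2 and 3 are the active/passive GET variants; unwrap the payload element.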
if cmd == 2 or cmd == 3:
return r.find('./zwif/' + dev)
return r
def zwif_basic_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('basic', ifd, cmd, arg)
def zwif_switch_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('switch', ifd, cmd, arg)
def zwif_level_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('level', ifd, cmd, arg)
def zwif_thermo_list_api(self, dev, ifd, cmd=1, arg=''):
r = self.zwif_api_ret('thrmo_' + dev, ifd, cmd, arg)
if cmd == 5 or cmd == 6:
return r.find('./zwif/thrmo_' + dev + '_sup')
return r
def zwif_thermo_mode_api(self, ifd, cmd=1, arg=''):
return self.zwif_thermo_list_api('md', ifd, cmd, arg)
def zwif_thermo_state_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('thrmo_op_sta', ifd, cmd, arg)
def zwif_thermo_setpoint_api(self, ifd, cmd=1, arg=''):
return self.zwif_thermo_list_api('setp', ifd, cmd, arg)
def zwif_thermo_fan_mode_api(self, ifd, cmd=1, arg=''):
return self.zwif_thermo_list_api('fan_md', ifd, cmd, arg)
def zwif_thermo_fan_state_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('thrmo_fan_sta', ifd, cmd, arg)
def zwif_meter_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('meter', ifd, cmd, arg)
def zwif_bsensor_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('bsensor', ifd, cmd, arg)
def zwif_sensor_api(self, ifd, cmd=1, arg=''):
return self.zwif_api_ret('sensor', ifd, cmd, arg)
def zwif_av_api(self, ifd, cmd=1, arg=''):
r = self.zwif_api('av', ifd, cmd, arg)
if cmd == 2 or cmd == 3:
return r.find('./zwif/av_caps')
        return r
| zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/zware.py | zware.py |
from .objects import zwInterface
import asyncio
EVENTS = {
"0": {
"9": {
"1": "Deadbolt jammed while locking",
"2": "Deadbolt jammed while unlocking",
},
"18": {
"default": "Keypad Lock with user_id {}",
},
"19": {
"default": "Keypad Unlock with user_id {}",
},
"21": {
"1": "Manual Lock by Key Cylinder or Thumb-Turn",
"2": "Manual Lock by Touch Function",
"3": "Manual Lock by Inside Button",
},
"22": {
"1": "Manual Unlock Operation",
},
"24": {
"1": "RF Lock Operation",
},
"25": {
"1": "RF Unlock Operation",
},
"27": {
"1": "Auto re-lock cycle complete",
},
"33": {
"default": "Single user code deleted with user_id {}",
},
"38": {
"default": "Non access code entered with user_id {}",
},
"96": {
"default": "Daily Schedule has been set/erased for user_id {}"
},
"97": {
"default": "Daily Schedule has been enabled/disabled for user_id {}"
},
"98": {
"default": "Yearly Schedule has been set/erased for user_id {}"
},
"99": {
"default": "Yearly Schedule has been enabled/disabled for user_id {}"
},
"100": {
"default": "All Schedules have been set/erased for user_id {}"
},
"101": {
"default": "All Schedules have been enabled/disabled for user_id {}"
},
"112": {
"default": "New user code added with user_id {}",
"0": "Master Code was changed at keypad",
"251": "Master Code was changed over RF",
},
"113": {
"0": "Duplicate Master Code error",
"default": "Duplicate Pin-Code error with user_id {}",
},
"130": {
"0": "Door Lock needs Time Set"
},
"131": {
"default": "Disabled user_id {} code was entered at the keypad"
},
"132": {
"default": "Valid user_id {} code was entered outside of schedule"
},
"161": {
"1": "Keypad attempts exceed limit",
"2": "Front Escutcheon removed from main",
"3": "Master Code attempts exceed limit",
},
"167": {
"default": "Low Battery Level {}",
},
"168": {
"default": "Critical Battery Level {}",
}
},
"6": {
"0": "State idle",
"1": "Manual Lock Operation",
"2": "Manual Unlock Operation",
"3": "RF Lock Operation",
"4": "RF Unlock Operation",
"5": "Keypad Lock Operation",
"6": "Keypad Unlock Operation",
"7": "Manual Not Fully Locked Operation",
"8": "RF Not Fully Locked Operation",
"9": "Auto Lock Locked Operation",
"10": "Auto Lock Not Fully Operation",
"11": "Lock Jammed",
"12": "All user codes deleted",
"13": "Single user code deleted",
"14": "New user code added",
"15": "New user code not added due to duplicate code",
"16": "Keypad temporary disabled",
"17": "Keypad busy",
"18": "New Program code Entered - Unique code for lock configuration",
"19": "Manually Enter user Access code exceeds code limit",
"20": "Unlock By RF with invalid user code",
"21": "Locked by RF with invalid user codes",
"22": "Window/Door is open",
"23": "Window/Door is closed",
"24": "Window/door handle is open",
"25": "Window/door handle is closed",
"32": "Messaging User Code entered via keypad",
"64": "Barrier performing Initialization process",
"65": "Barrier operation (Open / Close) force has been exceeded.",
"66": "Barrier motor has exceeded manufacturer’s operational time limit",
"67": "Barrier operation has exceeded physical mechanical limits.",
"68": "Barrier unable to perform requested operation due to UL requirements",
"69": "Barrier Unattended operation has been disabled per UL requirements",
"70": "Barrier failed to perform Requested operation, device malfunction",
"71": "Barrier Vacation Mode",
"72": "Barrier Safety Beam Obstacle",
"73": "Barrier Sensor Not Detected / Supervisory Error",
"74": "Barrier Sensor Low Battery Warning",
"75": "Barrier detected short in Wall Station wires",
"76": "Barrier associated with non-Z-wave remote control",
"254": "Unknown Event"
},
"8": {
"1": "Door Lock needs Time Set",
"10": "Low Battery",
"11": "Critical Battery Level"
}
}
class zwLock(zwInterface):
"""Representation of a Z-Wave Lock Command Class."""
CMD_OPEN_DOOR = 0
CMD_CLOSE_DOOR = 255
CMD_DLOCK_SETUP = 1
CMD_DLOCK_OP_ACTIVE_GET = 2
CMD_DLOCK_OP_PASSIVE_GET = 3
CMD_DLOCK_OP_SET = 4
CMD_DLOCK_CFG_ACTIVE_GET = 5
CMD_DLOCK_CFG_PASSIVE_GET = 6
def send_command(self, cmd, arg='', dev='dlck'):
"""Send a command to the Doorlock Command Class."""
        super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='dlck'):
"""Send a command to the Doorlock Command Class that returns data."""
r = self.zware.zw_api('zwif_' + dev, 'cmd={}&ifd={}'.format(cmd, self.id) + arg)
if cmd == self.CMD_DLOCK_OP_ACTIVE_GET or cmd == self.CMD_DLOCK_OP_PASSIVE_GET:
return r.find('./zwif/' + dev + '_op')
elif cmd == self.CMD_DLOCK_CFG_ACTIVE_GET or cmd == self.CMD_DLOCK_CFG_PASSIVE_GET:
return r.find('./zwif/' + dev + '_cfg')
return r
def get_status(self, active=False):
"""Get status from the Doorlock Command Class."""
cmd = self.CMD_DLOCK_OP_ACTIVE_GET if active else self.CMD_DLOCK_OP_PASSIVE_GET
sts_lock_door = self.ret_command(cmd)
self.status = (int(sts_lock_door.get('mode')) == self.CMD_OPEN_DOOR)
return {'is_open': self.status}
def lock(self):
"""Operate the Doorlock Command Class to lock."""
self.send_command(self.CMD_DLOCK_SETUP) # Select this Command Class.
self.send_command(self.CMD_DLOCK_OP_SET, '&mode=' + str(self.CMD_CLOSE_DOOR))
    def unlock(self):
"""Operate the Doorlock Command Class to unlock."""
self.send_command(self.CMD_DLOCK_SETUP) # Select this Command Class.
self.send_command(self.CMD_DLOCK_OP_SET, '&mode=' + str(self.CMD_OPEN_DOOR))
class zwBattery(zwInterface):
"""Representation of a Z-Wave Battery Command Class."""
CMD_BATTERY_ACTIVE_GET = 2
CMD_BATTERY_PASSIVE_GET = 3
def send_command(self, cmd, arg='', dev='battery'):
"""Send a command to the Battery Command Class."""
        super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='battery'):
"""Send a command to the Battery Command Class that returns data."""
        return super().ret_command(cmd, arg=arg, dev=dev)
def get_status(self, active=False):
"""Get the battery level of the lock."""
cmd = self.CMD_BATTERY_ACTIVE_GET if active else self.CMD_BATTERY_PASSIVE_GET
status = self.ret_command(cmd)
self.status = int(status.get('level'))
return {'battery': self.status}
class zwUserCode(zwInterface):
"""Representation of a Z-Wave User Code Command Class."""
CMD_USER_CODE_ACTIVE_GET = 1
CMD_USER_CODE_PASSIVE_GET = 2
CMD_USER_CODE_SET = 3
CMD_USER_CODE_USERS_ACTIVE_GET = 4
CMD_USER_CODE_USERS_PASSIVE_GET = 5
CMD_USER_CODE_MASTER_ACTIVE_GET = 11
CMD_USER_CODE_MASTER_PASSIVE_GET = 12
CMD_USER_CODE_MASTER_SET = 13
STATUS_UNOCCUPIED = 0
STATUS_OCCUPIED_ENABLED = 1
STATUS_OCCUPIED_DISABLED = 3
STATUS_NON_ACCESS_USER = 4
def send_command(self, cmd, arg='', dev='usrcod'):
"""Send a command to the User Code Command Class."""
        super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='usrcod'):
"""Send a command to the User Code Command Class that returns data."""
r = self.zware.zw_api('zwif_' + dev, 'cmd={}&ifd={}'.format(cmd, self.id) + arg)
if cmd == self.CMD_USER_CODE_ACTIVE_GET or cmd == self.CMD_USER_CODE_PASSIVE_GET:
return r.find('./zwif/' + dev)
elif cmd == self.CMD_USER_CODE_USERS_ACTIVE_GET or cmd == self.CMD_USER_CODE_USERS_PASSIVE_GET:
return r.find('./zwif/usrcod_sup')
return r
def get_master_code(self, active=False):
"""Get the master code. Only if the specific devices' User Code Command Class supports it."""
cmd = self.CMD_USER_CODE_MASTER_ACTIVE_GET if active else self.CMD_USER_CODE_MASTER_PASSIVE_GET
status = self.ret_command(cmd)
return {'master_code': status.get('master_code')}
async def set_master_code(self, code, verify=True):
"""Set the master code. Only if the specific devices' User Code Command Class supports it."""
try:
self.send_command(self.CMD_USER_CODE_MASTER_SET, arg='&master_code={}'.format(code))
        except Exception:
return False
if verify:
timeout = 0
while self.ret_command(self.CMD_USER_CODE_MASTER_ACTIVE_GET).get('master_code') != str(code):
if timeout >= 20:
return False
await asyncio.sleep(1)
timeout += 1
return True
async def set_codes(self, user_ids: list, status: list, codes=None, verify=True):
"""Set a list of code slots to the given statuses and codes."""
user_ids = ",".join(user_ids)
status = ",".join(status)
codes = ",".join(codes if codes else [])
try:
self.send_command(self.CMD_USER_CODE_SET, '&id={}&status={}&code={}'
.format(user_ids, status, codes))
except:
return False
if verify:
timeout = 0
first_user = user_ids[0]
first_status = status[0]
first_code = codes[0]
while not self.is_code_set(first_user, first_status, first_code):
# Assume that if the first code was set correctly, all codes were.
if timeout >= 20:
return False
await asyncio.sleep(1)
timeout += 1
return True
async def remove_single_code(self, user_id, verify=True):
"""Set a single code to unoccupied status."""
        return await self.set_codes([user_id], [str(self.STATUS_UNOCCUPIED)], verify=verify)
async def disable_single_code(self, user_id, code, verify=True):
"""Set a single code to occupied/disabled status."""
        return await self.set_codes([user_id], [str(self.STATUS_OCCUPIED_DISABLED)], codes=[code], verify=verify)
async def get_all_users(self, active=False):
"""Get a dictionary of the status of all users in the lock."""
cmd = self.CMD_USER_CODE_USERS_ACTIVE_GET if active else self.CMD_USER_CODE_USERS_PASSIVE_GET
max_users = int(self.ret_command(cmd).get('user_cnt'))
users = {}
for i in range(1, max_users + 1):
cmd = self.CMD_USER_CODE_ACTIVE_GET if active else self.CMD_USER_CODE_PASSIVE_GET
code = self.ret_command(cmd, arg='&id={}'.format(i))
users[str(i)] = {
"status": code.get('status'),
"code": code.get('code'),
"update": code.get('utime'),
}
return users
def is_code_set(self, user_id, status, code):
"""Check if a code and status are set in a given id."""
code_obj = self.ret_command(self.CMD_USER_CODE_ACTIVE_GET, '&id={}'.format(user_id))
        if status == str(self.STATUS_UNOCCUPIED):
return code_obj.get('status') == status
return code_obj.get('status') == status and code_obj.get('code') == code
class zwAlarm(zwInterface):
"""Representation of a Z-Wave Alarm Command Class."""
CMD_ALARM_ACTIVE_GET = 2
CMD_ALARM_PASSIVE_GET = 3
def send_command(self, cmd, arg='', dev='alrm'):
"""Send a command to the Alarm Command Class."""
        super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='alrm'):
"""Send a command to the Alarm Command Class that returns data."""
        return super().ret_command(cmd, arg=arg, dev=dev)
def get_alarm_status(self, alarm_vtype):
"""Get the last alarm status of an specific alarm type."""
status = self.ret_command(self.CMD_ALARM_ACTIVE_GET, '&vtype={}'.format(alarm_vtype))
        name = self.get_event_description(status.get('vtype'), status.get('level'),
                                           status.get('ztype'), status.get('event'))
return {'alarm_vtype': status.get('vtype'), 'event_description': name, 'ocurred_at': status.get('utime')}
def get_last_alarm(self):
"""Get the last alarm registered on the lock."""
status = self.ret_command(self.CMD_ALARM_PASSIVE_GET, '&ztype=255')
        name = self.get_event_description(status.get('vtype'), status.get('level'),
                                           status.get('ztype'), status.get('event'))
return {'alarm_vtype': status.get('vtype'), 'event_description': name, 'ocurred_at': status.get('utime')}
def get_event_description(self, vtype, level, ztype, event):
"""Get the event description given the types and levels."""
name = EVENTS.get(ztype, dict()).get(event)
if name is None:
name = EVENTS["0"][vtype].get(level)
if name is None:
name = EVENTS[ztype][vtype]["default"].format(level)
        return name
| zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/lock.py | lock.py |
from . import zwInterface
class zwLock(zwInterface):
"""Representation of a Z-Wave Lock Command Class."""
CMD_OPEN_DOOR = 0
CMD_CLOSE_DOOR = 255
CMD_DLOCK_SETUP = 1
CMD_DLOCK_OP_ACTIVE_GET = 2
CMD_DLOCK_OP_PASSIVE_GET = 3
CMD_DLOCK_OP_SET = 4
CMD_DLOCK_CFG_ACTIVE_GET = 5
CMD_DLOCK_CFG_PASSIVE_GET = 6
def send_command(self, cmd, arg='', dev='dlck'):
"""Send a command to the Doorlock Command Class."""
super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='dlck'):
"""Send a command to the Doorlock Command Class that returns data."""
r = self.zware.zw_api('zwif_' + dev, 'cmd={}&ifd={}'.format(cmd, self.id) + arg)
if cmd == self.CMD_DLOCK_OP_ACTIVE_GET or cmd == self.CMD_DLOCK_OP_PASSIVE_GET:
return r.find('./zwif/' + dev + '_op')
elif cmd == self.CMD_DLOCK_CFG_ACTIVE_GET or cmd == self.CMD_DLOCK_CFG_PASSIVE_GET:
return r.find('./zwif/' + dev + '_cfg')
return r
def get_status(self, active=False):
"""Get status from the Doorlock Command Class."""
cmd = self.CMD_DLOCK_OP_ACTIVE_GET if active else self.CMD_DLOCK_OP_PASSIVE_GET
sts_lock_door = self.ret_command(cmd)
self.status = (int(sts_lock_door.get('mode')) == self.CMD_CLOSE_DOOR)
return {'is_locked': self.status}
def lock(self):
"""Operate the Doorlock Command Class to lock."""
self.send_command(self.CMD_DLOCK_SETUP) # Select this Command Class.
self.send_command(self.CMD_DLOCK_OP_SET, '&mode=' + str(self.CMD_CLOSE_DOOR))
def unlock(self):
"""Operate the Doorlock Command Class to unlock."""
self.send_command(self.CMD_DLOCK_SETUP) # Select this Command Class.
self.send_command(self.CMD_DLOCK_OP_SET, '&mode=' + str(self.CMD_OPEN_DOOR))
class zwLockLogging(zwInterface):
"""Representation of a Z-Wave Lock Logging Command Class."""
CMD_DLOCK_LOG_ACTIVE_GET = 2
CMD_DLOCK_LOG_PASSIVE_GET = 3
CMD_DLOCK_LOG_SUP_ACTIVE_GET = 2
CMD_DLOCK_LOG_SUP_PASSIVE_GET = 3
def send_command(self, cmd, arg='', dev='dlck_log'):
"""Send a command to the Doorlock Command Class."""
super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='dlck_log'):
"""Send a command to the Doorlock Command Class that returns data."""
r = self.zware.zw_api('zwif_' + dev, 'cmd={}&ifd={}'.format(cmd, self.id) + arg)
if cmd == self.CMD_DLOCK_LOG_SUP_ACTIVE_GET or cmd == self.CMD_DLOCK_LOG_SUP_PASSIVE_GET:
return r.find('./zwif/' + dev + '_sup')
return r
class zwScheduleEntry(zwInterface):
"""
Representation of a Z-Wave Schedule Entry Lock Command Class.
Not supported by Z-Ware API yet.
"""
def send_command(self, cmd, arg='', dev=''):
"""Send a command to the Schedule Entry Lock Command Class."""
raise NotImplementedError
def ret_command(self, cmd, arg='', dev=''):
"""Send a command to the Schedule Entry Lock Command Class that returns data."""
        raise NotImplementedError
| zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/interfaces/doorlock.py | doorlock.py |
from . import zwInterface
EVENTS = {
"0": {
"9": {
"1": "Deadbolt jammed while locking",
"2": "Deadbolt jammed while unlocking",
},
"18": {
"default": "Keypad Lock with user_id {}",
},
"19": {
"default": "Keypad Unlock with user_id {}",
},
"21": {
"1": "Manual Lock by Key Cylinder or Thumb-Turn",
"2": "Manual Lock by Touch Function",
"3": "Manual Lock by Inside Button",
},
"22": {
"1": "Manual Unlock Operation",
},
"24": {
"1": "RF Lock Operation",
},
"25": {
"1": "RF Unlock Operation",
},
"27": {
"1": "Auto re-lock cycle complete",
},
"33": {
"default": "Single user code deleted with user_id {}",
},
"38": {
"default": "Non access code entered with user_id {}",
},
"96": {
"default": "Daily Schedule has been set/erased for user_id {}"
},
"97": {
"default": "Daily Schedule has been enabled/disabled for user_id {}"
},
"98": {
"default": "Yearly Schedule has been set/erased for user_id {}"
},
"99": {
"default": "Yearly Schedule has been enabled/disabled for user_id {}"
},
"100": {
"default": "All Schedules have been set/erased for user_id {}"
},
"101": {
"default": "All Schedules have been enabled/disabled for user_id {}"
},
"112": {
"default": "New user code added with user_id {}",
"0": "Master Code was changed at keypad",
"251": "Master Code was changed over RF",
},
"113": {
"0": "Duplicate Master Code error",
"default": "Duplicate Pin-Code error with user_id {}",
},
"130": {
"0": "Door Lock needs Time Set"
},
"131": {
"default": "Disabled user_id {} code was entered at the keypad"
},
"132": {
"default": "Valid user_id {} code was entered outside of schedule"
},
"161": {
"1": "Keypad attempts exceed limit",
"2": "Front Escutcheon removed from main",
"3": "Master Code attempts exceed limit",
},
"167": {
"default": "Low Battery Level {}",
},
"168": {
"default": "Critical Battery Level {}",
}
},
"6": {
"0": "State idle",
"1": "Manual Lock Operation",
"2": "Manual Unlock Operation",
"3": "RF Lock Operation",
"4": "RF Unlock Operation",
"5": "Keypad Lock Operation",
"6": "Keypad Unlock Operation",
"7": "Manual Not Fully Locked Operation",
"8": "RF Not Fully Locked Operation",
"9": "Auto Lock Locked Operation",
"10": "Auto Lock Not Fully Operation",
"11": "Lock Jammed",
"12": "All user codes deleted",
"13": "Single user code deleted",
"14": "New user code added",
"15": "New user code not added due to duplicate code",
"16": "Keypad temporary disabled",
"17": "Keypad busy",
"18": "New Program code Entered - Unique code for lock configuration",
"19": "Manually Enter user Access code exceeds code limit",
"20": "Unlock By RF with invalid user code",
"21": "Locked by RF with invalid user codes",
"22": "Window/Door is open",
"23": "Window/Door is closed",
"24": "Window/door handle is open",
"25": "Window/door handle is closed",
"32": "Messaging User Code entered via keypad",
"64": "Barrier performing Initialization process",
"65": "Barrier operation (Open / Close) force has been exceeded.",
"66": "Barrier motor has exceeded manufacturer’s operational time limit",
"67": "Barrier operation has exceeded physical mechanical limits.",
"68": "Barrier unable to perform requested operation due to UL requirements",
"69": "Barrier Unattended operation has been disabled per UL requirements",
"70": "Barrier failed to perform Requested operation, device malfunction",
"71": "Barrier Vacation Mode",
"72": "Barrier Safety Beam Obstacle",
"73": "Barrier Sensor Not Detected / Supervisory Error",
"74": "Barrier Sensor Low Battery Warning",
"75": "Barrier detected short in Wall Station wires",
"76": "Barrier associated with non-Z-wave remote control",
"254": "Unknown Event"
},
"8": {
"1": "Door Lock needs Time Set",
"10": "Low Battery",
"11": "Critical Battery Level"
}
}
def get_event_description(vtype, level, ztype, event):
"""Get the event description given the types and levels."""
vtype_ev = EVENTS["0"].get(vtype)
name = None
if vtype_ev:
name = vtype_ev.get(level)
if name is None:
name = vtype_ev.get("default", "{}").format(level)
if name is None or name == str(level):
name = EVENTS.get(ztype, dict()).get(event)
return name
class zwAlarm(zwInterface):
"""Representation of a Z-Wave Alarm Command Class."""
CMD_ALARM_ACTIVE_GET = 2
CMD_ALARM_PASSIVE_GET = 3
def send_command(self, cmd, arg='', dev='alrm'):
"""Send a command to the Alarm Command Class."""
super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='alrm'):
"""Send a command to the Alarm Command Class that returns data."""
return super().ret_command(cmd, arg=arg, dev=dev)
def get_alarm_status(self, alarm_vtype):
"""Get the last alarm status of an specific alarm type."""
status = self.ret_command(self.CMD_ALARM_ACTIVE_GET, '&vtype={}'.format(alarm_vtype))
        name = get_event_description(status.get('vtype'), status.get('level'),
                                     status.get('ztype'), status.get('event'))
        return {'alarm_vtype': status.get('vtype'), 'event_description': name, 'occurred_at': status.get('utime')}
def get_last_alarm(self):
"""Get the last alarm registered on the lock."""
status = self.ret_command(self.CMD_ALARM_PASSIVE_GET, '&ztype=255')
if status is None:
return {'event_description': 'Offline'}
name = get_event_description(status.get('vtype'), status.get('level'),
status.get('ztype'), status.get('event'))
return {
'alarm_vtype': status.get('vtype'),
'alarm_level': status.get('level'),
'alarm_ztype': status.get('ztype'),
'alarm_event': status.get('event'),
'event_description': name,
'occurred_at': status.get('utime')
        }
| zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/interfaces/alarm.py | alarm.py |
import time
from . import zwInterface
class zwUserCode(zwInterface):
"""Representation of a Z-Wave User Code Command Class."""
CMD_USER_CODE_ACTIVE_GET = 1
CMD_USER_CODE_PASSIVE_GET = 2
CMD_USER_CODE_SET = 3
CMD_USER_CODE_USERS_ACTIVE_GET = 4
CMD_USER_CODE_USERS_PASSIVE_GET = 5
CMD_USER_CODE_MASTER_ACTIVE_GET = 11
CMD_USER_CODE_MASTER_PASSIVE_GET = 12
CMD_USER_CODE_MASTER_SET = 13
STATUS_UNOCCUPIED = 0
STATUS_OCCUPIED_ENABLED = 1
STATUS_OCCUPIED_DISABLED = 3
STATUS_NON_ACCESS_USER = 4
def send_command(self, cmd, arg='', dev='usrcod'):
"""Send a command to the User Code Command Class."""
super().send_command(cmd, arg=arg, dev=dev)
def ret_command(self, cmd, arg='', dev='usrcod'):
"""Send a command to the User Code Command Class that returns data."""
r = self.zware.zw_api('zwif_' + dev, 'cmd={}&ifd={}'.format(cmd, self.id) + arg)
if cmd == self.CMD_USER_CODE_ACTIVE_GET or cmd == self.CMD_USER_CODE_PASSIVE_GET:
return r.find('./zwif/usrcod')
elif cmd == self.CMD_USER_CODE_USERS_ACTIVE_GET or cmd == self.CMD_USER_CODE_USERS_PASSIVE_GET:
return r.find('./zwif/usrcod_sup')
return r
def get_master_code(self, active=False):
"""Get the master code. Only if the specific devices' User Code Command Class supports it."""
cmd = self.CMD_USER_CODE_MASTER_ACTIVE_GET if active else self.CMD_USER_CODE_MASTER_PASSIVE_GET
status = self.ret_command(cmd)
return {'master_code': status.get('master_code')}
def set_master_code(self, code, verify=True):
"""Set the master code. Only if the specific devices' User Code Command Class supports it."""
self.send_command(self.CMD_USER_CODE_MASTER_SET, arg='&master_code={}'.format(code))
if verify:
timeout = 0
while self.ret_command(self.CMD_USER_CODE_MASTER_ACTIVE_GET).get('master_code') != str(code):
if timeout >= 20:
return False
time.sleep(1)
timeout += 1
return True
def set_codes(self, user_ids: list, status: list, codes=None, verify=True):
"""Set a list of code slots to the given statuses and codes."""
first_user = user_ids[0]
first_status = status[0]
first_code = codes[0] if codes else None
user_ids = ",".join(user_ids)
status = ",".join(status)
codes = ",".join(codes if codes else [])
self.send_command(self.CMD_USER_CODE_SET, '&id={}&status={}&code={}'
.format(user_ids, status, codes))
if verify:
timeout = 0
while not self.is_code_set(first_user, first_status, first_code):
# Assume that if the first code was set correctly, all codes were.
if timeout >= 20:
return False
time.sleep(1)
timeout += 1
return True
def get_code(self, user_id, active=False):
"""Get the code from a user_id and its status."""
cmd = self.CMD_USER_CODE_ACTIVE_GET if active else self.CMD_USER_CODE_PASSIVE_GET
status = self.ret_command(cmd, arg='&id={}'.format(user_id))
if status is None:
return {"user_id": user_id, "status": self.STATUS_UNOCCUPIED, "code": None, "error": "Not found"}
return {"user_id": status.get("id"), "status": status.get("status"), "code": status.get("code")}
def remove_single_code(self, user_id, verify=True):
"""Set a single code to unoccupied status."""
return self.set_codes([user_id], [str(self.STATUS_UNOCCUPIED)], verify=verify)
def disable_single_code(self, user_id, code, verify=True):
"""Set a single code to occupied/disabled status."""
        return self.set_codes([user_id], [str(self.STATUS_OCCUPIED_DISABLED)], codes=[code], verify=verify)
def get_all_users(self, active=False):
"""Get a dictionary of the status of all users in the lock."""
cmd = self.CMD_USER_CODE_USERS_ACTIVE_GET if active else self.CMD_USER_CODE_USERS_PASSIVE_GET
max_users = int(self.ret_command(cmd).get('user_cnt'))
users = {}
for i in range(1, max_users + 1):
cmd = self.CMD_USER_CODE_ACTIVE_GET if active else self.CMD_USER_CODE_PASSIVE_GET
code = self.ret_command(cmd, arg='&id={}'.format(i))
if code is not None:
users[str(i)] = {
"status": code.get('status'),
"code": code.get('code'),
"update": code.get('utime'),
}
return users
def is_code_set(self, user_id, status, code):
"""Check if a code and status are set in a given id."""
code_obj = self.ret_command(self.CMD_USER_CODE_ACTIVE_GET, '&id={}'.format(user_id))
if code_obj is None:
return False
if status == str(self.STATUS_UNOCCUPIED):
return code_obj.get('status') == status
        return code_obj.get('status') == status and code_obj.get('code') == code
| zware-api | /zware_api-0.0.25.tar.gz/zware_api-0.0.25/zware_api/interfaces/usercode.py | usercode.py |
# convolutional network metric scripts
- Code for fast watersheds, based on the watershed implementation from https://bitbucket.org/poozh/watershed described in http://arxiv.org/abs/1505.00249. For use in https://github.com/naibaf7/PyGreentea.
# building
### conda
- `conda install -c conda-forge zwatershed`
### pip [<img src="https://img.shields.io/pypi/v/zwatershed.svg?maxAge=2592000">](https://pypi.python.org/pypi/zwatershed/)
- `pip install zwatershed`
### from source
- clone the repository
- run ./make.sh
### requirements
- numpy, h5py, cython
- if using parallel watershed, also requires multiprocessing or pyspark
- in order to build the cython, requires a c++ compiler and boost
# function api
- `(segs, rand) = zwatershed_and_metrics(segTrue, aff_graph, eval_thresh_list, seg_save_thresh_list)`
- *returns segmentations and metrics*
- `segs`: list of segmentations
- `len(segs) == len(seg_save_thresh_list)`
- `rand`: dict
- `rand['V_Rand']`: V_Rand score (scalar)
- `rand['V_Rand_split']`: list of score values
- `len(rand['V_Rand_split']) == len(eval_thresh_list)`
- `rand['V_Rand_merge']`: list of score values,
- `len(rand['V_Rand_merge']) == len(eval_thresh_list)`
- `segs = zwatershed(aff_graph, seg_save_thresh_list)`
- *returns segmentations*
- `segs`: list of segmentations
- `len(segs) == len(seg_save_thresh_list)`
##### These methods have versions which save the segmentations to hdf5 files instead of returning them
- `rand = zwatershed_and_metrics_h5(segTrue, aff_graph, eval_thresh_list, seg_save_thresh_list, seg_save_path)`
- `zwatershed_h5(aff_graph, eval_thresh_list, seg_save_path)`
##### All 4 methods have versions which take an edgelist representation of the affinity graph
- `(segs, rand) = zwatershed_and_metrics_arb(segTrue, node1, node2, edgeWeight, eval_thresh_list, seg_save_thresh_list)`
- `segs = zwatershed_arb(seg_shape, node1, node2, edgeWeight, seg_save_thresh_list)`
- `rand = zwatershed_and_metrics_h5_arb(segTrue, node1, node2, edgeWeight, eval_thresh_list, seg_save_thresh_list, seg_save_path)`
- `zwatershed_h5_arb(seg_shape, node1, node2, edgeWeight, eval_thresh_list, seg_save_path)`
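A minimal usage sketch of the in-memory functions. The array shapes, dtypes, and threshold values below are illustrative assumptions, and the top-level import path is assumed from the package name:
```python
import numpy as np
from zwatershed import zwatershed, zwatershed_and_metrics  # assumed import path

# Illustrative inputs: 3 affinity channels over a (z, y, x) volume plus a ground-truth segmentation.
aff_graph = np.random.rand(3, 64, 64, 64).astype(np.float32)
seg_true = np.zeros((64, 64, 64), dtype=np.uint32)
eval_thresh_list = [100, 1000]       # placeholder thresholds
seg_save_thresh_list = [100, 1000]

# Segmentations only.
segs = zwatershed(aff_graph, seg_save_thresh_list)

# Segmentations plus Rand scores against the ground truth.
segs, rand = zwatershed_and_metrics(seg_true, aff_graph, eval_thresh_list, seg_save_thresh_list)
print(rand['V_Rand'], rand['V_Rand_split'], rand['V_Rand_merge'])
```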
# parallel watershed - 4 steps
- *a full example is given in par_ex.ipynb*
1. Partition the subvolumes
- `partition_data = partition_subvols(pred_file,out_folder,max_len)`
- evenly divides the data in *pred_file* with the constraint that no dimension of any subvolume is longer than max_len
2. Zwatershed the subvolumes
1. `eval_with_spark(partition_data[0])`
- *with spark*
2. `eval_with_par_map(partition_data[0],NUM_WORKERS)`
- *with python multiprocessing map*
- after evaluating, subvolumes will be saved into the out\_folder directory named based on their smallest indices in each dimension (ex. path/to/out\_folder/0\_0\_0\_vol)
3. Stitch the subvolumes together
- `stitch_and_save(partition_data,outname)`
- stitch together the subvolumes in partition_data
- save to the hdf5 file outname
- outname['starts'] = list of min_indices of each subvolume
- outname['ends'] = list of max_indices of each subvolume
- outname['seg'] = full stitched segmentation
- outname['seg_sizes'] = array of size of each segmentation
- outname['rg_i'] = region graph for ith subvolume
4. Threshold individual subvolumes by merging
- `seg_merged = merge_by_thresh(seg,seg_sizes,rg,thresh)`
  - load in these arguments from outname
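
Putting the four steps together, a condensed sketch of the pipeline (file names, `max_len`, worker count, and the hdf5 key for the per-subvolume region graph are placeholder assumptions; see par_ex.ipynb for the full, working example):
```python
import h5py
from zwatershed import (partition_subvols, eval_with_par_map,  # assumed import path
                        stitch_and_save, merge_by_thresh)

pred_file, out_folder, outname = 'pred.h5', 'out/', 'stitched.h5'  # placeholders

# 1. Partition the prediction volume; no subvolume dimension exceeds max_len.
partition_data = partition_subvols(pred_file, out_folder, 320)

# 2. Zwatershed each subvolume with a python multiprocessing map (8 workers here).
eval_with_par_map(partition_data[0], 8)

# 3. Stitch the subvolume segmentations and save the result to one hdf5 file.
stitch_and_save(partition_data, outname)

# 4. Threshold by merging, using the arrays saved in step 3.
with h5py.File(outname, 'r') as f:
    seg = f['seg'][:]
    seg_sizes = f['seg_sizes'][:]
    rg = f['rg_1'][:]  # region graph of one subvolume; exact key name is an assumption
seg_merged = merge_by_thresh(seg, seg_sizes, rg, 1000)
```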
| zwatershed | /zwatershed-0.10.tar.gz/zwatershed-0.10/README.md | README.md |
from __future__ import annotations
import logging
from dataclasses import dataclass, field
from typing import Callable, Literal
try:
from pydantic.v1 import BaseModel
except ImportError:
from pydantic import BaseModel
LOGGER = logging.getLogger(__package__)
class BaseEventModel(BaseModel):
"""Base model for an event."""
source: Literal["controller", "driver", "node"]
event: str
@dataclass
class Event:
"""Represent an event."""
type: str
data: dict = field(default_factory=dict)
class EventBase:
"""Represent a Z-Wave JS base class for event handling models."""
def __init__(self) -> None:
"""Initialize event base."""
self._listeners: dict[str, list[Callable]] = {}
def on( # pylint: disable=invalid-name
self, event_name: str, callback: Callable
) -> Callable:
"""Register an event callback."""
listeners: list = self._listeners.setdefault(event_name, [])
listeners.append(callback)
def unsubscribe() -> None:
"""Unsubscribe listeners."""
if callback in listeners:
listeners.remove(callback)
return unsubscribe
def once(self, event_name: str, callback: Callable) -> Callable:
"""Listen for an event exactly once."""
def event_listener(data: dict) -> None:
unsub()
callback(data)
unsub = self.on(event_name, event_listener)
return unsub
def emit(self, event_name: str, data: dict) -> None:
"""Run all callbacks for an event."""
for listener in self._listeners.get(event_name, []).copy():
listener(data)
def _handle_event_protocol(self, event: Event) -> None:
"""Process an event based on event protocol."""
handler = getattr(self, f"handle_{event.type.replace(' ', '_')}", None)
if handler is None:
LOGGER.debug("Received unknown event: %s", event)
return
        handler(event)
| zwave-js-server-python | /zwave_js_server_python-0.51.0-py3-none-any.whl/zwave_js_server/event.py | event.py |
from __future__ import annotations
import argparse
import asyncio
import logging
import sys
import aiohttp
from .client import Client
from .dump import dump_msgs
from .version import get_server_version
logger = logging.getLogger(__package__)
def get_arguments() -> argparse.Namespace:
"""Get parsed passed in arguments."""
parser = argparse.ArgumentParser(description="Z-Wave JS Server Python")
parser.add_argument("--debug", action="store_true", help="Log with debug level")
parser.add_argument(
"--server-version", action="store_true", help="Print the version of the server"
)
parser.add_argument(
"--dump-state", action="store_true", help="Dump the driver state"
)
parser.add_argument(
"--event-timeout",
help="How long to listen for events when dumping state",
)
parser.add_argument(
"url",
type=str,
help="URL of server, ie ws://localhost:3000",
)
arguments = parser.parse_args()
return arguments
async def start_cli() -> None:
"""Run main."""
args = get_arguments()
level = logging.DEBUG if args.debug else logging.INFO
logging.basicConfig(level=level)
async with aiohttp.ClientSession() as session:
if args.server_version:
await print_version(args, session)
elif args.dump_state:
await handle_dump_state(args, session)
else:
await connect(args, session)
async def print_version(
args: argparse.Namespace, session: aiohttp.ClientSession
) -> None:
"""Print the version of the server."""
logger.setLevel(logging.WARNING)
version = await get_server_version(args.url, session)
print("Driver:", version.driver_version)
print("Server:", version.server_version)
print("Home ID:", version.home_id)
async def handle_dump_state(
args: argparse.Namespace, session: aiohttp.ClientSession
) -> None:
"""Dump the state of the server."""
timeout = None if args.event_timeout is None else float(args.event_timeout)
msgs = await dump_msgs(args.url, session, timeout=timeout)
for msg in msgs:
print(msg)
async def connect(args: argparse.Namespace, session: aiohttp.ClientSession) -> None:
"""Connect to the server."""
async with Client(args.url, session) as client:
driver_ready = asyncio.Event()
asyncio.create_task(on_driver_ready(client, driver_ready))
await client.listen(driver_ready)
async def on_driver_ready(client: Client, driver_ready: asyncio.Event) -> None:
"""Act on driver ready."""
await driver_ready.wait()
assert client.driver
# Set up listeners on new nodes
client.driver.controller.on(
"node added",
lambda event: event["node"].on("value updated", log_value_updated),
)
# Set up listeners on existing nodes
for node in client.driver.controller.nodes.values():
node.on("value updated", log_value_updated)
def log_value_updated(event: dict) -> None:
"""Log node value changes."""
node = event["node"]
value = event["value"]
if node.device_config:
description = node.device_config.description
else:
description = f"{node.device_class.generic} (missing device config)"
logger.info(
"Node %s %s (%s) changed to %s",
description,
value.property_name or "",
value.value_id,
value.value,
)
def main() -> None:
"""Run main."""
try:
asyncio.run(start_cli())
except KeyboardInterrupt:
pass
sys.exit(0)
if __name__ == "__main__":
    main()
| zwave-js-server-python | /zwave_js_server_python-0.51.0-py3-none-any.whl/zwave_js_server/__main__.py | __main__.py |