==========================
``zodbshootout`` Results
==========================
The table below shows typical output of running ``zodbshootout`` with
``etc/sample.conf`` on a dual core, 2.1 GHz laptop::
"Transaction", postgresql, mysql, mysql_mc, zeo_fs
"Add 1000 Objects", 6529, 10027, 9248, 5212
"Update 1000 Objects", 6754, 9012, 8064, 4393
"Read 1000 Warm Objects", 4969, 6147, 21683, 1960
"Read 1000 Cold Objects", 5041, 10554, 5095, 1920
"Read 1000 Hot Objects", 38132, 37286, 37826, 37723
"Read 1000 Steamin' Objects", 4591465, 4366792, 3339414, 4534382
``zodbshootout`` runs six kinds of tests for each database. For each
test, ``zodbshootout`` instructs all processes (or threads or
greenlets, as configured) to perform similar transactions
concurrently, computes the mean duration of the concurrent
transactions, takes the mean timing of three test runs, and derives
how many objects per second the database is capable of writing or
reading under the given conditions.
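For example, if a transaction writes 1000 objects and the mean
concurrent transaction duration across the three runs comes out at
0.153 seconds, the reported rate is roughly 1000 / 0.153 ≈ 6536
objects per second. A minimal sketch of that arithmetic (the numbers
are illustrative, not taken from the table above)::

    objects = 1000
    mean_duration = 0.153  # seconds, averaged over transactions and runs
    rate = objects / mean_duration  # ~6536 objects per second
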
``zodbshootout`` runs these tests:
* Add objects
``zodbshootout`` begins a transaction, adds the specified number
of persistent objects to a
  :class:`~persistent.mapping.PersistentMapping` or ``BTree``, and
commits the transaction. In the sample output above, MySQL was
able to add 10027 objects per second to the database, almost twice
as fast as ZEO, which was limited to 5212 objects per second.
Also, with memcached support enabled, MySQL write performance took
a small hit due to the time spent storing objects in memcached.
* Update objects
In the same process, without clearing any caches, ``zodbshootout``
makes a simple change to each of the objects just added and commits
the transaction. The sample output above shows that MySQL and ZEO
typically take a little longer to update objects than to add new
objects, while PostgreSQL is faster at updating objects in this case.
The sample tests only history-preserving databases; you may see
different results with history-free databases.
* Read warm objects
In a different process, without clearing any caches,
``zodbshootout`` reads all of the objects just added. This test
favors databases that use either a persistent cache or a cache
shared by multiple processes (such as memcached). In the sample
output above, this test with MySQL and memcached runs more than ten
times faster than ZEO without a persistent cache. (See
``fs-sample.conf`` for a test configuration that includes a ZEO
persistent cache.)
In shared thread mode, the database is not closed and reopened, so
with concurrency greater than 1, this test is a measure of a
shared pickle cache. When concurrency is 1, this test is
equivalent to the steamin' test.
* Read cold objects
In the same process as was used for reading warm objects,
``zodbshootout`` clears all ZODB caches (the pickle cache, the ZEO
cache, and/or memcached) then reads all of the objects written by
the update test. This test favors databases that read objects
quickly, independently of caching. The sample output above shows
that cold read time is currently a significant ZEO weakness.
* Read prefetched cold objects
  This is just like the previous test, except that the objects are
  prefetched using the ZODB 5 API. This demonstrates the benefit, if
  any, of bulk prefetching implemented by a database.
* Read hot objects
In the same process as was used for reading cold objects,
``zodbshootout`` clears the in-memory ZODB caches (the pickle
cache), but leaves the other caches intact, then reads all of the
objects written by the update test. This test favors databases that
have a process-specific cache. In the sample output above, all of
the databases have that type of cache.
* Read steamin' objects
In the same process as was used for reading hot objects,
``zodbshootout`` once again reads all of the objects written by the
update test. This test favors databases that take advantage of the
ZODB pickle cache. As can be seen from the sample output above,
accessing an object from the ZODB pickle cache is around 100
times faster than any operation that requires network access or
unpickling.
==============
Installation
==============
``zodbshootout`` can be installed using ``pip``::

    pip install zodbshootout
This will install the :doc:`zodbshootout <zodbshootout>` script along
with ZODB and ZEO. To test other storages (such as ``RelStorage``) or
storage wrappers (such as ``zc.zlibstorage``) you'll need to install
those packages as well.
RelStorage
==========
``zodbshootout`` comes with extras that install RelStorage plus an
appropriate database adapter/driver for a specific database::
pip install "zodbshootout[mysql]"
pip install "zodbshootout[postgresql]"
pip install "zodbshootout[oracle]"
.. note:: This does not actually install the databases. You will need
to install those separately (possibly using your operating
system's package manager) and create user accounts as
described `in the RelStorage documentation
<http://relstorage.readthedocs.io/en/latest/configure-database.html>`_.
.. tip:: This does not install the packages necessary for RelStorage
to integrate with Memcache. See the RelStorage documentation
`for more information
<http://relstorage.readthedocs.io/en/latest/install.html#memcache-integration>`_
on the packages needed to test RelStorage and Memcache.
ZEO
===
When ``zodbshootout`` is installed, ZEO is also installed. To test
ZEO's performance, you'll need to have the ZEO process running, as
described `in the ZEO documentation <https://pypi.python.org/pypi/ZEO/5.1.0#running-the-server>`_.
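For a quick local test, a ZEO server can typically be started with the
``runzeo`` script that ships with ZEO (the address and storage path
below are placeholders)::

    runzeo -a 127.0.0.1:9001 -f /tmp/Data.fs
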
================
ZODB Shoot Out
================
This application measures and compares the performance of various
ZODB storages and configurations. It is derived from the RelStorage
speedtest script, but this version allows arbitrary storage types and
configurations, provides more measurements, and produces numbers that
are easier to interpret.
.. toctree::
install
zodbshootout
results
changelog
=============
Development
=============
.. image:: https://travis-ci.org/zodb/zodbshootout.png?branch=master
:target: https://travis-ci.org/zodb/zodbshootout
.. image:: https://coveralls.io/repos/zodb/zodbshootout/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/zodb/zodbshootout?branch=master
.. image:: https://readthedocs.org/projects/zodbshootout/badge/?version=latest
:target: http://zodbshootout.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
zodbshootout is hosted at GitHub:
https://github.com/zodb/zodbshootout
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint:disable=too-many-locals
import argparse
import json
import os
import tempfile
import pyperf
import matplotlib
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn
def _fix_database(n, version=''):
result = n
if 'mysql' in n.lower():
result = 'MySQL'
if 'psycopg2' in n.lower():
result = 'PostgreSQL'
if 'zeo' in n.lower():
result = 'ZEO'
elif 'fs' in n.lower() or 'filestorage' in n.lower():
result = 'FileStorage'
if 'sqlite' in n.lower():
result = 'SQLite'
if version:
result = f'{result} ({version})'
return result
def suite_to_benchmark_data(_args, benchmark_suite, version=''):
"""
Return a DataFrame containing every observation.
"""
rows = []
for benchmark in benchmark_suite.get_benchmarks():
# {c=1 processes, o=100} mysqlclient_hf: read 100 hot objects'
name = benchmark.get_name()
if '(disabled)' in name:
continue
# '{c=1 processes, o=100', ' mysqlclient_hf: read 100 hot objects'
prefix, suffix = name.rsplit('}', 1)
if 'processes' in prefix:
ConcurrencyKind = 'processes'
elif 'greenlets' in prefix:
ConcurrencyKind = 'greenlets'
else:
assert 'threads' in prefix
ConcurrencyKind = 'threads'
prefix = prefix.replace(' processes', '').replace(' threads', '')
prefix = prefix.replace(' greenlets', '')
prefix += '}'
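        # Turn the remaining '{c=1, o=100}' into valid JSON, i.e.
        # '{"c":1, "o":100}'. This is only safe because the words that
        # also contain 'c'/'o' (processes/threads/greenlets) were
        # stripped from the prefix above.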
d = json.loads(prefix.replace('c', '"c"').replace('o', '"o"').replace('=', ':'))
Concurrency = d['c']
Objects = d['o']
Database, suffix = suffix.strip().split(':', 1)
suffix = suffix.strip()
Database = _fix_database(Database, version)
if version and Database.startswith('ZEO'):
# Exclude ZEO from these comparisons.
# (It messes up our pairing)
continue
Action = suffix.replace(str(Objects) + ' ', '')
for run in benchmark.get_runs():
for value in run.values:
row = dict(
concurrency_kind=ConcurrencyKind, database=Database,
concurrency=Concurrency,
action=Action, objects=Objects, duration=value,
version=version,
)
rows.append(row)
df = DataFrame(rows)
return df
def save_one(df, benchmark_name, outdir, palette=None,
# The x and y are for an individual graph in the matrix
x="concurrency", y="duration",
# The col and row define the columns and rows of the matrix
col="objects", row="concurrency_kind",
# while hue defines the category within an individual graph.
hue="database", hue_order=None,
show_y_ticks=False, kind="bar",
**kwargs):
fname = benchmark_name.replace(' ', '_').replace('/', '_').replace(':', '_')
fname = f'{fname}_{x}_{y}_{col}_{row}_{hue}_{kind}'
fig = seaborn.catplot(
x, y,
data=df,
        #kind="swarm", # The swarm plot is also interesting, as is point
kind=kind,
hue=hue,
hue_order=sorted(df[hue].unique()) if hue_order is None else hue_order,
col=col,
row=row,
palette=palette,
sharey=False,
legend=False,
**kwargs
)
if not show_y_ticks:
fig.set(yticks=[])
else:
fig.set(ylabel="ms")
fig.add_legend(title=benchmark_name)
for ext in ('.png',):
fig.savefig(os.path.join(outdir, fname) + ext, transparent=True)
fig.despine()
plt.close(fig.fig)
def save_all(df, outdir, versions=None, pref_db_order=None):
all_bmarks = df['action'].unique()
# The drawing functions use Cocoa and don't work on either threads
# or processes.
# pool = ProcessPoolExecutor()
# def _save_one(*args, **kwargs):
# pool.submit(save_one, *args, **kwargs)
_save_one = save_one
for bmark in all_bmarks:
action_data = df[df.action == bmark]
action_data = action_data[action_data.concurrency_kind != "greenlets"]
_save_one(
action_data, bmark, outdir,
palette='Paired' if versions or pref_db_order else None,
hue_order=pref_db_order,
)
if versions:
all_dbs_including_versions = df['database'].unique()
all_dbs = {
db.replace('(' + versions[0] + ')', '').replace(
'(' + versions[1] + ')', ''
).strip()
for db in all_dbs_including_versions
}
parent_out_dir = outdir
for root_db in all_dbs:
outdir = os.path.join(parent_out_dir, root_db)
os.makedirs(outdir, exist_ok=True)
db_v1 = f"{root_db} ({versions[0]})"
db_v2 = f"{root_db} ({versions[1]})"
db_df = df[df.database == db_v1]
db_df2 = df[df.database == db_v2]
db_df = db_df.append(db_df2)
for bmark in all_bmarks:
# adf: By database, by action
adf = db_df[db_df.action == bmark]
_save_one(
adf.query('concurrency > 1'),
f"{root_db}: {bmark}",
outdir,
x="concurrency_kind",
hue="database",
row="concurrency",
palette='Paired',
)
# This puts all three concurrencies together
# and emphasizes the differences between them.
_save_one(
adf.query('concurrency > 1'),
f"{root_db}: {bmark}",
outdir,
x="database",
hue="concurrency_kind",
row="concurrency",
palette='Accent',
order=sorted((db_v1, db_v2)),
)
cdf = adf[adf.objects == 20]
try:
_save_one(
cdf,
f"{root_db}: {bmark} | objects = 20",
outdir,
palette='Paired',
col="concurrency_kind", row="objects",
)
except ValueError:
continue
for ck in adf['concurrency_kind'].unique():
ckf = adf[adf.concurrency_kind == ck]
# ckf: drilldown by database, by action, by concurrency kind.
for oc in ckf['objects'].unique():
ocf = ckf[ckf.objects == oc]
_save_one(
ocf,
f"{root_db}: {bmark} ck={ck} o={oc}",
outdir,
palette='Paired',
col="concurrency_kind", row="objects",
show_y_ticks=True,
)
def main():
parser = argparse.ArgumentParser()
parser.add_argument('input', type=argparse.FileType('r'))
parser.add_argument('compare_to', type=argparse.FileType('r'), nargs='?')
parser.add_argument(
'--versions',
nargs=2,
help="If compare_to is given, you must specify the versions to tag identical "
"databases with."
)
args = parser.parse_args()
with args.input as f:
s = f.read()
suite = pyperf.BenchmarkSuite.loads(s)
if args.compare_to:
with args.compare_to as f:
compare_to = f.read()
compare_to = pyperf.BenchmarkSuite.loads(compare_to)
if args.compare_to:
pfx1 = args.versions[0]
pfx2 = args.versions[1]
df = suite_to_benchmark_data(args, suite, version=pfx1)
df2 = suite_to_benchmark_data(args, compare_to, version=pfx2)
df = df2.append(df)
outdir_basename = '%s_vs_%s' % (pfx1, pfx2)
else:
df = suite_to_benchmark_data(args, suite)
outdir_basename = os.path.basename(args.input.name)
outdir_basename, _ = os.path.splitext(outdir_basename)
# Convert seconds to milliseconds
df['duration'] = df['duration'] * 1000.0
if args.input.name == '<stdin>':
outdir = tempfile.mkdtemp()
else:
outdir_parent = os.path.dirname(args.input.name)
outdir = os.path.join(outdir_parent, 'images', outdir_basename)
os.makedirs(outdir, exist_ok=True)
print("Saving images to", outdir)
matplotlib.rcParams["figure.figsize"] = 20, 10
seaborn.set(style="white")
save_all(df, outdir, args.versions)
if __name__ == "__main__":
    main()
# Written in 2019 by Jason Madden <[email protected]>
# and placed in the public domain.
import argparse
import csv
import json
import pyperf
def _fix_database(n):
if 'mysql' in n.lower():
return 'MySQL'
if 'psycopg2' in n.lower():
return 'PostgreSQL'
if 'zeo' in n.lower():
return 'ZEO'
return n
def export_csv(args, benchmark_suite):
rows = []
fields = ('ShortLabel', 'LongLabel',
'Database', 'Concurrency', 'ConcurrencyKind', 'Objects',
'Action',
'Mean', )
for benchmark in benchmark_suite.get_benchmarks():
row = {}
# {c=1 processes, o=100} mysqlclient_hf: read 100 hot objects'
name = benchmark.get_name()
# '{c=1 processes, o=100', ' mysqlclient_hf: read 100 hot objects'
prefix, suffix = name.rsplit('}', 1)
row['ConcurrencyKind'] = 'processes' if 'processes' in prefix else 'threads'
prefix = prefix.replace(' processes', '').replace(' threads', '')
prefix += '}'
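        # '{c=1, o=100}' -> '{"c":1, "o":100}' so json.loads can parse it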
d = json.loads(prefix.replace('c', '"c"').replace('o', '"o"').replace('=', ':'))
row['Concurrency'] = d['c']
row['Objects'] = d['o']
row['Database'], suffix = suffix.strip().split(':', 1)
suffix = suffix.strip()
row['Database'] = _fix_database(row['Database'])
row['Action'] = suffix.replace(str(row['Objects']) + ' ', '')
row['Mean'] = benchmark.mean()
row['ShortLabel'] = f'{row["Database"]}: {row["Concurrency"]} {row["ConcurrencyKind"]}'
row['LongLabel'] = f'{row["ShortLabel"]}: {suffix}'
rows.append(row)
rows.sort(key=lambda row: (row['Database'], row['Concurrency'], row['ConcurrencyKind'],
row['Objects']))
with args.output:
writer = csv.DictWriter(args.output, fields)
writer.writeheader()
writer.writerows(rows)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('input', type=argparse.FileType('r'))
parser.add_argument('output', type=argparse.FileType('w'))
return parser.parse_args()
def main():
args = parse_args()
with args.input as input:
s = input.read()
suite = pyperf.BenchmarkSuite.loads(s)
export_csv(args, suite)
if __name__ == "__main__":
    main()
import os
import subprocess
import sys
import time
import traceback
import tempfile
# The type of runner to enable, and the arguments needed
# to use it.
# TODO: Be intelligent about picking gevent based on the drivers
procs = {
#'gevent': ('--threads', 'shared', '--gevent'),
'process': (),
'threads': ('--threads', 'shared'),
}
# The concurrency levels.
concs = [
1,
5,
20,
]
# How many objects
counts = [
1,
5,
20
]
# The virtual environment to use.
# Relies on virtualenvwrapper.
envs = [
#'relstorage38',
'relstorage27',
'relstorage27-rs2',
]
if 'ZS_MATRIX_ENV' in os.environ:
envs = [os.environ['ZS_MATRIX_ENV']]
workon_home = os.environ['WORKON_HOME']
results_home = os.environ.get(
'ZS_MATRIX_RESULTS',
'~/Projects/GithubSources/zodbshootout-results'
)
# General configuration for the zodbshootout runs.
branch = os.environ['GIT_BRANCH']
# Set this if you restart a run after fixing a problem
# and edit out entries in the matrix that already completed.
now = os.environ.get('ZS_NOW', '') or int(time.time())
child_env = os.environ.copy()
child_env['PYTHONHASHSEED'] = '6587'
child_env['PYTHONFAULTHANDLER'] = '1'
child_env['ZS_COLLECTOR_FUNC'] = 'avg'
# We don't have the logs enabled anyway, and this shows up in
# profiling.
child_env['RS_PERF_LOG_ENABLE'] = 'off'
child_env.pop('PYTHONDEVMODE', None)
child_env.pop('ZS_NO_SMOOTH', None)
smooth_results_in_process_concurrency = True
if not smooth_results_in_process_concurrency:
child_env['ZS_NO_SMOOTH'] = '1'
def run_one(
env, proc, conc, count, conf,
excluded=(),
processes=2, # How many times the whole thing is repeated.
# How many times does the function get to run its loops. If
# processes * values = 1, then it can't report a standard deviation
# or print stability warnings.
values=4,
warmups=0,
min_time_ms=50.0, # Default is 100ms
loops=3 # How many loops (* its inner loops)
): # pylint:disable=too-many-locals
if 'pypy' in env:
values = 10 # Need to JIT
if conc == 1 and count == 1:
processes += 2
min_time_ms = max(min_time_ms, 100.0)
smooth = 'smoothed' if smooth_results_in_process_concurrency else 'unsmoothed'
out_dir = os.path.expanduser(
f"{results_home}/{env}/{branch}/{child_env['ZS_COLLECTOR_FUNC']}/"
f"{smooth}/"
f"{now}/{proc[0]}-c{conc}-o{count}-p{processes}-v{values}-l{loops}/"
)
os.makedirs(out_dir, exist_ok=True)
# Each process (-p) runs --loops for --values times.
# Plus the initial calibration, which is always at least two
# values (begin at 1 loop and go up until you get two consecutive
# runs with the same loop count > --min-time). For small counts, it can take a substantial
# amount of time to calibrate the loop.
print("***", env, proc, conc, count)
output_path = os.path.join(out_dir, "output.json")
if os.path.exists(output_path):
print("\t", output_path, "Already exists, skipping")
return
cmd = [
os.path.expanduser(f"{workon_home}/{env}/bin/zodbshootout"),
'-q',
'--include-mapping', "no",
'--zap', 'force',
'--values', str(values),
'--warmups', str(warmups),
'-p', str(processes),
'-o', output_path,
'-c', str(conc),
'--object-counts', str(count),
]
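    # When both concurrency and object count exceed 1, pin the loop
    # count so runs stay comparable; otherwise let pyperf calibrate the
    # loop count via --min-time.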
if loops and conc > 1 and count > 1:
cmd.extend(('--loops', str(loops)))
else:
cmd.extend(('--min-time', str(min_time_ms / 1000.0)))
cmd.extend(proc[1])
cmd.append(conf)
# Set these to only run a subset of the benchmarks.
# cmd.extend([
# "add",
# "store",
# "update",
# "conflicts",
# 'warm',
# 'new_oid',
# ])
cmd.extend([
'add',
'warm',
'cold',
])
if excluded:
cmd.append('--')
for exc in excluded:
cmd.append('-' + exc)
print("\t", ' '.join(cmd))
try:
subprocess.check_call(cmd, env=child_env)
except subprocess.CalledProcessError:
traceback.print_exc()
if os.path.exists(output_path):
fd, path = tempfile.mkstemp('.json', 'output-failed-', out_dir)
os.close(fd)
os.rename(output_path, path)
print("***")
print()
def main():
blacklist = set() # {(proc_name, conc)}
if 1 in concs and 'process' in procs and 'threads' in procs:
# This is redundant.
blacklist.add(('process', 1))
if len(sys.argv) > 1:
conf = sys.argv[1]
else:
conf = "~/Projects/GithubSources/zodbshootout-results/zodb3.conf"
conf = os.path.abspath(os.path.expanduser(conf))
for env in envs:
excluded_bmarks = set()
for count in sorted(counts):
for conc in sorted(concs):
if conc == 1 and len(procs) == 1 and 'gevent' in procs:
# If we're only testing one concurrent connection,
# and we're only testing gevent by itself, then
# the test is unlikely to be interesting. (It might be interesting
# to compare gevent to thread or process to see what overhead
# the driver adds, but otherwise we want to see how it does
# concurrently).
continue
for proc in sorted(procs.items()):
if (proc[0], conc) in blacklist:
continue
run_one(env, proc, conc, count, conf, excluded=excluded_bmarks)
# Once we've done these once, they don't really change.
# They're independent of count, they don't really even
# touch the storage or DB.
excluded_bmarks.add('ex_commit')
excluded_bmarks.add('im_commit')
if __name__ == '__main__':
    main()
Changes
=======
1.5 (2020-07-28)
----------------
- Fixed incompatibility with ZODB 5.6
(`#35 <https://github.com/zopefoundation/zodbupdate/issues/35>`_)
- Added support for history-free RelStorage
(`#28 <https://github.com/zopefoundation/zodbupdate/issues/28>`_)
- Support zope.interface >= 5 in tests.
(`issue 32 <https://github.com/zopefoundation/zodbupdate/issues/32>`_)
1.4 (2019-08-23)
----------------
- Fail with explanation when opening a Python 2 ZODB with --dry-run on Python 3
(`#22 <https://github.com/zopefoundation/zodbupdate/issues/22>`_)
1.3 (2019-07-30)
----------------
- Support converting sets.Set() objects from ancient Python 2 versions.
(`issue 23 <https://github.com/zopefoundation/zodbupdate/issues/23>`_)
- Convert set objects to ``builtins.set`` without relying on ZODB.broken.rebuild.
(`issue 25 <https://github.com/zopefoundation/zodbupdate/pull/25>`_)
1.2 (2019-05-09)
----------------
- Enable fallback encodings for Python 3 conversion for old/grown ZODBs using
the new command line option ``--encoding-fallback``.
(`#15 <https://github.com/zopefoundation/zodbupdate/pull/15>`_)
- Switch to use `argparse` as `optparse` is deprecated.
- Add ability to run the Python 3 migration with a default encoding for
``str`` objects.
(`#14 <https://github.com/zopefoundation/zodbupdate/pull/14>`_)
- Fix updating records that reference a broken interface
when the interface's top-level module is missing.
- Fixed skipping of blob records so that oids in references to blobs
are still converted.
- Add support for Python 3.8a3.
- Drop support for Python 3.4.
1.1 (2018-10-05)
----------------
- Skip records for ZODB.blob when migrating database to Python 3 to not break
references to blobfiles.
- When migrating databases to Python 3, do not fail when converting
attributes containing None.
- Fix tests on Python 2 with ZODB >= 5.4.0, which now uses pickle
protocol 3.
- Fix `is_broken` check for old-style class instances.
- Add support for Python 3.7.
- Drop PyPy support.
1.0 (2018-02-13)
----------------
- Support Python 2.7 and 3.4, 3.5 and 3.6 and pypy 3. Drop any older
version of Python.
- The option to the select the pickler (``--pickler``) has been
removed. This was only useful if you had extension classes with
Python 2.5 or less.
- Added an option to convert a database to Python 3.
0.5 (2010-10-07)
----------------
- More debug logging shows now the currently processed OID
(that is helpful to determine which object misses the factory).
- Support for missing factories have been improved: an error used to
occur if a pickle needed an update and contained a reference to a
missing class (not instance of this class). This case is now fixed.
- Python 2.4 is no longer supported. Please stick to version 0.3 if
you need Python 2.4 support.
0.4 (2010-07-14)
----------------
- Add an option to debug broken records.
- Add an option to skip records.
- Add an option to use the Python unpickler instead of the C one. This
  lets you debug records. The Python unpickler also lets you update old
  ExtensionClass records, which had a special hack in the past.
- Broken interfaces are now well supported (if you used ``alsoProvides``
  with them).
0.3 (2010-02-02)
----------------
- Unpickle and re-pickle the code to rename references to moved classes.
  This makes the script work on databases created with older versions of
  ZODB.
- If you are working directly with a FileStorage, POSKeyErrors are
  reported but are non-fatal.
- Remove superfluous code that tried to prevent commits when no changes
happened: ZODB does this all by itself already.
0.2 (2009-06-23)
----------------
- Add option to store the rename rules into a file.
- Don't commit transactions that have no changes.
- Load rename rules from entry points ``zodbupdate``.
- Compatibility with Python 2.4
- Rename from ``zodbupgrade`` to ``zodbupdate``.
- Add 'verbose' option.
- Improve logging.
- Suppress duplicate log messages (e.g. if the same class is missing in
multiple objects).
- Improve the updating process: rewrite pickle opcodes instead of blindly
touching a class. This also allows updating pickles that can't be unpickled
due to missing classes.
0.1 (2009-06-08)
----------------
- First release.
=============================================================
zodbupdate - Update existing databases to match your software
=============================================================
This package provides a tool that automatically identifies and updates
references from persistent objects to classes that are in the process of being
moved from one module to another and/or being renamed.
If a class is being moved or renamed, you need to update all references from
your database to the new name before finally deleting the old code.
This tool looks through all current objects of your database,
identifies moved/renamed classes, and `touches` the objects
accordingly. It creates transactions that contain the updates to your
database (one transaction per 100,000 records).
Having run this tool, you are then free to delete the old code.
.. contents::
Usage
=====
Installing the egg of this tool provides a console script `zodbupdate` which
you can call giving either a FileStorage filename or a configuration file
defining a storage::
$ zodbupdate -f Data.fs
$ zodbupdate -c zodb.conf
Detailed usage information is available::

    $ zodbupdate -h
Custom software/eggs
--------------------
It is important to install this egg in an interpreter/environment where your
software is installed as well. If you're using a regular Python installation
or virtualenv, just installing the package using easy_install should be fine.
If you are using buildout, installing can be done using the egg recipe with
this configuration::
[buildout]
parts += zodbupdate
[zodbupdate]
recipe = zc.recipe.egg
eggs = zodbupdate
<list additional eggs here>
If you do not install `zodbupdate` together with the necessary software it
will report missing classes and not touch your database.
Non-FileStorage configurations
------------------------------
You can configure any storage known to your ZODB installation by providing a
ZConfig configuration file (similar to zope.conf). For example you can connect
to a ZEO server by providing a config file `zeo.conf`::
<zeoclient>
server 127.0.0.1:8100
storage 1
</zeoclient>
And then running `zodbupdate` using::

    $ zodbupdate -c zeo.conf
Pre-defined rename rules
------------------------
Rename rules can be defined using an entry point called ``zodbupdate``::
setup(...
entry_points = """
[zodbupdate]
renames = mypackage.mymodule:rename_dict
""")
These can also be defined in python::
setup(...
entry_points={
'zodbupdate': ['renames = mypackage.mymodule:rename_dict'],
})
These entry points must point to dictionaries that map old class
names to new class names::
rename_dict = {
'mypackage.mymodule ClassName':
'otherpackage.othermodule OtherClass'}
As soon as you have rules defined, you can remove the old
import locations mentioned in them.
Packing
-------
The option ``--pack`` will pack the storage on success. (Tell your
users to use that option; if they never pack their storage, this is a
good occasion to do so.)
Converting to Python 3
----------------------
``zodbupdate`` can be used to migrate a database created with a Python
2 application to be usable with the same application in Python 3. To
accomplish this, you need to:
1. Stop your application. Nothing should be written to the database
while the migration is running.
2. Update your Python 2 application to use the latest ZODB version. It
will not work with ZODB 3.
3. With Python 2, run ``zodbupdate --pack --convert-py3``.
If you use a Data.fs, we recommend using the ``-f`` option to
specify your database. After the conversion the magic header of the
database will be updated so that you will be able to open the database
with Python 3.
If you use a different storage (like RelStorage), be sure to connect
to it with your Python 3 application after the migration. You will
still be able to connect to your database and use your application
with Python 2 without errors, but you would then need to convert it to
Python 3 again.
While the pack is not required, it is highly recommended.
The conversion will take care of the following tasks:
- Updating stored Python datetime, date and time objects to use
  Python 3 bytes.
- Updating ZODB references to use Python 3 bytes.
- Optionally converting stored strings to either unicode or bytes,
  depending on your configuration.
If your application expects to use bytes in Python 3, they must be
stored as such in the database, and all other strings must be stored
as unicode strings if they contain non-ASCII characters.
When using ``--convert-py3``, ``zodbupdate`` will load a set of
decoders from the entry points::
setup(...
entry_points = """
[zodbupdate.decode]
decodes = mypackage.mymodule:decode_dict
""")
Decoders are dictionaries whose keys name attributes on Persistent
classes that must either be encoded as bytes (if the value is
``binary``) or decoded to unicode using the value as the encoding (for
instance ``utf-8`` here)::
decode_dict = {
'mypackage.mymodule ClassName attribute': 'binary',
'otherpackage.othermodule OtherClass other_attribute': 'utf-8'}
Please note that for the moment only attributes on Persistent classes
are supported.
Please also note that these conversion rules are *only* selected for
the class that is referenced in the pickle; rules for superclasses are
*not* applied. This means that you have to push annotation rules down
to all the subclasses of a superclass that has a field that needs this
annotation.
Converting to Python 3 from within Python 3
-------------------------------------------
``zodbupdate`` can also be run from within Python 3 to convert a database
created with Python 2 to be usable in Python 3. However this works
slightly differently than when running the conversion using Python 2.
In Python 3 you must specify a default encoding to use while unpickling strings:
``zodbupdate --pack --convert-py3 --encoding utf-8``.
For each string in the database, zodbupdate will convert it as
follows (a minimal sketch of this logic follows the list):
1. If it's an attribute configured explicitly via a decoder as described
above, it will be decoded or encoded as specified there.
2. Otherwise the value will be decoded using the encoding specified
on the command line.
3. If there is an error while decoding using the encoding specified
on the command line, the value will be stored as bytes.
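A minimal sketch of this fallback logic (the function and argument
names are illustrative, not part of the ``zodbupdate`` API)::

    def convert_string(value, decoders, key, default_encoding):
        # 1. An explicit decoder rule for this class attribute wins.
        rule = decoders.get(key)
        if rule == 'binary':
            return value  # keep as bytes
        if rule is not None:
            return value.decode(rule)
        try:
            # 2. Fall back to the encoding given on the command line.
            return value.decode(default_encoding)
        except UnicodeDecodeError:
            # 3. If decoding fails, keep the value as bytes.
            return value
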
Problems and solutions
----------------------
Your Data.fs has POSKey errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you call `zodbupdate` with ``-f`` and the path to your Data.fs,
records triggering those errors will be ignored.
You have another error
~~~~~~~~~~~~~~~~~~~~~~
We recommend running zodbupdate with ``-v -d`` to get as much
information as possible.
If you are working on big storages, you can use the ``-o`` option to
re-run `zodbupdate` later, starting at a failing record you previously
encountered.
import os
import shutil
import sys
import tempfile
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
usage = '''\
[DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options]
Bootstraps a buildout-based project.
Simply run this script in a directory containing a buildout.cfg, using the
Python that you want bin/buildout to use.
Note that by using --find-links to point to local resources, you can keep
this script from going over the network.
'''
parser = OptionParser(usage=usage)
parser.add_option("-v", "--version", help="use a specific zc.buildout version")
parser.add_option("-t", "--accept-buildout-test-releases",
dest='accept_buildout_test_releases',
action="store_true", default=False,
help=("Normally, if you do not specify a --version, the "
"bootstrap script and buildout gets the newest "
"*final* versions of zc.buildout and its recipes and "
"extensions for you. If you use this flag, "
"bootstrap and buildout will get the newest releases "
"even if they are alphas or betas."))
parser.add_option("-c", "--config-file",
help=("Specify the path to the buildout configuration "
"file to be used."))
parser.add_option("-f", "--find-links",
help=("Specify a URL to search for buildout releases"))
parser.add_option("--allow-site-packages",
action="store_true", default=False,
help=("Let bootstrap.py use existing site packages"))
options, args = parser.parse_args()
######################################################################
# load/install setuptools
try:
if options.allow_site_packages:
import setuptools
import pkg_resources
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
ez = {}
exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez)
if not options.allow_site_packages:
# ez_setup imports site, which adds site packages
# this will remove them from the path to ensure that incompatible versions
# of setuptools are not in the path
import site
# inside a virtualenv, there is no 'getsitepackages'.
# We can't remove these reliably
if hasattr(site, 'getsitepackages'):
for sitepackage_path in site.getsitepackages():
sys.path[:] = [x for x in sys.path if sitepackage_path not in x]
setup_args = dict(to_dir=tmpeggs, download_delay=0)
ez['use_setuptools'](**setup_args)
import setuptools
import pkg_resources
# This does not (always?) update the default working set. We will
# do it.
for path in sys.path:
if path not in pkg_resources.working_set.entries:
pkg_resources.working_set.add_entry(path)
######################################################################
# Install buildout
ws = pkg_resources.working_set
cmd = [sys.executable, '-c',
'from setuptools.command.easy_install import main; main()',
'-mZqNxd', tmpeggs]
find_links = os.environ.get(
'bootstrap-testing-find-links',
options.find_links or
('http://downloads.buildout.org/'
if options.accept_buildout_test_releases else None)
)
if find_links:
cmd.extend(['-f', find_links])
setuptools_path = ws.find(
pkg_resources.Requirement.parse('setuptools')).location
requirement = 'zc.buildout'
version = options.version
if version is None and not options.accept_buildout_test_releases:
# Figure out the most recent final version of zc.buildout.
import setuptools.package_index
_final_parts = '*final-', '*final'
def _final_version(parsed_version):
for part in parsed_version:
if (part[:1] == '*') and (part not in _final_parts):
return False
return True
index = setuptools.package_index.PackageIndex(
search_path=[setuptools_path])
if find_links:
index.add_find_links((find_links,))
req = pkg_resources.Requirement.parse(requirement)
if index.obtain(req) is not None:
best = []
bestv = None
for dist in index[req.project_name]:
distv = dist.parsed_version
if _final_version(distv):
if bestv is None or distv > bestv:
best = [dist]
bestv = distv
elif distv == bestv:
best.append(dist)
if best:
best.sort()
version = best[-1].version
if version:
requirement = '=='.join((requirement, version))
cmd.append(requirement)
import subprocess
if subprocess.call(cmd, env=dict(os.environ, PYTHONPATH=setuptools_path)) != 0:
raise Exception(
"Failed to execute command:\n%s" % repr(cmd)[1:-1])
######################################################################
# Import and run buildout
ws.add_entry(tmpeggs)
ws.require(requirement)
import zc.buildout.buildout
if not [a for a in args if '=' not in a]:
args.append('bootstrap')
# if -c was provided, we push it back into args for buildout' main function
if options.config_file is not None:
args[0:0] = ['-c', options.config_file]
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs)
===============================================================
zodbupgrade - Upgrade existing databases to match your software
===============================================================
This package provides a tool that automatically identifies and updates
references from persistent objects to classes that are in the process of being
moved from one module to another and/or being renamed.
If a class is being moved or renamed, you need to update all references from
your database to the new name before finally deleting the old code.
This tool looks through all current objects of your database, identifies
moved/renamed classes and `touches` objects accordingly. It creates a single
transaction that contains the update of your database.
Having run this tool, you are then free to delete the old code.
Usage
=====
Installing the egg of this tool provides a console script `zodbupgrade` which
you can call giving either a FileStorage filename or a configuration file
defining a storage::
$ zodbupgrade -f Data.fs
$ zodbupgrade -c zodb.conf
Detailed usage information is available::

    $ zodbupgrade -h
Custom software/eggs
--------------------
It is important to install this egg in an interpreter/environment where your
software is installed as well. If you're using a regular Python installation
or virtualenv, just installing the package using easy_install should be fine.
If you are using buildout, installing can be done using the egg recipe with
this configuration::
[buildout]
parts += zodbupgrade
[zodbupgrade]
    recipe = zc.recipe.egg
eggs = zodbupgrade
<list additional eggs here>
If you do not install `zodbupgrade` together with the necessary software it
will report missing classes and not touch your database.
Non-FileStorage configurations
------------------------------
You can configure any storage known to your ZODB installation by providing a
ZConfig configuration file (similar to zope.conf). For example you can connect
to a ZEO server by providing a config file `zeo.conf`::
<zeoclient>
server 127.0.0.1:8100
storage 1
</zeoclient>
And then running `zodbupgrade` using::

    $ zodbupgrade -c zeo.conf
.. _change-log:
Change Log
----------
2.5.0 (2021-05-12)
~~~~~~~~~~~~~~~~~~
- Support both ZODB4 and ZODB5.
- Add support for PyPy.
- Add support for Python 3.8.
- Drop support for Python 3.4.
- Add support for ``demo:`` URI scheme.
2.4.0 (2019-01-11)
~~~~~~~~~~~~~~~~~~
- Add support for Python 3.7.
- Fix PendingDeprecationWarning about ``cgi.parse_qsl``. (PR #21)
2.3.0 (2017-10-17)
~~~~~~~~~~~~~~~~~~
- Fix parsing of ``zeo://`` URI with IPv6 address.
- Drop support for Python 3.3.
- Add support for Python 3.6.
2.2.2 (2017-05-05)
~~~~~~~~~~~~~~~~~~
- Fix transposed ``install_requires`` and ``tests_require`` lists in
``setup.py``.
2.2.1 (2017-04-18)
~~~~~~~~~~~~~~~~~~
- Fix breakage added in 2.2 to the ``zconfig`` resolver.
2.2 (2017-04-17)
~~~~~~~~~~~~~~~~
- Add support for additional database configuration parameters:
``pool_timeout``, ``cache_size_bytes``, ``historical_pool_size``,
``historical_cache_size``, ``historical_cache_size_bytes``,
``historical_timeout``, and ``large_record_size``.
2.1 (2017-04-17)
~~~~~~~~~~~~~~~~
- Add support for Python 3.4 and 3.5.
- Drop support for Python 2.6 and 3.2.
- Add missing ClientStorage constructor kw args to resolver.
2.0 (2014-01-05)
~~~~~~~~~~~~~~~~
- Update ``ZODB3`` meta-package dependency to ``ZODB`` + ``ZConfig`` + ``ZEO``.
Those releases are what we import, and have final Py3k-compatible releases.
- Packaging: fix missing ``url`` argument to ``setup()``.
2.0b1 (2013-05-02)
~~~~~~~~~~~~~~~~~~
- Add support for Python 3.2 / 3.3.
- Add ``setup.py docs`` alias (runs ``setup.py develop`` and installs
documentation dependencies).
- Add ``setup.py dev`` alias (runs ``setup.py develop`` and installs
testing dependencies).
- Automate building the Sphinx docs via ``tox``.
- Fix ``zconfig:`` URIs under Python 2.7. The code worked around a bug in
the stdlib's ``urlparse.urlsplit`` for Python < 2.7; that workaround broke
under 2.7. See https://github.com/Pylons/zodburi/issues/5
- Drop support for Python 2.5.
1.1 (2012-09-12)
~~~~~~~~~~~~~~~~
- Remove support for ``postgres://`` URIs, which will now be provided by
the ``relstorage`` package. Thanks to Georges Dubus for the patch!
1.0 (2012-06-07)
~~~~~~~~~~~~~~~~
- Add support for ``postgres://`` URIs. Thanks to Georges Dubus for
the patch!
- Pin dependencies to Python 2.5-compatible versions when testing with
tox under Python 2.5.
- Update the documentation for publication to `ReadTheDocs
<https://docs.pylonsproject.org/projects/zodburi/en/latest/>`_
1.0b1 (2011-08-21)
~~~~~~~~~~~~~~~~~~
- Initial release.
# Contributing
All projects under the Pylons Project, including this one, follow the guidelines established at [How to Contribute](https://pylonsproject.org/community-how-to-contribute.html), [Coding Style and Standards](https://pylonsproject.org/community-coding-style-standards.html), and [Pylons Project Documentation Style Guide](https://docs.pylonsproject.org/projects/zodburi/).
You can contribute to this project in several ways.
* [File an Issue on GitHub](https://github.com/Pylons/zodburi/issues)
* Fork this project, create a new branch, commit your suggested change, and push to your fork on GitHub.
When ready, submit a pull request for consideration.
[GitHub Flow](https://guides.github.com/introduction/flow/index.html) describes the workflow process and why it's a good practice.
When submitting a pull request, sign [CONTRIBUTORS.txt](https://github.com/Pylons/zodburi/blob/master/CONTRIBUTORS.txt) if you have not yet done so.
* Join the [IRC channel #pyramid on irc.freenode.net](https://webchat.freenode.net/?channels=pyramid).
## Git Branches
Git branches and their purpose and status at the time of this writing are listed below.
* [master](https://github.com/Pylons/zodburi/) - The branch which should always be *deployable*. The default branch on GitHub.
* For development, create a new branch. If changes on your new branch are accepted, they will be merged into the master branch and deployed.
## Prerequisites
Follow the instructions in [README.rst](https://github.com/Pylons/zodburi/) to install the tools needed to run the project.
zodburi
=======
Overview
--------
A library which parses URIs and converts them to ZODB storage objects and
database arguments.
It will run under CPython 2.7, 3.5 to 3.8, pypy and pypy3. It will not run under Jython. It requires ZODB >= 3.10.0.
Installation
------------
Install using setuptools, e.g. (within a virtualenv)::
$ easy_install zodburi
Using
-----
``zodburi`` has exactly one API: :func:`zodburi.resolve_uri`. This API
obtains a ZODB storage factory and a set of keyword arguments suitable for
passing to the ``ZODB.DB.DB`` constructor. For example:
.. code-block:: python
   :linenos:

   from zodburi import resolve_uri
   storage_factory, dbkw = resolve_uri(
       'zeo://localhost:9001?connection_cache_size=20000')

   # factory will be an instance of ClientStorageURIResolver
   # dbkw will be {'connection_cache_size':20000, 'pool_size':7,
   # 'database_name':'unnamed'}

   from ZODB.DB import DB
   storage = storage_factory()
   db = DB(storage, **dbkw)
URI Schemes
-----------
The URI schemes currently recognized in the ``zodbconn.uri`` setting
are ``file://``, ``zeo://``, ``zconfig://``, ``memory://`` and ``demo:``.
Documentation for these URI scheme syntaxes are below.
In addition to those schemes, the relstorage_ package adds support for
``postgres://``.
.. _relstorage : https://pypi.org/project/RelStorage/
``file://`` URI scheme
~~~~~~~~~~~~~~~~~~~~~~
The ``file://`` URI scheme can be passed as ``zodbconn.uri`` to create a ZODB
FileStorage database factory. The path info section of this scheme should
point at a filesystem file path that should contain the filestorage data.
For example::
file:///my/absolute/path/to/Data.fs
The URI scheme also accepts query string arguments. The query string
arguments honored by this scheme are as follows.
FileStorage constructor related
+++++++++++++++++++++++++++++++
These arguments generally inform the FileStorage constructor about
values of the same names.
create
boolean
read_only
boolean
quota
bytesize
Database-related
++++++++++++++++
These arguments relate to the database (as opposed to storage)
settings.
database_name
string
Connection-related
++++++++++++++++++
These arguments relate to connections created from the database.
connection_cache_size
integer (default 10000) target size, in number of objects, of each
connection's object cache
connection_cache_size_bytes
integer (default 0) target estimated size, in bytes, of each
connection's object cache
0 means no limit.
A suffix of KB, MB, or GB may be used to provide units.
connection_historical_cache_size
integer (default 1000) target size, in number of objects, of each
historical connection's object cache
connection_historical_cache_size_bytes
integer (default 0) target estimated size, in bytes, of each
historical connection's object cache
0 means no limit.
A suffix of KB, MB, or GB may be used to provide units.
connection_historical_pool_size
integer (default 3) expected maximum total number of historical connections
simultaneously open
connection_historical_timeout
integer (default 300) maximum age of inactive historical connections
When a historical connection has remained unused in a historical
connection pool for more than connection_historical_timeout seconds,
it will be discarded and its resources released.
connection_large_record_size
integer (default 16MB) record size limit before suggesting using blobs
When object records are saved that are larger than this, a warning
is issued, suggesting that blobs should be used instead.
A suffix of KB, MB, or GB may be used to provide units.
connection_pool_size
integer (default 7) expected maximum number of simultaneously open
connections
There is no hard limit (as many connections as are requested
will be opened, until system resources are exhausted). Exceeding
pool-size connections causes a warning message to be logged,
and exceeding twice pool-size connections causes a critical
message to be logged.
connection_pool_timeout
integer (default unlimited) maximum age of inactive (non-historical)
connections
When a connection has remained unused in a connection pool for more
than connection_pool_timeout seconds, it will be discarded and its
resources released.
Blob-related
++++++++++++
If these arguments exist, they control the blob settings for this
storage.
blobstorage_dir
string
blobstorage_layout
string
Misc
++++
demostorage
boolean (if true, wrap FileStorage in a DemoStorage)
Example
+++++++
An example that combines a path with a query string::
file:///my/Data.fs?connection_cache_size=100&blobstorage_dir=/foo/bar
``zeo://`` URI scheme
~~~~~~~~~~~~~~~~~~~~~~
The ``zeo://`` URI scheme can be passed as ``zodbconn.uri`` to create a ZODB
ClientStorage database factory. Either the host and port parts of this scheme
should point at a hostname/portnumber combination e.g.::
zeo://localhost:7899
Or the path part should point at a UNIX socket name::
zeo:///path/to/zeo.sock
The URI scheme also accepts query string arguments. The query string
arguments honored by this scheme are as follows.
ClientStorage-constructor related
+++++++++++++++++++++++++++++++++
These arguments generally inform the ClientStorage constructor about
values of the same names.
storage
string
cache_size
bytesize
name
string
client
string
debug
boolean
var
string
min_disconnect_poll
integer
max_disconnect_poll
integer
wait_for_server_on_startup (deprecated alias for wait)
boolean
wait
boolean
wait_timeout
integer
read_only
boolean
read_only_fallback
boolean
drop_cache_rather_verify
boolean
username
string
password
string
realm
string
blob_dir
string
shared_blob_dir
boolean
blob_cache_size
bytesize
blob_cache_size_check
integer
client_label
string
Misc
++++
demostorage
boolean (if true, wrap ClientStorage in a DemoStorage)
Connection-related
++++++++++++++++++
These arguments relate to connections created from the database.
connection_cache_size
integer (default 10000)
connection_pool_size
integer (default 7)
Database-related
++++++++++++++++
These arguments relate to the database (as opposed to storage)
settings.
database_name
string
Example
+++++++
An example that combines a path with a query string::
zeo://localhost:9001?connection_cache_size=20000
``zconfig://`` URI scheme
~~~~~~~~~~~~~~~~~~~~~~~~~
The ``zconfig://`` URI scheme can be passed as ``zodbconn.uri`` to create any
kind of storage that ZODB can load via ZConfig. The path info section of this
scheme should point at a ZConfig file on the filesystem. Use an optional
fragment identifier to specify which database to open. This URI scheme does
not use query string parameters.
Examples
++++++++
An example ZConfig file::
<zodb>
<mappingstorage>
</mappingstorage>
</zodb>
If that configuration file is located at /etc/myapp/zodb.conf, use the
following URI to open the database::
zconfig:///etc/myapp/zodb.conf
A ZConfig file can specify more than one database. For example::
<zodb temp1>
<mappingstorage>
</mappingstorage>
</zodb>
<zodb temp2>
<mappingstorage>
</mappingstorage>
</zodb>
In that case, use a URI with a fragment identifier::
zconfig:///etc/myapp/zodb.conf#temp1
``memory://`` URI scheme
~~~~~~~~~~~~~~~~~~~~~~~~~
The ``memory://`` URI scheme can be passed as ``zodbconn.uri`` to create a
ZODB MappingStorage (memory-based) database factory. The path info section
of this scheme should be a storage name. For example::
memory://storagename
However, the storage name is usually omitted, and the most common form is::
memory://
The URI scheme also accepts query string arguments. The query string
arguments honored by this scheme are as follows.
Database-related
++++++++++++++++
These arguments relate to the database (as opposed to storage)
settings.
database_name
string
Connection-related
++++++++++++++++++
These arguments relate to connections created from the database.
connection_cache_size
integer (default 10000)
connection_pool_size
integer (default 7)
Example
+++++++
An example that combines a dbname with a query string::
memory://storagename?connection_cache_size=100&database_name=fleeb
``demo:`` URI scheme
~~~~~~~~~~~~~~~~~~~~
The ``demo:`` URI scheme can be passed as ``zodbconn.uri`` to create a
DemoStorage database factory. DemoStorage provides an overlay combining base
and δ ("delta", or in other words, "changes") storages.
The URI scheme contains two parts, base and δ::
demo:(base_uri)/(δ_uri)
An optional fragment specifies arguments for the ``ZODB.DB.DB`` constructor::
demo:(base_uri)/(δ_uri)#dbkw
Example
+++++++
An example that combines ZEO with local FileStorage for changes::
demo:(zeo://localhost:9001?storage=abc)/(file:///path/to/Changes.fs)
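As with the other schemes, the resulting URI can be handed to
:func:`zodburi.resolve_uri`. A minimal sketch (the inner URIs are
placeholders)::

    from ZODB.DB import DB
    from zodburi import resolve_uri

    factory, dbkw = resolve_uri('demo:(memory://base)/(memory://changes)')
    db = DB(factory(), **dbkw)
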
More Information
----------------
.. toctree::
:maxdepth: 1
api.rst
Reporting Bugs / Development Versions
-------------------------------------
Visit https://github.com/Pylons/zodburi to download development or
tagged versions.
Visit https://github.com/Pylons/zodburi/issues to report bugs.
.. include:: ../CHANGES.rst
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| zodburi | /zodburi-2.5.0.tar.gz/zodburi-2.5.0/docs/index.rst | index.rst |
Changelog
=========
.. You should *NOT* be adding new change log entries to this file.
You should create a file in the news directory instead.
For helpful instructions, please see:
https://github.com/plone/plone.releaser/blob/master/ADD-A-NEWS-ITEM.rst
.. towncrier release notes start
1.2.0 (2022-07-06)
------------------
New features:
- Improve debugging output: show all objects that reference a oid.
See `Philip's blog post <https://www.starzel.de/blog/zodb-debugging>`_ for more information.
See also discussion in `pull request 8 <https://github.com/plone/zodbverify/pull/8>`_.
[pbauer] (#8)
1.1.0 (2020-04-22)
------------------
New features:
- Show the affected oids for each error.
Inspect a single oid.
The idea is to run zodbverify on the whole database and from the output copy one oid and run it again to further inspect that object.
[pbauer] (#6)
Bug fixes:
- Minor packaging updates. (#1)
1.0.2 (2019-08-08)
------------------
Bug fixes:
- Open Data.fs in read only mode. (#2)
1.0.1 (2019-05-09)
------------------
Bug fixes:
- Fix project description. [jensens] (#1)
1.0 (2019-05-09)
----------------
New features:
- Initial effort.
Base code taken from `Products.CMFPlone` (created by @davisagli).
Enhanced and packaged for more general Zope use.
[dwt,jensens] (#1)
==========
zodbverify
==========
Overview
========
Verify a ZODB by iterating and loading all records.
Problems are reported in detail.
A debugger is provided, together with decompilation information.
zodbverify is available as a standalone script and as an addon for `plone.recipe.zope2instance`.
Usage
=====
Script
------
The verification runs on a plain ZODB file.
The Zope application is not started.
Run, for example::

    bin/zodbverify -f var/filestorage/Data.fs
Usage::
zodbverify [-h] -f ZODBFILE [-D] [-o OID]
Verifies that all records in the database can be loaded.
optional arguments:
-h, --help show this help message and exit
-f ZODBFILE, --zodbfile ZODBFILE
Path to file-storage
-D, --debug pause to debug broken pickles
-o OID, --oid OID oid to inspect
plone.recipe.zope2instance integration
--------------------------------------
The verification runs in the context of the initialized Zope application.
Usage::
./bin/instance zodbverify [-h] [-D] [-o OID]
Verifies that all records in the database can be loaded.
optional arguments:
-h, --help show this help message and exit
-D, --debug pause to debug broken pickles
-o OID, --oid OID oid to inspect
Inspecting a single oid
-----------------------
The output of zodbverify gives you a list of all problems and the oids that are affected.
To inspect a single oid in detail you can pass one of these to zodbverify::

    ./bin/instance zodbverify -o 0x2e929f
This will output the pickle and the error for that oid.
By also adding the debug switch, you will get two pdb sessions while the script runs::

    ./bin/instance zodbverify -o 0x2e929f -D
    2020-03-11 10:40:24,972 INFO [Zope:45][MainThread] Ready to handle requests
    The object is 'obj'
    The Zope instance is 'app'
    [4] > /Users/pbauer/workspace/dipf-intranet/src-mrd/zodbverify/src/zodbverify/verify_oid.py(52)verify_oid()
    -> pickle, state = storage.load(oid)
In the first pdb session you have the object for the oid as `obj` and the Zope instance as `app`. Before the second pdb session, the pickle is disassembled in the same way as when running zodbverify with the debug switch but without passing an oid.
Source Code
===========
Contributors please read the document `Process for Plone core's development <https://docs.plone.org/develop/coredev/docs/index.html>`_
Sources are at the `Plone code repository hosted at Github <https://github.com/plone/zodbverify>`_.
import locale
# (start(day, month), end(day, month))
sign_dates = (
((20, 3), (19, 4)), # Aries
((20, 4), (20, 5)),
((21, 5), (20, 6)),
((21, 6), (22, 7)),
((23, 7), (22, 8)),
((23, 8), (22, 9)),
((23, 9), (22, 10)),
((23, 10), (21, 11)),
((22, 11), (21, 12)),
((22, 12), (19, 1)),
((20, 1), (17, 2)),
((18, 2), (19, 3)), # Pisces
)
# English
en_dict = (
(0, "Aries"),
(1, "Taurus"),
(2, "Gemini"),
(3, "Cancer"),
(4, "Leo"),
(5, "Virgo"),
(6, "Libra"),
(7, "Scorpio"),
(8, "Sagittarius"),
(9, "Capricorn"),
(10, "Aquarius"),
(11, "Pisces"),
)
# German
de_dict = (
(0, "Widder"),
(1, "Stier"),
(2, "Zwillinge"),
(3, "Krebs"),
(4, "Löwe"),
(5, "Jungfrau"),
(6, "Waage"),
(7, "Skorpion"),
(8, "Schütze"),
(9, "Steinbock"),
(10, "Wassermann"),
(11, "Fische"),
)
# Spanish
es_dict = (
(0, "Aries"),
(1, "Tauro"),
(2, "Géminis"),
(3, "Cáncer"),
(4, "Leo"),
(5, "Virgo"),
(6, "Libra"),
(7, "Escorpio"),
(8, "Sagitario"),
(9, "Capricornio"),
(10, "Acuario"),
(11, "Piscis"),
)
# Russian
ru_dict = (
(0, "Овен"),
(1, "Телец"),
(2, "Близнецы"),
(3, "Рак"),
(4, "Лев"),
(5, "Дева"),
(6, "Весы"),
(7, "Скорпион"),
(8, "Стрелец"),
(9, "Козерог"),
(10, "Водолей"),
(11, "Рыбы"),
)
# Portuguese
pt_dict = (
(0, "Áries"),
(1, "Touro"),
(2, "Gêmeos"),
(3, "Cancer"),
(4, "Leão"),
(5, "Virgem"),
(6, "Libra"),
(7, "Escorpião"),
(8, "Sargitário"),
(9, "Capricórnio"),
(10, "Aquário"),
(11, "Peixes"),
)
# Greek
el_dict = (
(0, "Κριός"),
(1, "Ταύρος"),
(2, "Δίδυμοι"),
(3, "Καρκίνος"),
(4, "Λέων"),
(5, "Παρθένος"),
(6, "Ζυγός"),
(7, "Σκορπιός"),
(8, "Τοξότης"),
(9, "Αιγόκερως"),
(10, "Υδροχόος"),
(11, "Ιχθείς"),
)
language_dict = {
'en_US': en_dict,
'de_DE': de_dict,
'ru_RU': ru_dict,
'pt_BR': pt_dict,
'pt_PT': pt_dict,
'el': el_dict,
'el_CY': el_dict,
'el_GR': el_dict,
'es_ES': es_dict,
None: en_dict
}
# @todo use gettext and etc
def _(word_index, language=None):
if language is not None:
        return language_dict.get(language, language_dict['en_US'])[word_index][1]
language = locale.getlocale()
return language_dict.get(language[0], language_dict.get('en_US'))[word_index][1]
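# Return the localized zodiac sign name for a date: pass either a
# date/datetime object as `d`, or a day number as `d` plus an explicit `month`.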
def get_zodiac_sign(d, month=None, language=None):
# params
if month is None:
month = int(d.month)
day = int(d.day)
else:
day = int(d)
month = int(month)
# calculate
for index, sign in enumerate(sign_dates):
if (month == sign[0][1] and day >= sign[0][0]) or (month == sign[1][1] and day <= sign[1][0]):
return _(index, language)
return '' | zodiac-sign | /zodiac_sign-0.2.5-py3-none-any.whl/zodiac_sign.py | zodiac_sign.py |
# sutime
*Python wrapper for Stanford CoreNLP's [SUTime](http://nlp.stanford.edu/software/sutime.shtml) Java library.*
## Build Status
#### CircleCI Builds
[](https://circleci.com/gh/FraBle/python-sutime)
#### PyPI
[](https://pypi.org/project/sutime/)
[](https://pypi.org/project/sutime/)
#### Code Quality
[](https://app.codacy.com/project/FraBle/python-sutime/dashboard)
[](https://scrutinizer-ci.com/g/FraBle/python-sutime/)
[](https://scan.coverity.com/projects/frable-python-sutime)
[](https://codeclimate.com/github/FraBle/python-sutime/maintainability)
## Installation
```bash
>> pip install setuptools_scm jpype1 # install pre-reqs
>> pip install sutime
>> # use package pom.xml to install all Java dependencies via Maven into ./jars
>> mvn dependency:copy-dependencies -DoutputDirectory=./jars
```
Run the following command to add the Spanish language model:
```bash
>> mvn dependency:copy-dependencies -DoutputDirectory=./jars -P spanish
```
## Supported Languages
SUTime currently supports only English (including a British variant) and Spanish ([Source](https://github.com/stanfordnlp/CoreNLP/tree/master/src/edu/stanford/nlp/time/rules)).
This Python wrapper is prepared to support the other CoreNLP languages (e.g. German) as well, as soon as they get added to SUTime.
The following command can be used to download the language models for `arabic`, `chinese`, `english`, `french`, `german`, and `spanish`:
```bash
>> mvn dependency:copy-dependencies -DoutputDirectory=./jars -P <language>
```
*However, SUTime only supports a subset (default model and `spanish`) of CoreNLP's languages and the other language models will get ignored.*
## Example
```python
import json
import os
from sutime import SUTime
if __name__ == '__main__':
test_case = u'I need a desk for tomorrow from 2pm to 3pm'
jar_files = os.path.join(os.path.dirname(__file__), 'jars')
sutime = SUTime(jars=jar_files, mark_time_ranges=True)
print(json.dumps(sutime.parse(test_case), sort_keys=True, indent=4))
```
Result:
```json
[
{
"end": 26,
"start": 18,
"text": "tomorrow",
"type": "DATE",
"value": "2016-10-14"
},
{
"end": 42,
"start": 27,
"text": "from 2pm to 3pm",
"type": "DURATION",
"value": {
"begin": "T14:00",
"end": "T15:00"
}
}
]
```
Other examples can be found in the [test](https://github.com/FraBle/python-sutime/blob/master/sutime/test) directory.
## Functions
```python
SUTime(jars=None, jvm_started=False, mark_time_ranges=False, include_range=False,
jvm_flags=None, language='english')
"""
jars: List of paths to the SUTime Java dependencies.
jvm_started: Optional attribute to specify if the JVM has already been
started (with all Java dependencies loaded).
mark_time_ranges: Optional attribute to specify CoreNLP property
zodiac-sutime.markTimeRanges. Default is False.
"Tells zodiac-sutime to mark phrases such as 'From January to March'
instead of marking 'January' and 'March' separately"
    include_range: Optional attribute to specify CoreNLP property
        zodiac-sutime.includeRange. Default is False.
        "Tells zodiac-sutime to include range information in the annotated
        result for expressions such as 'From January to March'"
jvm_flags: Optional attribute to specify an iterable of string flags
to be provided to the JVM at startup. For example, this may be
used to specify the maximum heap size using '-Xmx'. Has no effect
if jvm_started is set to True. Default is None.
language: Optional attribute to select language. The following options
are supported: english (/en), british, spanish (/es). Default is
english.
"""
sutime.parse(input_str, reference_date=''):
"""Parses datetime information out of string input.
It invokes the SUTimeWrapper.annotate() function in Java.
Args:
input_str: The input as string that has to be parsed.
        reference_date: Optional reference date for SUTime.
Returns:
A list of dicts with the result from the SUTimeWrapper.annotate()
call.
"""
```
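Relative expressions such as "tomorrow" are resolved against `reference_date` when one is supplied to `parse`. A minimal sketch (the ISO-style date string is an assumption; the jar setup mirrors the example above):

```python
import os

from sutime import SUTime

jar_files = os.path.join(os.path.dirname(__file__), 'jars')
sutime = SUTime(jars=jar_files, mark_time_ranges=True)

# "tomorrow" resolves relative to the supplied reference date, not to today.
print(sutime.parse('Lunch tomorrow at noon', reference_date='2016-10-13'))
```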
## Credit
- [The team behind Stanford CoreNLP](http://stanfordnlp.github.io/CoreNLP/) for their awesome work and tools for the NLP community
- [Luis Nell (Github: originell) and team](https://github.com/originell/jpype/) for maintaining JPype as interface between Python and Java
## Contributions
- [René Springer](https://github.com/r-springer): Support for reference date
- [Constantine Lignos](https://github.com/ConstantineLignos): Support for JVM flags, adoption of CircleCI 2.0, fix for mutable default argument, fix for test execution
## License
- GPLv3+ (check the LICENSE file)
| zodiac-sutime | /zodiac-sutime-1.0.0.tar.gz/zodiac-sutime-1.0.0/README.md | README.md |
from datetime import datetime as dt
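# Maps a birth date ("YYYY-MM-DD" string) to its zodiac sign (Portuguese
# names), and maps a sign to its lucky color and lucky number.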
def getSigno(data_nascimento):
data_nascimento = dt.strptime(data_nascimento, "%Y-%m-%d")
dia_nascimento = data_nascimento.day
mes_nascimento = data_nascimento.month
signo = ""
if (mes_nascimento == 1):
if (dia_nascimento <= 20):
signo = "Capricórnio"
else:
signo = "Aquário"
elif (mes_nascimento == 2):
if (dia_nascimento <= 19):
signo = "Aquário"
else:
signo = "Peixes"
elif (mes_nascimento == 3):
if (dia_nascimento <= 20):
signo = "Peixes"
else:
signo = "Áries"
elif (mes_nascimento == 4):
if (dia_nascimento <= 20):
signo = "Áries"
else:
signo = "Touro"
elif (mes_nascimento == 5):
if (dia_nascimento <= 20):
signo = "Touro"
else:
signo = "Gêmeos"
elif (mes_nascimento == 6):
if (dia_nascimento <= 20):
signo = "Gêmeos"
else:
signo = "Câncer"
elif (mes_nascimento == 7):
if (dia_nascimento <= 22):
signo = "Câncer"
else:
signo = "Leão"
elif (mes_nascimento == 8):
if (dia_nascimento <= 22):
signo = "Leão"
else:
signo = "Virgem"
elif (mes_nascimento == 9):
if (dia_nascimento <= 22):
signo = "Virgem"
else:
signo = "Libra"
elif (mes_nascimento == 10):
if (dia_nascimento <= 22):
signo = "Libra"
else:
signo = "Escorpião"
elif (mes_nascimento == 11):
if (dia_nascimento <= 21):
signo = "Escorpião"
else:
signo = "Sagitário"
    elif (mes_nascimento == 12):
if (dia_nascimento <= 21):
signo = "Sagitário"
else:
signo = "Capricórnio"
return signo
def getCor(signo):
switcher = {
"Áries": "Vermelho",
"Touro": "Lilás",
"Gêmeos": "Amarelo",
"Câncer": "Branco",
"Leão": "Laranja",
"Virgem": "Verde",
"Libra": "Rosa",
"Escorpião": "Vinho",
"Sagitário": "Violeta",
"Capricórnio": "Preto",
"Aquário": "Azul",
"Peixes": "Roxo",
}
return switcher.get(signo, "nothing")
def getNumeroSorte(signo):
switcher = {
"Áries": 10,
"Touro": 3,
"Gêmeos": 8,
"Câncer": 1,
"Leão": 6,
"Virgem": 4,
"Libra": 9,
"Escorpião": 7,
"Sagitário": 2,
"Capricórnio": 5,
"Aquário": 8,
"Peixes": 6,
}
return switcher.get(signo, 0) | zodiaco | /signos/getSignos.py | getSignos.py |
<!-- <h1 align='center'>Zodiax</h1> -->
# Zodiax
[](https://badge.fury.io/py/zodiax)
[](https://opensource.org/licenses/BSD-3-Clause)
[](https://github.com/LouisDesdoigts/zodiax/actions/workflows/tests.yml)
[](https://louisdesdoigts.github.io/zodiax/)
---
[_Zodiax_](https://github.com/LouisDesdoigts/zodiax) is a lightweight extension to the object-oriented [_Jax_](https://github.com/google/jax) framework [_Equinox_](https://github.com/patrick-kidger/equinox). _Equinox_ allows for **differentiable classes** that are recognised as a valid _Jax_ type and _Zodiax_ adds lightweight methods to simplify interfacing with these classes! _Zodiax_ was originally built during the development of [dLux](https://github.com/LouisDesdoigts/dLux) and was designed to make working with large nested class structures simple and flexible.
Zodiax is directly integrated with both Jax and Equinox, gaining all of their core features:
> - [Accelerated Numpy](https://jax.readthedocs.io/en/latest/jax-101/01-jax-basics.html): a Numpy like API that can run on GPU and TPU
>
> - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax-101/04-advanced-autodiff.html): Allows for optimisation and inference in extremely high dimensional spaces
>
> - [Just-In-Time Compilation](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html): Compliles code into XLA at runtime and optimising execution across hardware
>
> - [Automatic Vectorisation](https://jax.readthedocs.io/en/latest/jax-101/03-vectorization.html): Allows for simple parallelism across hardware and asynchronous execution
>
> - [Object Oriented Jax](https://docs.kidger.site/equinox/all-of-equinox/): Allows for differentiable classes that are recognised as a valid _Jax_ type
>
> - [Inbuilt Neural Networks](https://docs.kidger.site/equinox/api/nn/linear/): Has pre-built neural network layers classes
>
> - [Path-Based Pytree Interface](docs/usage.md): Path based indexing allows for easy interfacing with large and highly nested physical models
>
> - [Leaf Manipulation Methods](docs/usage.md): Inbuilt methods allow for easy manipulation of Pytrees mirroring the _Jax_ Array API
Documentation: [louisdesdoigts.github.io/zodiax/](https://louisdesdoigts.github.io/zodiax/)
Installation: ```pip install zodiax```
Contributors: [Louis Desdoigts](https://github.com/LouisDesdoigts)
Requires: Python 3.8+, Jax 0.4.3+
---
### Quickstart
Create a regular class that inherits from `zodiax.Base`
```python
import jax
import zodiax as zdx
import jax.numpy as np
class Linear(zdx.Base):
    m : jax.Array
    b : jax.Array
def __init__(self, m, b):
self.m = m
self.b = b
def model(self, x):
return self.m * x + self.b
linear = Linear(1., 1.)
```
It's that simple! The `linear` class is now a fully differentiable object that gives us **all** the benefits of Jax with an object-oriented interface! Let's see how we can jit-compile and take gradients of this class.
```python
@jax.jit
@jax.grad
def loss_fn(model, xs, ys):
return np.square(model.model(xs) - ys).sum()
xs = np.arange(5)
ys = 2*np.arange(5)
grads = loss_fn(linear, xs, ys)
print(grads)
print(grads.m, grads.b)
```
```python
> Linear(m=f32[], b=f32[])
> -40.0 -10.0
```
The `grads` object is an instance of the `Linear` class with the gradients of the parameters with respect to the loss function!
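Since the model and its gradients are pytrees with the same structure, a plain gradient-descent step is a single tree-map over the two. A minimal sketch of one update (the learning rate `lr` is an arbitrary choice here):

```python
lr = 0.1

# Subtract the scaled gradient from every leaf of the model pytree.
linear = jax.tree_util.tree_map(lambda p, g: p - lr * g, linear, grads)
```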
| zodiax | /zodiax-0.4.1.tar.gz/zodiax-0.4.1/README.md | README.md |
.. contents:: **Table of Contents**
Requires
========
- Python2.4+
Usage
=====
Zodict
------
Ordered dictionary which implements the corresponding
``zope.interface.common.mapping`` interface.
::
>>> from zope.interface.common.mapping import IFullMapping
>>> from zodict import Zodict
>>> zod = Zodict()
>>> IFullMapping.providedBy(zod)
True
Node
----
This is a ``zodict`` which provides a location. Location the Zope way means
that each item in the node tree knows its parent and its own name.
::
    >>> from zope.location.interfaces import ILocation
>>> from zodict import Node
>>> root = Node('root')
    >>> ILocation.providedBy(root)
True
>>> root['child'] = Node()
>>> root['child'].path
['root', 'child']
>>> child = root['child']
>>> child.__name__
'child'
>>> child.__parent__
<Node object 'root' at ...>
The ``filtereditems`` function.
::
>>> from zope.interface import Interface
>>> from zope.interface import alsoProvides
>>> class IMarker(Interface): pass
    >>> root['child']['subchild'] = Node()
    >>> alsoProvides(root['child']['subchild'], IMarker)
>>> IMarker.providedBy(root['child']['subchild'])
True
>>> for item in root['child'].filtereditems(IMarker):
... print item.path
['root', 'child', 'subchild']
UUID related operations on Node.
::
>>> uuid = root['child']['subchild'].uuid
>>> uuid
UUID('...')
>>> root.node(uuid).path
['root', 'child', 'subchild']
>>> root.uuid = uuid
Traceback (most recent call last):
...
ValueError: Given uuid was already used for another Node
>>> import uuid
>>> newuuid = uuid.uuid4()
>>> root.uuid = newuuid
>>> root['child'].node(newuuid).path
['root']
Node insertion (an insertafter function exists as well).
::
>>> root['child1'] = Node()
>>> root['child2'] = Node()
>>> node = Node('child3')
>>> root.insertbefore(node, root['child2'])
>>> root.printtree()
<class 'zodict.node.Node'>: root
<class 'zodict.node.Node'>: child1
<class 'zodict.node.Node'>: child3
<class 'zodict.node.Node'>: child2
Move a node. Therefore, we first need to detach the node we want to move from
the tree, then insert the detached node elsewhere. In general, you can insert
the detached node or subtree into a completely different tree.
::
>>> len(root._index.keys())
6
>>> node = root.detach('child4')
>>> node
<Node object 'child4' at ...>
>>> len(node._index.keys())
1
>>> len(root._index.keys())
5
>>> len(root.values())
4
>>> root.insertbefore(node, root['child1'])
>>> root.printtree()
<class 'zodict.node.Node'>: root
<class 'zodict.node.Node'>: child4
<class 'zodict.node.Node'>: child1
<class 'zodict.node.Node'>: child3
<class 'zodict.node.Node'>: child5
<class 'zodict.node.Node'>: child2
Merge 2 Node Trees.
::
>>> tree1 = Node()
>>> tree1['a'] = Node()
>>> tree1['b'] = Node()
>>> tree2 = Node()
>>> tree2['d'] = Node()
>>> tree2['e'] = Node()
>>> tree1._index is tree2._index
False
>>> len(tree1._index.keys())
3
>>> tree1.printtree()
<class 'zodict.node.Node'>: None
<class 'zodict.node.Node'>: a
<class 'zodict.node.Node'>: b
>>> len(tree2._index.keys())
3
>>> tree2.printtree()
<class 'zodict.node.Node'>: None
<class 'zodict.node.Node'>: d
<class 'zodict.node.Node'>: e
>>> tree1['c'] = tree2
>>> len(tree1._index.keys())
6
>>> tree1._index is tree2._index
True
>>> tree1.printtree()
<class 'zodict.node.Node'>: None
<class 'zodict.node.Node'>: a
<class 'zodict.node.Node'>: b
<class 'zodict.node.Node'>: c
<class 'zodict.node.Node'>: d
<class 'zodict.node.Node'>: e
LifecycleNode
-------------
The ``LifecycleNode`` is able to send out notifications with object events
based on ``zope.lifecycleevent`` subclasses.
Creation of Node
``zodict.events.NodeCreatedEvent`` implementing
``zodict.interfaces.INodeCreatedEvent``.
Adding childs to Node
``zodict.events.NodeAddedEvent`` implementing
``zodict.interfaces.INodeAddedEvent``.
Deleting childs from Node
``zodict.events.NodeRemovedEvent`` implementing
``zodict.interfaces.INodeRemovedEvent``.
Detaching childs from Node
``zodict.events.NodeDetachedEvent`` implementing
``zodict.interfaces.INodeDetachedEvent``.
In subclasses of Node the event classes can be exchanged by modifying the
class attribute ``events`` on the node. It is a dictionary with the keys:
``['created', 'added', 'removed', 'detached']``
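For example, a subclass can exchange the ``added`` event with a custom one.
``MyNodeAddedEvent`` below is a hypothetical event class implementing
``zodict.interfaces.INodeAddedEvent``.

::

    >>> from zodict import LifecycleNode
    >>> from zodict.events import NodeCreatedEvent
    >>> from zodict.events import NodeRemovedEvent
    >>> from zodict.events import NodeDetachedEvent
    >>> class MyLifecycleNode(LifecycleNode):
    ...     events = {
    ...         'created': NodeCreatedEvent,
    ...         'added': MyNodeAddedEvent,
    ...         'removed': NodeRemovedEvent,
    ...         'detached': NodeDetachedEvent,
    ...     }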
Thread safe Locking of a Tree
-----------------------------
Neither ``Node`` nor ``LifecycleNode`` is thread safe. Application builders
are responsible for this. The major reason: acquiring and releasing locks is
an expensive operation.
The module ``zodict.locking`` provides a mechanism to lock the whole tree
thread safe. A class and a decorator is provided. The class is intended to be
used standalone with some Node, the decorator to be used on subclasses of
``Node`` or ``LifecycleNode``.
``zodict.locking.TreeLock`` is an adapter-like class on a Node. In Python 2.6
and later it can be used with the ``with`` statement.
::
    >>> from zodict.locking import TreeLock
    >>> node = Node()
    >>> with TreeLock(node):
    ...     # do something on the locked tree
    ...     node['foo'] = Node()
Alternatively, in older Python versions it can be used within a
``try``/``finally`` block.
::
>>> from zodict.locking import TreeLock
>>> lock = TreeLock(node)
>>> lock.acquire()
    >>> try:
    ...     # do something on the locked tree
    ...     node['bar'] = Node()
    ... finally:
    ...     lock.release()
``zodict.locking.locktree`` is a decorator for methods of a (sub-)class of ``Node``.
::
>>> from zodict.locking import locktree
>>> class LockedNode(Node):
...
... @locktree
... def __setitem__(self, key, val):
... super(LockedNode, self).__setitem__(key, val)
Changes
=======
Version 1.9.3
-------------
- Provide abstract ``_Node`` implementation.
[rnix, 2010-07-08]
- Typos in documentation.
[thet, 2010-07-06]
- BBB imports in except block rather than trying it first.
[thet, 2010-07-06]
- Buildout configuration for testing purposes.
[thet, 2010-07-06]
Version 1.9.2
-------------
- set ``child.__name__`` and ``child.__parent__`` before ``child.keys()`` call
for index check in ``Node.__setitem__``. ``keys()`` of child might depend
on those.
[rnix, 2010-05-01]
- Separated ``AttributedNode`` from ``LifecycleNode``, so attributes can be used
without events now.
[jensens, 2010-04-28]
Version 1.9.1
-------------
- Add test for bool evaluation
[rnix, 2010-04-21]
- Add ``__setattr__`` and ``__getattr__`` to ``NodeAttributes`` object.
[rnix, 2010-04-21]
- BBB compatibility for zope2.9
[rnix, jensens, 2010-02-17]
Version 1.9.0
-------------
- Make zodict compatible with python 2.4 again, BBB
[jensens, 2009-12-23]
- Add locking test
[rnix, 2009-12-23]
- Refactor locking, remove tree-locking from Node base implementations.
Add easy to use locking class and a decorator intended to be used in
applications and subclasses of ``Node``.
[jensens, 2009-12-23]
- Introduce ``ICallableNode``, ``ILeaf`` and ``IRoot`` interfaces.
[rnix, 2009-12-23]
- Change License to PSF
[rnix, 2009-12-22]
- Add ``zodict.node.NodeAttributes`` object.
[rnix, 2009-12-22]
- Add ``attributes`` Attribute to ``LifecycleNode``.
[rnix, 2009-12-22]
- Add ``ILifecycleNode`` and ``INodeAttributes`` interfaces.
[rnix, 2009-12-22]
- Removed typo in private variable name. added notify-suppress to setitem of
``LifecycleNode``.
[jensens, 2009-12-22]
Version 1.8.0
-------------
- Added ``zope.lifecycle`` events to the new ``LifecycleNode``. You can
  easily override them with your own events.
[jensens, 2009-12-21]
- Renamed class ``zodict`` to ``Zodict``, renamed module ``zodict.zodict`` to
``zodict._zodict``. This avoids ugly clashes on import (package vs. module
vs.class). BBB import is provided in the 1.x release series.
[jensens, 2009-12-21]
Version 1.7.0
-------------
- Add ``Node.detach`` function. Needed for node or subtree moving. This is
done due to performance reasons.
[rnix, 2009-12-18]
- ``Node.index`` returns now a ``NodeIndex`` object, which implements
``zope.interface.common.mapping.IReadMapping``. This functions convert uuid
instances to integers before node lookup. So we still fit the contract of
returning nodes from index by uuid.
[rnix, 2009-12-18]
- Change type of keys of ``Node._index`` to int. ``uuid.UUID.__hash__``
function was called too often
[jensens, rnix, 2009-12-18]
- make ``Node`` thread safe.
[jensens, rnix, 2009-12-18]
Version 1.6.1
-------------
- make ``Node`` trees merge properly.
[rnix, 2009-12-15]
- make getter and setter functions of ``uuid`` property private.
[rnix, 2009-12-15]
Version 1.6.0
-------------
- remove the ``traverser`` module.
[rnix, 2009-11-28]
- improve ``insertbefore`` and ``insertafter`` a little bit.
[rnix, 2009-11-28]
- add ``index`` Attribute to ``Node``. Allows access to the internal
``_index`` attribute.
[rnix, 2009-11-28]
- remove ``@accept`` and ``@return`` decorators. Just overhead.
[rnix, 2009-11-28]
Version 1.5.0
-------------
- add ``insertbefore`` and ``insertafter`` function to ``Node``.
[rnix, 2009-11-27]
- fix ``printtree`` if ``Node.__name__`` is ``None``.
[rnix, 2009-11-20]
- add ``printtree`` debug helper function to ``Node``.
[rnix, 2009-11-09]
- define own Traverser interface and reduce dependencies.
[rnix, 2009-10-28]
- removed import of tests from zodict's ``__init__``. This caused import errors
  if ``interlude`` wasn't installed.
[jensens, 2009-07-16]
Version 1.4.0
-------------
- Don't allow classes as values of a ``Node``. Attribute ``__name__``
conflicts.
[jensens, 2009-05-06]
- ``repr(nodeobj)`` now returns the real classname and not fixed
``<Node object`` this helps a lot while testing and using classes inheriting
from ``Node``!
[jensens, 2009-05-06]
- Make tests run with ``python setup.py test``.
  Removed superfluous dependency on ``zope.testing``.
[jensens, 2009-05-06]
Version 1.3.3
-------------
- Fix ``ITraverser`` interface import including BBB.
Version 1.3.2
-------------
- Add ``root`` property to ``Node``.
[thet, 2009-04-24]
Version 1.3.1
-------------
- Add ``__delitem__`` function to ``Node``.
[rnix, 2009-04-16]
Version 1.3
-----------
- Add ``uuid`` Attribute and ``node`` function to ``Node``.
[rnix, 2009-03-23]
Version 1.2
-----------
- Add ``filtereditems`` function to ``Node``.
[rnix, 2009-03-22]
Version 1.1
-----------
- Add ``INode`` interface and implementation.
[rnix, 2009-03-18]
Credits
=======
- Written by Robert Niederreiter <[email protected]>
- Contributions and ideas by Jens Klein <[email protected]>
| zodict | /zodict-1.9.3.tar.gz/zodict-1.9.3/README.txt | README.txt |
import os, shutil, sys, tempfile, urllib2
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
is_jython = sys.platform.startswith('java')
# parsing arguments
parser = OptionParser()
parser.add_option("-v", "--version", dest="version",
help="use a specific zc.buildout version")
parser.add_option("-d", "--distribute",
action="store_true", dest="distribute", default=False,
help="Use Disribute rather than Setuptools.")
parser.add_option("-c", None, action="store", dest="config_file",
help=("Specify the path to the buildout configuration "
"file to be used."))
options, args = parser.parse_args()
# if -c was provided, we push it back into args for buildout's main function
if options.config_file is not None:
args += ['-c', options.config_file]
if options.version is not None:
VERSION = '==%s' % options.version
else:
VERSION = ''
USE_DISTRIBUTE = options.distribute
args = args + ['bootstrap']
to_reload = False
try:
import pkg_resources
if not hasattr(pkg_resources, '_distribute'):
to_reload = True
raise ImportError
except ImportError:
ez = {}
if USE_DISTRIBUTE:
exec urllib2.urlopen('http://python-distribute.org/distribute_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0, no_fake=True)
else:
exec urllib2.urlopen('http://peak.telecommunity.com/dist/ez_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0)
if to_reload:
reload(pkg_resources)
else:
import pkg_resources
if sys.platform == 'win32':
def quote(c):
if ' ' in c:
return '"%s"' % c # work around spawn lamosity on windows
else:
return c
else:
def quote (c):
return c
cmd = 'from setuptools.command.easy_install import main; main()'
ws = pkg_resources.working_set
if USE_DISTRIBUTE:
requirement = 'distribute'
else:
requirement = 'setuptools'
if is_jython:
import subprocess
assert subprocess.Popen([sys.executable] + ['-c', quote(cmd), '-mqNxd',
quote(tmpeggs), 'zc.buildout' + VERSION],
env=dict(os.environ,
PYTHONPATH=
ws.find(pkg_resources.Requirement.parse(requirement)).location
),
).wait() == 0
else:
assert os.spawnle(
os.P_WAIT, sys.executable, quote (sys.executable),
'-c', quote (cmd), '-mqNxd', quote (tmpeggs), 'zc.buildout' + VERSION,
dict(os.environ,
PYTHONPATH=
ws.find(pkg_resources.Requirement.parse(requirement)).location
),
) == 0
ws.add_entry(tmpeggs)
ws.require('zc.buildout' + VERSION)
import zc.buildout.buildout
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs) | zodict | /zodict-1.9.3.tar.gz/zodict-1.9.3/bootstrap.py | bootstrap.py |
<img src="docs/img/zodipy_logo.png" width="350">
[](https://badge.fury.io/py/zodipy)
[](http://www.astropy.org/)

[](https://codecov.io/gh/Cosmoglobe/zodipy)
[](https://arxiv.org/abs/2205.12962)
---
ZodiPy is a Python tool for simulating the zodiacal emission in intensity that an arbitrary Solar System observer sees, either in the form of timestreams or full-sky HEALPix maps.

# Help
See the [documentation](https://cosmoglobe.github.io/zodipy/) for more information and examples on how to use ZodiPy for different applications.
# Installation
ZodiPy is installed using `pip install zodipy`.
# A simple example
```python
import astropy.units as u
from astropy.time import Time
from zodipy import Zodipy
model = Zodipy("dirbe")
emission = model.get_emission_ang(
25 * u.micron,
theta=[10, 10.1, 10.2] * u.deg,
phi=[90, 89, 88] * u.deg,
obs_time=Time("2022-01-01 12:00:00"),
obs="earth",
)
print(emission)
#> [15.35392831 15.35495051 15.35616009] MJy / sr
```
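ZodiPy can also evaluate the model in every pixel of a HEALPix grid to produce a full-sky map. A minimal sketch using the `get_emission_pix` method (the `nside` value is an arbitrary choice, and `healpy`/`numpy` are assumed to be installed):

```python
import astropy.units as u
import healpy as hp
import numpy as np
from astropy.time import Time
from zodipy import Zodipy

model = Zodipy("dirbe")
nside = 64

# Evaluate the 25 micron emission in every pixel of the HEALPix grid.
emission = model.get_emission_pix(
    25 * u.micron,
    pixels=np.arange(hp.nside2npix(nside)),
    nside=nside,
    obs_time=Time("2022-01-01 12:00:00"),
    obs="earth",
)
```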
# Scientific paper and citation
For an overview of the ZodiPy model approach and other information regarding zodiacal emission and interplanetary dust modeling, we refer to the scientific paper on ZodiPy:
- [Cosmoglobe: Simulating zodiacal emission with ZodiPy (San et al. 2022)](https://arxiv.org/abs/2205.12962).
See [CITATION](https://github.com/Cosmoglobe/zodipy/blob/dev/CITATION.bib) if you have used ZodiPy in your work and want to cite the software.
# Funding
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 776282 (COMPET-4; BeyondPlanck), 772253 (ERC; bits2cosmology) and 819478 (ERC; Cosmoglobe).
<div style="display: flex; flex-direction: row; justify-content: space-evenly">
<img style="width: 49%; height: auto; max-width: 500px; align-self: center" src="https://user-images.githubusercontent.com/28634670/170697040-d5ec2935-29d0-4847-8999-9bc4eaa59e56.jpeg">
<img style="width: 49%; height: auto; max-width: 500px; align-self: center" src="https://user-images.githubusercontent.com/28634670/170697140-b010aa69-9f9a-44c0-b702-8a05ec0b6d3e.jpeg">
</div> | zodipy | /zodipy-0.8.0.tar.gz/zodipy-0.8.0/README.md | README.md |
Zoe - Container-based Analytics as a Service
============================================
Zoe provides a simple way to provision data analytics clusters and
workflows using container-based (Docker) virtualization. The guiding
principles are:
- ease of use: data scientists know about data and applications,
systems and resource constraints should be kept out of the way
- ease of administration: we have a strong background in systems and
network administration, so we put all effort possible to make Zoe
ease to install and maintain
- use well-known technologies: we try hard not to reinvent the wheel,
we use Python, ZeroMQ, Docker and the Apache Web Server
- a clear roadmap: our short and long-term objectives should always be
clear and well defined
- openness: the source code is open: clone, modify, discuss, test and
   contribute; you are welcome!
Resources:
- Documentation: http://zoe-analytics.readthedocs.org/
- Docker images:
https://github.com/DistributedSystemsGroup/zoe-docker-images
|Documentation Status| |Requirements Status|
Zoe is licensed under the terms of the Apache 2.0 license.
.. |Documentation Status| image:: https://readthedocs.org/projects/zoe-analytics/badge/?version=latest
:target: https://readthedocs.org/projects/zoe-analytics/?badge=latest
.. |Requirements Status| image:: https://requires.io/github/DistributedSystemsGroup/zoe/requirements.svg?branch=master
:target: https://requires.io/github/DistributedSystemsGroup/zoe/requirements/?branch=master
| zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/README.rst | README.rst |
from argparse import ArgumentParser, Namespace
import logging
from zipfile import is_zipfile
from pprint import pprint
from zoe_client import ZoeClient
from common.configuration import init as conf_init, zoeconf
argparser = None
def get_zoe_client(args) -> ZoeClient:
return ZoeClient(args.ipc_server, args.ipc_port)
def stats_cmd(args):
client = get_zoe_client(args)
stats = client.platform_stats()
pprint(stats)
def user_new_cmd(args):
client = get_zoe_client(args)
user = client.user_new(args.email)
print("New user ID: {}".format(user.id))
def user_get_cmd(args):
client = get_zoe_client(args)
user = client.user_get_by_email(args.email)
print("User ID: {}".format(user.id))
def spark_cluster_new_cmd(args):
client = get_zoe_client(args)
application_id = client.application_spark_new(args.user_id, args.worker_count, args.executor_memory, args.executor_cores, args.name)
print("Spark application added with ID: {}".format(application_id))
def spark_notebook_new_cmd(args):
client = get_zoe_client(args)
application_id = client.application_spark_notebook_new(args.user_id, args.worker_count, args.executor_memory, args.executor_cores, args.name)
print("Spark application added with ID: {}".format(application_id))
def spark_submit_new_cmd(args):
if not is_zipfile(args.file):
print("Error: the file specified is not a zip archive")
return
fcontents = open(args.file, "rb").read()
client = get_zoe_client(args)
application_id = client.application_spark_submit_new(args.user_id, args.worker_count, args.executor_memory, args.executor_cores, args.name, fcontents)
print("Spark application added with ID: {}".format(application_id))
def run_spark_cmd(args):
client = get_zoe_client(args)
application = client.application_get(args.id)
if application is None:
print("Error: application {} does not exist".format(args.id))
return
ret = client.execution_spark_new(application.id, args.name, args.cmd, args.spark_opts)
if ret:
print("Application scheduled successfully, use the app-inspect command to check its status")
else:
print("Admission control refused to run the application specified")
def app_rm_cmd(args):
client = get_zoe_client(args)
application = client.application_get(args.id)
if application is None:
print("Error: application {} does not exist".format(args.id))
return
if args.force:
a = client.application_get(application.id)
for eid in a.executions:
e = client.execution_get(eid.id)
if e.status == "running":
print("Terminating execution {}".format(e.name))
client.execution_terminate(e.id)
client.application_remove(application.id, args.force)
def app_inspect_cmd(args):
client = get_zoe_client(args)
application = client.application_get(args.id)
if application is None:
print("Error: application {} does not exist".format(args.id))
return
print(application)
def app_list_cmd(args):
client = get_zoe_client(args)
applications = client.application_list(args.id)
if len(applications) > 0:
print("{:4} {:20} {:25}".format("ID", "Name", "Type"))
for app in applications:
print("{:4} {:20} {:25}".format(app.id, app.name, app.type))
def exec_kill_cmd(args):
client = get_zoe_client(args)
execution = client.execution_get(args.id)
if execution is None:
print("Error: execution {} does not exist".format(args.id))
return
client.execution_terminate(execution.id)
def log_get_cmd(args):
client = get_zoe_client(args)
log = client.log_get(args.id)
if log is None:
print("Error: No log found for container ID {}".format(args.id))
print(log)
def gen_config_cmd(args):
zoeconf().write(open(args.output_file, "w"))
def container_stats_cmd(args):
client = get_zoe_client(args)
stats = client.container_stats(args.container_id)
print(stats)
def process_arguments() -> Namespace:
global argparser
argparser = ArgumentParser(description="Zoe - Container Analytics as a Service command-line client")
argparser.add_argument('-d', '--debug', action='store_true', default=False, help='Enable debug output')
argparser.add_argument('--ipc-server', default='localhost', help='Address of the Zoe scheduler process')
argparser.add_argument('--ipc-port', default=8723, type=int, help='Port of the Zoe scheduler process')
subparser = argparser.add_subparsers(title='subcommands', description='valid subcommands')
argparser_stats = subparser.add_parser('stats', help="Show the platform statistics")
argparser_stats.set_defaults(func=stats_cmd)
argparser_user_new = subparser.add_parser('user-new', help="Create a new user")
argparser_user_new.add_argument('email', help="User email address")
argparser_user_new.set_defaults(func=user_new_cmd)
argparser_user_get = subparser.add_parser('user-get', help="Get the user id for an existing user")
argparser_user_get.add_argument('email', help="User email address")
argparser_user_get.set_defaults(func=user_get_cmd)
argparser_spark_cluster_create = subparser.add_parser('app-spark-cluster-new', help="Setup a new empty Spark cluster")
argparser_spark_cluster_create.add_argument('--user-id', type=int, required=True, help='Application owner')
argparser_spark_cluster_create.add_argument('--name', required=True, help='Application name')
argparser_spark_cluster_create.add_argument('--worker-count', type=int, default=2, help='Number of workers')
argparser_spark_cluster_create.add_argument('--executor-memory', default='2g', help='Maximum memory available per-worker, the system assumes only one executor per worker')
argparser_spark_cluster_create.add_argument('--executor-cores', default='2', type=int, help='Number of cores to assign to each executor')
argparser_spark_cluster_create.set_defaults(func=spark_cluster_new_cmd)
argparser_spark_nb_create = subparser.add_parser('app-spark-notebook-new', help="Setup a new Spark Notebook application")
argparser_spark_nb_create.add_argument('--user-id', type=int, required=True, help='Notebook owner')
argparser_spark_nb_create.add_argument('--name', required=True, help='Notebook name')
argparser_spark_nb_create.add_argument('--worker-count', type=int, default=2, help='Number of workers')
argparser_spark_nb_create.add_argument('--executor-memory', default='2g', help='Maximum memory available per-worker, the system assumes only one executor per worker')
argparser_spark_nb_create.add_argument('--executor-cores', default='2', type=int, help='Number of cores to assign to each executor')
argparser_spark_nb_create.set_defaults(func=spark_notebook_new_cmd)
argparser_spark_submit_create = subparser.add_parser('app-spark-new', help="Setup a new Spark submit application")
argparser_spark_submit_create.add_argument('--user-id', type=int, required=True, help='Application owner')
argparser_spark_submit_create.add_argument('--name', required=True, help='Application name')
argparser_spark_submit_create.add_argument('--worker-count', type=int, default=2, help='Number of workers')
argparser_spark_submit_create.add_argument('--executor-memory', default='2g', help='Maximum memory available per-worker, the system assumes only one executor per worker')
argparser_spark_submit_create.add_argument('--executor-cores', default='2', type=int, help='Number of cores to assign to each executor')
argparser_spark_submit_create.add_argument('--file', required=True, help='zip archive containing the application files')
argparser_spark_submit_create.set_defaults(func=spark_submit_new_cmd)
argparser_app_rm = subparser.add_parser('app-rm', help="Delete an application")
argparser_app_rm.add_argument('id', type=int, help="Application id")
argparser_app_rm.add_argument('-f', '--force', action="store_true", help="Kill also all active executions, if any")
argparser_app_rm.set_defaults(func=app_rm_cmd)
argparser_app_inspect = subparser.add_parser('app-inspect', help="Gather details about an application and its active executions")
argparser_app_inspect.add_argument('id', type=int, help="Application id")
argparser_app_inspect.set_defaults(func=app_inspect_cmd)
argparser_app_inspect = subparser.add_parser('app-ls', help="List all applications for a user")
argparser_app_inspect.add_argument('id', type=int, help="User id")
argparser_app_inspect.set_defaults(func=app_list_cmd)
argparser_spark_app_run = subparser.add_parser('run', help="Execute a previously registered Spark application")
argparser_spark_app_run.add_argument('id', type=int, help="Application id")
argparser_spark_app_run.add_argument('--name', required=True, help='Execution name')
argparser_spark_app_run.add_argument('--cmd', help="Command-line to pass to spark-submit")
argparser_spark_app_run.add_argument('--spark-opts', help="Optional Spark options to pass to spark-submit")
argparser_spark_app_run.set_defaults(func=run_spark_cmd)
argparser_execution_kill = subparser.add_parser('execution-kill', help="Terminates an execution")
argparser_execution_kill.add_argument('id', type=int, help="Execution id")
argparser_execution_kill.set_defaults(func=exec_kill_cmd)
argparser_log_get = subparser.add_parser('log-get', help="Retrieves the logs of a running container")
argparser_log_get.add_argument('id', type=int, help="Container id")
argparser_log_get.set_defaults(func=log_get_cmd)
argparser_log_get = subparser.add_parser('write-config', help="Generates a sample file containing current configuration values")
argparser_log_get.add_argument('output_file', help="Filename to create with default configuration")
argparser_log_get.set_defaults(func=gen_config_cmd)
argparser_container_stats = subparser.add_parser('container-stats', help="Retrieve statistics on a running container")
argparser_container_stats.add_argument('container_id', help="ID of the container")
argparser_container_stats.set_defaults(func=container_stats_cmd)
return argparser.parse_args()
def zoe():
args = process_arguments()
if args.debug:
logging.basicConfig(level=logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
conf_init()
try:
args.func(args)
except AttributeError:
argparser.print_help()
return | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_client/entrypoint.py | entrypoint.py |
import dateutil.parser
def deserialize_datetime(isoformat):
if isoformat is None:
return None
else:
return dateutil.parser.parse(isoformat)
class User:
def __init__(self, user: dict):
self.id = user['id']
self.email = user['email']
class Execution:
def __init__(self, execution: dict):
self.id = execution['id']
self.name = execution['name']
self.assigned_resources = execution['assigned_resources']
self.application_id = execution['application_id']
self.time_started = deserialize_datetime(execution['time_started'])
self.time_scheduled = deserialize_datetime(execution['time_scheduled'])
self.time_finished = deserialize_datetime(execution['time_finished'])
self.status = execution['status']
self.termination_notice = execution['termination_notice']
self.cluster_id = execution['cluster_id']
self.type = execution['type']
if self.type == 'spark-submit-application':
self.commandline = execution['commandline']
self.spark_opts = execution['spark_opts']
self.containers = []
for c in execution['containers']:
self.containers.append(Container(c))
class Container:
def __init__(self, container: dict):
self.id = container['id']
self.docker_id = container['docker_id']
self.cluster_id = container['cluster_id']
self.ip_address = container['ip_address']
self.readable_name = container['readable_name']
self.proxies = []
for p in container['proxies']:
self.proxies.append(Proxy(p))
class Proxy:
def __init__(self, proxy: dict):
self.id = proxy['id']
self.internal_url = proxy['internal_url']
self.cluster_id = proxy['cluster_id']
self.container_id = proxy['container_id']
self.service_name = proxy['service_name']
self.last_access = deserialize_datetime(proxy['last_access'])
class Application:
"""
:type id: int
:type name: str
:type required_resources: ApplicationResources
:type user_id: int
:type type: str
:type master_image: str
:type worker_image: str
:type notebook_image: str
:type submit_image: str
:type executions: list[Execution]
"""
def __init__(self, application: dict):
self.id = application['id']
self.name = application['name']
self.required_resources = application['required_resources']
self.user_id = application['user_id']
self.type = application['type']
if 'spark' in self.type:
self.master_image = application['master_image']
self.worker_image = application['worker_image']
if self.type == 'spark-notebook':
self.notebook_image = application['notebook_image']
if self.type == 'spark-submit':
self.submit_image = application['submit_image']
self.executions = []
for e in application['executions']:
self.executions.append(Execution(e)) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_client/entities.py | entities.py |
import base64
import logging
from zoe_client.ipc import ZoeIPCClient
from common.configuration import zoeconf
from zoe_client.entities import User, Execution, Application
log = logging.getLogger(__name__)
MASTER_IMAGE = "/zoerepo/spark-master"
WORKER_IMAGE = "/zoerepo/spark-worker"
SUBMIT_IMAGE = "/zoerepo/spark-submit"
NOTEBOOK_IMAGE = "/zoerepo/spark-notebook"
class ZoeClient:
def __init__(self, ipc_server='localhost', ipc_port=8723):
self.ipc_server = ZoeIPCClient(ipc_server, ipc_port)
self.image_registry = zoeconf().docker_private_registry
# Applications
def application_get(self, application_id: int) -> Application:
answer = self.ipc_server.ask('application_get', application_id=application_id)
if answer is not None:
return Application(answer['app'])
def application_get_binary(self, application_id: int) -> bytes:
data = self.ipc_server.ask('application_get_binary', application_id=application_id)
app_data = base64.b64decode(data['zip_data'])
return app_data
def application_list(self, user_id):
"""
Returns a list of all Applications belonging to user_id
:type user_id: int
:rtype : list[Application]
"""
answer = self.ipc_server.ask('application_list', user_id=user_id)
if answer is None:
return []
else:
return [Application(x) for x in answer['apps']]
def application_remove(self, application_id: int, force: bool) -> bool:
answer = self.ipc_server.ask('application_remove', application_id=application_id, force=force)
return answer is not None
def application_spark_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str) -> int:
answer = self.ipc_server.ask('application_spark_new',
user_id=user_id,
worker_count=worker_count,
executor_memory=executor_memory,
executor_cores=executor_cores,
name=name,
master_image=self.image_registry + MASTER_IMAGE,
worker_image=self.image_registry + WORKER_IMAGE)
if answer is not None:
return answer['app_id']
def application_spark_notebook_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str) -> int:
answer = self.ipc_server.ask('application_spark_notebook_new',
user_id=user_id,
worker_count=worker_count,
executor_memory=executor_memory,
executor_cores=executor_cores,
name=name,
master_image=self.image_registry + MASTER_IMAGE,
worker_image=self.image_registry + WORKER_IMAGE,
notebook_image=self.image_registry + NOTEBOOK_IMAGE)
if answer is not None:
return answer['app_id']
def application_spark_submit_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str, file_data: bytes) -> int:
file_data = base64.b64encode(file_data).decode('ascii')
answer = self.ipc_server.ask('application_spark_submit_new',
user_id=user_id,
worker_count=worker_count,
executor_memory=executor_memory,
executor_cores=executor_cores,
name=name,
file_data=file_data,
master_image=self.image_registry + MASTER_IMAGE,
worker_image=self.image_registry + WORKER_IMAGE,
submit_image=self.image_registry + SUBMIT_IMAGE)
if answer is not None:
return answer['app_id']
# Containers
def container_stats(self, container_id):
return self.ipc_server.ask('container_stats', container_id=container_id)
# Executions
def execution_delete(self, execution_id: int) -> None:
ret = self.ipc_server.ask('execution_delete', execution_id=execution_id)
return ret is not None
def execution_get(self, execution_id: int) -> Execution:
exec_dict = self.ipc_server.ask('execution_get', execution_id=execution_id)
if exec_dict is not None:
return Execution(exec_dict)
def execution_get_proxy_path(self, execution_id):
answer = self.ipc_server.ask('execution_get_proxy_path', execution_id=execution_id)
if answer is not None:
return answer['path']
def execution_spark_new(self, application_id: int, name, commandline=None, spark_options=None) -> bool:
ret = self.ipc_server.ask('execution_spark_new', application_id=application_id, name=name, commandline=commandline, spark_options=spark_options)
return ret is not None
def execution_terminate(self, execution_id: int) -> None:
ret = self.ipc_server.ask('execution_terminate', execution_id=execution_id)
return ret is not None
# Logs
def log_get(self, container_id: int) -> str:
clog = self.ipc_server.ask('log_get', container_id=container_id)
if clog is not None:
return clog['log']
def log_history_get(self, execution_id):
data = self.ipc_server.ask('log_history_get', execution_id=execution_id)
log_data = base64.b64decode(data['zip_data'])
return log_data
# Platform
def platform_stats(self) -> dict:
stats = self.ipc_server.ask('platform_stats')
return stats
# Users
def user_check(self, user_id: int) -> bool:
user = self.user_get(user_id)
return user is not None
def user_new(self, email: str) -> User:
user_dict = self.ipc_server.ask('user_new', email=email)
if user_dict is not None:
return User(user_dict)
def user_get(self, user_id: int) -> User:
user_dict = self.ipc_server.ask('user_get', user_id=user_id)
if user_dict is not None:
return User(user_dict)
def user_get_by_email(self, email: str) -> User:
user_dict = self.ipc_server.ask('user_get_by_email', user_email=email)
if user_dict is not None:
return User(user_dict) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_client/client.py | client.py |
import logging
from threading import Barrier
import time
from zoe_scheduler.platform import PlatformManager
from zoe_scheduler.platform_status import PlatformStatus
from zoe_scheduler.periodic_tasks import PeriodicTaskManager
from zoe_scheduler.proxy_manager import proxy_manager
from common.configuration import zoeconf
from zoe_scheduler.state.execution import ExecutionState
from common.application_resources import ApplicationResources
from zoe_scheduler.stats import SchedulerStats
log = logging.getLogger(__name__)
class SimpleSchedulerPolicy:
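    """A trivial FIFO policy: an execution is admitted if it requests fewer
    cores than the platform provides in total, and runnable executions are
    started in arrival order."""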
def __init__(self, platform_status: PlatformStatus):
self.platform_status = platform_status
self.waiting_list = []
self.running_list = []
def admission_control(self, required_resources: ApplicationResources) -> bool:
if required_resources.core_count() < self.platform_status.swarm_status.cores_total:
return True
else:
return False
def insert(self, execution_id: int, resources: ApplicationResources):
self.waiting_list.append((execution_id, resources))
def runnable(self) -> (int, ApplicationResources):
try:
exec_id, resources = self.waiting_list.pop(0)
except IndexError:
return None, None
assigned_resources = resources # Could modify the amount of resource assigned before running
return exec_id, assigned_resources
def started(self, execution_id: int, resources: ApplicationResources):
self.running_list.append((execution_id, resources))
def terminated(self, execution_id: int):
if self.find_execution_running(execution_id):
self.running_list = [x for x in self.running_list if x[0] != execution_id]
if self.find_execution_waiting(execution_id):
self.waiting_list = [x for x in self.waiting_list if x[0] != execution_id]
def find_execution_running(self, exec_id) -> bool:
for e, r in self.running_list:
if e == exec_id:
return True
return False
    def find_execution_waiting(self, exec_id) -> bool:
        for e, r in self.waiting_list:
            if e == exec_id:
                return True
        return False
def stats(self):
ret = SchedulerStats()
ret.count_running = len(self.running_list)
ret.count_waiting = len(self.waiting_list)
ret.timestamp = time.time()
return ret
class ZoeScheduler:
def __init__(self):
self.platform = PlatformManager()
self.platform_status = PlatformStatus(self)
self.scheduler_policy = SimpleSchedulerPolicy(self.platform_status)
def init_tasks(self, tm: PeriodicTaskManager) -> Barrier:
barrier = Barrier(4) # number of tasks + main thread
tm.add_task("platform status updater", self.platform_status.update, zoeconf().interval_status_refresh, barrier)
tm.add_task("proxy access timestamp updater", proxy_manager().update_proxy_access_timestamps, zoeconf().interval_proxy_update_accesses, barrier)
tm.add_task("execution health checker", self.platform.check_executions_health, zoeconf().interval_check_health, barrier)
return barrier
def incoming(self, execution: ExecutionState) -> bool:
if not self.scheduler_policy.admission_control(execution.application.required_resources):
return False
self.scheduler_policy.insert(execution.id, execution.application.required_resources)
return True
def _check_runnable(self): # called periodically, does not use state to keep database load low
execution_id, resources = self.scheduler_policy.runnable()
if execution_id is None:
return
log.debug("Found a runnable execution!")
if self.platform.start_execution(execution_id, resources):
self.scheduler_policy.started(execution_id, resources)
else: # Some error happened
log.error('Execution ID {} cannot be started'.format(execution_id))
def loop(self): # FIXME the scheduler should wait on events, not sleep
"""
This method is the scheduling task. It is the loop the main thread runs, started from the zoe-scheduler executable.
It does not use an sqlalchemy session.
:return: None
"""
while True:
self.schedule()
time.sleep(zoeconf().interval_scheduler_task)
def schedule(self):
self._check_runnable()
def execution_terminate(self, state, execution: ExecutionState):
self.platform.execution_terminate(state, execution)
self.scheduler_policy.terminated(execution.id) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/scheduler.py | scheduler.py |
class Stats:
def __init__(self):
self.timestamp = None
def to_dict(self) -> dict:
ret = {}
ret.update(vars(self))
return ret
class SwarmNodeStats(Stats):
def __init__(self, name):
super().__init__()
self.name = name
self.docker_endpoint = None
self.container_count = 0
self.cores_total = 0
self.cores_reserved = 0
self.memory_total = 0
self.memory_reserved = 0
self.labels = {}
class SwarmStats(Stats):
def __init__(self):
super().__init__()
self.container_count = 0
self.image_count = 0
self.memory_total = 0
self.cores_total = 0
self.placement_strategy = ''
self.active_filters = []
self.nodes = []
def to_dict(self) -> dict:
ret = {
'container_count': self.container_count,
'image_count': self.image_count,
'memory_total': self.memory_total,
'cores_total': self.cores_total,
'placement_strategy': self.placement_strategy,
'active_filters': self.active_filters,
'nodes': []
}
for node in self.nodes:
ret['nodes'].append(node.to_dict())
return ret
class SchedulerStats(Stats):
def __init__(self):
super().__init__()
self.count_running = 0
self.count_waiting = 0
class PlatformStats(Stats):
def __init__(self):
super().__init__()
self.swarm = SwarmStats()
self.scheduler = SchedulerStats()
def to_dict(self) -> dict:
return {
'swarm': self.swarm.to_dict(),
'scheduler': self.scheduler.to_dict()
}
class ContainerStats(Stats):
def __init__(self, docker_stats):
super().__init__()
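        # Flatten the counters of interest (block I/O bytes, memory usage,
        # network traffic) out of the raw Docker stats API response; see
        # documentation_sample below for the full shape of that response.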
# self.docker_stats = docker_stats
# self.blkio_serviced_ops_read = sum([x['value'] for x in docker_stats['blkio_stats']['io_serviced_recursive'] if x['op'] == 'Read'])
# self.blkio_serviced_ops_write = sum([x['value'] for x in docker_stats['blkio_stats']['io_serviced_recursive'] if x['op'] == 'Write'])
# self.blkio_serviced_ops_async = sum([x['value'] for x in docker_stats['blkio_stats']['io_serviced_recursive'] if x['op'] == 'Async'])
# self.blkio_serviced_ops_sync = sum([x['value'] for x in docker_stats['blkio_stats']['io_serviced_recursive'] if x['op'] == 'Sync'])
# self.blkio_serviced_ops_total = sum([x['value'] for x in docker_stats['blkio_stats']['io_serviced_recursive'] if x['op'] == 'Total'])
self.io_bytes_read = sum([x['value'] for x in docker_stats['blkio_stats']['io_service_bytes_recursive'] if x['op'] == 'Read'])
self.io_bytes_write = sum([x['value'] for x in docker_stats['blkio_stats']['io_service_bytes_recursive'] if x['op'] == 'Write'])
# self.blkio_serviced_bytes_async = sum([x['value'] for x in docker_stats['blkio_stats']['io_service_bytes_recursive'] if x['op'] == 'Async'])
# self.blkio_serviced_bytes_sync = sum([x['value'] for x in docker_stats['blkio_stats']['io_service_bytes_recursive'] if x['op'] == 'Sync'])
# self.blkio_serviced_bytes_total = sum([x['value'] for x in docker_stats['blkio_stats']['io_service_bytes_recursive'] if x['op'] == 'Total'])
self.memory_used = docker_stats['memory_stats']['usage']
self.memory_total = docker_stats['memory_stats']['limit']
self.net_bytes_rx = docker_stats['network']['rx_bytes']
self.net_bytes_tx = docker_stats['network']['tx_bytes']
documentation_sample = {
'blkio_stats': {
'io_time_recursive': [],
'io_wait_time_recursive': [],
'io_merged_recursive': [],
'io_service_time_recursive': [],
'io_serviced_recursive': [
{'minor': 0, 'op': 'Read', 'major': 8, 'value': 0},
{'minor': 0, 'op': 'Write', 'major': 8, 'value': 1},
{'minor': 0, 'op': 'Sync', 'major': 8, 'value': 0},
{'minor': 0, 'op': 'Async', 'major': 8, 'value': 1},
{'minor': 0, 'op': 'Total', 'major': 8, 'value': 1},
{'minor': 0, 'op': 'Read', 'major': 252, 'value': 0},
{'minor': 0, 'op': 'Write', 'major': 252, 'value': 1},
{'minor': 0, 'op': 'Sync', 'major': 252, 'value': 0},
{'minor': 0, 'op': 'Async', 'major': 252, 'value': 1},
{'minor': 0, 'op': 'Total', 'major': 252, 'value': 1}
],
'io_service_bytes_recursive': [
{'minor': 0, 'op': 'Read', 'major': 8, 'value': 0},
{'minor': 0, 'op': 'Write', 'major': 8, 'value': 32768},
{'minor': 0, 'op': 'Sync', 'major': 8, 'value': 0},
{'minor': 0, 'op': 'Async', 'major': 8, 'value': 32768},
{'minor': 0, 'op': 'Total', 'major': 8, 'value': 32768},
{'minor': 0, 'op': 'Read', 'major': 252, 'value': 0},
{'minor': 0, 'op': 'Write', 'major': 252, 'value': 32768},
{'minor': 0, 'op': 'Sync', 'major': 252, 'value': 0},
{'minor': 0, 'op': 'Async', 'major': 252, 'value': 32768},
{'minor': 0, 'op': 'Total', 'major': 252, 'value': 32768}
],
'io_queue_recursive': [],
'sectors_recursive': []
},
'cpu_stats': {
'cpu_usage': {
'usage_in_usermode': 8380000000,
'usage_in_kernelmode': 2630000000,
'total_usage': 34451274609,
'percpu_usage': [931702517, 2764976848, 928621564, 2669799012, 1117103491, 2797807324, 1278365416, 2919322388, 1195818284, 2794439644, 1105212782, 2628238214, 1018437691, 2713559369, 913142014, 2966544077, 555254965, 73830222, 129362189, 120696574, 232636452, 54415721, 71511012, 111871561, 261233403, 736167553, 61198008, 713285344, 41359796, 287955073, 78816569, 178589532]},
'throttling_data': {
'throttled_periods': 0,
'throttled_time': 0,
'periods': 0
}, 'system_cpu_usage': 4257821208713451
},
'memory_stats': {
'usage': 249561088,
'limit': 2147483648,
'stats': {
'total_inactive_anon': 12288,
'pgfault': 75899,
'inactive_file': 32768,
'total_rss': 249479168,
'total_writeback': 0,
'total_inactive_file': 32768,
'writeback': 0,
'total_pgmajfault': 0,
'active_file': 0,
'total_pgfault': 75899,
'hierarchical_memory_limit': 2147483648,
'total_active_file': 0,
'total_pgpgout': 34070,
'pgpgout': 34070,
'total_rss_huge': 195035136,
'total_cache': 81920,
'total_mapped_file': 32768,
'total_pgpgin': 47475,
'rss_huge': 195035136,
'unevictable': 0,
'total_unevictable': 0,
'rss': 249479168,
'total_active_anon': 249499648,
'cache': 81920,
'active_anon': 249499648,
'inactive_anon': 12288,
'pgpgin': 47475,
'mapped_file': 32768,
'pgmajfault': 0
},
'max_usage': 266846208,
'failcnt': 0
},
'network': {
'rx_packets': 1214,
'rx_bytes': 308646,
'tx_dropped': 0,
'rx_dropped': 0,
'tx_errors': 0,
'tx_bytes': 61784,
'rx_errors': 0,
'tx_packets': 1019
},
'precpu_stats': {
'cpu_usage': {
'usage_in_usermode': 0,
'usage_in_kernelmode': 0,
'total_usage': 0,
'percpu_usage': None
},
'throttling_data': {
'throttled_periods': 0,
'throttled_time': 0,
'periods': 0
},
'system_cpu_usage': 0
},
'read': '2015-09-09T14:52:19.254587126+02:00'
} | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/stats.py | stats.py |
from argparse import ArgumentParser, Namespace
import logging
from zoe_scheduler.scheduler import ZoeScheduler
from zoe_scheduler.periodic_tasks import PeriodicTaskManager
from zoe_scheduler.ipc import ZoeIPCServer
from zoe_scheduler.object_storage import init_history_paths
from zoe_scheduler.proxy_manager import init as proxy_init
from zoe_scheduler.state import create_tables, init as state_init
from common.configuration import init as conf_init, zoeconf
argparser = None
db_engine = None
def setup_db_cmd(_):
create_tables(db_engine)
def process_arguments_manage() -> Namespace:
global argparser
argparser = ArgumentParser(description="Zoe - Container Analytics as a Service ops client")
argparser.add_argument('-d', '--debug', action='store_true', default=False, help='Enable debug output')
subparser = argparser.add_subparsers(title='subcommands', description='valid subcommands')
argparser_setup_db = subparser.add_parser('setup-db', help="Create the tables in the database")
argparser_setup_db.set_defaults(func=setup_db_cmd)
return argparser.parse_args()
def zoe_manage():
"""
The entry point for the zoe-manage script.
:return: int
"""
global db_engine
args = process_arguments_manage()
if args.debug:
logging.basicConfig(level=logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
conf_init()
db_engine = state_init(zoeconf().db_url)
try:
args.func(args)
except AttributeError:
argparser.print_help()
return 1
def process_arguments_scheduler() -> Namespace:
argparser = ArgumentParser(description="Zoe Scheduler - Container Analytics as a Service scheduling component")
argparser.add_argument('-d', '--debug', action='store_true', help='Enable debug output')
argparser.add_argument('--ipc-server-port', type=int, default=8723, help='Port the IPC server should bind to')
return argparser.parse_args()
def zoe_scheduler():
"""
The entrypoint for the zoe-scheduler script.
:return: int
"""
args = process_arguments_scheduler()
if args.debug:
logging.basicConfig(level=logging.DEBUG)
else:
logging.basicConfig(level=logging.INFO)
logging.getLogger('requests').setLevel(logging.WARNING)
conf_init()
state_init(zoeconf().db_url)
proxy_init()
zoe_sched = ZoeScheduler()
ipc_server = ZoeIPCServer(zoe_sched, args.ipc_server_port)
if not init_history_paths():
return 1
tm = PeriodicTaskManager()
barrier = zoe_sched.init_tasks(tm)
barrier.wait() # wait for all tasks to be ready and running
ipc_server.start_thread()
zoe_sched.loop()
tm.stop_all() | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/entrypoint.py | entrypoint.py |
import time
import logging
import docker
import docker.utils
import docker.errors
from common.configuration import zoeconf
from zoe_scheduler.stats import SwarmStats, SwarmNodeStats, ContainerStats
log = logging.getLogger(__name__)
class SwarmClient:
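# Thin wrapper around the docker-py client, pointed at the Swarm manager configured in zoeconf().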
def __init__(self):
manager = zoeconf().docker_swarm_manager
self.cli = docker.Client(base_url=manager)
def info(self) -> SwarmStats:
info = self.cli.info()
pl_status = SwarmStats()
pl_status.container_count = info["Containers"]
pl_status.image_count = info["Images"]
pl_status.memory_total = info["MemTotal"]
pl_status.cores_total = info["NCPU"]
# DriverStatus is a list...
idx = 1
assert 'Strategy' in info["DriverStatus"][idx][0]
pl_status.placement_strategy = info["DriverStatus"][idx][1]
idx = 2
assert 'Filters' in info["DriverStatus"][idx][0]
pl_status.active_filters = [x.strip() for x in info["DriverStatus"][idx][1].split(", ")]
idx = 3
assert 'Nodes' in info["DriverStatus"][idx][0]
node_count = int(info["DriverStatus"][idx][1])
idx = 4
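# Each node occupies five consecutive DriverStatus rows (name/endpoint, containers, reserved cores,
# reserved memory, labels). The loop index advances by one per node and "idx" by four, so the cursor
# effectively moves five rows per node.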
for node in range(node_count):
ns = SwarmNodeStats(info["DriverStatus"][idx + node][0])
ns.docker_endpoint = info["DriverStatus"][idx + node][1]
ns.container_count = int(info["DriverStatus"][idx + node + 1][1])
ns.cores_reserved = int(info["DriverStatus"][idx + node + 2][1].split(' / ')[0])
ns.cores_total = int(info["DriverStatus"][idx + node + 2][1].split(' / ')[1])
ns.memory_reserved = info["DriverStatus"][idx + node + 3][1].split(' / ')[0]
ns.memory_total = info["DriverStatus"][idx + node + 3][1].split(' / ')[1]
ns.labels = info["DriverStatus"][idx + node + 4][1:]
pl_status.nodes.append(ns)
idx += 4
pl_status.timestamp = time.time()
return pl_status
def spawn_container(self, image, options) -> dict:
cont = None
try:
host_config = docker.utils.create_host_config(network_mode="bridge",
binds=options.get_volume_binds(),
mem_limit=options.get_memory_limit())
cont = self.cli.create_container(image=image,
environment=options.get_environment(),
network_disabled=False,
host_config=host_config,
detach=True,
volumes=options.get_volumes(),
command=options.get_command())
self.cli.start(container=cont.get('Id'))
except docker.errors.APIError as e:
if cont is not None:
self.cli.remove_container(container=cont.get('Id'), force=True)
log.error(str(e))
return None
info = self.inspect_container(cont.get('Id'))
return info
def inspect_container(self, docker_id) -> dict:
try:
docker_info = self.cli.inspect_container(container=docker_id)
except docker.errors.APIError:
return None
info = {
"ip_address": docker_info["NetworkSettings"]["IPAddress"],
"docker_id": docker_id
}
if docker_info["State"]["Running"]:
info["state"] = "running"
info["running"] = True
elif docker_info["State"]["Paused"]:
info["state"] = "paused"
info["running"] = True
elif docker_info["State"]["Restarting"]:
info["state"] = "restarting"
info["running"] = True
elif docker_info["State"]["OOMKilled"]:
info["state"] = "killed"
info["running"] = False
elif docker_info["State"]["Dead"]:
info["state"] = "killed"
info["running"] = False
else:
info["state"] = "unknown"
info["running"] = False
return info
def terminate_container(self, docker_id):
self.cli.remove_container(docker_id, force=True)
def log_get(self, docker_id) -> str:
logdata = self.cli.logs(container=docker_id, stdout=True, stderr=True, stream=False, timestamps=False, tail="all")
return logdata.decode("utf-8")
def stats(self, docker_id) -> ContainerStats:
stats_stream = self.cli.stats(docker_id, decode=True)
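# the Docker stats API returns an endless stream of samples; take the first one and return it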
for s in stats_stream:
return ContainerStats(s)
class ContainerOptions:
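# Accumulates container creation options (environment variables, volume binds, command, memory limit) to be passed to docker-py.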
def __init__(self):
self.env = {}
self.volume_binds = []
self.volumes = []
self.command = ""
self.memory_limit = '2g'
def add_env_variable(self, name, value):
if value is not None:
self.env[name] = value
def get_environment(self):
return self.env
def add_volume_bind(self, path, mountpoint, readonly=False):
self.volumes.append(mountpoint)
self.volume_binds.append(path + ":" + mountpoint + ":" + ("ro" if readonly else "rw"))
def get_volumes(self):
return self.volumes
def get_volume_binds(self):
return self.volume_binds
def set_command(self, cmd):
self.command = cmd
def get_command(self):
return self.command
def set_memory_limit(self, limit):
self.memory_limit = limit
def get_memory_limit(self):
return self.memory_limit | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/swarm_client.py | swarm_client.py |
import smtplib
from email.mime.text import MIMEText
import logging
from jinja2 import Template
from zoe_scheduler.state.execution import SparkSubmitExecutionState, ExecutionState
from zoe_scheduler.urls import generate_log_history_url, generate_notebook_url
from common.configuration import zoeconf
log = logging.getLogger(__name__)
APP_FINISH_EMAIL_TEMPLATE = """Application {{ name }} has finished executing after {{ runtime }}.
At this URL you can download the execution logs: {{ log_url }}
"""
NOTEBOOK_KILLED_EMAIL_TEMPLATE = """Your Spark notebook has not been used in the past {{ max_age }} hours and has been terminated."""
NOTEBOOK_WARNING_EMAIL_TEMPLATE = """Your Spark notebook has not been used in the past {{ wrn_age }} hours
and will be terminated unless you access it in the next {{ grace_time }} hours.
Notebook URL: {{ nb_url }}
"""
def do_duration(seconds):
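# Format a duration in seconds as a human-readable string, e.g. "2 hours, 5 minutes, 3 seconds".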
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
d, h = divmod(h, 24)
tokens = []
if d > 1:
tokens.append('{d:.0f} days')
elif d:
tokens.append('{d:.0f} day')
if h > 1:
tokens.append('{h:.0f} hours')
elif h:
tokens.append('{h:.0f} hour')
if m > 1:
tokens.append('{m:.0f} minutes')
elif m:
tokens.append('{m:.0f} minute')
if s > 1:
tokens.append('{s:.0f} seconds')
elif s:
tokens.append('{s:.0f} second')
template = ', '.join(tokens)
return template.format(d=d, h=h, m=m, s=s)
def notify_execution_finished(execution: SparkSubmitExecutionState):
app = execution.application
email = app.user.email
template_vars = {
'cmdline': execution.commandline,
'runtime': do_duration((execution.time_finished - execution.time_started).total_seconds()),
'name': execution.name,
'app_name': execution.application.name,
'log_url': generate_log_history_url(execution)
}
subject = '[Zoe] Spark execution {} finished'.format(execution.name)
send_email(email, subject, APP_FINISH_EMAIL_TEMPLATE, template_vars)
def notify_notebook_notice(execution: ExecutionState):
app = execution.application
email = app.user.email
subject = "[Zoe] Notebook termination warning"
template_vars = {
'grace_time': zoeconf().notebook_max_age_no_activity - zoeconf().notebook_warning_age_no_activity,
'wrn_age': zoeconf().notebook_warning_age_no_activity,
'nb_url': generate_notebook_url(execution)
}
send_email(email, subject, NOTEBOOK_WARNING_EMAIL_TEMPLATE, template_vars)
def notify_notebook_termination(execution: ExecutionState):
app = execution.application
email = app.user.email
subject = "[Zoe] Notebook terminated"
template_vars = {'max_age': zoeconf().notebook_max_age_no_activity}
send_email(email, subject, NOTEBOOK_KILLED_EMAIL_TEMPLATE, template_vars)
def send_email(address, subject, template, template_vars):
jinja_template = Template(template)
body = jinja_template.render(template_vars)
msg = MIMEText(body)
msg['Subject'] = subject
msg['From'] = '[email protected]'
msg['To'] = address
s = smtplib.SMTP(zoeconf().smtp_server)
s.ehlo()
s.starttls()
s.login(zoeconf().smtp_user, zoeconf().smtp_password)
s.send_message(msg)
s.quit() | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/emails.py | emails.py |
from datetime import datetime, timedelta
import logging
from io import BytesIO
import zipfile
from zoe_scheduler.swarm_client import SwarmClient, ContainerOptions
from zoe_scheduler.proxy_manager import proxy_manager
from zoe_scheduler.emails import notify_execution_finished, notify_notebook_notice, notify_notebook_termination
from zoe_scheduler.state import AlchemySession
from zoe_scheduler.state.application import SparkApplicationState, SparkNotebookApplicationState, SparkSubmitApplicationState
from zoe_scheduler.state.cluster import ClusterState
from zoe_scheduler.state.container import ContainerState
from zoe_scheduler.state.execution import ExecutionState, SparkSubmitExecutionState
from zoe_scheduler.state.proxy import ProxyState
from common.application_resources import ApplicationResources
from zoe_scheduler.exceptions import CannotCreateCluster
from common.configuration import zoeconf
from zoe_scheduler.object_storage import logs_archive_upload
from zoe_scheduler.urls import generate_application_binary_url
log = logging.getLogger(__name__)
class PlatformManager:
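# Orchestrates executions on the Swarm: spawns master/worker/notebook/submit containers, records
# containers and proxies in the database, and tears them down (archiving logs) on termination.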
def __init__(self):
self.swarm = SwarmClient()
def start_execution(self, execution_id: int, resources: ApplicationResources) -> bool:
state = AlchemySession()
execution = state.query(ExecutionState).filter_by(id=execution_id).one()
execution.assigned_resources = resources
try:
self._application_to_containers(state, execution)
except CannotCreateCluster:
return False
execution.set_started()
state.commit()
proxy_manager().update_proxy()
return True
def _application_to_containers(self, state: AlchemySession, execution: ExecutionState):
cluster = ClusterState(execution_id=execution.id)
state.add(cluster)
if isinstance(execution.application, SparkApplicationState):
master_container = self._spawn_master(state, execution, cluster)
self._spawn_workers(state, execution, cluster, master_container)
if isinstance(execution.application, SparkNotebookApplicationState):
self._spawn_notebook(state, execution, cluster, master_container)
elif isinstance(execution.application, SparkSubmitApplicationState):
self._spawn_submit_client(state, execution, cluster, master_container)
else:
raise NotImplementedError('%s applications are not implemented' % type(execution.application))
def _spawn_master(self, state: AlchemySession, execution: ExecutionState, cluster: ClusterState) -> ContainerState:
application = execution.application
resources = execution.assigned_resources
master_requirements = resources.master_resources
master_opts = ContainerOptions()
if "memory_limit" in master_requirements:
master_opts.set_memory_limit(master_requirements["memory_limit"])
image = application.master_image
master_info = self.swarm.spawn_container(image, master_opts)
if master_info is None:
raise CannotCreateCluster(application)
container = ContainerState(docker_id=master_info["docker_id"], ip_address=master_info["ip_address"], readable_name="spark-master", cluster=cluster)
state.add(container)
master_web_url = "http://" + master_info["ip_address"] + ":8080"
master_proxy = ProxyState(service_name="Spark master web interface", container=container, cluster=cluster, internal_url=master_web_url)
state.add(master_proxy)
return container
def _spawn_workers(self, state: AlchemySession, execution: ExecutionState, cluster: ClusterState, master: ContainerState):
application = execution.application
resources = execution.assigned_resources
worker_requirements = resources.worker_resources
worker_opts = ContainerOptions()
worker_opts.add_env_variable("SPARK_MASTER_IP", master.ip_address)
if "memory_limit" in worker_requirements:
worker_opts.add_env_variable("SPARK_WORKER_RAM", worker_requirements["memory_limit"])
worker_opts.set_memory_limit(worker_requirements["memory_limit"])
if "cores" in worker_requirements:
worker_opts.add_env_variable("SPARK_WORKER_CORES", worker_requirements["cores"])
image = application.worker_image
workers_docker_id = []
for i in range(resources.worker_count):
worker_info = self.swarm.spawn_container(image, worker_opts)
if worker_info is None:
self.swarm.terminate_container(master.docker_id)
for j in range(i):
self.swarm.terminate_container(workers_docker_id[j])
raise CannotCreateCluster(application)
workers_docker_id.append(worker_info["docker_id"])
container = ContainerState(docker_id=worker_info["docker_id"], ip_address=worker_info["ip_address"], readable_name="spark-worker-%d" % i)
container.cluster = cluster
state.add(container)
worker_web_url = "http://" + worker_info["ip_address"] + ":8081"
worker_proxy = ProxyState(service_name="Spark worker web interface", container=container, cluster=cluster, internal_url=worker_web_url)
state.add(worker_proxy)
def _spawn_notebook(self, state: AlchemySession, execution: ExecutionState, cluster: ClusterState, master: ContainerState):
application = execution.application
resources = execution.assigned_resources
nb_requirements = resources.notebook_resources
# Create this proxy entry here as we need to pass the ID in the container environment
container = ContainerState(readable_name="spark-notebook", cluster=cluster)
state.add(container)
nb_url_proxy = ProxyState(service_name="Spark Notebook interface", container=container, cluster=cluster)
state.add(nb_url_proxy)
state.flush()
nb_opts = ContainerOptions()
nb_opts.add_env_variable("SPARK_MASTER_IP", master.ip_address)
nb_opts.add_env_variable("PROXY_ID", nb_url_proxy.id)
if "memory_limit" in execution.assigned_resources.worker_resources:
nb_opts.add_env_variable("SPARK_EXECUTOR_RAM", execution.assigned_resources.worker_resources["memory_limit"])
if "memory_limit" in nb_requirements:
nb_opts.set_memory_limit(nb_requirements["memory_limit"])
image = application.notebook_image
nb_info = self.swarm.spawn_container(image, nb_opts)
if nb_info is None:
self.swarm.terminate_container(master.docker_id)
# FIXME terminate all containers in case of error
raise CannotCreateCluster(application)
container.docker_id = nb_info["docker_id"]
container.ip_address = nb_info["ip_address"]
nb_app_url = "http://" + nb_info["ip_address"] + ":4040"
nb_app_proxy = ProxyState(service_name="Spark application web interface", container=container, cluster=cluster, internal_url=nb_app_url)
state.add(nb_app_proxy)
nb_url_proxy.internal_url = "http://" + nb_info["ip_address"] + ":9000/proxy/%d" % nb_url_proxy.id
def _spawn_submit_client(self, state: AlchemySession, execution: SparkSubmitExecutionState, cluster: ClusterState, master: ContainerState):
application = execution.application
resources = execution.assigned_resources
requirements = resources.client_resources
# Do this here so we can use the ID later for building proxy URLs
container = ContainerState(readable_name="spark-submit", cluster=cluster)
state.add(container)
state.flush()
cli_opts = ContainerOptions()
cli_opts.add_env_variable("SPARK_MASTER_IP", master.ip_address)
cli_opts.add_env_variable("PROXY_ID", container.id)
cli_opts.add_env_variable("APPLICATION_ID", application.id)
cli_opts.add_env_variable("SPARK_OPTIONS", execution.spark_opts)
cli_opts.add_env_variable("APPLICATION_URL", generate_application_binary_url(application))
cli_opts.add_env_variable("SPARK_OPTIONS", execution.spark_opts)
if "memory_limit" in execution.assigned_resources.worker_resources:
cli_opts.add_env_variable("SPARK_EXECUTOR_RAM", execution.assigned_resources.worker_resources["memory_limit"])
if "memory_limit" in requirements:
cli_opts.set_memory_limit(requirements["memory_limit"])
image = application.submit_image
cli_opts.set_command("/opt/submit.sh " + execution.commandline)
cli_info = self.swarm.spawn_container(image, cli_opts)
if cli_info is None:
self.swarm.terminate_container(master.docker_id)
# FIXME terminate all containers in case of error
raise CannotCreateCluster(application)
container.docker_id = cli_info["docker_id"]
container.ip_address = cli_info["ip_address"]
nb_app_url = "http://" + cli_info["ip_address"] + ":4040"
nb_app_proxy = ProxyState(service_name="Spark application web interface", container=container, cluster=cluster, internal_url=nb_app_url)
state.add(nb_app_proxy)
def execution_terminate(self, state: AlchemySession, execution: ExecutionState):
cluster = execution.cluster
logs = []
if cluster is not None:
containers = cluster.containers
for c in containers:
logs.append((c.readable_name, c.ip_address, self.log_get(c)))
self.swarm.terminate_container(c.docker_id)
state.delete(c)
for p in cluster.proxies:
state.delete(p)
state.delete(cluster)
execution.set_terminated()
self._archive_execution_logs(execution, logs)
proxy_manager().update_proxy()
def log_get(self, container: ContainerState) -> str:
return self.swarm.log_get(container.docker_id)
def _archive_execution_logs(self, execution: ExecutionState, logs: list):
zipdata = BytesIO()
with zipfile.ZipFile(zipdata, "w", compression=zipfile.ZIP_DEFLATED) as logzip:
for c in logs:
fname = c[0] + "-" + c[1] + ".txt"
logzip.writestr(fname, c[2])
logs_archive_upload(execution, zipdata.getvalue())
def is_container_alive(self, container: ContainerState) -> bool:
ret = self.swarm.inspect_container(container.docker_id)
return ret["running"]
def check_executions_health(self):
state = AlchemySession()
all_containers = state.query(ContainerState).all()
for c in all_containers:
if not self.is_container_alive(c):
self._container_died(state, c)
notebooks = state.query(SparkNotebookApplicationState).all()
for nb in notebooks:
execs = nb.executions_running()
for e in execs:
c = e.find_container("spark-notebook")
if c is not None:
pr = state.query(ProxyState).filter_by(container_id=c.id, service_name="Spark Notebook interface").one()
if datetime.now() - pr.last_access > timedelta(hours=zoeconf().notebook_max_age_no_activity):
log.info("Killing spark notebook {} for inactivity".format(e.id))
self.execution_terminate(state, e)
notify_notebook_termination(e)
elif datetime.now() - pr.last_access > timedelta(hours=zoeconf().notebook_max_age_no_activity) - timedelta(hours=zoeconf().notebook_warning_age_no_activity):
if not e.termination_notice:
log.info("Spark notebook {} is on notice for inactivity".format(e.id))
e.termination_notice = True
notify_notebook_notice(e)
state.commit()
state.close()
def _container_died(self, state: AlchemySession, container: ContainerState):
if container.readable_name == "spark-submit" or container.readable_name == "spark-master":
log.debug("found a dead spark-submit container, cleaning up")
self.execution_terminate(state, container.cluster.execution)
container.cluster.execution.set_finished()
notify_execution_finished(container.cluster.execution)
else:
log.warning("Container {} (ID: {}) died unexpectedly")
def container_stats(self, container_id):
state = AlchemySession()
container = state.query(ContainerState).filter_by(id=container_id).one()
return self.swarm.stats(container.docker_id) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/platform.py | platform.py |
import os
import logging
from zoe_scheduler.state.application import ApplicationState
from zoe_scheduler.state.execution import ExecutionState
from common.configuration import zoeconf
log = logging.getLogger(__name__)
def init_history_paths() -> bool:
if not os.path.exists(zoeconf().history_path):
try:
os.makedirs(zoeconf().history_path)
except OSError:
log.error("Cannot create history directory in {}".format(zoeconf().history_path))
return False
os.makedirs(os.path.join(zoeconf().history_path, 'apps'), exist_ok=True)
os.makedirs(os.path.join(zoeconf().history_path, 'logs'), exist_ok=True)
return True
def application_data_upload(application: ApplicationState, data: bytes) -> bool:
fpath = os.path.join(zoeconf().history_path, 'apps', 'app-{}.zip'.format(application.id))
open(fpath, "wb").write(data)
def application_data_download(application: ApplicationState) -> bytes:
fpath = os.path.join(zoeconf().history_path, 'apps', 'app-{}.zip'.format(application.id))
data = open(fpath, "rb").read()
return data
def application_data_delete(application: ApplicationState):
fpath = os.path.join(zoeconf().history_path, 'apps', 'app-{}.zip'.format(application.id))
try:
os.unlink(fpath)
except OSError:
log.warning("Binary data for application {} not found, cannot delete".format(application.id))
def logs_archive_upload(execution: ExecutionState, data: bytes) -> bool:
fpath = os.path.join(zoeconf().history_path, 'logs', 'log-{}.zip'.format(execution.id))
open(fpath, "wb").write(data)
def logs_archive_download(execution: ExecutionState) -> bytes:
fpath = os.path.join(zoeconf().history_path, 'logs', 'log-{}.zip'.format(execution.id))
data = open(fpath, "rb").read()
return data
def logs_archive_delete(execution: ExecutionState):
fpath = os.path.join(zoeconf().history_path, 'logs', 'log-{}.zip'.format(execution.id))
try:
os.unlink(fpath)
except OSError:
log.warning("Logs archive for execution {} not found, cannot delete".format(execution.id)) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/object_storage.py | object_storage.py |
import base64
from datetime import datetime
import json
import logging
import threading
from sqlalchemy.orm.exc import NoResultFound
import zmq
from common.application_resources import SparkApplicationResources
from zoe_scheduler.state import AlchemySession
from zoe_scheduler.state.application import ApplicationState, SparkSubmitApplicationState, SparkNotebookApplicationState, SparkApplicationState
from zoe_scheduler.state.container import ContainerState
from zoe_scheduler.state.execution import ExecutionState, SparkSubmitExecutionState
from zoe_scheduler.state.proxy import ProxyState
from zoe_scheduler.state.user import UserState
import zoe_scheduler.object_storage as storage
from common.configuration import zoeconf
from zoe_scheduler.scheduler import ZoeScheduler
log = logging.getLogger(__name__)
class ZoeIPCServer:
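# JSON request/reply IPC server over a ZeroMQ REP socket. Requests are dicts with "command" and
# "args" keys; the command name is dispatched to the method of the same name on this class.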
def __init__(self, scheduler: ZoeScheduler, port=8723):
self.context = zmq.Context()
self.socket = self.context.socket(zmq.REP)
self.socket.bind("tcp://*:%s" % port)
self.th = None
self.state = None
self.sched = scheduler
def start_thread(self):
self.th = threading.Thread(target=self._loop, name="IPC server", daemon=True)
self.th.start()
def _loop(self):
log.debug("IPC server thread started")
while True:
message = self.socket.recv_json()
self.state = AlchemySession()
try:
reply = self._dispatch(message)
except Exception:
log.exception("Uncaught exception in IPC server thread")
reply = self._reply_error('exception')
finally:
self.state.close()
self.state = None
json_reply = json.dumps(reply, default=self._json_default_serializer)
self.socket.send_string(json_reply)
def _json_default_serializer(self, obj):
if isinstance(obj, datetime):
return obj.isoformat()
else:
log.error('Cannot serialize type {}'.format(type(obj)))
raise TypeError
def _dispatch(self, message: dict) -> dict:
if "command" not in message or "args" not in message:
log.error("Ignoring malformed message: {}".format(message))
return self._reply_error('malformed')
if not isinstance(message['args'], dict):
log.error("Ignoring malformed message: {}".format(message))
return self._reply_error('malformed')
try:
func = getattr(self, message["command"])
except AttributeError:
log.error("Ignoring unkown command: {}".format(message["command"]))
return self._reply_error('unknown command')
return func(**message["args"])
def _reply_ok(self, **reply) -> dict:
return {'status': 'ok', 'answer': reply}
def _reply_error(self, error_msg: str) -> dict:
return {'status': 'error', 'answer': error_msg}
# ############# Exposed methods below ################
# Applications
def application_get(self, application_id: int) -> dict:
try:
application = self.state.query(ApplicationState).filter_by(id=application_id).one()
except NoResultFound:
return self._reply_error('no such application')
return self._reply_ok(app=application.to_dict())
def application_get_binary(self, application_id: int) -> dict:
try:
application = self.state.query(ApplicationState).filter_by(id=application_id).one()
except NoResultFound:
return self._reply_error('no such application')
else:
app_data = storage.application_data_download(application)
app_data = base64.b64encode(app_data)
return self._reply_ok(zip_data=app_data.decode('ascii'))
def application_list(self, user_id: int) -> dict:
try:
self.state.query(UserState).filter_by(id=user_id).one()
except NoResultFound:
return self._reply_error('no such user')
apps = self.state.query(ApplicationState).filter_by(user_id=user_id).all()
return self._reply_ok(apps=[x.to_dict() for x in apps])
def application_remove(self, application_id: int, force=False) -> dict:
try:
application = self.state.query(ApplicationState).filter_by(id=application_id).one()
except NoResultFound:
return self._reply_error('no such application')
running = self.state.query(ExecutionState).filter_by(application_id=application.id, time_finished=None).all()
if not force and len(running) > 0:
return self._reply_error('there are active executions, cannot delete')
storage.application_data_delete(application)
for e in application.executions:
self.execution_delete(e.id)
self.state.delete(application)
self.state.commit()
return self._reply_ok()
def application_spark_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str,
master_image: str, worker_image: str) -> dict:
try:
self.state.query(UserState).filter_by(id=user_id).one()
except NoResultFound:
return self._reply_error('no such user')
resources = SparkApplicationResources()
resources.worker_count = worker_count
resources.container_count = worker_count + 1
resources.worker_resources["memory_limit"] = executor_memory
resources.worker_resources["cores"] = executor_cores
app = SparkApplicationState(master_image=master_image,
worker_image=worker_image,
name=name,
required_resources=resources,
user_id=user_id)
self.state.add(app)
self.state.commit()
return self._reply_ok(app_id=app.id)
def application_spark_notebook_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str,
master_image: str, worker_image: str, notebook_image: str) -> dict:
try:
self.state.query(UserState).filter_by(id=user_id).one()
except NoResultFound:
return self._reply_error('no such user')
resources = SparkApplicationResources()
resources.worker_count = worker_count
resources.container_count = worker_count + 2
resources.worker_resources["memory_limit"] = executor_memory
resources.worker_resources["cores"] = executor_cores
app = SparkNotebookApplicationState(master_image=master_image,
worker_image=worker_image,
notebook_image=notebook_image,
name=name,
required_resources=resources,
user_id=user_id)
self.state.add(app)
self.state.commit()
return self._reply_ok(app_id=app.id)
def application_spark_submit_new(self, user_id: int, worker_count: int, executor_memory: str, executor_cores: int, name: str, file_data: bytes,
master_image: str, worker_image: str, submit_image: str) -> dict:
try:
self.state.query(UserState).filter_by(id=user_id).one()
except NoResultFound:
return self._reply_error('no such user')
resources = SparkApplicationResources()
resources.worker_count = worker_count
resources.container_count = worker_count + 2
resources.worker_resources["memory_limit"] = executor_memory
resources.worker_resources["cores"] = executor_cores
app = SparkSubmitApplicationState(master_image=master_image,
worker_image=worker_image,
submit_image=submit_image,
name=name,
required_resources=resources,
user_id=user_id)
self.state.add(app)
self.state.flush()
storage.application_data_upload(app, file_data)
self.state.commit()
return self._reply_ok(app_id=app.id)
# Containers
def container_stats(self, container_id: int) -> dict:
ret = self.sched.platform.container_stats(container_id).to_dict()
return self._reply_ok(**ret)
# Executions
def execution_delete(self, execution_id: int) -> dict:
try:
execution = self.state.query(ExecutionState).filter_by(id=execution_id).one()
except NoResultFound:
return self._reply_error('no such execution')
if execution.status == "running":
self.sched.execution_terminate(self.state, execution)
# FIXME remove it also from the scheduler, check for scheduled state
storage.logs_archive_delete(execution)
self.state.delete(execution)
self.state.commit()
return self._reply_ok()
def execution_get(self, execution_id: int) -> dict:
try:
execution = self.state.query(ExecutionState).filter_by(id=execution_id).one()
except NoResultFound:
return self._reply_error('no such execution')
return self._reply_ok(**execution.to_dict())
def execution_get_proxy_path(self, execution_id: int) -> dict:
try:
execution = self.state.query(ExecutionState).filter_by(id=execution_id).one()
except NoResultFound:
return self._reply_error('no such execution')
if isinstance(execution.application, SparkNotebookApplicationState):
c = execution.find_container("spark-notebook")
pr = self.state.query(ProxyState).filter_by(container_id=c.id, service_name="Spark Notebook interface").one()
path = zoeconf().proxy_path_url_prefix + '/{}'.format(pr.id)
return self._reply_ok(path=path)
elif isinstance(execution.application, SparkSubmitApplicationState):
c = execution.find_container("spark-submit")
pr = self.state.query(ProxyState).filter_by(container_id=c.id, service_name="Spark application web interface").one()
path = zoeconf().proxy_path_url_prefix + '/{}'.format(pr.id)
return self._reply_ok(path=path)
else:
return self._reply_error('unknown application type')
def execution_spark_new(self, application_id: int, name: str, commandline=None, spark_options=None) -> dict:
try:
application = self.state.query(ApplicationState).filter_by(id=application_id).one()
except NoResultFound:
return self._reply_error('no such application')
if type(application) is SparkSubmitApplicationState:
if commandline is None:
raise ValueError("Spark submit application requires a commandline")
execution = SparkSubmitExecutionState(name=name,
application_id=application.id,
status="submitted",
commandline=commandline,
spark_opts=spark_options)
else:
execution = ExecutionState(name=name,
application_id=application.id,
status="submitted")
self.state.add(execution)
self.state.flush()
ret = self.sched.incoming(execution)
if ret:
execution.set_scheduled()
self.state.commit()
else:
self.state.rollback()
return self._reply_error('admission control refused this application execution')
return self._reply_ok(execution_id=execution.id)
def execution_terminate(self, execution_id: int) -> dict:
execution = self.state.query(ExecutionState).filter_by(id=execution_id).one()
self.sched.execution_terminate(self.state, execution)
self.state.commit()
return self._reply_ok()
# Logs
def log_get(self, container_id: int) -> dict:
try:
container = self.state.query(ContainerState).filter_by(id=container_id).one()
except NoResultFound:
return self._reply_error('no such container')
else:
ret = self.sched.platform.log_get(container)
return self._reply_ok(log=ret)
def log_history_get(self, execution_id) -> dict:
try:
execution = self.state.query(ExecutionState).filter_by(id=execution_id).one()
except NoResultFound:
return self._reply_error('no such execution')
log_data = storage.logs_archive_download(execution)
log_data = base64.b64encode(log_data)
return self._reply_ok(zip_data=log_data.decode('ascii'))
# Platform
def platform_stats(self) -> dict:
ret = self.sched.platform_status.stats()
return self._reply_ok(**ret.to_dict())
# Users
def user_get(self, user_id) -> dict:
try:
user = self.state.query(UserState).filter_by(id=user_id).one()
except NoResultFound:
return self._reply_error('no such user')
else:
return self._reply_ok(**user.to_dict())
def user_get_by_email(self, user_email) -> dict:
try:
user = self.state.query(UserState).filter_by(email=user_email).one()
except NoResultFound:
return self._reply_error('no such user')
else:
return self._reply_ok(**user.to_dict())
def user_new(self, email: str) -> dict:
user = UserState(email=email)
self.state.add(user)
self.state.commit()
return self._reply_ok(**user.to_dict()) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/ipc.py | ipc.py |
from os import system
from urllib.parse import urlparse
import re
from datetime import datetime
import logging
from jinja2 import Template
from common.configuration import zoeconf
from zoe_scheduler.state import AlchemySession
from zoe_scheduler.state.proxy import ProxyState
log = logging.getLogger(__name__)
LOOP_INTERVAL = 1 # seconds
ACCESS_TIME_REFRESH_INTERVAL = 60 # seconds
ENTRY_TEMPLATE = """
# Zoe proxy entry for service {{ service_name }}
<Location /proxy/{{ proxy_id }}>
ProxyHtmlEnable On
ProxyHTMLExtended On
ProxyPass {{ proxy_url }} retry=1
ProxyPassReverse {{ proxy_url }}
{% if service_name != "Spark Notebook interface" %}
ProxyHTMLURLMap ^/(.*)$ /proxy/{{ proxy_id }}/$1 RL
ProxyHTMLURLMap ^logPage(.*)$ /proxy/{{ proxy_id }}/logPage$1 RL
ProxyHTMLURLMap ^app(.*)$ /proxy/{{ proxy_id }}/app$1 RL
{% for node in nodes %}
ProxyHTMLURLMap ^http://{{ node[0] }}(.*)$ /proxy/{{node[1]}}$1 RL
{% endfor %}
{% endif %}
</Location>
{% if service_name == "Spark Notebook interface" %}
<Location /proxy/{{ proxy_id }}/ws/>
ProxyPass ws://{{ netloc }}/proxy/{{ proxy_id }}/ws/
</Location>
{% endif %}
"""
class ProxyManager:
def __init__(self):
self.apache_conf_filepath = zoeconf().apache_proxy_config_file
self.apache_access_log = zoeconf().apache_log_file
def _get_proxy_entries(self):
state = AlchemySession()
ret = state.query(ProxyState).all()
state.close()
return ret
def _generate_file(self, proxy_entries):
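# Render one Apache <Location> block per proxy entry and concatenate them into the proxy configuration file.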
output = ""
jinja_template = Template(ENTRY_TEMPLATE)
node_list = []
for p in proxy_entries:
netloc = urlparse(p.internal_url)[1]
node_list.append((netloc, p.id))
for p in proxy_entries:
netloc = urlparse(p.internal_url)[1]
jinja_dict = {
"proxy_id": p.id,
"proxy_url": p.internal_url,
"service_name": p.service_name,
"netloc": netloc,
"nodes": node_list
}
apache_entry = jinja_template.render(jinja_dict)
output += apache_entry + "\n"
return output
def _commit_and_reload(self, generated_file):
open(self.apache_conf_filepath, "w").write(generated_file)
system("sudo service apache2 reload")
log.info("Apache reloaded")
def update_proxy(self):
entries = self._get_proxy_entries()
output = self._generate_file(entries)
self._commit_and_reload(output)
def update_proxy_access_timestamps(self):
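# Scan the Apache access log for GET requests to /proxy/<id> and record the most recent access timestamp for each proxy.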
regex = re.compile(r'[0-9.]+ - - \[(.*)\] "GET /proxy/([0-9a-z\-]+)/')
logf = open(self.apache_access_log, 'r')
last_accesses = {}
for line in logf:
match = re.match(regex, line)
if match is not None:
proxy_id = int(match.group(2))
timestamp = datetime.strptime(match.group(1), "%d/%b/%Y:%H:%M:%S %z")
last_accesses[proxy_id] = timestamp.replace(tzinfo=None)
state = AlchemySession()
something_to_commit = False
for proxy in state.query(ProxyState).all():
if proxy.id in last_accesses:
if proxy.last_access != last_accesses[proxy.id]:
log.debug("Updating access timestamp for proxy ID {}".format(proxy.id))
proxy.last_access = last_accesses[proxy.id]
something_to_commit = True
proxy.container.cluster.execution.termination_notice = False
if something_to_commit:
state.commit()
state.close()
_pm = None
def init():
global _pm
_pm = ProxyManager()
_pm.update_proxy()
def proxy_manager() -> ProxyManager:
return _pm | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/proxy_manager.py | proxy_manager.py |
from datetime import datetime
from sqlalchemy import Column, Integer, String, PickleType, DateTime, ForeignKey, Boolean
from sqlalchemy.orm import relationship
from zoe_scheduler.state import Base
class ExecutionState(Base):
__tablename__ = 'executions'
id = Column(Integer, primary_key=True)
name = Column(String(64))
assigned_resources = Column(PickleType())
application_id = Column(Integer, ForeignKey('applications.id'))
time_scheduled = Column(DateTime)
time_started = Column(DateTime)
time_finished = Column(DateTime)
status = Column(String(32))
termination_notice = Column(Boolean, default=False)
cluster = relationship("ClusterState", uselist=False, backref="execution")
type = Column(String(32)) # Needed by sqlalchemy to manage class inheritance
__mapper_args__ = {
'polymorphic_on': type,
'polymorphic_identity': 'execution'
}
def set_scheduled(self):
self.status = "scheduled"
self.time_scheduled = datetime.now()
def set_started(self):
self.status = "running"
self.time_started = datetime.now()
def set_finished(self):
self.status = "finished"
self.time_finished = datetime.now()
def set_terminated(self):
self.status = "terminated"
self.time_finished = datetime.now()
def find_container(self, name):
for c in self.cluster.containers:
if c.readable_name == name:
return c
def to_dict(self) -> dict:
ret = {
'id': self.id,
'name': self.name,
'application_id': self.application_id,
'time_scheduled': self.time_scheduled,
'time_started': self.time_started,
'time_finished': self.time_finished,
'status': self.status,
'termination_notice': self.termination_notice,
'type': self.type
}
if self.assigned_resources is None:
ret['assigned_resources'] = None
else:
ret['assigned_resources'] = self.assigned_resources.to_dict()
if self.cluster is not None:
ret['cluster_id'] = self.cluster.id
ret['containers'] = [c.to_dict() for c in self.cluster.containers]
else:
ret['cluster_id'] = None
ret['containers'] = []
return ret
class SparkSubmitExecutionState(ExecutionState):
commandline = Column(String(1024))
spark_opts = Column(String(1024))
__mapper_args__ = {
'polymorphic_identity': 'spark-submit-application'
}
def to_dict(self) -> dict:
ret = super().to_dict()
ret['commandline'] = self.commandline
ret['spark_opts'] = self.spark_opts
return ret | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/state/execution.py | execution.py |
from sqlalchemy import Column, Integer, String, PickleType, ForeignKey
from sqlalchemy.orm import relationship
from zoe_scheduler.state import Base
class ApplicationState(Base):
__tablename__ = 'applications'
id = Column(Integer, primary_key=True)
name = Column(String(64))
required_resources = Column(PickleType()) # JSON resource description
user_id = Column(Integer, ForeignKey('users.id'))
executions = relationship("ExecutionState", order_by="ExecutionState.id", backref="application")
type = Column(String(20)) # Needed by sqlalchemy to manage class inheritance
__mapper_args__ = {
'polymorphic_on': type,
'polymorphic_identity': 'application'
}
def executions_running(self):
ret = []
for e in self.executions:
if e.status == "running":
ret.append(e)
return ret
def to_dict(self) -> dict:
ret = {
'id': self.id,
'name': self.name,
'user_id': self.user_id,
'type': self.type,
'required_resources': self.required_resources.to_dict(),
'executions': [e.to_dict() for e in self.executions]
}
return ret
class SparkApplicationState(ApplicationState):
master_image = Column(String(256))
worker_image = Column(String(256))
__mapper_args__ = {
'polymorphic_identity': 'spark-application'
}
def to_dict(self) -> dict:
ret = super().to_dict()
ret['master_image'] = self.master_image
ret['worker_image'] = self.worker_image
return ret
class SparkNotebookApplicationState(SparkApplicationState):
notebook_image = Column(String(256))
__mapper_args__ = {
'polymorphic_identity': 'spark-notebook'
}
def to_dict(self) -> dict:
ret = super().to_dict()
ret['notebook_image'] = self.notebook_image
return ret
class SparkSubmitApplicationState(SparkApplicationState):
submit_image = Column(String(256))
__mapper_args__ = {
'polymorphic_identity': 'spark-submit'
}
def to_dict(self) -> dict:
ret = super().to_dict()
ret['submit_image'] = self.submit_image
return ret | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_scheduler/state/application.py | application.py |
from io import BytesIO
from zipfile import is_zipfile
from flask import Blueprint, jsonify, request, session, abort, send_file
from zoe_client import ZoeClient
from common.configuration import ipcconf
api_bp = Blueprint('api', __name__)
def _api_check_user(zoe_client):
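# Resolve the user for the current session; returns a JSON error response instead of a user object
# when the session is missing or the user is unknown.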
if 'user_id' not in session:
return jsonify(status='error', msg='user not logged in')
user = zoe_client.user_get(session['user_id'])
if user is None:
return jsonify(status='error', msg='unknown user')
else:
return user
@api_bp.route('/status/basic')
def status_basic():
client = ZoeClient(ipcconf['server'], ipcconf['port'])
platform_stats = client.platform_stats()
ret = {
'num_nodes': len(platform_stats['swarm']['nodes']),
'num_containers': platform_stats['swarm']['container_count']
}
return jsonify(**ret)
@api_bp.route('/login', methods=['POST'])
def login():
form_data = request.form
email = form_data["email"]
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = client.user_get_by_email(email)
if user is None:
user = client.user_new(email)
session["user_id"] = user.id
return jsonify(status="ok")
@api_bp.route('/applications/new', methods=['POST'])
def application_new():
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = _api_check_user(client)
form_data = request.form
if form_data['app_type'] == "spark-notebook":
client.application_spark_notebook_new(user.id, int(form_data["num_workers"]), form_data["ram"] + 'g', int(form_data["num_cores"]), form_data["app_name"])
elif form_data['app_type'] == "spark-submit":
file_data = request.files['file']
if not is_zipfile(file_data.stream):
return jsonify(status='error', msg='not a zip file')
file_data.stream.seek(0)
fcontents = file_data.stream.read()
client.application_spark_submit_new(user.id, int(form_data["num_workers"]), form_data["ram"] + 'g', int(form_data["num_cores"]), form_data["app_name"], fcontents)
else:
return jsonify(status="error", msg='unknown application type')
return jsonify(status="ok")
@api_bp.route('/applications/delete/<app_id>', methods=['GET', 'POST'])
def application_delete(app_id):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
if client.application_remove(app_id, False):
return jsonify(status="error", msg="The application has active executions and cannot be deleted")
else:
return jsonify(status="ok")
@api_bp.route('/applications/download/<int:app_id>')
def application_binary_download(app_id: int):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
data = client.application_get_binary(app_id)
if data is None:
return jsonify(status="error")
else:
return send_file(BytesIO(data), mimetype="application/zip", as_attachment=True, attachment_filename="app-{}.zip".format(app_id))
@api_bp.route('/executions/new', methods=['POST'])
def execution_new():
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
form_data = request.form
app_id = int(form_data["app_id"])
application = client.application_get(app_id)
if application.type == "spark-notebook":
ret = client.execution_spark_new(app_id, form_data["exec_name"])
else:
ret = client.execution_spark_new(app_id, form_data["exec_name"], form_data["commandline"], form_data["spark_opts"])
if ret:
return jsonify(status="ok")
else:
return jsonify(status="error")
@api_bp.route('/executions/logs/container/<int:container_id>')
def execution_logs(container_id: int):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
log = client.log_get(container_id)
if log is None:
return jsonify(status="error", msg="no log found")
else:
return jsonify(status="ok", log=log)
@api_bp.route('/executions/stats/container/<int:container_id>')
def container_stats(container_id: int):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
stats = client.container_stats(container_id)
if stats is None:
return jsonify(status="error", msg="no stats found")
else:
return jsonify(status="ok", **stats)
@api_bp.route('/executions/terminate/<int:exec_id>')
def execution_terminate(exec_id: int):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
client.execution_terminate(exec_id)
return jsonify(status="ok")
@api_bp.route('/history/logs/<int:execution_id>')
def history_logs_get(execution_id: int):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
_api_check_user(client)
logs = client.log_history_get(execution_id)
if logs is None:
return abort(404)
else:
return send_file(BytesIO(logs), mimetype="application/zip", as_attachment=True, attachment_filename="logs-{}.zip".format(execution_id)) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_web/api/__init__.py | __init__.py |
from flask import render_template
from zoe_client import ZoeClient
from common.configuration import ipcconf
from zoe_web.web import web_bp
import zoe_web.utils as web_utils
@web_bp.route('/apps/new')
def application_new():
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = web_utils.check_user(client)
template_vars = {
"user_id": user.id,
"email": user.email,
}
return render_template('application_new.html', **template_vars)
@web_bp.route('/executions/new/<app_id>')
def execution_new(app_id):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = web_utils.check_user(client)
application = client.application_get(app_id)
template_vars = {
"user_id": user.id,
"email": user.email,
'app': application
}
return render_template('execution_new.html', **template_vars)
@web_bp.route('/executions/terminate/<exec_id>')
def execution_terminate(exec_id):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = web_utils.check_user(client)
execution = client.execution_get(exec_id)
template_vars = {
"user_id": user.id,
"email": user.email,
'execution': execution
}
return render_template('execution_terminate.html', **template_vars)
@web_bp.route('/apps/delete/<app_id>')
def application_delete(app_id):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = web_utils.check_user(client)
application = client.application_get(app_id)
template_vars = {
"user_id": user.id,
"email": user.email,
'app': application
}
return render_template('application_delete.html', **template_vars)
@web_bp.route('/executions/inspect/<execution_id>')
def execution_inspect(execution_id):
client = ZoeClient(ipcconf['server'], ipcconf['port'])
user = web_utils.check_user(client)
execution = client.execution_get(execution_id)
template_vars = {
"user_id": user.id,
"email": user.email,
'execution': execution
}
return render_template('execution_inspect.html', **template_vars) | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/zoe_web/web/applications.py | applications.py |
from configparser import ConfigParser
ipcconf = {
'server': None,
'port': None,
}
config_paths = [
'zoe.conf',
'/etc/zoe/zoe.conf'
]
defaults = {
'docker': {
'swarm_manager_url': 'tcp://swarm.example.com:2380',
'private_registry': '10.1.0.1:5000'
},
'intervals': {
'status_refresh': 10,
'scheduler_task': 10,
'proxy_update_accesses': 300,
'check_health': 30,
'notebook_max_age_no_activity': 24,
'notebook_warning_age_no_activity': 2
},
'db': {
'url': 'mysql+mysqlconnector://zoe:pass@dbhost/zoe'
},
'apache': {
'proxy_config_file': '/tmp/zoe-proxy.conf',
'access_log': '/var/log/apache2/access.log',
'web_server_name': 'bigfoot-m2.eurecom.fr',
'proxy_path_prefix': '/proxy'
},
'smtp': {
'server': 'smtp.example.com',
'user': '[email protected]',
'password': 'changeme'
},
'filesystem': {
'history_path': "/var/lib/zoe/history"
},
'flask': {
'secret_key': b"\xc3\xb0\xa7\xff\x8fH'\xf7m\x1c\xa2\x92F\x1d\xdcz\x05\xe6CJN5\x83!"
}
}
_zoeconf = None
class ZoeConfig(ConfigParser):
def __init__(self):
super().__init__(interpolation=None)
self.read_dict(defaults)
def write_defaults(self, fp):
tmp = ZoeConfig()
tmp.write(fp)
@property
def history_path(self) -> str:
return self.get('filesystem', 'history_path')
@property
def web_server_name(self) -> str:
return self.get('apache', 'web_server_name')
@property
def proxy_path_url_prefix(self) -> str:
return self.get('apache', 'proxy_path_prefix')
@property
def smtp_server(self) -> str:
return self.get('smtp', 'server')
@property
def smtp_user(self) -> str:
return self.get('smtp', 'user')
@property
def smtp_password(self) -> str:
return self.get('smtp', 'password')
@property
def notebook_warning_age_no_activity(self) -> int:
return self.getint('intervals', 'notebook_warning_age_no_activity')
@property
def notebook_max_age_no_activity(self) -> int:
return self.getint('intervals', 'notebook_max_age_no_activity')
@property
def interval_check_health(self) -> int:
return self.getint('intervals', 'check_health')
@property
def interval_proxy_update_accesses(self) -> int:
return self.getint('intervals', 'proxy_update_accesses')
@property
def apache_log_file(self) -> str:
return self.get('apache', 'access_log')
@property
def apache_proxy_config_file(self) -> str:
return self.get('apache', 'proxy_config_file')
@property
def db_url(self) -> str:
return self.get('db', 'url')
@property
def interval_scheduler_task(self) -> int:
return self.getint('intervals', 'scheduler_task')
@property
def interval_status_refresh(self) -> int:
return self.getint('intervals', 'status_refresh')
@property
def docker_swarm_manager(self) -> str:
return self.get('docker', 'swarm_manager_url')
@property
def cookies_secret_key(self):
return self.get('flask', 'secret_key')
@property
def docker_private_registry(self) -> str:
return self.get('docker', 'private_registry')
def init(config_file=None) -> ZoeConfig:
global _zoeconf
_zoeconf = ZoeConfig()
if config_file is None:
_zoeconf.read(config_paths)
else:
_zoeconf.read_file(open(config_file))
return _zoeconf
def zoeconf() -> ZoeConfig:
return _zoeconf | zoe-analytics | /zoe-analytics-0.8.1b.tar.gz/zoe-analytics-0.8.1b/common/configuration.py | configuration.py |
# zoe_ci Job framework
If you are new to zoe_ci, look at the architecture docs there: [misc/ZoeArch.md](misc/ZoeArch.md)
This repository is the client part of the zoe_ci architecture that executes work items on target machines.
It consists of:
* a python library that implements the base primitives: [zoe_ci](zoe_ci)
* and several [examples](examples)
## Conventions
* All jobs should be in files named `*.job.py`. This way the version control systems can trigger jobs correctly.
* Only one job per file
* Try to keep complexity down as much as possible, increasing maintainability of the overall system
* Try to write the code in a cross-platform manner if simple enough (linux/windows/console)
### Installation
```batch
pip install zoe_ci
```
to run:
```batch
python -m zoe_ci
```
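To run a single job file locally without contacting the server, combine the `-j`/`--jobfile` and `-l`/`--local` flags (both defined in the argument parser in `zoe_ci/__main__.py`):
```batch
python -m zoe_ci -j examples\01-simple-svn-checkout.job.py -l
```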
## Development: start / testing
Run the solution in vscode with F5, see `.vscode/launch.json`
## Manual testing
Just execute any .job.py file with Python:
```batch
examples\01-simple-svn-checkout.job.py
``` | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/README.md | README.md |
import websocket
import threading
import time
import queue
import json
import random
import os
import logging
from datetime import datetime
from typing import Union
import zoe_ci.utils
import zoe_ci.tasks
import zoe_ci.work
MAX_RECONNECT_TIME_SEC = 300
MIN_RECONNECT_TIME_SEC = 3
logger = logging.getLogger('Websocket')
class CommThread(threading.Thread):
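# Background websocket client thread: reconnects with randomized exponential backoff and queues
# outgoing messages while the connection is down.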
def __init__(self, eventHandler, env, executorInfo):
threading.Thread.__init__(self)
self.env = env
self.executorInfo = executorInfo
self.fullyConnected = False
self.shutdown = False
self.disconnectionDatetime = None
self.ws = None
self.eventHandler = eventHandler
self.sendQueue = queue.Queue()
self.recvQueue = queue.Queue()
self.sleepCooldown = MIN_RECONNECT_TIME_SEC
def on_error(self, wsapp, ex):
self.fullyConnected = False
logger.error("error: " + str(ex))
def on_close(self, wsapp, close_status_code, close_msg):
self.fullyConnected = False
self.disconnectionDatetime = datetime.now()
if close_status_code or close_msg:
logger.warning("closed: " + str(close_status_code) + " - " + str(close_msg))
def on_data(self, wsapp, msgRaw, dataType, continueData):
#logger.info("on_message: " + str(msgRaw))
self.recvQueue.put([msgRaw, dataType])
self.eventHandler.dataReady.acquire()
self.eventHandler.dataReady.notify_all()
self.eventHandler.dataReady.release()
def on_open(self, wsapp):
self.fullyConnected = True
self.sleepCooldown = MIN_RECONNECT_TIME_SEC # reset the disconnection cooldowns
#logger.debug("connected")
if self.disconnectionDatetime and not self.shutdown:
logger.info("reconnected after {:.2f} seconds".format((datetime.now() - self.disconnectionDatetime).total_seconds()))
self.disconnectionDatetime = None
while self.ws and not self.sendQueue.empty():
self._send(self.sendQueue.get())
self._send({'type': 'register', 'data': self.executorInfo})
def _threadEntry(self):
#logger.info("thread {} started".format(threading.current_thread().name))
while True:
serverURL = os.environ.get('WS_SERVER')
if not self.shutdown:
logger.debug("connecting to server {}".format(serverURL))
self.ws = websocket.WebSocketApp(serverURL, on_error = self.on_error, on_close = self.on_close, on_data = self.on_data, on_open = self.on_open)
#websocket.enableTrace(True)
self.ws.run_forever()
# reconnect cooldowns: double every time we disconnect
sleepTime = random.randint(self.sleepCooldown, self.sleepCooldown * 2) # use random so all clients wont hammer the server at the same time
self.sleepCooldown = min(MAX_RECONNECT_TIME_SEC, self.sleepCooldown * 2)
if self.ws and not self.shutdown:
logger.info("reconnecting in {} seconds".format(sleepTime))
time.sleep(sleepTime)
logger.info("thread {} DONE".format(threading.current_thread().name))
def _encodeMessage(self, msg: Union[int, str, dict]):
if type(msg) is str:
return msg
elif type(msg) is dict:
msg = json.dumps(msg, default=lambda o: '<not serializable>')
else:
msg = str(msg)
return msg
def _send(self, msgRaw):
msg = self._encodeMessage(msgRaw)
if not self.ws or not self.ws.sock:
logger.error('error: socket gone')
return
try:
self.ws.send(msg)
except websocket.WebSocketConnectionClosedException as ex:
logger.error('disconnected')
self.ws = None
except Exception as ex:
logger.exception('exception: ' + str(ex))
self.ws = None
def send(self, msgRaw):
#logger.info(">> " + str(msgRaw))
if not self.fullyConnected:
self.sendQueue.put(msgRaw)
return
self._send(msgRaw)
def stopThread(self):
if self.ws:
self.ws.keep_running = False
self.shutdown = True
class CommsLogHandler(logging.StreamHandler):
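# Logging handler that forwards every record to the server over the websocket, tagged with the
# current task and build ids when available.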
def __init__(self, comm, name, *args, **kwargs):
logging.StreamHandler.__init__(self, name, *args, **kwargs)
self.comm = comm
self.dummyLog = logging.LogRecord(None,None,None,None,None,None,None)
self._logRunning = False
def emit(self, record):
if self._logRunning:
# prevent recursive calling due to logging system
return
self._logRunning = True
data = {
'type': 'log', # can be overwritten with extra
#'t': record.relativeCreated, # this is not really usable
't': datetime.now().isoformat(),
'l': record.levelname,
'm': record.msg,
}
if zoe_ci.tasks.GenericTask.lastTask:
data['task_id'] = zoe_ci.tasks.GenericTask.lastTask.taskid
data['build_id'] = zoe_ci.work.Runtime.buildId
for k, v in record.__dict__.items():
if k not in self.dummyLog.__dict__:
if type(v) == tuple:
pass
#data[k] = dict(v) # preserve keys correctly ...
else:
if (k == 'message' and v == record.msg) or k == 'asctime':
continue
data[k] = v
self.comm.send(data)
self._logRunning = False
def createComms(cb, env, executorInfo):
comm = CommThread(cb, env, executorInfo)
thread = threading.Thread(target = comm._threadEntry)
thread.daemon = True
thread.start()
# install the log listener on the root logger to catch all logs
commsHandler = CommsLogHandler(comm, 'CommsLogHandler')
logging.getLogger().addHandler(commsHandler)
return comm | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/zoe_ci/serverConnection.py | serverConnection.py |
import argparse
import logging
import sys
import os
import json
depsPath = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.insert(0, depsPath)
os.environ["PYTHONUNBUFFERED"] = "1"
import zoe_ci
from zoe_ci.utils import installSignalHandler
import zoe_ci.work
logger = None
def setupLogging():
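    # the root logger is configured at DEBUG so handlers added later (e.g. the
    # CommsLogHandler) still receive debug records; the console filters at INFO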
global logger
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s | %(levelname)s | %(name)s | %(message)s', handlers=[
#logging.FileHandler("my_log.log", mode='w'),
stream_handler
])
logger = logging.getLogger('zoe')
if not os.environ.get('RUNNING_AS_WINDOWS_SERVICE', None) and not os.environ.get('NO_COLORLOG', None):
# only use color logs if not running as windows service
try:
import coloredlogs
coloredlogs.install(logging.INFO, fmt='%(asctime)s | %(levelname)s | %(name)s | %(message)s')
except ImportError:
pass
def loadDotEnv():
try:
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv(filename=".env.default"))
load_dotenv(find_dotenv(filename=".env"))
except Exception as e:
logger.exception(e)
def loadLocalConfig():
try:
from uuid import uuid4
from appdirs import user_data_dir
appConfigPath = user_data_dir('zoe_ci', 'BeamNG')
jsonFilename = os.path.join(appConfigPath, 'config.json')
data = None
if os.path.exists(jsonFilename):
try:
with open(jsonFilename, 'r') as f:
data = json.load(f)
except:
pass
if data is None:
uuid = uuid4().hex
logger.info('Generated new UUID for this machine: {}'.format(uuid))
data = { 'machine_uuid': uuid }
data['zoe_version'] = zoe_ci.__version__
os.makedirs(appConfigPath, exist_ok = True)
with open(jsonFilename, 'w') as f:
json.dump(data, f, sort_keys=True, indent=2)
return data
except Exception as e:
logger.exception(e)
return {}
def zoeMain():
setupLogging()
logger.info(f"===== Welcome to zoe_ci v{zoe_ci.__version__} =====")
loadDotEnv()
    parser = argparse.ArgumentParser(prog='zoe', description='The zoe_ci client and execution program suite')
# mode flags
parser.add_argument("-j", "--jobfile", help="job filename to process", default=None, nargs='?')
# boolean flags
parser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
parser.add_argument("-q", "--quiet", help="decrease output verbosity", action="store_true")
parser.add_argument("-l", "--local", help="offline mode. No communication with the server.", action="store_true")
parser.add_argument("-u", "--autoupdate", help="Enable automatic updates", action="store_true")
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
if args.quiet:
logger.setLevel(logging.ERROR)
env = loadLocalConfig()
env['autoupdate'] = args.autoupdate
if args.local:
env['localMode'] = True
if not args.jobfile:
            logger.error('Local mode requires a job file (-j); it is not available when running as an executor')
return 1
ex = zoe_ci.work.Executor(env)
if args.jobfile:
return ex.executeLocalJobs(args.jobfile.strip())
else:
installSignalHandler()
return ex.serveForever()
if __name__ == "__main__":
sys.exit(zoeMain()) | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/zoe_ci/__main__.py | __main__.py |
import os
import logging
logger = logging.getLogger('GPU')
if os.name == 'nt':
try:
import wmi
except Exception:
wmi = None
logger.exception("wmi module not found")
try:
import GPUtil
except Exception:
GPUtil = None
logger.exception("GPUtil not installed, Nvidia GPU info not available")
try:
    import pyadl
    from pyadl import ADLManager
except Exception:
    pyadl = None
    logger.exception("pyadl import error, AMD GPU info not available")
try:
    import pyamdgpuinfo  # used by AMDGpuInfo.getGpuInfo() for VRAM/load queries
except Exception:
    pyamdgpuinfo = None
class NvidiaGpuInfo:
"""
Nvidia Class for getting GPU information
"""
def __init__(self) -> None:
self.list_gpus = []
try:
self.gpus = GPUtil.getGPUs()
self.gpuCount = len(self.gpus)
except Exception:
self.gpuCount = 0
logger.exception("An error occurred while getting Nvidia GPU info")
def getGpuInfo(self) -> list:
if self.gpuCount > 0:
for gpu in self.gpus:
gpu_name = gpu.name
gpu_load = f"{gpu.load*100}%"
gpu_free_memory = round(gpu.memoryFree)
self.list_gpus.append(
(
gpu_name,
gpu_free_memory,
gpu_load,
)
)
return self.list_gpus
else:
return []
class AMDGpuInfo:
"""
AMD Class for getting GPU information
"""
def __init__(self) -> None:
self.gpus_info = []
self.gpuCount = 0
if pyadl is not None:
try:
self.gpus = ADLManager.getInstance().getDevices()
self.gpuCount = len(self.gpus)
except Exception:
logger.exception("An error occurred while getting AMD GPU info")
def getGpuInfo(self) -> list:
"""
build gpu info list for AMD
"""
try:
            for gpu_index in range(self.gpuCount):
                self.gpu = pyamdgpuinfo.get_gpu(gpu_index)
gpu_name = self.gpu.name
gpu_vram = self.gpu.query_vram_usage()
gpu_load = self.gpu.query_load()
self.gpus_info.append(
(
gpu_name,
f"{round(gpu_vram / (1024 * 1024))} MB",
f"{gpu_load*100}%",
)
)
except Exception:
logger.exception("An error occurred while getting AMD GPU info")
return []
return self.gpus_info
class WindowsGpuInfo:
"""
class windowsGpuInfo get gpu info for windows
"""
def __init__(self) -> None:
self.list_gpus = []
self.gpuCount = 0
        if os.name == 'nt' and wmi:
            self.c = wmi.WMI()
if self.c.Win32_VideoController():
self.gpuCount = len(self.c.Win32_VideoController())
else:
self.c = None
def getGpuInfo(self) -> list:
if os.name == 'nt' and self.c:
name = self.c.Win32_VideoController()[0].Name
ram = self.c.Win32_VideoController()[0].AdapterRAM
ram = f"{round(ram / (1024 * 1024))} MB"
self.list_gpus.append(name)
self.list_gpus.append(ram)
return self.list_gpus
class GpuInfo:
"""
class GpuInfo get gpu info for windows, linux and mac
"""
def __init__(self) -> None:
self.gpus = []
self.nvidia_client = NvidiaGpuInfo()
self.amd_client = AMDGpuInfo()
self.windows_client = WindowsGpuInfo()
def _build_gpu_list(self) -> list:
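        # backends are consulted in priority order (Nvidia, then AMD, then the
        # generic Windows WMI query); only the first one reporting GPUs is used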
if self.nvidia_client.gpuCount > 0:
self.gpus.extend(self.nvidia_client.getGpuInfo())
elif self.amd_client.gpuCount > 0:
self.gpus.extend(self.amd_client.getGpuInfo())
elif self.windows_client.gpuCount > 0:
self.gpus.extend(self.windows_client.getGpuInfo())
return self.gpus
def getGpuInfo(self) -> list:
return self._build_gpu_list()
    def getGpuCount(self) -> int:
        if not self.gpus:  # build lazily so the count is meaningful before getGpuInfo()
            self._build_gpu_list()
        return len(self.gpus)
import zoe_ci
import os
import time
import inspect
import threading
import logging
import json
import base64
import importlib.util
from datetime import datetime
from zoe_ci.utils import decodeMessage, hashPath, restartScript, hashFileSHA1, ZoeException
from zoe_ci.gpuInfo import GpuInfo
logger = logging.getLogger('zoe')
parentPath = os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))
MAKE_FILE_BACKUPS = False
# instantiate the gpu class and cache basic info; default to None so later
# lookups never hit a NameError if detection fails
gpu_client = GpuInfo()
gpu_name = None
gpu_free_memory = None
gpu_load = None
try:
    if gpu_client.getGpuCount() > 0:
        for gpu in gpu_client.getGpuInfo():
            gpu_name = gpu[0]
            gpu_free_memory = gpu[1]
            gpu_load = gpu[2]
except (ZoeException, IndexError):
    logger.exception("An error occurred while getting GPU info")
class Runtime():
buildId: str = None # the build number of the current execution
class Job():
def __init__(self, env, commitInfo, executorInfo):
self.env = env
self.commitInfo = commitInfo
self.executorInfo = executorInfo
self.logger = logging.getLogger(self.__class__.__name__)
def setup(self, commitInfo, executorInfo):
pass
def run(self, commitInfo, executorInfo):
pass
def teardown(self, commitInfo, executorInfo):
pass
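    # A job file typically defines one or more Job subclasses overriding these
    # hooks, e.g. (illustrative sketch only):
    #
    #     class BuildTrunk(zoe_ci.work.Job):
    #         def run(self, commitInfo, executorInfo):
    #             zoe_ci.utils.exec('echo building', shell=True)
    #
    # loadJobsFromModule() below picks up every Job subclass it finds.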
def _getAndIncreaseBuildNumber(self):
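        """Read, increment and persist the per-job build counter stored in WORKSPACE/config.json."""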
jsonFilename = os.path.join(os.environ['WORKSPACE'], 'config.json')
data = None
if os.path.exists(jsonFilename):
try:
with open(jsonFilename, 'r') as f:
data = json.load(f)
except:
pass
if data is None:
data = { 'build_number': 0 }
data['zoe_version'] = zoe_ci.__version__
data['build_number'] = int(data['build_number']) + 1
data['last_build'] = datetime.now().isoformat()
os.makedirs(os.environ['WORKSPACE'], exist_ok = True)
with open(jsonFilename, 'w') as f:
json.dump(data, f, sort_keys=True, indent=2)
return data['build_number']
def _execute(self):
if not os.environ.get('WORKSPACE', None):
workspace_root = os.environ.get('WORKSPACE_ROOT', 'workspace')
if workspace_root:
if not os.path.isabs(workspace_root):
workspace_root = os.path.join(parentPath, workspace_root)
os.environ['WORKSPACE'] = os.path.join(workspace_root, self.__class__.__name__)
os.environ['WORKSPACE'] = os.path.normpath(os.environ['WORKSPACE'])
# TODO: implement more env variables: BUILD_NUMBER, NODE_NAME, JOB_NAME, BUILD_TAG, EXECUTOR_NUMBER, SVN_REVISION, GIT_COMMIT, GIT_URL, GIT_BRANCH
# Figure out build number per job:
Runtime.buildId = self._getAndIncreaseBuildNumber()
os.environ['BUILD_NUMBER'] = str(Runtime.buildId)
os.makedirs(os.environ['WORKSPACE'], exist_ok=True)
os.chdir(os.environ['WORKSPACE'])
with zoe_ci.tasks.GenericTask(self.__class__.__name__):
self.setup(self.commitInfo, self.executorInfo)
self.run(self.commitInfo, self.executorInfo)
self.teardown(self.commitInfo, self.executorInfo)
def loadJobsFromModule(module):
jobs = []
for name, cls in inspect.getmembers(module, inspect.isclass):
        if issubclass(cls, zoe_ci.work.Job) and cls is not zoe_ci.work.Job:
cls._NAME = name
jobs.append(cls)
if len(jobs) == 0:
logger.error('No jobs found')
return []
return jobs
def loadJobsFromFile(filename):
if not os.path.isfile(filename):
logger.error("Job file not readable: {}".format(filename))
return []
try:
spec = importlib.util.spec_from_file_location('Job', filename)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return loadJobsFromModule(module)
except Exception as e:
logger.error("Unable to load job file {}: {}".format(filename, e))
return []
def loadJobFromMemory(filename, fileContent):
try:
spec = importlib.util.spec_from_loader('Job', loader=None)
module = importlib.util.module_from_spec(spec)
exec(fileContent, module.__dict__)
return loadJobsFromModule(module)
#sys.modules['Job'] = module
except Exception as e:
logger.error("Unable to load job file {}: {}".format(filename, e))
return []
class Executor():
def __init__(self, env):
self.env = env
self.dataReady = threading.Condition()
        self.pingTime = 60
        # wall-clock timing: process_time() would not advance while blocked in wait()
        self.lastPingTime = time.monotonic() - self.pingTime
def _getExecutorInfo(self):
import platform
info = {
'clientType': 'executor',
'name': platform.node().lower(),
'ZoeVersion': zoe_ci.__version__,
'machine_uuid': self.env.get('machine_uuid', None),
'binaryCapable': True, # important to allow binary websocket communication
'platform': {
'arch': platform.architecture(),
'machine': platform.machine().lower(),
'node': platform.node().lower(),
'platform': platform.platform(),
'processor': platform.processor(),
'python_build': platform.python_build(),
'python_compiler': platform.python_compiler(),
'python_branch': platform.python_branch(),
'python_implementation': platform.python_implementation(),
'python_revision': platform.python_revision(),
'python_version': platform.python_version(),
'python_version_tuple': platform.python_version_tuple(),
'release': str(platform.release()).lower(),
'system': platform.system(),
'version': platform.version(),
'uname': platform.uname(),
'gpu_name': gpu_name if gpu_name else None,
'gpu_free_memory': gpu_free_memory if gpu_free_memory else None,
'gpu_load': gpu_load if gpu_load else None,
},
# tags are a way of defining features for tests to select where to run on
# i.e. a test can say it wants to run on: ['windows', 'amd', 'max spec']
# or on a specific node name for example
'tags': []
}
# add some tags :)
# tags are always lower-case please
info['tags'].append(platform.system()) # windows / linux
info['tags'].append(f"{platform.system()}{str(platform.release())}") # windows10, linux5.15.0-53-generic
info['tags'].append(platform.architecture()[0]) # 64bit
info['tags'].append(platform.node()) # DESKTOP-XXXX, testinglinux
info['tags'].append(platform.machine()) # machine type: AMD64, x86_64
info['tags'].append(gpu_name)
info['tags'].append(gpu_load)
info['tags'].append(gpu_free_memory)
# TODO:
# - minspec, midspec, max spec
# - can this windows build machine compile for consoles? Is a console attached for debugging or testing?
# ensure everything is lower case ;)
info['tags'] = [str(x).lower() for x in info['tags']]
info['autoupdate'] = 'autoupdate' in self.env and self.env['autoupdate']
        #for v in ['COMPUTERNAME', 'TIME', 'DATE', 'USERNAME', 'NUMBER_OF_PROCESSORS', 'APPDATA']:
        #  data[v] = zoe_ci.utils.getWindowsShellVariable(v)
        return info
def setup(self):
from zoe_ci.serverConnection import createComms
self.executorInfo = self._getExecutorInfo()
self.comm = None
if not 'localMode' in self.env:
self.comm = createComms(self, self.env, self.executorInfo)
def teardown(self):
"""
Stop the thread
"""
if self.comm:
self.comm.stopThread()
def handleMessage(self, msgRaw):
msg = decodeMessage(msgRaw[0], msgRaw[1] == 2)
if type(msg) != dict or 'type' not in msg:
logger.error(f'Invalid message received: {msg}')
return
if msg['type'] == 'fileHashes':
self._checkUpdate(msg['data'])
elif msg['type'] == 'executeFile':
filename = msg['data']['filename']
filecontent = base64.b64decode(msg['data']['filecontent']).decode()
try:
commitInfo = msg['data']['commitInfo']
except KeyError:
commitInfo = None
self._executeJobs(loadJobFromMemory(filename, filecontent), commitInfo)
elif msg['type'] == 'updateData':
if not self.env['autoupdate']:
logger.info('Auto update rejected')
return
logger.info(' + AUTO-UPDATE: Downloading files ...')
if not 'data' in msg or not 'fileData' in msg['data'] or not 'fileHashes' in msg['data']:
logger.error('malformed update data: ' + str(msg))
return
try:
fileData = msg['data']['fileData']
fileHashes = msg['data']['fileHashes']
if len(fileData) == 0:
logger.info('Auto update empty')
return
for filename in fileData:
filenameDisk = os.path.normpath(os.path.join(parentPath, filename.replace('..', '')))
os.makedirs(os.path.dirname(filenameDisk), exist_ok=True)
if MAKE_FILE_BACKUPS and os.path.isfile(filenameDisk):
oldFilename = filenameDisk + '.old'
if os.path.isfile(oldFilename):
os.unlink(oldFilename)
os.rename(filenameDisk, oldFilename)
#print("OLD FILE HASH: ", hashFileSHA1(oldFilename))
with open(filenameDisk, 'wb') as file:
file.write(fileData[filename])
fileHash = hashFileSHA1(filenameDisk)
if fileHash != fileHashes[filename]:
logger.error(' * {} - incorrect hash: {} {}'.format(filename, fileHash, fileHashes[filename]))
logger.error(' + AUTO-UPDATE: ABORTED')
return
logger.info(' * {} - OK'.format(filename))
#logger.info(' * successfully downloaded and verified file {} ({}) with hash {}'.format(filename, filenameDisk, fileHash))
logger.info(' + AUTO-UPDATE: DONE - restarting now')
self.teardown()
restartScript()
#sys.exit(0)
except Exception as ex:
logger.exception('Exception on auto update: ' + str(ex))
#else:
# print("got unknown message: ", msg)
def _sendPingIfNeeded(self):
import psutil
        if time.monotonic() - self.lastPingTime < self.pingTime:
            return
        self.lastPingTime = time.monotonic()
data = {
'memory_virtual': psutil.virtual_memory()._asdict(),
'memory_swap': psutil.swap_memory()._asdict(),
'cpu_freq': psutil.cpu_freq()._asdict(),
'cpu_times': psutil.cpu_times()._asdict(),
'cpu_loadavg': psutil.getloadavg(),
#'netcounters': psutil.net_io_counters(pernic=True),
#'sensors_temperatures': psutil.sensors_temperatures(),
#'sensors_fans': psutil.sensors_fans(),
#'sensors_battery': psutil.sensors_battery(),
}
self.comm.send(json.dumps({'type': 'ping', 'data': data}))
def serveForever(self):
self.dataReady.acquire()
self.setup()
while True:
self._sendPingIfNeeded()
if self.dataReady.wait(self.pingTime):
while not self.comm.recvQueue.empty():
self.handleMessage(self.comm.recvQueue.get())
def _executeJobs(self, jobs, commitInfo):
if len(jobs) == 0:
logger.error('no jobs found, exiting')
return 1
for job in jobs:
j = job(self.env, commitInfo, self.executorInfo)
j._execute()
def executeLocalJobs(self, jobfilename, commitInfo=None):
self.setup()
self._executeJobs(loadJobsFromFile(jobfilename), commitInfo)
self.teardown()
def _checkUpdate(self, serverHashes):
if not self.comm or not self.env['autoupdate']:
return
localHashes = hashPath(parentPath)
localFiles = localHashes.keys()
serverFiles = serverHashes.keys()
outdatedFiles = []
# find modified / deleted files
for f in localFiles:
if f in serverFiles and localHashes[f] != serverHashes[f]:
# modified
#logger.info(' * {} M'.format(f))
outdatedFiles.append(f)
#elif not f in serverFiles:
#logger.info(' * {} D'.format(f))
#print('local file not present on server: ' + str(f))
# find new files
for f in serverFiles:
if not f in localFiles:
#logger.info(' * {} A'.format(f))
outdatedFiles.append(f)
if len(outdatedFiles) == 0:
# no updates needed, all up to date
#logger.info(" + AUTO-UPDATE: SYNCED")
return
logger.info(' + AUTO-UPDATE: Updating {} files ...'.format(len(outdatedFiles)))
self.comm.send({ 'type': 'requestFiles', 'data': outdatedFiles }) | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/zoe_ci/work.py | work.py |
import os
import shutil
import hashlib
import glob
import json
import msgpack
import logging
import sys
import zoe_ci.tasks
logger = logging.getLogger('utils')
_startup_cwd = os.getcwd()
class ZoeException(Exception):
    """Project-wide error type; derives from Exception so generic handlers catch it."""
def recursive_delete(path: str) -> None:
def onerror(func, path, exc_info):
import stat
if not os.access(path, os.W_OK):
os.chmod(path, stat.S_IWUSR)
func(path)
shutil.rmtree(path, onerror=onerror)
def installSignalHandler() -> None:
import signal
import sys
def signal_handler(signal, frame):
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
def restartScript() -> None:
args = sys.argv[:]
args.insert(0, sys.executable)
if sys.platform == 'win32':
args = ['"{}"'.format(arg) for arg in args]
logger.info('restarting {}'.format(' '.join(args)))
sys.stdout.flush()
sys.stderr.flush()
#os.fsync()
os.chdir(_startup_cwd)
# this is a hack for windows + execv:
if os.environ.get('RUNNING_AS_WINDOWS_SERVICE', None):
# restart using the service to not confuse windows
# if we do not do this, you'll end up with two processes running
sys.exit(0)
os.execv(sys.executable, args)
def hashFileSHA1(filename):
BUF_SIZE = 65536 # 64kb chunks
sha1 = hashlib.sha1()
with open(filename, 'rb') as file:
while True:
data = file.read(BUF_SIZE)
if not data:
break
sha1.update(data)
return sha1.hexdigest()
def encodeMessageBinary(data):
return msgpack.packb(data, use_bin_type=True)
def encodeMessageText(data):
return json.dumps(data, default=lambda o: o.decode("utf-8") or '<not serializable>')
def decodeMessage(data, isBinary):
"""can decode messagepack and json"""
if isBinary:
return msgpack.unpackb(data)
else:
try:
return json.loads(data)
except json.JSONDecodeError as jex:
logger.exception(jex)
def hashPath(path):
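    """Return {relative_path_with_forward_slashes: sha1_hexdigest} for every file under path, skipping .pyc files."""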
filenames = glob.glob(os.path.join(path, '**'), recursive = True)
fileHashes = {}
for filename in filenames:
(rootPath, fileExt) = os.path.splitext(filename)
if not os.path.isfile(filename) or fileExt.lower() == '.pyc':
continue
fileHashes[os.path.relpath(filename, path).replace('\\', '/')] = hashFileSHA1(filename)
return fileHashes
# TODO: kill this thing and just have a SVN / GIT class
class VCS:
type: str = 'svn'
url: str = 'http://svn/game/trunk'
branch: str = 'main' # used for git
targetRevision: str = 'HEAD'
outPath: str = 'trunk'
username: str = os.environ.get('SVN_USER')
password: str = os.environ.get('SVN_PASS')
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def sync(self):
if self.type == 'svn':
return svnSync(self)
elif self.type == 'git':
return gitSync(self)
else:
            logger.error('Unknown VCS type: ' + str(self.type))
def exec_available(exeName: str):
return shutil.which(exeName) is not None
def execBlock(cmdBlock, **kwargs):
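    """Run each non-empty, non-comment line of cmdBlock as a ShellTask; return True only if every command succeeded."""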
res = True
cmdLines = cmdBlock.strip().split('\n')
for cmd in cmdLines:
cmd = cmd.strip()
if len(cmd) == 0 or cmd[0] == '#':
continue
        ret, _ = zoe_ci.tasks.ShellTask(cmd, **kwargs).run()
        res = res and ret == 0
return res
def exec(*args, **kwargs):
return zoe_ci.tasks.ShellTask(*args, **kwargs).run()
def human_readable_size(size, decimal_places = 2):
for unit in ['B', 'KB', 'MB', 'GB', 'TB', 'PB']:
if size < 1000.0 or unit == 'PB':
break
size /= 1000.0
return f"{size:.{decimal_places}f} {unit}"
def getWindowsShellVariable(varName):
return ''.join(zoe_ci.tasks.ShellTask('echo %{}%'.format(varName), shell=True).run()[1])
def getUnixShellVariable(varName):
return ''.join(zoe_ci.tasks.ShellTask('echo ${}'.format(varName), shell=True).run()[1])
def runCommandSimple(cmd):
return ''.join(zoe_ci.tasks.ShellTask(cmd, shell=True).run()[1])
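# imported at the bottom on purpose: svnUtils (and presumably gitUtils) import
# from this module, so importing them at the top would be circular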
from zoe_ci.svnUtils import svnSync
from zoe_ci.gitUtils import gitSync
def getClientInstallationPath():
return os.path.normpath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))
def getClientFilesHashes():
"""we hash the client files to allow comparison / auto-updating"""
clientFileHashesRaw = hashPath(getClientInstallationPath())
res = {}
for filename in clientFileHashesRaw:
if not filename.startswith('workspace/'):
res[filename] = clientFileHashesRaw[filename]
#logger.debug(f'Tracking {len(res)} client files...')
return res | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/zoe_ci/utils.py | utils.py |
import os
import subprocess
import time
import signal
import hashlib
import datetime
import logging
import zoe_ci.utils
class GenericTask:
"""A simple placeholder that describes something happening in a given timeframe between __enter__ and __exit__ (use with 'with')"""
globalTaskStack = []
lastTask = None
def __init__(self, *args: list, **kwargs: dict) -> None:
self.logger = logging.getLogger(self.__class__.__name__)
self.taskName = kwargs.get('title', self.__class__.__name__)
if type(self) == GenericTask and len(args) > 0:
self.taskName = args[0]
self.siblings = {}
self.taskid = f"/{self.taskName}/"
self.result = True
if GenericTask.globalTaskStack: # empty lists eval to False
parentTask = GenericTask.globalTaskStack[-1]
if not self.taskName in parentTask.siblings:
parentTask.siblings[self.taskName] = 0
parentTask.siblings[self.taskName] += 1
self.taskid = parentTask.__dict__.get('taskid', '/') + '{}#{}/'.format(self.taskName, parentTask.siblings[self.taskName])
def __enter__(self, *args, **kwargs):
self.startTime = datetime.datetime.now().isoformat()
extra = kwargs.get('extra', {})
extra.update({
'type': 'task_begin',
'task_id': self.taskid,
})
self.logger.debug(self.taskName, extra = extra)
GenericTask.globalTaskStack.append(self)
GenericTask.lastTask = self
return self
def __exit__(self, *args, **kwargs):
self.endTime = datetime.datetime.now().isoformat()
GenericTask.lastTask = GenericTask.globalTaskStack.pop()
extra = kwargs.get('extra', {})
extra.update({
'type': 'task_end',
'task_id': self.taskid,
'result': self.result,
})
self.logger.debug(self.taskName, extra = extra)
def _fillLoggerTaskData(self, d):
"""for the UI, etc"""
d['task_id'] = self.taskid
class ShellTask(GenericTask):
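    """Runs a shell command as a subprocess, streaming its combined stdout/stderr
    through the logging system; supports timeouts and per-line progress callbacks."""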
def __init__(self, *args, **kwargs):
super(ShellTask, self).__init__(*args, **kwargs)
self.vcs = kwargs.get('vcs', zoe_ci.utils.VCS())
self.useShell = kwargs.get('shell', None)
self.cmdEnv = {'vcs': self.vcs}
self.cmdEnv.update(os.environ)
if len(args) != 1:
raise Exception('Only one positional argument allowed: the command itself')
_cmd = args[0].format(**self.cmdEnv) + ' ' + kwargs.get('cmdPostfix', '')
# convert / to \\ on windows for the executable
cmdArgs = _cmd.split(' ', maxsplit=1) + [''] # [''] is the default fallback for no arguments below
try:
# if os.name == 'nt':
executablePath = cmdArgs[0].replace('/', '\\')
if executablePath.lower().find('.exe') != -1: # if cmd contains .exe, do not use the shell by default
self.useShell = False
self.cmd = executablePath + ' ' + cmdArgs[1]
self.cmd = self.cmd.rstrip()
#print(">>> ", self.cmd)
except:
self.cmd = _cmd
if self.useShell is None:
self.useShell = True
self.cmd_log = self.cmd
if self.vcs and self.vcs.password:
self.cmd_log = self.cmd.replace(self.vcs.password, '<PASSWORD>') # to not leak the password into log files
self.title = kwargs.get('title', None)
if not self.title:
self.title = ' '.join(self.cmd_log.split(' ')[0:3])[0:20]
self.cmdHash = hashlib.sha256(self.cmd.encode()).hexdigest()
self.optional = kwargs.get('optional', False)
self.callbackClass = kwargs.get('callbackClass', None)
if self.callbackClass:
self.callbackClass.shellTask = self # tell the progress reporting class about the execution context
self.procState = 'idle'
self.retCode = None
self.throw = kwargs.get('throw', True)
self.timeout = kwargs.get('timeout', None)
self.exitTimeout = kwargs.get('exitTimeout', 10) # wait 10 seconds between the pipes closing and killing the process
self.parentTask = kwargs.get('parentTask', None)
self.workingDirectory = os.path.join(os.environ.get('WORKSPACE', ''), kwargs.get('workingDirectory', ''))
def __enter__(self):
self.procState = 'starting'
extra = {
'type': 'task_begin',
'task_id': self.taskid,
'cmd_log': self.cmd_log,
'cmdHash': self.cmdHash,
'optional': self.optional,
'procState': self.procState,
}
super(ShellTask, self).__enter__(title=self.title, extra=extra) # will fire task_begin
if self.workingDirectory:
os.makedirs(self.workingDirectory, exist_ok = True)
processEnv = dict(filter(lambda elem: type(elem[0]) == str and type(elem[1]) == str, self.cmdEnv.items()))
        processEnv.pop('SVN_USER', None)
        processEnv.pop('SVN_PASS', None)
#print("CMD: ", self.cmd)
        pr = subprocess.Popen(self.cmd,
            cwd=self.workingDirectory,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,   # readline()/readable() below consume this pipe
            stderr=subprocess.STDOUT, # merge stderr into the same stream
            shell=self.useShell,
            universal_newlines=True,
            bufsize=32000, # 0=unbuffered, 1=line-buffered, else buffer-size
            #close_fds=True,
            #env=processEnv,
            )
self.procState = 'running'
self.linesOut = []
# little shortcut to make the usage easier
pr.readable = self.readable
pr.writable = self.writable
pr.readline = self.readline
pr.write = self.write
pr.kill = self.__killProcess
self.process = pr
self.startTime = time.time()
return self.process
def _checkProcessTimeout(self):
timeRunning = time.time() - self.startTime
if self.timeout and timeRunning > self.timeout:
            self.logger.warning('command timed out and killed after {} seconds'.format(timeRunning))
self.__killProcess()
if self.throw:
raise Exception('command timeout')
return True
return False
    def run(self):
        with self as process:
            while process.readable():
                if process.readline() is None:
                    break  # EOF reached
        return self.retCode, self.linesOut
    def readable(self):
        if self._checkProcessTimeout():
            self.result = False
            return None
        return self.process.stdout.readable()
    def readline(self, bufferLines = True):
        line = self.process.stdout.readline()
        if line == '':
            # EOF: with universal_newlines, readline() returns '' once the pipe is exhausted
            return None
        line = line.rstrip()
        #print(line.strip())
        if len(line) == 0:
            return line
self.logger.debug(line, extra = {'type': 'task_log', 'task_id': self.taskid})
if bufferLines:
self.linesOut.append(line)
if self.callbackClass:
self.callbackClass.progress(line)
return line
    def writable(self):
        return self.process.stdin.writable()
    def write(self, data):
        if not self.process.stdin.writable():
            print("Error: stdin not write-able")
            return
        self.process.stdin.write(data)
        self.process.stdin.flush()
def __killProcess(self):
if os.name == 'nt':
subprocess.call(['taskkill', '/F', '/T', '/PID', str(self.process.pid)])
else:
            try:
                os.kill(self.process.pid, signal.SIGTERM)
            except OSError:
                return
def __exit__(self, type, value, traceback):
if not type is None:
# we got an exception, re-raise exception by returning false
return False
# pipes are closed already, wait for the process to quit properly
lastTime = time.time()
exitTime = time.time()
while self.process.poll() is None:
# wait until process exits
time.sleep(0.01)
if time.time() - lastTime > 3:
self.logger.warning("waiting since {:0.0f} seconds for process to exit...".format(time.time() - exitTime))
lastTime = time.time()
if time.time() - exitTime > self.exitTimeout:
self.logger.error("Killing process after waited {:0.0f} seconds for process to exit...".format(time.time() - exitTime))
self.__killProcess()
self.result = False
self.procState = 'done'
self.retCode = self.process.poll()
extra = {
'type': 'task_end',
'task_id': self.taskid,
}
if self.retCode != 0 and not self.optional:
extra['type'] = 'task_error'
error_msg = '*** cmd failure ***\n{}\n failed with return code {} (0x{:02X}) - {}'.format(self.cmd_log, self.retCode, self.retCode, '\n'.join(self.linesOut[:3]))
self.logger.error(error_msg, extra = extra)
self.result = False
            try:
                if self.throw:
                    raise zoe_ci.utils.ZoeException(error_msg)
            except zoe_ci.utils.ZoeException as e:
                # swallowed locally so __exit__ itself never raises
                self.logger.error(f"Exception: {e}")
if self.callbackClass and hasattr(self.callbackClass, 'finalReport'):
self.callbackClass.finalReport()
extra['lines'] = self.linesOut
# should be all closed now, stop the parent class
        super(ShellTask, self).__exit__(extra=extra) # will fire task_end
import os
import re
import time
from zoe_ci.utils import *
from zoe_ci.tasks import *
import zoe_ci.tasks
import logging
logger = logging.getLogger('svn')
# re for svn list
svn_list_re = re.compile(r'^([0-9]+)[ ]+([^ ]+)[ ]+(?:([0-9]+)|)[ ]+(?:[a-zA-Z]+ [0-9]+[ ]+[0-9:]+|) (.+)$')
# re for svn checkout/update
svn_update_re = re.compile(r'^([^ ]+)[ ]+(.+)$')
class _ProgressReporterSimple:
"""helper logging class"""
def progress(self, line):
extra = {
'type': 'task_progress',
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info(line.strip(), extra = extra)
def finalReport(self):
pass
#self.progress("task done")
class _SVNListOutputProcessor:
"""helper class for svn list"""
def __init__(self):
self.svn_files = {}
self.total_bytes = 0
self.lastTime = time.time()
def progress(self, line):
extra = {
'type': 'task_progress',
'svn_state': 'svn_list',
'found_files': len(self.svn_files),
'total_bytes': self.total_bytes,
}
line = line.strip()
res = svn_list_re.findall(line)
if res:
res = list(res[0])
res[0] = int(res[0])
if len(res) > 2 and res[2].isdigit():
res[2] = int(res[2])
self.total_bytes += res[2]
dt = time.time() - self.lastTime
if dt > 1:
self.lastTime = time.time()
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info("Found {: >6} ({: >9}) files so far ...".format(
len(self.svn_files),
human_readable_size(self.total_bytes)
), extra = extra)
else:
res[2] = 0
self.svn_files[res[-1].rstrip('/')] = res
else:
logger.info(line, extra = extra)
def finalReport(self):
pass
#self.progress("task done")
def getResults(self):
return self.svn_files, self.total_bytes
class _SVNCheckoutOutputProcessor:
"""helper class for svn checkout"""
def __init__(self, svn_files, total_size, **execCtx):
self.svn_files = svn_files
self.filecount = len(svn_files)
self.fileCounter = 0
self.lastTime = time.time() - 8
self.bytesDownloadedTemp = 0
self.bytesDownloaded = 0
self.bytesLeft = total_size
self.bytesTotal = total_size
self.startTime = time.time()
def _formatSeconds(self, seconds):
m, s = divmod(int(seconds), 60)
h, m = divmod(m, 60)
return '{:d}:{:02d}:{:02d}'.format(h, m, s)
def finalReport(self):
if self.fileCounter > 0:
timePassed = time.time() - self.startTime
speed = self.bytesDownloaded / timePassed
extra = {
'type': 'task_progress',
'svn_state': 'checkout_done',
'filesDone': self.fileCounter,
'bytesTotal': self.bytesTotal,
'dlSpeed': speed,
'timePassed': timePassed
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info('Done. Downloaded {filesDone:} files ({bytesTotal:}) at {dlSpeed:}/s in {t:}'.format(
filesDone = self.fileCounter,
bytesTotal = human_readable_size(self.bytesTotal),
dlSpeed = human_readable_size(speed),
t = self._formatSeconds(timePassed)
), extra = extra)
def progress(self, line):
line = line.strip()
self.fileCounter += 1
res = svn_update_re.findall(line)
speed = 0
extra = {
'type': 'task_progress',
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
if res:
res = res[0]
filename = res[1].replace('\\', '/').strip()
if filename in self.svn_files:
fileSize = self.svn_files[filename][2]
self.bytesDownloadedTemp += fileSize
self.bytesDownloaded += fileSize
self.bytesLeft -= fileSize
else:
logger.info(line, extra = extra)
dt = time.time() - self.lastTime
if dt > 10:
self.lastTime = time.time()
if self.fileCounter > 0 and self.bytesDownloaded > 0:
speed = self.bytesDownloadedTemp / dt
self.bytesDownloadedTemp = 0
timePassedSeconds = time.time() - self.startTime
etaSeconds = self.bytesLeft / (self.bytesDownloaded / timePassedSeconds)
percDone = min(100, self.fileCounter / self.filecount * 100)
etaStr = 'unknown'
if etaSeconds < 99999999:
etaStr = self._formatSeconds(etaSeconds)
extra = {
'type': 'task_progress',
'svn_state': 'checkout_running',
'filesDone': self.fileCounter,
'filesTotal': self.filecount,
'bytesDownloaded': self.bytesDownloaded,
'bytesTotal': self.bytesTotal,
'percentDone': percDone,
'dlSpeedPerSec': speed,
'etaSeconds': etaSeconds,
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info('{percentDone:6.2f}% | {filesDone: >6} / {filesTotal:} files | {bytesDownloaded: >9} / {bytesTotal:} | {dlSpeed: >9}/s | ETA: {eta:}'.format(
filesDone = self.fileCounter,
filesTotal = self.filecount,
bytesDownloaded = human_readable_size(self.bytesDownloaded),
bytesTotal = human_readable_size(self.bytesTotal),
percentDone = percDone,
dlSpeed = human_readable_size(speed),
eta = etaStr
), extra = extra)
class _SVNUpdateOutputProcessor:
"""helper class for svn update"""
def __init__(self):
self.fileCounter = 0
self.stats = {}
self.lastTime = time.time()
def report(self):
if self.fileCounter > 0:
fileStats = []
if 'A' in self.stats:
fileStats.append('{: >6} Added'.format(self.stats['A']))
if 'D' in self.stats:
fileStats.append('{: >6} Deleted'.format(self.stats['D']))
if 'U' in self.stats:
fileStats.append('{: >6} Updated'.format(self.stats['U']))
if 'C' in self.stats:
fileStats.append('{: >6} Conflicted'.format(self.stats['C']))
if 'M' in self.stats:
fileStats.append('{: >6} Merged'.format(self.stats['M']))
if 'B' in self.stats:
fileStats.append('{: >6} Broken'.format(self.stats['B']))
if len(fileStats) > 0:
#c = proc.io_counters()
filesStr = 'svn update file progress: ' + ', '.join(fileStats)
extra = {
'type': 'task_progress',
'svn_state': 'updating',
'stats': self.stats,
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info(filesStr, extra = extra)
def finalReport(self):
self.report()
def progress(self, line):
line = line.strip()
self.fileCounter += 1
res = svn_update_re.findall(line)
if res:
res = res[0]
mode = res[0]
if mode == 'Updating':
return
if not mode in self.stats:
self.stats[mode] = 0
self.stats[mode] += 1
else:
extra = {
'type': 'task_progress',
'svn_state': 'updating',
}
if hasattr(self, 'shellTask'):
self.shellTask._fillLoggerTaskData(extra)
logger.info(line, extra = extra)
dt = time.time() - self.lastTime
if dt > 10:
self.lastTime = time.time()
self.report()
def svnCheckout(**execCtx):
# look how much stuff is out there
pr = _SVNListOutputProcessor()
shellTask = zoe_ci.tasks.ShellTask('svn list -v -R {vcs.url:} --non-interactive --username "{vcs.username:}" --password "{vcs.password:}"', callbackClass=pr, **execCtx)
ret, _ = shellTask.run()
svn_files, total_bytes = pr.getResults()
if ret == 0:
extra = {
'svn_state': 'about_to_checkout',
'fileCount': len(svn_files),
'fileSize': total_bytes,
}
shellTask._fillLoggerTaskData(extra)
logger.info('About to download {fileCount:} files / {fileSize:} ...'.format(
fileCount = len(svn_files),
fileSize = human_readable_size(total_bytes)
), extra = extra)
reporter = _SVNCheckoutOutputProcessor(svn_files, total_bytes, **execCtx)
    ret, _ = exec('svn checkout {vcs.url:} . --config-option config:miscellany:use-commit-times=yes --non-interactive --username "{vcs.username:}" --password "{vcs.password:}" -r {vcs.targetRevision:}', callbackClass=reporter, **execCtx)
return ret == 0
def svnCleanup(**execCtx):
logger.info("Cleaning up svn ...", extra = {'svn_state': 'cleanup'})
retA, _ = exec('svn cleanup . --non-interactive', callbackClass=_ProgressReporterSimple(), optional=True, **execCtx)
retB, _ = exec('svn cleanup . --remove-unversioned --remove-ignored --vacuum-pristines --include-externals --non-interactive', callbackClass=_ProgressReporterSimple(), optional=True, **execCtx)
return retA == 0 and retB == 0
def svnResetPath(**execCtx):
if os.name == 'nt':
        ret, _ = exec('rmdir /S /Q "{WORKSPACE:}\\{vcs.outPath:}"', title='recursive delete', **execCtx)
if ret == 0:
return True
else:
logger.error("Unable to reset svn path {} - files still in use?".format(execCtx['vcs'].outPath))
return False
    recursive_delete(execCtx['vcs'].outPath)
return True
def _parseSVNInfo(lines):
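    """Parse 'key: value' lines from `svn info` output into a dict; 'Revision' is coerced to int (0 when absent)."""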
res = {}
for i in range(0, len(lines)):
args = lines[i].strip().split(': ')
if(len(args) == 2):
res[args[0]] = args[1]
if 'Revision' in res:
res['Revision'] = int(res['Revision'])
else:
res['Revision'] = 0
return res
def _getSVNInfoLocal(**execCtx):
ret, lines = exec('svn info . --non-interactive', optional=True, **execCtx)
if ret == 0:
return _parseSVNInfo(lines), False
elif ret == 1:
# E155007: '<>' is not a working copy
        if len(lines) > 0 and 'E155007' in lines[0]:
return None, True
return None, False
def _getSVNInfoRemote(**execCtx):
ret, lines = exec('svn info --non-interactive --username "{vcs.username:}" --password "{vcs.password:}" {vcs.url:}', optional=True, **execCtx)
if ret == 0:
return _parseSVNInfo(lines)
return None
def svnUpdate(**execCtx):
with GenericTask("svn update") as t:
# ok, look at the local directory first
svnInfo_local, notAnSVNRepo = _getSVNInfoLocal(**execCtx)
if svnInfo_local:
svnInfo_remote = _getSVNInfoRemote(**execCtx)
if svnInfo_remote:
if svnInfo_local['Revision'] == svnInfo_remote['Revision']:
logger.info("Updating to revision {} ...".format(
svnInfo_remote['Revision']
), extra = {
'svn_state': 'updating_to_resume',
'svn_revision_to': svnInfo_remote['Revision']
})
else:
logger.info("Updating {} revisions from {} to {} ...".format(
svnInfo_remote['Revision'] - svnInfo_local['Revision'],
svnInfo_local['Revision'], svnInfo_remote['Revision']
), extra = {
'svn_state': 'updating_to',
'svn_revision_from': svnInfo_local['Revision'],
'svn_revision_to': svnInfo_remote['Revision']
})
pr = _SVNUpdateOutputProcessor()
ret, lines = exec('svn update . --force --accept tf --config-option config:miscellany:use-commit-times=yes --non-interactive --username "{vcs.username:}" --password "{vcs.password:}" -r {vcs.targetRevision:}', callbackClass=pr, optional=True, **execCtx)
if ret == 0:
logger.info("Updating done.", extra = {'svn_state': 'update_done'})
return True
elif ret == 1:
#svn: E155037: Previous operation has not finished; run 'cleanup' if it was interrupted
                if len(lines) > 0 and 'E155037' in lines[0]:
if svnCleanup(**execCtx):
return False
else:
logger.warn('Unable to clean up. Resetting ...', extra = {'svn_state': 'cleanup'})
if not svnResetPath(**execCtx):
logger.fatal('Unable to reset path')
return False
return False
return False
if notAnSVNRepo:
logger.warn('Working checkout is completely corrupted: {WORKSPACE:}\\{vcs.outPath:} - Resetting...'.format(WORKSPACE= os.environ['WORKSPACE'], **execCtx), extra = {'svn_state': 'corrupted'})
if not svnResetPath(**execCtx):
logger.fatal('Unable to reset path')
return False
return False
return False
def logSVNVersion():
"""For debugging purposes, logs the svn version"""
try:
svn_version = exec('svn --version --quiet', optional=True)[1][0]
logger.debug('Found svn version {svn_version:}'.format(svn_version = svn_version), extra = {'svn_version': svn_version})
    except Exception:
        logger.warning('Unable to get svn version')
def svnSync(vcs: VCS):
with GenericTask("svn sync") as t:
execCtx = {'vcs': vcs, 'parentTask': t, 'workingDirectory': vcs.outPath}
res = False
if not exec_available('svn'):
raise Exception('svn executable not usable')
logSVNVersion()
os.chdir(os.environ['WORKSPACE'])
if os.path.exists(vcs.outPath):
for i in range(1, 5):
if svnUpdate(**execCtx):
res = True
break
else:
res = svnCheckout(**execCtx)
return res | zoe-ci | /zoe_ci-0.1.38.tar.gz/zoe_ci-0.1.38/zoe_ci/svnUtils.py | svnUtils.py |
[](https://pypi.org/project/zoegas/) [](https://travis-ci.org/cltk/old_norse_dictionary_zoega)
# Old Norse Dictionary Zoega
Zoëga's A Concise Dictionary of Old Icelandic parser
With the **reader.py** module, you can:
* search a word with an edit distance below a given threshold,
* extract the POS tags in dictionary entries,
* search for exact entry and approximate entry.
However, the POS tag extractor is not very efficient; more special cases need to be handled.
TODO list:
* [x] look up a word in dictionary
* [x] search all words in dictionary which are at most at a given edit distance with the query word
* [x] for a given dictionary entry, give all its inflected forms (partially done),
* [ ] handle more dictionary entries,
* [ ] process all entries so that we would get virtually all the Old Norse words,
* [ ] for each form, we can associate lemmas with a proposed POS tag.
## Elaboration of data
Data come from [https://github.com/GreekFellows/lesser-dannatt](https://github.com/GreekFellows/lesser-dannatt) and [http://norroen.info/dct/zoega](http://norroen.info/dct/zoega) by Tim Ermolaev.
Then `utils.first_step()` is launched. Files are modified in order to ensure
XML syntax consistency; finally, `utils.second_step()` is launched.
| zoegas | /zoegas-1.3.0.tar.gz/zoegas-1.3.0/README.md | README.md |
<p align="center">
<a href="https://gitlab.com/dkreeft/pycasino">
<img align="center" src="https://gitlab.com/dkreeft/zoek/-/raw/master/logo.png" width="174" height="170" />
</a>
</p>
# zoek - find files and directories
<p align="center">
<a href="https://gitlab.com/dkreeft/zoek/"><img src="https://gitlab.com/dkreeft/zoek/badges/master/pipeline.svg?style=flat alt="pipeline status"></a>
<a href="https://gitlab.com/dkreeft/zoek/"><img src="https://gitlab.com/dkreeft/zoek/badges/master/coverage.svg?style=flat" alt="code coverage"></a>
</p>
zoek (Dutch for "search") is a Python library and command-line utility aiming to duplicate and extend the functionality of the find command-line utility.
## Installation
[pip](https://pip.pypa.io/en/stable/) can be used to install zoek:
```bash
pip install zoek
```
However, we recommend to install zoek using [pipx](https://github.com/pipxproject/pipx):
```bash
pipx install zoek
```
## Usage
zoek can be used as a command-line utility as follows:
```bash
zoek <dir>
```
zoek currently supports the following flags:
* `--depth` or `-d` to indicate the depth of directories and files to return (default: 1):
```bash
zoek <dir> -d <int>
```
* `--startswith` or `-s` to return files and directories starting with the provided string:
```bash
zoek <dir> -s <str>
```
* `--contains` or `-c` to return files and directories that contain the provided string:
```bash
zoek <dir> -c <str>
```
* `--minsize` or `-m` to filter output on size, a positive int returns files equal or larger, a negative int returns files smaller than input:
```bash
zoek <dir> -m <int>
```
* `--datecreated` or `-dc` to filter output on time created, a positive int returns files created more than int minutes ago, a negative int returns files created less than int minutes ago:
```bash
zoek <dir> -dc <int>
```
* `--datemodified` or `-dm` similar to `--datecreated`, but then for filtering date modified:
```bash
zoek <dir> -dm <int>
```
As filters stack, multiple flags can be used simultaneously.
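For example, to list entries up to two levels deep whose names start with `log` and that are at least 1024 bytes (the values here are purely illustrative):
```bash
zoek <dir> -d 2 -s log -m 1024
```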
## Contributing
Please refer to [CONTRIBUTING.md](https://gitlab.com/dkreeft/zoek/-/blob/master/CONTRIBUTING.md)
## License
[BSD-3](https://gitlab.com/dkreeft/zoek/-/blob/master/LICENSE)
| zoek | /zoek-0.0.3.tar.gz/zoek-0.0.3/README.md | README.md |
# Zoetrope
This is a simple tool for creating unittest code stubs from a python file. Use it
by typing:
```python
zoetrope "relative_path/to/my/file.py"
zoetrope "pythonic.path.to.my.file"
```
The result is that zoetrope will print something like:
```python
import unittest
import a_custom_class
class TestModule(unittest.TestCase): # tests for module methods
def test_a_function(self): # will use name of function in place of 'a_function'
pass
def test_2(self):
pass
class TestClass(unittest.TestCase): # a class contained in the target module
def test_a_function(self): # tests a class method or instance method
pass
```
There is a hidden assumption that *your code will already run locally* - `zoetrope`
will fail if you try to generate tests for files which have ImportErrors or other
errors in them. Additionally, any relative path that uses `../` will not work.
## Future Development
1. Pytest functionality
2. Better relative path support/hacks
3. Mocks for specific calls made to external modules
| zoetrope | /zoetrope-0.2.tar.gz/zoetrope-0.2/README.md | README.md |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
# a Binomial class that inherits from the Distribution class.
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (integer) representing the number of trials
"""
# A binomial distribution is defined by two variables:
# the probability of getting a positive outcome
# the number of trials
# If you know these two values, you can calculate the mean and the standard deviation
#
# For example, if you flip a fair coin 25 times, p = 0.5 and n = 25
# You can then calculate the mean and standard deviation with the following formula:
# mean = p * n
# standard deviation = sqrt(n * p * (1 - p))
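    #           e.g. for the coin example: mean = 0.5 * 25 = 12.5, standard deviation = sqrt(25 * 0.5 * 0.5) = 2.5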
#
# define the init function
def __init__(self, prob=.5, size=20):
self.p = prob
self.n = size
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
mean = self.p * self.n
self.mean = mean
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
std = math.sqrt(self.n * self.p * (1 - self.p))
self.stdev = std
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set. The function updates the p and n variables of the object.
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data)/len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
return self.p, self.n
    def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self,k):
"""Probability density function calculator for the binomial distribution.
Args:
k (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
first = (math.factorial(self.n))/(math.factorial(k) * math.factorial(self.n - k))
return first * ((self.p)**k) * ((1 - self.p)**(self.n - k))
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self,other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
# use the __repr__ magic method to output the characteristics of the binomial distribution object.
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial object
"""
return "mean {}, standard deviation {}, p {}, n {}".format(self.mean, self.stdev,self.p,self.n) | zoey-distributions | /zoey_distributions-1.0.tar.gz/zoey_distributions-1.0/zoey_distributions/Binomialdistribution_challenge.py | Binomialdistribution_challenge.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | zoey-distributions | /zoey_distributions-1.0.tar.gz/zoey_distributions-1.0/zoey_distributions/Gaussiandistribution.py | Gaussiandistribution.py |
zof: OpenFlow Micro-Framework
=============================
|MIT licensed| |Build Status| |codecov.io|
*zof* is a Python framework for creating asyncio-based applications that control
the network using the OpenFlow protocol.
Supported Features
------------------
- OpenFlow versions 1.0 - 1.4 (with partial support for 1.5)
- TLS connections
- Limited packet parsing and generation: ARP, LLDP, IPv4, IPv6, UDP, TCP, ICMPv4, ICMPv6
- App's can simulate switches; Supports both sides of the OpenFlow protocol
Requirements
------------
- Python 3.5.1 or later
- oftr command line tool
Install - Linux
---------------
.. code:: bash
# Install /usr/bin/oftr dependency.
sudo add-apt-repository ppa:byllyfish/oftr
sudo apt-get update
sudo apt-get install oftr
# Create virtual environment and install latest zof.
python3.5 -m venv myenv
source myenv/bin/activate
pip install zof
The oftr command line tool can also be installed on a `Raspberry PI or using HomeBrew <https://github.com/byllyfish/oftr/blob/master/docs/INSTALL.rst>`_.
Demos
-----
To run the layer2 controller demo::
python -m zof.demo.layer2
Architecture
------------
*zof* uses a separate *oftr* process to terminate OpenFlow connections and translate OpenFlow messages to JSON.
.. figure:: doc/sphinx/_static/img/zof_architecture.png
:align: center
:alt: Architecture diagram
Architecture: The oftr process translates OpenFlow to JSON.
You construct OpenFlow messages via YAML strings or Python dictionaries. Incoming OpenFlow messages are generic Python objects. Special OpenFlow constants such as 'NO_BUFFER' appear as strings.
.. code:: yaml
type: FLOW_MOD
msg:
command: ADD
match:
- field: IN_PORT
value: 1
- field: ETH_DST
value: 00:00:00:00:00:01
instructions:
- instruction: APPLY_ACTIONS
actions:
- action: OUTPUT
port_no: 2
The basic building block of zof is an *app*. An *app* is associated with various message and event handlers.
You create an app object using the ``zof.Application`` class. Then, you associate handlers using the app's `message` decorator.
.. code:: python
import zof
APP = zof.Application('app_name_here')
@APP.message('packet_in')
def packet_in(event):
APP.logger.info('packet_in message %r', event)
@APP.message(any)
def other(event):
APP.logger.info('other message %r', event)
if __name__ == '__main__':
zof.run()
Place the above text in a file named ``demo.py`` and run it with ``python demo.py``. This app handles OpenFlow 'PACKET_IN' messages using the ``packet_in`` function. All other messages are dispatched to the ``other`` function. The app does not do anything; it just logs events.
To compose the demo.py program with the layer2 demo::
python demo.py --x-modules=zof.demo.layer2
.. |MIT licensed| image:: https://img.shields.io/badge/license-MIT-blue.svg
:target: https://raw.githubusercontent.com/byllyfish/zof/master/LICENSE.txt
.. |Build Status| image:: https://travis-ci.org/byllyfish/zof.svg?branch=master
:target: https://travis-ci.org/byllyfish/zof
.. |codecov.io| image:: https://codecov.io/gh/byllyfish/zof/coverage.svg?branch=master
:target: https://codecov.io/gh/byllyfish/zof?branch=master
| zof | /zof-0.19.0-py3-none-any.whl/zof-0.19.0.dist-info/DESCRIPTION.rst | DESCRIPTION.rst |
# ZOG Utils
Utilities to use in my base-api project template. https://github.com/tienhm0202/base-fastapi/
Why ZOG? Because I can't named it as `utils` only, so I have to add a prefix.
ZOG sounds like `joke` and looks like `zoo`. I found that funny enough to use.
# Usage
```bash
$ pip install zogutils
```
## To generate short unique id string
```python
from zogutils import secret
secret.unique_id(8, "ID_")
# return: ID_a7uFg9k0
```
## To shorten package name like Java's Logback
```python
from zogutils import package_name
package_name.shorten("company.scope.modules.Function", 9)
# return: (something like) c.s.m.Function - depends on max length
```
## To init some middlewares
```python
from zogutils import middlewares
from your.app import settings, fastapi_app
middlewares.init_app(fastapi_app, settings)
```
### Configs:
```
# Sentry
SENTRY_DSN: Optional[HttpUrl] = None
SENTRY_INCLUDE: Optional[List[str]] = ["src"]
SENTRY_SAMPLE_RATE: Optional[float] = 0.5
# CSRF
SECURITY_CSRF: bool = False
# Rate limit
RATE_LIMIT: int = 100
RATE_LIMIT_TIME_SPAN: int = 30
RATE_LIMIT_BLOCK_DURATION: int = 300
# Prometheus
PROMETHEUS_ENABLE: bool = True
PROMETHEUS_PATH: str = "/metrics/"
# Cors
BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = []
``` | zogutils | /zogutils-1.5.1.tar.gz/zogutils-1.5.1/README.md | README.md |
# ZiP
ZOGY in Parallel (ZiP) is a fast(ish) computation of proper image subtraction [B.Zackay, E.Ofek, A.Gal-Yam (2016)](http://iopscience.iop.org/article/10.3847/0004-637X/830/1/27/pdf). Inspired by [Ofek 2014](http://adsabs.harvard.edu/abs/2014ascl.soft07005O) and [pmvreeswijk](https://github.com/pmvreeswijk/ZOGY). ZiP offers a faster subtraction at the expense of a more comprehensive input, i.e. the program should be tailored for one telescope or input of images. This code has a parallel function; however, it requires 6+ cores to operate. This particular case is optimised for the Gravitational-Wave Optical Transient Observer ([GOTO](https://goto-observatory.org/)). However, simple fudging of the parameters should make it possible to make this work for other telescopes.
An internal version of [spalipy](https://github.com/GOTO-OBS/spalipy) has been added as the alignment algorithm. This uses sextractor to find source locations in two images and then aligns them with an affine transform. Residuals are then used to build a 2D spline surface to correct for warping due to large field distortions.
Finally, a parallel version of [proper coaddition](https://arxiv.org/abs/1512.06879) is used for stacking images. It still isn't incredibly fast for on-the-spot coaddition, so a median-combine tool is also included.
---
In serial, the program takes ~ 2:06 per subtraction [for a field 8000 x 6000 pixels]
In parallel, it takes ~ 33s per subtraction [for a field 8000 x 6000 pixels]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ryan Cutter
V1.4.00 (25/02/2019)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Zoho Analytics Connector for Python
========================
Zoho's Python SDK for Zoho Reports is old; however, it is very complete.
This is a version that is Python 3 ready, tested on Python 3.8 and 3.9, and in fairly substantial production use.
A more convenient wrapper class is in enhanced_report_client. This is based on Zoho's ReportClient but provides some more convenient features.
I use it mostly for uploading data, and creating and modifying tables.
This library uses the Analytics V1 API and, as of v1.4, will phase in v2 API endpoints bit by bit.
Authentication
==============
AuthTokens are now retired, replaced with OAuth2.
OAuth2 notes are below.
When you create EnhancedZohoAnalyticsClient or ReportClient, you need to pass ClientID, ClientSecret and a RefreshToken.
The RefreshToken is the equivalent of the old AuthToken.
To use AuthToken (retired authentication method), pass the AuthToken, and set ClientID and ClientSecret to none.
The test cases give some hints.
For OAuth2:
----------
#### Note: Which user?
Only the admin user, the owner, can make the Self Client. Other users, even organisational admins, won't work.
### Choose the correct API Console site
You need to be aware of the Zoho hosting domain, e.g. .com or .com.au etc.
<b>Site to visit</b>
https://api-console.zoho.com/ or
https://api-console.zoho.com.au
or
...
Self Clients are an easy start to getting authenticated. They are suitable for server-based applications, because there is no user-interaction.
You choose Self Client when you 'register' (create) a new app. A Self Client means that you interactively get a Refresh Token.
OAuth2 is mostly designed for flows where the user interactively approves: the Self Client approach is the equivalent of the old AuthToken, requiring no user action.
However, you need to access the Zoho Analytics account as the admin user.
In the UI, they have purple hexagonal icons. You are limited to one self-client, so the scope may need to be shared with your other usages amongst Zoho APIs.
So, create a Self Client (at least, to experiment)
<b>Tip: The scope for full access</b>
ZohoAnalytics.fullaccess.all
I paste this into the Scope Description as well.
Make the 'Time Duration' the maximum: 10 minutes.
Now is a good time to copy the correct curl template from below into a text editor.
Choose "Create"
Now with data gathered (client id, client secret, the code which expires in a few minutes, the scope), execute a POST to
https://accounts.zoho.com/oauth/v2/token?code=
or for Zoho Australia (.com.au)
https://accounts.zoho.com.au/oauth/v2/token?code=
or to the URL matching your Zoho data centre.
### Using curl to POST
You can do this from a terminal with curl:
curl -d "code=1000.dedaa...&client_id=1000.2TY...&client_secret=b74103c...&grant_type=authorization_code&scope=ZohoAnalytics.fullaccess.all" \
-X POST https://accounts.zoho.com/oauth/v2/token
or for Australia
curl -d "code=1000.dedaa...&client_id=1000.2TY...&client_secret=b74103c...&grant_type=authorization_code&scope=ZohoAnalytics.fullaccess.all" \
-X POST https://accounts.zoho.com.au/oauth/v2/token
and you should get back JSON which looks like this:
{"access_token":"1000....","refresh_token":"1000.53e...","expires_in_sec":3600,"api_domain":"https://www.zohoapis.com","token_type":"Bearer","expires_in":3600000}
Save this somewhere safe; it is confidential. The refresh token is permanent; it is basically the same as the old authtoken.
NOTE!!! For Australian-hosted Zoho accounts and other regional variations:
The token URL is adapted for the server location. e.g. for Australia, post to https://accounts.zoho.com.au/oauth/v2/token
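
If you prefer to do the POST from Python rather than curl, a minimal sketch with `requests` looks like this (all credential values below are placeholders, as in the curl example):

    import requests

    # Substitute your Self Client values; use the accounts domain for your region.
    resp = requests.post(
        "https://accounts.zoho.com/oauth/v2/token",
        data={
            "code": "1000.dedaa...",  # the short-lived code from the API console
            "client_id": "1000.2TY...",
            "client_secret": "b74103c...",
            "grant_type": "authorization_code",
            "scope": "ZohoAnalytics.fullaccess.all",
        },
    )
    resp.raise_for_status()
    print(resp.json()["refresh_token"])  # permanent; store it securely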
Usage
=====
Zoho's full API v1 is available through the ReportClient API.
Selectively, v2 API endpoints will be added to the ReportClient if they are useful.
One example of this is get_metadata_api_v2()
Note that for data import and export, my EnhancedReportClient has its own methods, and these are what I use in production so they are much better tested.
class EnhancedZohoAnalyticsClient(ReportClient)
is a higher level layer.
The tests show how to use it:
Setup necessary values (database is the Z.A. Workspace name)
Config class is used in the testcases as a convenience.
class Config:
LOGINEMAILID = os.getenv('ZOHOANALYTICS_LOGINEMAIL')
AUTHTOKEN = os.getenv('ZOHOANALYTICS_AUTHTOKEN')
DATABASENAME = os.getenv('ZOHOANALYTICS_DATABASENAME')
Make the API instance:
rc = EnhancedZohoAnalyticsClient(
login_email_id=Config.LOGINEMAILID,
token=Config.REFRESHTOKEN if TEST_OAUTH else Config.AUTHTOKEN,
clientSecret=Config.CLIENTSECRET if TEST_OAUTH else None,
clientId=Config.CLIENTID if TEST_OAUTH else None,
default_databasename=Config.DATABASENAME,
serverURL=Config.SERVER_URL,
reportServerURL=Config.REPORT_SERVER_URL,
default_retries=3
)
Australian and EU Zoho Servers
------------------------------
The default root of the main server is (ServerURL)```https://accounts.zoho.com```
and the default root of the Analytics API server (reportServerURL) is ```https://analyticsapi.zoho.com```
You can provide alternatives via the parameters: ```serverURL``` and ```reportServerURL``` (because you are using a non-US zoho data location)
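
For example, for an Australian-hosted account the client from the example above could be created like this (a sketch; the Config values are as before):

    rc = EnhancedZohoAnalyticsClient(
        login_email_id=Config.LOGINEMAILID,
        token=Config.REFRESHTOKEN,
        clientSecret=Config.CLIENTSECRET,
        clientId=Config.CLIENTID,
        default_databasename=Config.DATABASENAME,
        serverURL="https://accounts.zoho.com.au",
        reportServerURL="https://analyticsapi.zoho.com.au",
    )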
Retry exceptions
---------------
In development: calling `enhanced_zoho_analytics_client.data_upload(...)` or `report_client.import_data(...)` can raise one of two exceptions for API limits:
UnrecoverableRateLimitError
RecoverableRateLimitError
Managing retries is a beta feature but I am using it in production. It is opt-in except where I was already doing retry.
The retry logic is in
def __sendRequest(self, url, httpMethod, payLoad, action, callBackData,retry_countdown=None):
It attempts to differentiate between recoverable and non-recoverable errors. Recoverable errors so far are temporary rate limit errors, errors due to another update running on the same table, and token refresh errors.
It should be enhanced to use smarter retry timing, but first I will see if this works under production loads.
Change in v.1.2.0
You can pass default_retries when creating the client, or you can set it on an existing client.
This will be the retry count if none is specified. This means you can use retries with the 'low-level' report_client methods by setting a retry level at the EnhancedZohoAnalyticsClient level (actually, the attribute is added to ReportClient)
e.g.
zoho_enhanced_client.default_retries = 5
and then 'low-level' methods such as add_column() will get the benefit of the retry logic.
Of course, you should be careful to test this.
Do some stuff
-------------
<b>Get table metadata </b>
def test_get_database_metadata(get_enhanced_zoho_analytics_client):
enhanced_rc = get_enhanced_zoho_analytics_client
table_meta_data = enhanced_rc.get_table_metadata()
assert table_meta_data
<b>Push data </b>
def test_data_upload(get_enhanced_zoho_analytics_client:EnhancedZohoAnalyticsClient):
try:
with open('StoreSales.csv', 'r') as f:
import_content = f.read()
except Exception as e:
print("Error Check if file StoreSales.csv exists in the current directory!! ", str(e))
return
# import_modes = APPEND / TRUNCATEADD / UPDATEADD
impResult = get_enhanced_zoho_analytics_client.data_upload(import_content=import_content,table_name="sales")
assert(impResult)
try:
with open('Animals.csv', 'r') as f:
import_content2 = f.read()
except Exception as e:
print("Error Check if file Animals.csv exists in the current directory!! ", str(e))
return
impResult2 = get_enhanced_zoho_analytics_client.data_upload(import_content=import_content2, table_name="animals")
assert (impResult2)
<b>Run SQL</b>. You can join tables. The rows are returned as a csv.DictReader. If you pass ' characters into IN(...) clauses,
you need to escape them yourself (double the ').
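
For example, a value containing an apostrophe must have it doubled inside the SQL literal (the table and column names here are illustrative):

    sql = "select * from customers where surname IN('O''Brien')"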
def test_data_download(get_enhanced_zoho_analytics_client):
sql="select * from sales"
result = get_enhanced_zoho_analytics_client.data_export_using_sql(sql=sql,table_name="sales")
assert result
#the table name does not matter
sql="select * from animals"
result = get_enhanced_zoho_analytics_client.data_export_using_sql(sql=sql,table_name="sales",retry_countdown=10)
assert result
You can cache a query too, if you provide a cache object which has the same interface as Django's cache:
https://docs.djangoproject.com/en/3.1/topics/cache/
That is, the cache object needs to offer cache.set(...) and cache.get(...) as Django's does.
from django.core.cache import cache
def test_data_download(get_enhanced_zoho_analytics_client):
sql="select * from sales"
result = get_enhanced_zoho_analytics_client.data_export_using_sql(sql=sql,table_name="sales",cache_object=cache,
cache_timeout_seconds=600,retry_countdown=10)
assert result
result = get_enhanced_zoho_analytics_client.data_export_using_sql(sql=sql,table_name="sales",cache_object=cache, cache_timeout_seconds=600)
assert result
<b>Delete rows</b>
def test_deleteData(enhanced_zoho_analytics_client):
""" This tests the underlying ReportClient function.
for criteria tips see https://www.zoho.com/analytics/api/?shell#applying-filter-criteria"""
enhanced_client = get_enhanced_zoho_analytics_client()
animals_table_uri = enhanced_client.getURI(dbOwnerName=enhanced_client.login_email_id,
dbName=enhanced_client.default_databasename,
tableOrReportName='animals')
criteria = """ 'Rabbit' in "common_name" """
row_count = enhanced_client.deleteData(tableURI=animals_table_uri,criteria=criteria,retry_countdown=10)
<b>create a table</b>
zoho_sales_fact_table = {
'TABLENAME': 'sales_fact',
'COLUMNS': [
{'COLUMNNAME':'inv_date', 'DATATYPE':'DATE'},
{'COLUMNNAME':'customer', 'DATATYPE':'PLAIN'},
{'COLUMNNAME':'sku', 'DATATYPE':'PLAIN'},
{'COLUMNNAME':'qty_invoiced', 'DATATYPE':'NUMBER'},
{'COLUMNNAME':'line_total_excluding_tax', 'DATATYPE':'NUMBER'}]
}
def test_create_table(get_enhanced_zoho_analytics_client):
#is the table already defined?
try:
zoho_table_metadata = get_enhanced_zoho_analytics_client.get_table_metadata()
except ServerError as e:
if getattr(e, 'message') == 'No view present in the workspace.':
zoho_table_metadata = {}
else:
raise
zoho_tables = set(zoho_table_metadata.keys())
if "sales_fact" not in zoho_tables:
get_enhanced_zoho_analytics_client.create_table(table_design=zoho_sales_fact_table)
else:
#get an error, but error handling is not working, the API returns a 400 with no content in the message
r = get_enhanced_zoho_analytics_client.create_table(table_design=zoho_sales_fact_table)
print (r)
Changes
-------------
1.4.3 Exponential backoff with jitter used for retry
1.4.2 Added reporting_currency to enhanced_reporting_client
1.4.1 Something seems to have changed with the UTF encoding returned by the export endpoint. Changed decoding to use utf-8-sig
1.4.0 some adaptation towards new API from Zoho
1.3.6 Documentation updates, test updates. Added a 'pre-delete function' to calculate how many rows should be deleted.
deleteData returns an int not a string for the number of rows deleted.
1.3.3 - 1.3.5 Handle some more Zoho exceptions
1.3.2 Under heavy concurrent load, an oauth token error was not being caught. Fixed; new token is generated and a retry occurs.
1.3.1 Some small improvements to Zoho error handling
1.3.0 Retry on connection errors. First effort at a test case covering an exception.
Simple little helper script to get the auth token from a self-client, get_token.py. This requires PySimpleGUI.
1.2.5 Merged PR with code clean ups, thanks gredondogc
1.2.4 Readme changes
1.2.3 LICENSE file updated to include text relating to the licence
1.2.2 Workaround for error code detection when json decoding fails. Fixed a bug around exception handling
1.2.1 Emoticons are unicode but Analytics raises an error on import. I am now using the emoji library in enhanced_report_client.data_upload to look for emojis and replace them.
so 'Ok to leave at front door 🙂' becomes 'Ok to leave at front door :slightly_smiling_face:'
1.2.0 Specify a default retry count when making report_client or enhanced_report_client
1.1.2 fix issue #2 to fix criteria on export. Added test case.
1.1.1 minor fixes
1.1.0.1 Documentation fixes
1.1.0 Treat "another import is in progress" as a recoverable error (can be retried)
Move home-made retry logic to low level: report_client.__sendRequest(), and make retry optionally available to the key functions in EnhancedZohoAnalyticsClient.
Functions can pass retry_countdown to use retry. The retry handling is coping well under some initial use in high volume production loads.
1.0.4 Documentation improvements
1.0.3 Some slightly better error handling if Zoho returns an empty response
import csv
import json
import logging
import time
from typing import MutableMapping, Optional, List
import emoji
from . import report_client as report_client
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
logger.addHandler(ch)
""" add some helper functions on top of report_client"""
class EnhancedZohoAnalyticsClient(report_client.ReportClient):
@staticmethod
def process_table_meta_data(catalog, force_lowercase_column_names=False):
""" catalog is a ZOHO_CATALOG_INFO dict. Call this from get_database_metadata for example
Return a dict keyed by tablename, each item being a dict keyed by column name, with the item being the
catalog info for the col
So all the table names can be found as table_data.keys()
for for a given table name, the column names are table_data['table1'].keys()
and to find the column meta data such as dataType:
data['table1']['col1']['typeName']
Zoho gives each type an integer coding ['dataType'], a descriptive datatype name ['typeName'],
and some other meta data.
"""
db_name = catalog['tableCat']
table_data = {}
for table in catalog['views']:
if table['tableType'] == 'TABLE':
table_data[table['tableName']] = {}
col_data = table_data[table['tableName']]
for col in table['columns']:
if force_lowercase_column_names:
col_data[col['columnName'].lower()] = col
else:
col_data[col['columnName']] = col
return table_data
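    # Illustrative use of the structure returned above (names are hypothetical):
    #   meta = EnhancedZohoAnalyticsClient.process_table_meta_data(catalog)
    #   for table_name, cols in meta.items():
    #       for col_name, col_info in cols.items():
    #           print(table_name, col_name, col_info['typeName'])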
def __init__(self, login_email_id: str, token: str, default_databasename: str = None, clientId=None,
clientSecret=None, serverURL=None, reportServerURL=None, default_retries=None,
reporting_currency: str = None,
error_email_list: Optional[List[str]] = None):
""" error email list is not used by the client, but it is available for callers as a convenience"""
self.login_email_id = login_email_id
self.default_databasename = default_databasename
self.error_email_list = error_email_list or [login_email_id]
self.reporting_currency = reporting_currency
super().__init__(token=token, clientId=clientId, clientSecret=clientSecret, serverURL=serverURL,
reportServerURL=reportServerURL, default_retries=default_retries)
def get_database_catalog(self, database_name: str = None) -> MutableMapping:
db_uri = self.getDBURI(self.login_email_id, database_name or self.default_databasename)
catalog_info = self.getDatabaseMetadata(requestURI=db_uri, metadata="ZOHO_CATALOG_INFO")
return catalog_info
def get_table_metadata(self, database_name: str = None, force_lowercase_column_names=False) -> MutableMapping:
database_name = database_name or self.default_databasename
catalog_info = self.get_database_catalog(database_name=database_name)
table_metadata = self.process_table_meta_data(catalog_info,
force_lowercase_column_names=force_lowercase_column_names)
return table_metadata
def create_table(self, table_design, database_name=None) -> MutableMapping:
"""
ZOHO_DATATYPE
(Supported data types are:
PLAIN
MULTI_LINE
EMAIL
NUMBER
POSITIVE_NUMBER
DECIMAL_NUMBER
CURRENCY
PERCENT
DATE
BOOLEAN
URL
AUTO_NUMBER
"""
db_uri = self.getDBURI(self.login_email_id, database_name or self.default_databasename)
columns = table_design['COLUMNS']
BIG_NUMBER_OF_COLUMNS = 10
if len(columns) < BIG_NUMBER_OF_COLUMNS: # too many columns and zoho rejects the very long URL
result = self.createTable(dbURI=db_uri, tableDesign=json.dumps(table_design))
else:
columns_initial, columns_residual = columns[:BIG_NUMBER_OF_COLUMNS], columns[BIG_NUMBER_OF_COLUMNS:]
table_design['COLUMNS'] = columns_initial
table_name = table_design['TABLENAME']
result = self.createTable(dbURI=db_uri, tableDesign=json.dumps(table_design))
time.sleep(1)
uri_addcol = self.getURI(self.login_email_id, database_name or self.default_databasename,
tableOrReportName=table_name)
for col in columns_residual:
self.addColumn(tableURI=uri_addcol, columnName=col['COLUMNNAME'], dataType=col['DATATYPE'])
return result
def data_upload(self, import_content: str, table_name: str, import_mode="TRUNCATEADD",
matching_columns: Optional[str] = None,
database_name: Optional[str] = None,
retry_limit=None,
date_format=None) -> Optional[report_client.ImportResult]:
""" data is a csv-style string, newline separated. Matching columns is a comma separated string
import_mode is one of TRUNCATEADD, APPEND, UPDATEADD
"""
retry_count = 0
retry_limit = retry_limit or self.default_retries
impResult = None
# import_content_demojized = emoji.demojize(import_content)
import_content_demojized = import_content # try without this, move data cleaning to the calling function
database_name = database_name or self.default_databasename
uri = self.getURI(dbOwnerName=self.login_email_id, dbName=database_name, tableOrReportName=table_name)
# import_modes = APPEND / TRUNCATEADD / UPDATEADD
impResult = self.importData_v2(uri, import_mode=import_mode, import_content=import_content_demojized,
date_format=date_format,
matching_columns=matching_columns, retry_countdown=retry_limit)
return impResult
def data_export_using_sql(self, sql, table_name, database_name: str = None, cache_object=None,
cache_timeout_seconds=60, retry_countdown=5) -> csv.DictReader:
""" returns a csv.DictReader after querying with the sql provided.
retry_countdown is the number of retries
The Zoho API insists on a table or report name, but it doesn't seem to restrict the query
The cache object has a get and set function like the django cache does: https://docs.djangoproject.com/en/3.1/topics/cache/
The cache key is the sql query"""
if cache_object:
returned_data = cache_object.get(sql)
else:
returned_data = None
if not returned_data:
database_name = database_name or self.default_databasename
uri = self.getURI(dbOwnerName=self.login_email_id, dbName=database_name or self.default_databasename,
tableOrReportName=table_name)
callback_data = self.exportDataUsingSQL_v2(tableOrReportURI=uri, format='CSV', sql=sql,
retry_countdown=retry_countdown)
returned_data = callback_data.getvalue().decode('utf-8-sig').splitlines()
if cache_object:
cache_object.set(sql, returned_data, cache_timeout_seconds)
reader = csv.DictReader(returned_data)
return reader
def delete_rows(self, table_name, sql, database_name: Optional[str] = None, retry_countdown=5) -> int:
""" criteria is SQL fragments such as 'a' in ColA, for example,
sql = f"{id_column} IN ('ce76dc3a-bac0-47dd-841a-70e66613958e')
return the count of eows
"""
if len(sql) > 5000:
raise RuntimeError("The SQL passed to delete_rows is too big and will cause Zoho 400 errors")
uri = self.getURI(dbOwnerName=self.login_email_id, dbName=database_name or self.default_databasename,
tableOrReportName=table_name)
row_count = self.deleteData(tableURI=uri, criteria=sql, retry_countdown=retry_countdown)
return row_count
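    # Illustrative call (table, column and id values are hypothetical):
    #   deleted = client.delete_rows(table_name="animals",
    #                                sql="\"common_name\" IN ('Rabbit')")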
def pre_delete_rows(self, table_name, sql, database_name: Optional[str] = None, retry_countdown=5):
""" uses the same sql input as delete_rows and counts what is present in the table. This is to check the nbr of rows deleted is correct.
Zoho sometimes gets table corruption which means rows don't delete.
When the operation is critical, you can use this function to check the nbr of rows deleted is correct.
"""
if len(sql) > 5000:
raise RuntimeError("The SQL passed to delete_rows is too big and will cause Zoho 400 errors")
sql = f"select count(*) from {table_name} where {sql}"
reader = self.data_export_using_sql(sql, table_name=table_name)
result = list(reader)[0]
try:
row_count = int(result['count(*)'])
except KeyError:
raise RuntimeError(f"Zoho returned unexpected data, did not found (count(8)) in the result: {result}")
return row_count
def rename_column(self, table_name, old_column_name, new_column_name, database_name: Optional[str] = None,
retry_countdown=5):
""" rename a column in a table """
uri = self.getURI(dbOwnerName=self.login_email_id, dbName=database_name or self.default_databasename,
tableOrReportName=table_name)
self.renameColumn(tableURI=uri, oldColumnName=old_column_name, newColumnName=new_column_name,
                          retry_countdown=retry_countdown)
import json
import io
import random
import re
import time
import logging
import urllib
import urllib.parse
import xml.dom.minidom
from typing import MutableMapping, Optional, Union
import requests
from requests.adapters import HTTPAdapter, Retry
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
logger.addHandler(ch)
def requests_retry_session(
retries=5,
backoff_factor=2,
status_forcelist=(500, 502, 503, 504),
session=None,
) -> requests.Session:
session = session or requests.Session()
retry_strategy = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
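# Note: with backoff_factor=2, urllib3's Retry sleeps roughly
# backoff_factor * 2 ** (retries_so_far - 1) seconds between retried attempts
# (about 2s, 4s, 8s, ...); status_forcelist controls which HTTP responses are retried.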
class ReportClient:
"""
ReportClient provides the python based language binding to the https based API of Zoho Analytics.
    @note: Authentication via authtoken is deprecated; use OAuth by constructing as ReportClient(token, clientId, clientSecret).
"""
isOAuth = False
request_timeout = 60
# clientId = None
# clientSecret = None
# refresh_or_access_token = None
# token_timestamp = time.time()
def __init__(self, token, clientId=None, clientSecret=None, serverURL=None, reportServerURL=None,
default_retries=0):
"""
Creates a new C{ReportClient} instance.
@param token: User's authtoken or ( refresh token for OAUth).
@type token:string
@param clientId: User client id for OAUth
@type clientId:string
@param clientSecret: User client secret for OAuth
@type clientSecret:string
@param serverURL: Zoho server URL if .com default needs to be replaced
@type serverURL:string
@param reportServerURL:Zoho Analytics server URL if .com default needs to be replaced
@type reportServerURL:string
"""
self.iamServerURL = serverURL or "https://accounts.zoho.com"
self.reportServerURL = reportServerURL or "https://analyticsapi.zoho.com"
self.requests_session = requests_retry_session(retries=default_retries)
self.clientId = clientId
self.clientSecret = clientSecret
self.accesstoken = token
self.token_timestamp = time.time() # this is a safe default
self.default_retries = default_retries
if (clientId == None and clientSecret == None): # not using OAuth2
self.__token = token
else:
self.getOAuthToken() # this sets the instance variable
ReportClient.isOAuth = True
self.request_timeout = 60
@property
def token(self):
if ReportClient.isOAuth and time.time() - self.token_timestamp > 50 * 60:
logger.debug("Refreshing zoho analytics oauth token")
token = self.getOAuthToken()
self.__token = token
self.token_timestamp = time.time()
return self.__token
@token.setter
def token(self, token):
self.__token = token
self.token_timestamp = time.time()
def getOAuthToken(self):
"""
Internal method for getting OAuth token.
"""
dict = {}
dict["client_id"] = self.clientId
dict["client_secret"] = self.clientSecret
dict["refresh_token"] = self.accesstoken
dict["grant_type"] = "refresh_token"
# dict = urllib.parse.urlencode(dict) we should pass a dict, not a string
accUrl = self.iamServerURL + "/oauth/v2/token"
respObj = self.getResp(accUrl, "POST", dict, add_token=False)
if (respObj.status_code != 200):
raise ServerError(respObj)
else:
resp = respObj.response.json()
if ("access_token" in resp):
self.__token = resp['access_token']
return resp["access_token"]
else:
raise ValueError("Error while getting OAuth token ", resp)
def getResp(self, url: str, httpMethod: str, payLoad, add_token=True, extra_headers=None, **kwargs):
"""
Internal method. for GET, payLoad is params
"""
requests_session = self.requests_session or requests_retry_session()
if httpMethod.upper() == 'POST':
headers = {}
if add_token and ReportClient.isOAuth and hasattr(self,
'token'): # check for token because this can be called during __init__ and isOAuth could be true.
headers["Authorization"] = "Zoho-oauthtoken " + self.token
headers['User-Agent'] = "ZohoAnalytics Python GrowthPath Library"
if extra_headers:
headers = {**headers, **extra_headers}
try:
resp = requests_session.post(url, data=payLoad, headers=headers, timeout=self.request_timeout, **kwargs)
if 'invalid client' in resp.text:
raise requests.exceptions.RequestException("Invalid Client")
respObj = ResponseObj(resp)
except requests.exceptions.RequestException as e:
logger.exception(f"{e=}")
raise e
return respObj
elif httpMethod.upper() == 'GET':
headers = {}
if add_token and ReportClient.isOAuth and hasattr(self,
'token'): # check for token because this can be called during __init__ and isOAuth could be true.
headers["Authorization"] = "Zoho-oauthtoken " + self.token
headers['User-Agent'] = "ZohoAnalytics Python GrowthPath Library"
if extra_headers:
headers = {**headers, **extra_headers}
try:
resp = requests_session.get(url, params=payLoad, headers=headers, timeout=self.request_timeout,
**kwargs)
if 'invalid client' in resp.text:
raise requests.exceptions.RequestException("Invalid Client")
respObj = ResponseObj(resp)
except requests.exceptions.RequestException as e:
logger.exception(f"{e=}")
raise e
return respObj
else:
raise RuntimeError(f"Unexpected httpMethod in getResp, was expecting POST or GET but got {httpMethod}")
def __sendRequest(self, url, httpMethod, payLoad, action, callBackData=None, retry_countdown: int = None,
extra_headers=None, **keywords):
code = ""
if not retry_countdown:
retry_countdown = self.default_retries or 1
init_retry_countdown = retry_countdown
while retry_countdown > 0:
retry_countdown -= 1
try:
respObj = self.getResp(url, httpMethod, payLoad, extra_headers=extra_headers, **keywords)
except Exception as e:
logger.exception(f" getResp exception in __sendRequest, {retry_countdown}, {e}") # connection error
if retry_countdown <= 0:
raise e
else:
sleep_time = min((3 * (2 ** (10 - retry_countdown))) + random.random(),
60) # Add jitter and cap max delay at 60 seconds
time.sleep(sleep_time)
continue
if (respObj.status_code in [200, ]):
return self.handleResponse(respObj, action, callBackData)
elif (respObj.status_code in [400, ]):
# 400 errors may be an API limit error, which are handled by the result parsing
try:
try:
# j = respObj.response.json(strict=False) #getting decode errors in this and they don't make sense
j = json.loads(respObj.response.text, strict=False)
code = j['response']['error']['code']
except json.JSONDecodeError as e:
logger.error(f"API caused a JSONDecodeError for {respObj.response.text} ")
code = None
if not code:
m = re.search(r'"code":(\d+)', respObj.response.text)
if m:
code = int(m.group(1))
else:
code = -1
logger.error(f"could not find error code in {respObj.response.text} ")
                            time.sleep(max(10 - retry_countdown, 1) * 10)  # escalate the wait as retries dwindle; max() avoids a negative sleep
continue
logger.debug(f"API returned a 400 result and an error code: {code} ")
if code in [6045, ]:
logger.error(
f"Zoho API Recoverable rate limit (rate limit exceeded); there are {retry_countdown + 1} retries left")
if retry_countdown < 0:
logger.error(
f"Zoho API Recoverable error (rate limit exceeded), but exhausted retries")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code)
else:
                            time.sleep(max(10 - retry_countdown, 1) * 10)
continue
elif code in [6001, ]:
logger.error(
f"6001 error, rows in Zoho plan exceeded {respObj.response.text}")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code)
elif code in [6043, ]:
logger.error(
f"6043 error, daily API limit in Zoho plan exceeded {respObj.response.text}")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code)
elif code in [7103, ]:
logger.error(
f"7103 error, workspace not found (check authentication) {respObj.response.text}")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [7179, ]:
logger.error(
f"7179 error, workspace reports no view present. Initialise with a dummy table {respObj.response.text}")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [7232, ]:
logger.error(
f"7232 error,an invalid value has been provided according to the column's data type) {respObj.response.text=} ")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [7280, ]:
logger.error(
f"7280 error, relating to schema errors, return immediately {respObj.response.text}")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [7389, ]:
logger.error(f"7389 Error from zoho Organisation doest not exist {respObj.response.text}")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [8540, ]:
logger.error(f"8540 Error, token has incorrect scope {respObj.response.text}")
raise ServerError(urlResp=respObj, zoho_error_code=code)
elif code in [8535, ]: # invalid oauth token
try:
self.getOAuthToken()
except:
pass
logger.error(f"Zoho API Recoverable error encountered (invalid oauth token), will retry")
if retry_countdown < 0:
logger.error(
f"Zoho API Recoverable error (invalid oauth token) exhausted retries")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code)
else:
                            time.sleep(max(10 - retry_countdown, 1) * 10)
continue
elif code in [8509, ]: # parameter does not match accepted input pattern
logger.error(
f"Error 8509 encountered, something is wrong with the data format, no retry is attempted")
raise BadDataError(respObj, zoho_error_code=code)
elif code in [10001, ]: # 10001 is "Another import is in progress, so we can try this again"
logger.error(
f"Zoho API Recoverable error encountered (Another import is in progress), will retry")
if retry_countdown < 0:
logger.error(
f"Zoho API Recoverable error (Another import is in progress) but exhausted retries")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code,
message="Zoho error: Another import is in progress")
else:
                            time.sleep(max(10 - retry_countdown, 1) * 10)
continue
else:
# raise ServerError(respObj,zoho_error_code=code)
msg = f"Unexpected status code {code=}, will attempt retry"
try:
msg += respObj.response.text
except Exception:
pass
logger.exception(msg)
                        time.sleep(max(10 - retry_countdown, 1) * 10)
continue
except (RecoverableRateLimitError, UnrecoverableRateLimitError, BadDataError):
raise
except ServerError as e:
logger.error(f"ServerError raised on _sendRequest. {url=} {payLoad=} {action=} ")
import_data = payLoad.get("ZOHO_IMPORT_DATA") if payLoad else None
if import_data:
logger.error(
f"Import data, a csv file as a string. Row 1 is header, col 0 is first col (id): {import_data} ")
raise ServerError(respObj, zoho_error_code=code)
elif (respObj.status_code in [401, ]):
try:
# j = respObj.response.json(strict=False) #getting decode errors in this and they don't make sense
j = json.loads(respObj.response.text, strict=False)
code = j['response']['error']['code']
except json.JSONDecodeError as e:
logger.error(f"API caused a JSONDecodeError for {respObj.response.text} ")
code = None
logger.debug(f"API returned a 401 result and an error code: {code} ")
if code in [8535, ]: # invalid oauth token
try:
self.getOAuthToken()
except:
pass
logger.error(f"Zoho API Recoverable error encountered (invalid oauth token), will retry")
if retry_countdown < 0:
logger.error(
f"Zoho API Recoverable error (invalid oauth token) exhausted retries")
raise UnrecoverableRateLimitError(urlResp=respObj, zoho_error_code=code)
else:
                        time.sleep(max(10 - retry_countdown, 1) * 10)
continue
elif (respObj.status_code in [414, ]):
msg = f"HTTP response 414 was encountered (URI too large), no retry is attempted. {respObj.response.text} URL for {httpMethod=} {url=} {payLoad=}"
logger.error(msg)
raise BadDataError(respObj, zoho_error_code=None)
elif (respObj.status_code in [500, ]):
code = respObj.response.status_code
if ":7005" in respObj.response.text:
logger.error(
f"Error 7005 encountered ('unexpected error'), no retry is attempted. {respObj.response.text}")
raise BadDataError(respObj, zoho_error_code=code)
else:
try:
response_text = respObj.response.text
except Exception as e:
response_text = "unreadable response text"
msg = f"Unexpected status code in from __sendRequest. Server response code is {respObj.status_code=} {response_text=}. {url=}, {httpMethod=}, {payLoad=}, {action=} Retry attempts will be made..."
logger.exception(msg)
                time.sleep(max(10 - retry_countdown, 1) * 10)
continue
# fell off while loop
raise RuntimeError(
f"After starting with {init_retry_countdown} retries allowed, there are now no more retries left in __sendRequest. ")
def invalidOAUTH(self, respObj):
"""
Internal method to check whether accesstoken expires or not.
"""
if (respObj.status_code != 200):
try:
dom = ReportClientHelper.getAsDOM(respObj.content)
err_code = dom.getElementsByTagName("code")
err_code = ReportClientHelper.getText(err_code[0].childNodes).strip()
return err_code == "8535"
except Exception:
return False
return False
def handle_response_v2(self, response: requests.Response, action: str, callBackData) -> Optional[
Union[MutableMapping, 'ImportResult', 'ShareInfo', 'PlanInfo']]:
""" this is a replace for sendRequest: we do the request using requests"""
if (response.status_code != 200):
raise ServerError(response)
else:
return self.handleResponse(response, action, callBackData)
def handleResponse(self, response, action, callBackData) -> Optional[
Union[MutableMapping, 'ImportResult', 'ShareInfo', 'PlanInfo']]:
"""
Internal method. To be used by classes extending this. To phase in V2 api,
set action to be None or "API_V2"
"""
if not action or action == "API_V2":
resp = response.content
resp_json = json.loads(resp)
return resp_json
elif ("ADDROW" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
try:
dict = {}
cols = dom.getElementsByTagName("column")
for el in cols:
content = ReportClientHelper.getText(el.childNodes).strip()
if ("" == content):
content = None
dict[el.getAttribute("name")] = content
return dict
except Exception as inst:
                raise ParseError(resp, "Returned XML format for ADDROW not proper. Could possibly be a version mismatch",
                                 inst)
elif ("DELETE" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["deletedrows"]
elif ("UPDATE" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["updatedRows"]
elif ("IMPORT" == action):
return ImportResult(response.content)
elif ("EXPORT" == action):
f = callBackData
f.write(response.content)
return None
elif ("COPYDB" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["dbid"]
elif ("AUTOGENREPORTS" == action or "CREATESIMILARVIEWS" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]
elif ("HIDECOLUMN" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]
elif ("SHOWCOLUMN" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]
elif ("DATABASEMETADATA" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]
elif ("GETDATABASENAME" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "dbname", response)
elif ("GETDATABASEID" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "dbid", response)
elif ("ISDBEXIST" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["isdbexist"]
elif ("ISVIEWEXIST" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["isviewexist"]
elif ("ISCOLUMNEXIST" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]["iscolumnexist"]
elif ("GETCOPYDBKEY" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "copydbkey", response)
elif ("GETVIEWNAME" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "viewname", response)
elif ("GETINFO" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
result = {}
result['objid'] = ReportClientHelper.getInfo(dom, "objid", response)
result['dbid'] = ReportClientHelper.getInfo(dom, "dbid", response)
return result
elif ("GETSHAREINFO" == action):
return ShareInfo(response.content)
elif ("GETVIEWURL" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "viewurl", response)
elif ("GETEMBEDURL" == action):
resp = response.content
dom = ReportClientHelper.getAsDOM(resp)
return ReportClientHelper.getInfo(dom, "embedurl", response)
elif ("GETUSERS" == action):
resp = response.content
resp = json.loads(resp)
return resp["response"]["result"]
elif ("GETUSERPLANDETAILS" == action):
return PlanInfo(response.content)
elif ("GETDASHBOARDS" == action):
resp = response.content
resp = (json.loads(resp))
return (resp["response"]["result"]["dashboards"])
elif ("RECENTITEMS" == action):
resp = response.content
resp = (json.loads(resp))
return (resp["response"]["result"]["recentviews"])
elif (
"GETVIEWINFO" == action or "MYWORKSPACELIST" == action or "SHAREDWORKSPACELIST" == action or "VIEWLIST" == action or "FOLDERLIST" == action):
resp = response.content
resp = (json.loads(resp))
return (resp["response"]["result"])
elif ("SAVEAS" == action):
resp = response.content
resp = (json.loads(resp))
return (resp["response"]["result"]["message"])
def addRow(self, tableURI, columnValues, config=None):
"""
Adds a row to the specified table identified by the URI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnValues: Contains the values for the row. The column name(s) are the key.
@type columnValues:dictionary
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The values of the row.
@rtype:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
# payLoad = ReportClientHelper.getAsPayLoad([columnValues, config], None, None)
# url = ReportClientHelper.addQueryParams(tableURI, self.token, "ADDROW", "XML")
# url += "&" + payLoad
# return self.__sendRequest(url, "POST", payLoad=None, action="ADDROW", callBackData=None)
payLoad = ReportClientHelper.getAsPayLoad([columnValues, config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "ADDROW", "XML")
return self.__sendRequest(url, "POST", payLoad, "ADDROW", None)
def deleteData(self, tableURI, criteria=None, config=None, retry_countdown=0) -> int:
""" This has been refactored to use requests.post.
Returns the number of rows deleted
Delete the data in the specified table identified by the URI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param criteria: The criteria to be applied for deleting. Only rows matching the criteria will be
        deleted. Can be C{None}. In case it is C{None}, all rows will be deleted.
@type criteria:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
# payLoad = ReportClientHelper.getAsPayLoad([config], criteria, None)
payload = None # can't put the SQL in the body of the post request, the library is wrong or out of date
url = ReportClientHelper.addQueryParams(tableURI, self.token, "DELETE", "JSON", criteria=criteria)
r = self.__sendRequest(url=url, httpMethod="POST", payLoad=payload, action="DELETE", callBackData=None,
retry_countdown=retry_countdown)
return int(r)
def updateData(self, tableURI, columnValues, criteria, config=None):
"""
update the data in the specified table identified by the URI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnValues: Contains the values for the row. The column name(s) are the key.
@type columnValues:dictionary
@param criteria: The criteria to be applied for updating. Only rows matching the criteria will be
        updated. Can be C{None}. In case it is C{None}, all rows will be updated.
@type criteria:Optional[string]
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([columnValues, config], criteria, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "UPDATE", "JSON")
return self.__sendRequest(url, "POST", payLoad, "UPDATE", None)
def importData(self, tableURI, importType, importContent, autoIdentify="TRUE", onError="ABORT", importConfig=None):
"""
Bulk import data into the table identified by the URI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param importType: The type of import.
Can be one of
1. APPEND
2. TRUNCATEADD
3. UPDATEADD
See U{Import types<https://www.zoho.com/analytics/api/#import-data>} for more details.
@type importType:string
@param importContent: The data in csv format.
@type importContent:string
@param autoIdentify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type autoIdentify:string
        @param onError: This parameter controls the action to be taken in case there is an error during import.
@type onError:string
@param importConfig: Contains any additional control parameters.
See U{Import types<https://www.zoho.com/analytics/api/#import-data>} for more details.
@type importConfig:dictionary
@return: An L{ImportResult} containing the results of the Import
@rtype:L{ImportResult}
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
if (importConfig == None):
importConfig = {}
importConfig['ZOHO_IMPORT_TYPE'] = importType
importConfig["ZOHO_ON_IMPORT_ERROR"] = onError
importConfig["ZOHO_AUTO_IDENTIFY"] = autoIdentify
if not ("ZOHO_CREATE_TABLE" in importConfig):
importConfig["ZOHO_CREATE_TABLE"] = 'false'
files = {"ZOHO_FILE": ("file", importContent, 'multipart/form-data')}
url = ReportClientHelper.addQueryParams(tableURI, self.token, "IMPORT", "XML")
headers = {}
# To set access token for the first time when an instance is created.
if ReportClient.isOAuth:
if self.accesstoken == None:
self.accesstoken = self.getOAuthToken()
headers = {"Authorization": "Zoho-oauthtoken " + self.accesstoken}
respObj = requests.post(url, data=importConfig, files=files, headers=headers)
# To generate new access token once after it expires.
if self.invalidOAUTH(respObj):
self.accesstoken = self.getOAuthToken()
headers = {"Authorization": "Zoho-oauthtoken " + self.accesstoken}
respObj = requests.post(url, data=importConfig, files=files, headers=headers)
if (respObj.status_code != 200):
raise ServerError(respObj)
else:
return ImportResult(respObj.content)
def importData_v2(self, tableURI: str, import_mode: str,
import_content: str,
matching_columns: str = None,
date_format=None,
import_config=None,
retry_countdown=0) -> 'ImportResult':
""" Send data to zoho using a string formatted in CSV style.
This has been refactored to use requests.post.
Bulk import data into the table identified by the URI. import_content is a string in csv format (\n separated)
The first line is column headers.
Note: the API supports JSON too but it is not implemented here.
raises RuntimeError if api limits are exceeded
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param import_mode: The type of import.
Can be one of
1. APPEND
2. TRUNCATEADD
3. UPDATEADD
See U{Import types<http://zohoreportsapi.wiki.zoho.com/Importing-CSV-File.html>} for more details.
@type import_mode:string
@param import_content: The data in csv format.
@type import_content:string
@param import_config: Contains any additional control parameters.
See U{Import types<http://zohoreportsapi.wiki.zoho.com/Importing-CSV-File.html>} for more details.
@type import_config:dictionary
@param matching_columns: A comma separated list of column names to match on. If this is not provided, then the first column is used.
@type matching_columns:string
@param date_format: The Zoho date format to use. If this is not provided, then the default is used.
@type date_format:string
@param retry_countdown: The number of retries to attempt if the API returns a recoverable error. If this is not provided, then the default is used.
@type retry_countdown:int
@return: An L{ImportResult} containing the results of the Import
@rtype:L{ImportResult}
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
date_format = date_format or "yyyy-MM-dd"
payload = {"ZOHO_AUTO_IDENTIFY": "true",
# "ZOHO_COMMENTCHAR":"#",
# "ZOHO_DELIMITER":0, #comma
# "ZOHO_QUOTED":2, #double quote
"ZOHO_ON_IMPORT_ERROR": "ABORT",
"ZOHO_CREATE_TABLE": "false", "ZOHO_IMPORT_TYPE": import_mode,
"ZOHO_DATE_FORMAT": date_format,
"ZOHO_IMPORT_DATA": import_content}
if matching_columns:
payload['ZOHO_MATCHING_COLUMNS'] = matching_columns
url = ReportClientHelper.addQueryParams(tableURI, self.token, "IMPORT", "XML")
r = self.__sendRequest(url=url, httpMethod="POST", payLoad=payload, action="IMPORT", callBackData=None,
retry_countdown=retry_countdown)
return ImportResult(r.response) # a parser from Zoho
def importDataAsString(self, tableURI, importType, importContent, autoIdentify, onError, importConfig=None):
"""
Bulk import data into the table identified by the URI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param importType: The type of import.
Can be one of
1. APPEND
2. TRUNCATEADD
3. UPDATEADD
See U{Import types<https://www.zoho.com/analytics/api/#import-data>} for more details.
@type importType:string
@param importContent: The data in csv format or json.
@type importContent:string
@param autoIdentify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type autoIdentify:string
        @param onError: This parameter controls the action to be taken in case there is an error during import.
@type onError:string
@param importConfig: Contains any additional control parameters.
See U{Import types<https://www.zoho.com/analytics/api/#import-data>} for more details.
@type importConfig:dictionary
@return: An L{ImportResult} containing the results of the Import.
@rtype:L{ImportResult}
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
dict = {"ZOHO_AUTO_IDENTIFY": autoIdentify, "ZOHO_ON_IMPORT_ERROR": onError,
"ZOHO_IMPORT_TYPE": importType, "ZOHO_IMPORT_DATA": importContent}
if not ("ZOHO_CREATE_TABLE" in importConfig):
importConfig["ZOHO_CREATE_TABLE"] = 'false'
payLoad = ReportClientHelper.getAsPayLoad([dict, importConfig], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "IMPORT", "XML")
return self.__sendRequest(url, "POST", payLoad, "IMPORT", None)
def exportData(self, tableOrReportURI, format, exportToFileObj,
                   criteria: Optional[str] = None, config: Optional[dict] = None):
"""
Export the data in the specified table identified by the URI.
@param tableOrReportURI: The URI of the table. See L{getURI<getURI>}.
@type tableOrReportURI:string
@param format: The format in which the data is to be exported.
See U{Supported Export Formats<https://www.zoho.com/analytics/api/#export-data>} for
the supported types.
@type format:string
@param exportToFileObj: File (or file like object) to which the exported data is to be written
@type exportToFileObj:file
@param criteria: The criteria to be applied for exporting. Only rows matching the criteria will be
exported. Can be C{None}. Incase it is C{None}, then all rows will be exported.
@type criteria:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], criteria, None)
url = ReportClientHelper.addQueryParams(tableOrReportURI, self.token, "EXPORT", format)
return self.__sendRequest(url, "POST", payLoad, "EXPORT", exportToFileObj)
def exportDataUsingSQL(self, tableOrReportURI, format, exportToFileObj, sql, config=None):
"""
Export the data with the specified SQL query identified by the URI.
@param tableOrReportURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type tableOrReportURI:string
@param format: The format in which the data is to be exported.
See U{Supported Export Formats<https://www.zoho.com/analytics/api/#export-data>} for
the supported types.
@type format:string
@param exportToFileObj: File (or file like object) to which the exported data is to be written
@type exportToFileObj:file
@param sql: The sql whose output need to be exported.
@type sql:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, sql)
url = ReportClientHelper.addQueryParams(tableOrReportURI, self.token, "EXPORT", format)
return self.__sendRequest(url, "POST", payLoad, "EXPORT", exportToFileObj)
def exportDataUsingSQL_v2(self, tableOrReportURI, format, sql, config=None, retry_countdown=0) -> io.BytesIO:
""" This has been refactored to use requests.post
Export the data with the specified SQL query identified by the URI.
@param tableOrReportURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type tableOrReportURI:string
@param format: The format in which the data is to be exported.
See U{Supported Export Formats<http://zohoreportsapi.wiki.zoho.com/Export.html>} for
the supported types.
@type format:string
@param sql: The sql whose output need to be exported.
@type sql:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@param retry_countdown: Number of retry attempts allowed. If 0, no retries are attempted.
@type retry_countdown:int
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
# this is a bug in Zoho's Python library. The SQL query must be passed as a parameter, not in the body.
# in the body, it is ignored.
# payload = ReportClientHelper.getAsPayLoad([config], None, sql)
payload = None
""" sql does not need to URL encoded when passed in, but wrap in quotes"""
# addQueryParams adds parameters to the URL, not in the POST body but that seems ok for zoho.. url += "&ZOHO_ERROR_FORMAT=XML&ZOHO_ACTION=" + urllib.parse.quote(action)
# addQueryParams adds: ZOHO_ERROR_FORMAT, ZOHO_OUTPUT_FORMAT
url = ReportClientHelper.addQueryParams(tableOrReportURI, self.token, "EXPORT", format,
sql=sql) # urlencoding is done in here
callback_object = io.BytesIO()
r = self.__sendRequest(url=url, httpMethod="POST", payLoad=payload, action="EXPORT",
callBackData=callback_object, retry_countdown=retry_countdown)
return callback_object
def copyDatabase(self, dbURI, config=None):
"""
Copy the specified database identified by the URI.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param config: Contains any additional control parameters like ZOHO_DATABASE_NAME.
@type config:dictionary
@return: The new database id.
@rtype: string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "COPYDATABASE", "JSON")
return self.__sendRequest(url, "POST", payLoad, "COPYDB", None)
def copy_workspace_api_v2(self, workspace_id, new_workspace_name, workspace_key, copy_with_data: bool,
source_org_id,
dest_org_id,
copy_with_import_source: bool = False,
):
"""
A v2 API functions
"""
config_dict = {"newWorkspaceName": new_workspace_name, "newWorkspaceDesc": f"copy",
"workspaceKey": workspace_key,
"copyWithData": copy_with_data,
"copyWithImportSource": copy_with_import_source}
config_data = "CONFIG=" + urllib.parse.quote_plus(json.dumps(config_dict))
url = self.getURI_v2() + f"workspaces/{workspace_id}"
extra_headers = {"ZANALYTICS-ORGID": source_org_id, "ZANALYTICS-DEST-ORGID": dest_org_id}
return self.__sendRequest(url, "POST", payLoad=None, params=config_data, action=None,
extra_headers=extra_headers)
def get_orgs_metadata_api_v2(self):
url = self.getURI_v2() + f"orgs/"
return self.__sendRequest(url, "GET", payLoad=None, action=None)
def get_all_workspaces_metadata_api_v2(self):
url = self.getURI_v2() + f"workspaces/"
return self.__sendRequest(url, "GET", payLoad=None, action=None)
def get_workspace_secretkey_api_v2(self, workspace_id, org_id):
extra_headers = {"ZANALYTICS-ORGID": org_id, }
url = self.getURI_v2() + f"workspaces/{workspace_id}/secretkey"
return self.__sendRequest(url, "GET", payLoad=None, action=None, extra_headers=extra_headers)
def get_workspace_details_api_v2(self, workspace_id):
extra_headers = None
url = self.getURI_v2() + f"workspaces/{workspace_id}"
return self.__sendRequest(url, "GET", payLoad=None, action=None, extra_headers=extra_headers)
def deleteDatabase(self, userURI, databaseName, config=None):
"""
Delete the specified database.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param databaseName: The name of the database to be deleted.
@type databaseName:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "DELETEDATABASE", "XML")
url += "&ZOHO_DATABASE_NAME=" + urllib.parse.quote(databaseName)
return self.__sendRequest(url, "POST", payLoad, "DELETEDATABASE", None)
def enableDomainDB(self, userUri, dbName, domainName, config=None):
"""
Enable database for custom domain.
@param userUri: The URI of the user. See L{getUserURI<getUserURI>}.
@type userUri:string
@param dbName: The database name.
@type dbName:string
@param domainName: The domain name.
@type domainName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Domain database status.
@rtype:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userUri, self.token, "ENABLEDOMAINDB", "JSON")
url += "&DBNAME=" + urllib.parse.quote(dbName)
url += "&DOMAINNAME=" + urllib.parse.quote(domainName)
return self.__sendRequest(url, "POST", payLoad, "ENABLEDOMAINDB", None)
def disableDomainDB(self, userUri, dbName, domainName, config=None):
"""
Disable database for custom domain.
@param userUri: The URI of the user. See L{getUserURI<getUserURI>}.
@type userUri:string
@param dbName: The database name.
@type dbName:string
@param domainName: The domain name.
@type domainName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Domain database status.
@rtype:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userUri, self.token, "DISABLEDOMAINDB", "JSON")
url += "&DBNAME=" + urllib.parse.quote(dbName)
url += "&DOMAINNAME=" + urllib.parse.quote(domainName)
return self.__sendRequest(url, "POST", payLoad, "DISABLEDOMAINDB", None)
def createTable(self, dbURI, tableDesign, config=None):
"""
Create a table in the specified database.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param tableDesign: Table structure in JSON format (includes table name, description, folder name, column and lookup details, is system table).
@type tableDesign:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
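        Example (illustrative sketch; the design-JSON keys shown are assumptions based on
        the Zoho Analytics documentation, not verified by this library)::
            import json
            design = {"TABLENAME": "Orders",
                      "COLUMNS": [{"COLUMNNAME": "Region", "DATATYPE": "PLAIN"},
                                  {"COLUMNNAME": "Amount", "DATATYPE": "NUMBER"}]}
            rc.createTable(db_uri, json.dumps(design))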
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "CREATETABLE", "JSON")
# url += "&ZOHO_TABLE_DESIGN=" + urllib.parse.quote(tableDesign)
url += "&ZOHO_TABLE_DESIGN=" + urllib.parse.quote_plus(tableDesign) # smaller URL, fits under limit better
return self.__sendRequest(url, "POST", payLoad, "CREATETABLE", None)
def autoGenReports(self, tableURI, source, config=None):
"""
Generate reports for the particular table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param source: Source should be column or table.
@type source:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Auto generate report result.
@rtype:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "AUTOGENREPORTS", "JSON")
url += "&ZOHO_SOURCE=" + urllib.parse.quote(source)
return self.__sendRequest(url, "POST", payLoad, "AUTOGENREPORTS", None)
def createSimilarViews(self, tableURI, refView, folderName, customFormula, aggFormula, config=None):
"""
        This method is used to create similar views.
        @param tableURI: The URI of the table. See L{getURI<getURI>}.
        @type tableURI:string
        @param refView: It contains the reference view name.
        @type refView:string
        @param folderName: It contains the folder name where the reports are to be saved.
        @type folderName:string
        @param customFormula: If true, the reports are created with custom formulas.
        @type customFormula:bool
        @param aggFormula: If true, the reports are created with aggregate formulas.
        @type aggFormula:bool
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Generated reports status.
@rtype:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "CREATESIMILARVIEWS", "JSON")
url += "&ZOHO_REFVIEW=" + urllib.parse.quote(refView)
url += "&ZOHO_FOLDERNAME=" + urllib.parse.quote(folderName)
url += "&ISCOPYCUSTOMFORMULA=" + urllib.parse.quote("true" if customFormula == True else "false")
url += "&ISCOPYAGGFORMULA=" + urllib.parse.quote("true" if aggFormula == True else "false")
return self.__sendRequest(url, "POST", payLoad, "CREATESIMILARVIEWS", None)
def renameView(self, dbURI, viewName, newViewName, viewDesc="", config=None):
"""
Rename the specified view with the new name and description.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param viewName: Current name of the view.
@type viewName:string
@param newViewName: New name for the view.
@type newViewName:string
@param viewDesc: New description for the view.
@type viewDesc:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "RENAMEVIEW", "XML")
url += "&ZOHO_VIEWNAME=" + urllib.parse.quote(viewName)
url += "&ZOHO_NEW_VIEWNAME=" + urllib.parse.quote(newViewName)
url += "&ZOHO_NEW_VIEWDESC=" + urllib.parse.quote(viewDesc)
self.__sendRequest(url, "POST", payLoad, "RENAMEVIEW", None)
def saveAs(self, dbURI, viewToCopy, newViewName, config=None):
"""
Create a new view by copying the structure and data of existing view.
@param dbURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param viewToCopy: Name of the view to be copied.
@type viewToCopy:string
@param newViewName: Name of the view to be created.
@type newViewName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: The status about the request (success or failure).
@rtype: string
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "SAVEAS", "JSON")
url += "&ZOHO_VIEWTOCOPY=" + urllib.parse.quote(viewToCopy)
url += "&ZOHO_NEW_VIEWNAME=" + urllib.parse.quote(newViewName)
return self.__sendRequest(url, "POST", payLoad, "SAVEAS", None)
def copyReports(self, dbURI, views, dbName, dbKey, config=None):
"""
The Copy Reports API is used to copy one or more reports from one database to another within the same account or even across user accounts.
@param dbURI: The URI of the source database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param views: This parameter holds the list of view names.
@type views:string
@param dbName: The database name where the reports are to be copied.
@type dbName:string
@param dbKey: The secret key used for allowing the user to copy the report.
@type dbKey:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
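        Example (illustrative sketch; assumes the view names are comma separated, and the
        copy key comes from L{getCopyDBKey<getCopyDBKey>})::
            src_uri = rc.getDBURI("owner@example.com", "Sales")
            db_key = rc.getCopyDBKey(src_uri)
            rc.copyReports(src_uri, "Report1,Report2", "TargetDB", db_key)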
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "COPYREPORTS", "XML")
url += "&ZOHO_VIEWTOCOPY=" + urllib.parse.quote(views)
url += "&ZOHO_DATABASE_NAME=" + urllib.parse.quote(dbName)
url += "&ZOHO_COPY_DB_KEY=" + urllib.parse.quote(dbKey)
return self.__sendRequest(url, "POST", payLoad, "COPYREPORTS", None)
def copyFormula(self, tableURI, formula, dbName, dbKey, config=None):
"""
The Copy Formula API is used to copy one or more formula columns from one table to another within the same database or across databases and even across one user account to another.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param formula: This parameter holds the list of formula names.
@type formula:string
        @param dbName: The database name where the formulas are to be copied.
@type dbName:string
@param dbKey: The secret key used for allowing the user to copy the formula.
@type dbKey:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "COPYFORMULA", "XML")
url += "&ZOHO_FORMULATOCOPY=" + urllib.parse.quote(formula)
url += "&ZOHO_DATABASE_NAME=" + urllib.parse.quote(dbName)
url += "&ZOHO_COPY_DB_KEY=" + urllib.parse.quote(dbKey)
return self.__sendRequest(url, "POST", payLoad, "COPYREPORTS", None)
def addColumn(self, tableURI, columnName, dataType, config=None):
"""
Adds a column into Zoho Reports Table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnName: The column name to be added into Zoho Reports Table.
@type columnName:string
@param dataType: The data type of the column to be added into Zoho Reports Table.
@type dataType:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
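        Example (illustrative sketch; "PLAIN" is assumed to name a plain-text column,
        per the Zoho datatype names)::
            table_uri = rc.getURI("owner@example.com", "Sales", "Orders")
            rc.addColumn(table_uri, "Region", "PLAIN")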
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "ADDCOLUMN", "XML")
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
url += "&ZOHO_DATATYPE=" + urllib.parse.quote(dataType)
return self.__sendRequest(url, "POST", payLoad, "ADDCOLUMN", None)
def deleteColumn(self, tableURI, columnName, config=None):
"""
Deletes a column from Zoho Reports Table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnName: The column name to be deleted from Zoho Reports Table.
@type columnName:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "DELETECOLUMN", "XML")
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
return self.__sendRequest(url, "POST", payLoad, "DELETECOLUMN", None)
def renameColumn(self, tableURI, oldColumnName, newColumnName, config=None):
"""
        Rename the specified column in the Zoho Reports Table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param oldColumnName: The column name to be renamed in Zoho Reports Table.
@type oldColumnName:string
@param newColumnName: New name for the column.
@type newColumnName:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "RENAMECOLUMN", "XML")
url += "&OLDCOLUMNNAME=" + urllib.parse.quote(oldColumnName)
url += "&NEWCOLUMNNAME=" + urllib.parse.quote(newColumnName)
return self.__sendRequest(url, "POST", payLoad, "RENAMECOLUMN", None)
def hideColumn(self, tableURI, columnNames, config=None):
"""
Hide the columns in the table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnNames: Contains list of column names.
@type columnNames:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Column status.
@rtype:list
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
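        Example (illustrative sketch; column names are placeholders)::
            status = rc.hideColumn(table_uri, ["Internal Id", "Notes"])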
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "HIDECOLUMN", "JSON")
for columnName in columnNames:
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
return self.__sendRequest(url, "POST", payLoad, "HIDECOLUMN", None)
def showColumn(self, tableURI, columnNames, config=None):
"""
Show the columns in the table.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnNames: Contains list of column names.
@type columnNames:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: Column status.
@rtype:list
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "SHOWCOLUMN", "JSON")
for columnName in columnNames:
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
return self.__sendRequest(url, "POST", payLoad, "SHOWCOLUMN", None)
def addLookup(self, tableURI, columnName, referedTable, referedColumn, onError, config=None):
"""
Add the lookup for the given column.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnName: Name of the column (Child column).
@type columnName:string
@param referedTable: Name of the referred table (parent table).
@type referedTable:string
@param referedColumn: Name of the referred column (parent column).
@type referedColumn:string
        @param onError: This parameter controls the action to be taken in case there is an error during lookup.
@type onError:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "ADDLOOKUP", "XML")
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
url += "&ZOHO_REFERREDTABLE=" + urllib.parse.quote(referedTable)
url += "&ZOHO_REFERREDCOLUMN=" + urllib.parse.quote(referedColumn)
url += "&ZOHO_IFERRORONCONVERSION=" + urllib.parse.quote(onError)
return self.__sendRequest(url, "POST", payLoad, "ADDLOOKUP", None)
def removeLookup(self, tableURI, columnName, config=None):
"""
Remove the lookup for the given column.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnName: Name of the column.
@type columnName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "REMOVELOOKUP", "XML")
url += "&ZOHO_COLUMNNAME=" + urllib.parse.quote(columnName)
return self.__sendRequest(url, "POST", payLoad, "REMOVELOOKUP", None)
def createBlankDb(self, userURI, dbName, dbDesc, config=None):
"""
Create a blank workspace.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param dbName: The workspace name.
@type dbName:string
@param dbDesc: The workspace description.
@type dbDesc:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "CREATEBLANKDB", "JSON")
url += "&ZOHO_DATABASE_NAME=" + urllib.parse.quote(dbName)
if (dbDesc != None):
url += "&ZOHO_DATABASE_DESC=" + urllib.parse.quote(dbDesc)
self.__sendRequest(url, "POST", payLoad, "CREATEBLANKDB", None)
def getDatabaseMetadata(self, requestURI, metadata, config=None) -> MutableMapping:
"""
This method is used to get the meta information about the reports.
@param requestURI: The URI of the database or table.
@type requestURI:string
@param metadata: It specifies the information to be fetched (e.g. ZOHO_CATALOG_LIST, ZOHO_CATALOG_INFO)
@type metadata:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The metadata of the database.
@rtype: dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
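        Example (illustrative sketch; ZOHO_CATALOG_INFO against a database URI is an
        assumption based on the Zoho metadata API docs)::
            meta = rc.getDatabaseMetadata(rc.getDBURI("owner@example.com", "Sales"),
                                          "ZOHO_CATALOG_INFO")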
"""
payload = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(requestURI, self.token, "DATABASEMETADATA", "JSON")
url += "&ZOHO_METADATA=" + urllib.parse.quote(metadata)
r = self.__sendRequest(url=url, httpMethod="POST", payLoad=payload, action="DATABASEMETADATA",
callBackData=None)
return r
def getDatabaseName(self, userURI, dbid, config=None):
"""
Get database name for a specified database identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param dbid: The ID of the database.
@type dbid:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The Database name.
@rtype: string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETDATABASENAME", "XML")
url += "&DBID=" + urllib.parse.quote(dbid)
return self.__sendRequest(url, "POST", payLoad, "GETDATABASENAME", None)
def getDatabaseID(self, userURI, dbName, config=None):
"""
Get workspace ID for a specified workspace identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param dbName: The name of the workspace.
@type dbName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The workspace ID.
@rtype: string
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETDATABASEID", "XML")
url += "&ZOHO_DATABASE_NAME=" + urllib.parse.quote(dbName)
return self.__sendRequest(url, "POST", payLoad, "GETDATABASEID", None)
def isDbExist(self, userURI, dbName, config=None):
"""
        Check whether the database exists or not.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param dbName: Database name.
@type dbName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @return: Returns whether the database exists or not.
@rtype:string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
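        Example (illustrative sketch; the owner email is a placeholder)::
            exists = rc.isDbExist(rc.getUserURI("owner@example.com"), "Sales")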
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "ISDBEXIST", "JSON")
url += "&ZOHO_DB_NAME=" + urllib.parse.quote(dbName)
return self.__sendRequest(url, "POST", payLoad, "ISDBEXIST", None)
def isViewExist(self, dbURI, viewName, config=None):
"""
        Checks whether the view exists in the workspace identified by dbURI.
@param dbURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param viewName: Name of the view.
@type viewName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @return: Returns True if the view exists; False otherwise.
@rtype:string
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "ISVIEWEXIST", "JSON")
url += "&ZOHO_VIEW_NAME=" + urllib.parse.quote(viewName)
return self.__sendRequest(url, "POST", payLoad, "ISVIEWEXIST", None)
def isColumnExist(self, tableURI, columnName, config=None):
"""
        Checks whether the column exists in the table identified by tableURI.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param columnName: Name of the column.
@type columnName:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @return: Returns True if the column exists; False otherwise.
@rtype:string
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "ISCOLUMNEXIST", "JSON")
url += "&ZOHO_COLUMN_NAME=" + urllib.parse.quote(columnName)
return self.__sendRequest(url, "POST", payLoad, "ISCOLUMNEXIST", None)
def getCopyDBKey(self, dbURI, config=None):
"""
Get copy database key for specified database identified by the URI.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param config: Contains any additional control parameters like ZOHO_REGENERATE_KEY. Can be C{None}.
@type config:dictionary
@return: Copy Database key.
@rtype:string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "GETCOPYDBKEY", "XML")
return self.__sendRequest(url, "POST", payLoad, "GETCOPYDBKEY", None)
def getViewName(self, userURI, objid, config=None):
"""
This function returns the name of a view in Zoho Reports.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param objid: The view id (object id).
@type objid:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The View name.
@rtype: string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETVIEWNAME", "XML")
url += "&OBJID=" + urllib.parse.quote(objid)
return self.__sendRequest(url, "POST", payLoad, "GETVIEWNAME", None)
def getInfo(self, tableURI, config=None):
"""
This method returns the Database ID (DBID) and View ID (OBJID) of the corresponding Database.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The View-Id (object id) and Database-Id.
@rtype: dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "GETINFO", "XML")
return self.__sendRequest(url, "POST", payLoad, "GETINFO", None)
def getViewInfo(self, dbURI, viewID, config=None):
"""
        Returns view details like view name, description and type from the particular workspace identified by dbURI.
@param dbURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param viewID: The ID of the view.
@type viewID:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: The information about the view.
@rtype: dictionary
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "GETVIEWINFO", "JSON")
url += "&ZOHO_VIEW_ID=" + urllib.parse.quote(viewID)
return self.__sendRequest(url, "GET", payLoad, "GETVIEWINFO", None)
def recentItems(self, userURI, config=None):
"""
Returns the details of recently accessed views from the ZohoAnalytics account identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Recently modified views.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "RECENTITEMS", "JSON")
return self.__sendRequest(url, "GET", payLoad, "RECENTITEMS", None)
def getDashboards(self, userURI, config=None):
"""
Returns the list of owned/shared dashboards present in the zoho analytics account identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: The details of dashboards present in the organization.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETDASHBOARDS", "JSON")
return self.__sendRequest(url, "GET", payLoad, "GETDASHBOARDS", None)
def myWorkspaceList(self, userURI, config=None):
"""
Returns the list of all owned workspaces present in the ZohoAnalytics account identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Metainfo of owned workspaces present in the organization.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "MYWORKSPACELIST", "JSON")
return self.__sendRequest(url, "GET", payLoad, "MYWORKSPACELIST", None)
def sharedWorkspaceList(self, userURI, config=None):
"""
Returns the list of all shared workspaces present in the ZohoAnalytics account identified by the URI.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Metainfo of shared workspaces present in the organization.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "SHAREDWORKSPACELIST", "JSON")
return self.__sendRequest(url, "GET", payLoad, "SHAREDWORKSPACELIST", None)
def viewList(self, dbURI, config=None):
"""
Returns the list of all accessible views present in the workspace identified by the URI.
        @param dbURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Metainfo of all accessible views present in the workspace.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "VIEWLIST", "JSON")
return self.__sendRequest(url, "GET", payLoad, "VIEWLIST", None)
def folderList(self, dbURI, config=None):
"""
        Returns the list of all accessible folders present in the workspace identified by the URI.
        @param dbURI: The URI of the workspace. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Metainfo of all accessible folders present in the workspace.
@rtype:List
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "FOLDERLIST", "JSON")
return self.__sendRequest(url, "GET", payLoad, "FOLDERLIST", None)
def shareView(self, dbURI, emailIds, views, criteria=None, config=None):
"""
This method is used to share the views (tables/reports/dashboards) created in Zoho Reports with users.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
        @param emailIds: The email addresses of the users with whom the views are to be shared, separated by comma.
@type emailIds:string
@param views: It contains the view names.
@type views:string
@param criteria: Set criteria for share. Can be C{None}.
@type criteria:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
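        Example (illustrative sketch; addresses and view names are placeholders, comma
        separated as the API expects)::
            rc.shareView(db_uri, "user1@example.com,user2@example.com", "Orders,Monthly Sales")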
"""
payLoad = ReportClientHelper.getAsPayLoad([config], criteria, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "SHARE", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
url += "&ZOHO_VIEWS=" + urllib.parse.quote(views)
return self.__sendRequest(url, "POST", payLoad, "SHARE", None)
def removeShare(self, dbURI, emailIds, config=None):
"""
This method is used to remove the shared views (tables/reports/dashboards) in Zoho Reports from the users.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
        @param emailIds: The email addresses of the users from whom the shared views are to be removed, separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "REMOVESHARE", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "REMOVESHARE", None)
def addDbOwner(self, dbURI, emailIds, config=None):
"""
This method is used to add new owners to the reports database.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
        @param emailIds: The email addresses of the users to be added as owners, separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "ADDDBOWNER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "ADDDBOWNER", None)
def removeDbOwner(self, dbURI, emailIds, config=None):
"""
This method is used to remove the existing owners from the reports database.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
        @param emailIds: The email addresses of the owners to be removed, separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "REMOVEDBOWNER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "REMOVEDBOWNER", None)
def getShareInfo(self, dbURI, config=None):
"""
        Get the shared information.
@param dbURI: The URI of the database. See L{getDBURI<getDBURI>}.
@type dbURI:string
@param config: Contains any additional control parameters like ZOHO_REGENERATE_KEY. Can be C{None}.
@type config:dictionary
@return: ShareInfo object.
@rtype: ShareInfo
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(dbURI, self.token, "GETSHAREINFO", "JSON")
return self.__sendRequest(url, "POST", payLoad, "GETSHAREINFO", None)
def getViewUrl(self, tableURI, config=None):
"""
This method returns the URL to access the mentioned view.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The view URI.
@rtype: string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "GETVIEWURL", "XML")
return self.__sendRequest(url, "POST", payLoad, "GETVIEWURL", None)
def getEmbedUrl(self, tableURI, criteria=None, config=None):
"""
This method is used to get the embed URL of the particular table / view. This API is available only for the White Label Administrator.
@param tableURI: The URI of the table. See L{getURI<getURI>}.
@type tableURI:string
@param criteria: Set criteria for url. Can be C{None}.
@type criteria:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The embed URI.
@rtype: string
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], criteria, None)
url = ReportClientHelper.addQueryParams(tableURI, self.token, "GETEMBEDURL", "XML")
return self.__sendRequest(url, "POST", payLoad, "GETEMBEDURL", None)
def getUsers(self, userURI, config=None):
"""
Get users list for the user account.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@return: The list of user details.
@rtype:list
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETUSERS", "JSON")
return self.__sendRequest(url, "POST", payLoad, "GETUSERS", None)
def addUser(self, userURI, emailIds, config=None):
"""
Add the users to the Zoho Reports Account.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param emailIds: The email addresses of the users to be added to the Zoho Reports Account separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
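        Example (illustrative sketch; addresses are placeholders)::
            rc.addUser(rc.getUserURI("admin@example.com"), "new.user@example.com")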
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "ADDUSER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "ADDUSER", None)
def removeUser(self, userURI, emailIds, config=None):
"""
Remove the users from the Zoho Reports Account.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param emailIds: The email addresses of the users to be removed from the Zoho Reports Account separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "REMOVEUSER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "REMOVEUSER", None)
def activateUser(self, userURI, emailIds, config=None):
"""
Activate the users in the Zoho Reports Account.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param emailIds: The email addresses of the users to be activated in the Zoho Reports Account separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "ACTIVATEUSER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "ACTIVATEUSER", None)
def deActivateUser(self, userURI, emailIds, config=None):
"""
Deactivate the users in the Zoho Reports Account.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param emailIds: The email addresses of the users to be deactivated in the Zoho Reports Account separated by comma.
@type emailIds:string
@param config: Contains any additional control parameters.
@type config:dictionary
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "DEACTIVATEUSER", "XML")
url += "&ZOHO_EMAILS=" + urllib.parse.quote(emailIds)
return self.__sendRequest(url, "POST", payLoad, "DEACTIVATEUSER", None)
def getPlanInfo(self, userURI, config=None):
"""
        Get the plan information.
@param userURI: The URI of the user. See L{getUserURI<getUserURI>}.
@type userURI:string
@param config: Contains any additional control parameters like ZOHO_REGENERATE_KEY. Can be C{None}.
@type config:dictionary
@return: PlanInfo object.
@rtype: PlanInfo
        @raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
payLoad = ReportClientHelper.getAsPayLoad([config], None, None)
url = ReportClientHelper.addQueryParams(userURI, self.token, "GETUSERPLANDETAILS", "XML")
return self.__sendRequest(url, "POST", payLoad, "GETUSERPLANDETAILS", None)
def getUserURI(self, dbOwnerName):
"""
        Returns the URI for the specified user.
@param dbOwnerName: User email-id of the database.
@type dbOwnerName:string
@return: The URI for the specified user.
@rtype:string
"""
url = self.reportServerURL + "/api/" + urllib.parse.quote(dbOwnerName)
return url
def getDBURI(self, dbOwnerName, dbName):
"""
Returns the URI for the specified database.
@param dbOwnerName: The owner of the database.
@type dbOwnerName:string
@param dbName: The name of the database.
@type dbName:string
@return: The URI for the specified database.
@rtype:string
"""
url = self.reportServerURL + "/api/" + urllib.parse.quote(dbOwnerName)
url += "/" + self.splCharReplace(urllib.parse.quote(dbName))
return url
def getURI(self, dbOwnerName: str, dbName: str, tableOrReportName: str) -> str:
"""
Returns the URI for the specified database table (or report).
@param dbOwnerName: The owner of the database containing the table (or report).
@type dbOwnerName:string
@param dbName: The name of the database containing the table (or report).
@type dbName:string
@param tableOrReportName: The name of the table (or report).
@type tableOrReportName:string
@return: The URI for the specified table (or report).
@rtype:string
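        Example (illustrative sketch; "@" is percent-encoded by urllib.parse.quote)::
            uri = rc.getURI("owner@example.com", "Sales", "Orders")
            # -> "<reportServerURL>/api/owner%40example.com/Sales/Orders"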
"""
url = self.reportServerURL + "/api/" + urllib.parse.quote(dbOwnerName)
url += "/" + self.splCharReplace(urllib.parse.quote(dbName)) + "/" + self.splCharReplace(
urllib.parse.quote(tableOrReportName))
return url
def getURI_v2(self) -> str:
"""
        Returns the base URL for the v2 API, with a trailing slash.
"""
url = self.reportServerURL + "/restapi/v2/"
return url
def splCharReplace(self, value):
"""
        Internal method for handling special characters in table or database names.
"""
value = value.replace("/", "(/)")
value = value.replace("%5C", "(//)")
return value
class ShareInfo:
"""
It contains the database shared details.
"""
def __init__(self, response):
self.response = response
"""
The unparsed complete response content as sent by the server.
@type:string
"""
self.adminMembers = {}
"""
Owners of the database.
@type:dictionary
"""
self.groupMembers = {}
"""
Group Members of the database.
@type:dictionary
"""
self.sharedUsers = []
"""
Shared Users of the database.
@type:list
"""
self.userInfo = {}
"""
The PermissionInfo for the shared user.
@type:dictionary
"""
self.groupInfo = {}
"""
The PermissionInfo for the groups.
@type:dictionary
"""
self.publicInfo = {}
"""
The PermissionInfo for the public link.
@type:dictionary
"""
self.privateInfo = {}
"""
The PermissionInfo for the private link.
@type:dictionary
"""
jsonresult = json.loads(self.response)
sharelist = jsonresult["response"]["result"]
userinfo = sharelist["usershareinfo"]
if (userinfo):
self.userInfo = self.getKeyInfo(userinfo, "email")
groupinfo = sharelist["groupshareinfo"]
if (groupinfo):
self.groupInfo = self.getKeyInfo(groupinfo, "group")
publicinfo = sharelist["publicshareinfo"]
if (publicinfo):
self.publicInfo = self.getInfo(sharelist["publicshareinfo"])
privateinfo = sharelist["privatelinkshareinfo"]
if (privateinfo):
self.privateInfo = self.getInfo(privateinfo)
self.adminMembers = sharelist["dbownershareinfo"]["dbowners"]
def getKeyInfo(self, perminfo, key):
shareinfo = {}
i = 0
for ele in perminfo:
if ("email" == key):
info = ele["shareinfo"]["permissions"]
userid = ele["shareinfo"]["email"]
self.sharedUsers.append(userid)
else:
info = ele["shareinfo"]["permissions"]
userid = ele["shareinfo"]["groupName"]
desc = ele["shareinfo"]["desc"]
gmember = ele["shareinfo"]["groupmembers"]
member = {}
member["name"] = userid
member["desc"] = desc
member["member"] = gmember
self.groupMembers[i] = member
i += 1
memberlist = {}
for ele2 in info:
permlist = {}
viewname = ele2["perminfo"]["viewname"]
sharedby = ele2["perminfo"]["sharedby"]
permissions = ele2["perminfo"]["permission"]
permlist["sharedby"] = sharedby
permlist["permissions"] = permissions
memberlist[viewname] = permlist
shareinfo[userid] = memberlist
return shareinfo
def getInfo(self, perminfo):
userid = perminfo["email"]
shareinfo = {}
memberlist = {}
for ele in perminfo["permissions"]:
permlist = {}
viewname = ele["perminfo"]["viewname"]
sharedby = ele["perminfo"]["sharedby"]
permissions = ele["perminfo"]["permission"]
permlist["sharedby"] = sharedby
permlist["permissions"] = permissions
memberlist[viewname] = permlist
shareinfo[userid] = memberlist
return shareinfo
class PlanInfo:
"""
It contains the plan details.
"""
def __init__(self, response):
self.response = response
"""
The unparsed complete response content as sent by the server.
@type:string
"""
dom = ReportClientHelper.getAsDOM(response)
self.plan = ReportClientHelper.getInfo(dom, "plan", response)
"""
The type of the user plan.
@type:string
"""
self.addon = ReportClientHelper.getInfo(dom, "addon", response)
"""
The addon details.
@type:string
"""
self.billingDate = ReportClientHelper.getInfo(dom, "billingDate", response)
"""
The billing date.
@type:string
"""
self.rowsAllowed = int(ReportClientHelper.getInfo(dom, "rowsAllowed", response))
"""
The total rows allowed to the user.
@type:int
"""
self.rowsUsed = int(ReportClientHelper.getInfo(dom, "rowsUsed", response))
"""
The number of rows used by the user.
@type:int
"""
self.trialAvailed = ReportClientHelper.getInfo(dom, "TrialAvailed", response)
"""
Used to identify the trial pack.
@type:string
"""
if ("false" != self.trialAvailed):
self.trialPlan = ReportClientHelper.getInfo(dom, "TrialPlan", response)
"""
The trial plan detail.
@type:string
"""
self.trialStatus = bool(ReportClientHelper.getInfo(dom, "TrialStatus", response))
"""
The trial plan status.
@type:bool
"""
self.trialEndDate = ReportClientHelper.getInfo(dom, "TrialEndDate", response)
"""
The end date of the trial plan.
@type:string
"""
class RecoverableRateLimitError(Exception):
"""
    RecoverableRateLimitError is thrown if the report server has returned a rate-limit error that the client may retry.
"""
def __init__(self, urlResp, **kwargs):
self.httpStatusCode = urlResp.status_code #:The http status code for the request.
self.errorCode = self.httpStatusCode # The error code sent by the server.
self.uri = "" #: The uri which threw this exception.
self.action = "" #:The action to be performed over the resource specified by the uri
self.message = urlResp.content #: Returns the message sent by the server.
self.zoho_error_code = ""
self.extra = kwargs
def __str__(self):
return repr(self.message)
class UnrecoverableRateLimitError(Exception):
"""
    UnrecoverableRateLimitError is thrown if the report server has returned a rate-limit error that should not be retried.
"""
def __init__(self, urlResp, **kwargs):
self.httpStatusCode = urlResp.status_code #:The http status code for the request.
self.errorCode = self.httpStatusCode # The error code sent by the server.
self.uri = "" #: The uri which threw this exception.
self.action = "" #:The action to be performed over the resource specified by the uri
self.message = urlResp.content #: Returns the message sent by the server.
self.extra = kwargs
def __str__(self):
return repr(self.message)
class ServerError(Exception):
"""
    ServerError is thrown if the report server has received the request but did not process the
    request due to some error, for example an authorization failure.
"""
def __init__(self, urlResp, **kwargs):
self.httpStatusCode = urlResp.status_code #:The http status code for the request.
self.errorCode = self.httpStatusCode # The error code sent by the server.
self.uri = "" #: The uri which threw this exception.
self.action = "" #:The action to be performed over the resource specified by the uri
self.message = urlResp.content #: Returns the message sent by the server.
self.zoho_error_code = ""
self.extra = kwargs
if not urlResp:
logger.error(f"response object is None")
else:
try:
contHeader = urlResp.headers.get("Content-Type", None)
if (contHeader and contHeader.find("text/xml") > -1):
self.__parseErrorResponse()
except AttributeError:
logger.error(f"response object is None")
def __parseErrorResponse(self):
try:
dom = xml.dom.minidom.parseString(self.message)
respEl = dom.getElementsByTagName("response")[0]
self.uri = respEl.getAttribute("uri")
self.action = respEl.getAttribute("action")
self.errorCode = int(ReportClientHelper.getInfo(dom, "code", self.message))
self.message = ReportClientHelper.getInfo(dom, "message", self.message)
except Exception as inst:
print(inst)
self.parseError = inst
def __str__(self):
return repr(self.message)
class BadDataError(Exception):
def __init__(self, urlResp, **kwargs):
self.httpStatusCode = urlResp.status_code #:The http status code for the request.
self.errorCode = self.httpStatusCode # The error code sent by the server.
self.uri = "" #: The uri which threw this exception.
self.action = "" #:The action to be performed over the resource specified by the uri
self.message = urlResp.content #: Returns the message sent by the server.
self.zoho_error_code = ""
self.extra = kwargs
if not urlResp:
logger.error(f"response object is None")
else:
try:
contHeader = urlResp.headers.get("Content-Type", None)
if (contHeader and contHeader.find("text/xml") > -1):
self.__parseErrorResponse()
except AttributeError:
logger.error(f"response object is None")
def __parseErrorResponse(self):
try:
dom = xml.dom.minidom.parseString(self.message)
respEl = dom.getElementsByTagName("response")[0]
self.uri = respEl.getAttribute("uri")
self.action = respEl.getAttribute("action")
self.errorCode = int(ReportClientHelper.getInfo(dom, "code", self.message))
self.message = ReportClientHelper.getInfo(dom, "message", self.message)
except Exception as inst:
print(inst)
self.parseError = inst
def __str__(self):
return repr(self.message)
class ParseError(Exception):
"""
ParseError is thrown if the server has responded but client was not able to parse the response.
    Possible reasons could be version mismatch. The client might have to be updated to a newer version.
"""
def __init__(self, responseContent, message, origExcep):
self.responseContent = responseContent #: The complete response content as sent by the server.
self.message = message #: The message describing the error.
self.origExcep = origExcep #: The original exception that occurred during parsing(Can be C{None}).
def __str__(self):
return repr(self.message)
class ImportResult:
"""
ImportResult contains the result of an import operation.
"""
def __init__(self, response):
self.response = response
"""
The unparsed complete response content as sent by the server.
@type:string
"""
dom = ReportClientHelper.getAsDOM(response)
msg = response.decode('utf-8')
try:
self.result_code = int(ReportClientHelper.getInfo(dom, "code", response))
        except ParseError:
            # some responses omit the result code element; default to 0
            self.result_code = 0
try:
self.totalColCount = int(ReportClientHelper.getInfo(dom, "totalColumnCount", response))
        except ParseError as e:
            logger.debug(f"Error in import result: did not get a good return message: {msg}")
            raise ParseError(responseContent=msg, message=e.message, origExcep=e)
"""
The total columns that were present in the imported file.
@type:integer
"""
self.selectedColCount = int(ReportClientHelper.getInfo(dom, "selectedColumnCount", response))
"""
The number of columns that were imported.See ZOHO_SELECTED_COLUMNS parameter.
@type:integer
"""
self.totalRowCount = int(ReportClientHelper.getInfo(dom, "totalRowCount", response))
"""
The total row count in the imported file.
@type:integer
"""
self.successRowCount = int(ReportClientHelper.getInfo(dom, "successRowCount", response))
"""
The number of rows that were imported successfully without errors.
@type:integer
"""
self.warningCount = int(ReportClientHelper.getInfo(dom, "warnings", response))
"""
The number of rows that were imported with warnings. Applicable if ZOHO_ON_IMPORT_ERROR
parameter has been set to SETCOLUMNEMPTY.
@type:integer
"""
self.impErrors = ReportClientHelper.getInfo(dom, "importErrors", response)
"""
The first 100 import errors. Applicable if ZOHO_ON_IMPORT_ERROR parameter is either
SKIPROW or SETCOLUMNEMPTY. In case of ABORT , L{ServerError <ServerError>} is thrown.
@type:string
"""
self.operation = ReportClientHelper.getInfo(dom, "importOperation", response)
"""
The import operation. Can be either
1. B{created} if the specified table has been created. For this ZOHO_CREATE_TABLE parameter
should have been set to true
2. B{updated} if the specified table already exists.
@type:string
"""
self.dataTypeDict = {}
"""
Contains the mapping of column name to datatype.
@type:dictionary
"""
cols = dom.getElementsByTagName("column")
self.impCols = []
"""
Contains the list of columns that were imported. See also L{dataTypeDict<dataTypeDict>}.
@type:dictionary
"""
for el in cols:
content = ReportClientHelper.getText(el.childNodes)
self.dataTypeDict[content] = el.getAttribute("datatype")
self.impCols.append(content)
class ResponseObj:
"""
Internal class.
"""
def __init__(self, resp: requests.Response):
""" updated to assume a urllib3 object"""
self.content = getattr(resp, 'content', None)
self.reason = getattr(resp, 'reason', None) # This is used for communication about errors
self.status_code = getattr(resp, 'status_code', None)
        self.headers = getattr(resp, 'headers', None)
self.response = resp
class ReportClientHelper:
"""
Internal class.
"""
API_VERSION = "1.0"
"""The api version of zoho reports based on which this library is written. This is a constant."""
@staticmethod
def getInfo(dom, elName, response):
nodeList = dom.getElementsByTagName(elName)
if (nodeList.length == 0):
raise ParseError(response, elName + " element is not present in the response", None)
el = nodeList[0]
return ReportClientHelper.getText(el.childNodes)
@staticmethod
def getText(nodelist):
txt = ""
for node in nodelist:
if node.nodeType == node.TEXT_NODE:
txt = txt + node.data
return txt
@staticmethod
def getAsDOM(response):
try:
dom = xml.dom.minidom.parseString(response)
return dom
except Exception as inst:
raise ParseError(response, "Unable parse the response as xml", inst)
@staticmethod
def addQueryParams(url, authtoken, action, exportFormat, sql=None, criteria=None, table_design=None):
url = ReportClientHelper.checkAndAppendQMark(url)
url += "&ZOHO_ERROR_FORMAT=JSON&ZOHO_ACTION=" + urllib.parse.quote(action)
url += "&ZOHO_OUTPUT_FORMAT=" + urllib.parse.quote(exportFormat)
url += "&ZOHO_API_VERSION=" + ReportClientHelper.API_VERSION
if not ReportClient.isOAuth:
url += "&authtoken=" + urllib.parse.quote(authtoken)
if exportFormat == "JSON":
url += "&ZOHO_VALID_JSON=TRUE"
if sql:
url += "&ZOHO_SQLQUERY=" + urllib.parse.quote(sql)
if criteria:
url += "&ZOHO_CRITERIA=" + urllib.parse.quote(criteria)
if table_design:
# quote_plus seems to work and it makes for a smaller URL avoiding problems with a URL too long
url += "&ZOHO_TABLE_DESIGN=" + urllib.parse.quote_plus(table_design)
return url
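# A minimal usage sketch (hedged): the base URL and token below are placeholders.
#   url = ReportClientHelper.addQueryParams(
#       "https://analyticsapi.zoho.com/api/owner/ws/table", "token123",
#       action="EXPORT", exportFormat="JSON")
# The result carries ZOHO_ERROR_FORMAT, ZOHO_ACTION, ZOHO_OUTPUT_FORMAT,
# ZOHO_API_VERSION and ZOHO_VALID_JSON query parameters, plus the authtoken
# when OAuth is not in use.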
@staticmethod
def getAsPayLoad(separateDicts, criteria: Optional[str], sql: Optional[str], encode_payload=False):
payload = {}
for d in separateDicts:
if d is not None:
payload.update(d)
if criteria is not None:
payload["ZOHO_CRITERIA"] = criteria
if sql is not None:
payload["ZOHO_SQLQUERY"] = sql
if not payload:
return None
if encode_payload:
payload = urllib.parse.urlencode(payload)
return payload
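# A minimal usage sketch (hedged; the dict contents are illustrative):
#   ReportClientHelper.getAsPayLoad([{"ZOHO_AUTO_IDENTIFY": "true"}, None],
#                                   criteria=None, sql="select 1")
#   # -> {"ZOHO_AUTO_IDENTIFY": "true", "ZOHO_SQLQUERY": "select 1"}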
@staticmethod
def checkAndAppendQMark(url):
if (url.find("?") == -1):
url += "?"
elif (url[len(url) - 1] != '&'):
url += "&"
return url | zoho-analytics-connector | /zoho_analytics_connector-1.4.3.tar.gz/zoho_analytics_connector-1.4.3/zoho_analytics_connector/report_client.py | report_client.py |
from io import StringIO
import urllib
import json
import requests
class AnalyticsClient:
"""
AnalyticsClient provides the Python-based language binding to the HTTPS-based API of Zoho Analytics.
"""
CLIENT_VERSION = "2.1.0"
COMMON_ENCODE_CHAR = "UTF-8"
def __init__(self, client_id, client_secret, refresh_token):
"""
Creates a new C{AnalyticsClient} instance.
@param client_id: User client id for OAuth.
@type client_id:string
@param client_secret: User client secret for OAuth
@type client_secret:string
@param refresh_token: User's refresh token for OAuth.
@type refresh_token:string
"""
self.proxy = False
self.proxy_host = None
self.proxy_port = None
self.proxy_user_name = None
self.proxy_password = None
self.accounts_server_url = "https://accounts.zoho.com"
self.analytics_server_url = "https://analyticsapi.zoho.com"
self.client_id = client_id
self.client_secret = client_secret
self.refresh_token = refresh_token
self.access_token = None
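# A minimal usage sketch (hedged; the credential strings are placeholders):
#   ac = AnalyticsClient(client_id="1000.XXXX", client_secret="yyyy",
#                        refresh_token="1000.zzzz")
#   # Optional data-centre override, using the attributes set above:
#   ac.accounts_server_url = "https://accounts.zoho.eu"
#   ac.analytics_server_url = "https://analyticsapi.zoho.eu"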
def get_org_instance(self, org_id):
"""
Returns a new C{OrgAPI} instance.
@param org_id: The id of the organization.
@type org_id:string
"""
org_instance = AnalyticsClient.OrgAPI(self, org_id)
return org_instance
def get_workspace_instance(self, org_id, workspace_id):
"""
Returns a new C{WorkspaceAPI} instance.
@param org_id: The id of the organization.
@type org_id:string
@param workspace_id: The id of the workspace.
@type workspace_id:string
"""
workspace_instance = AnalyticsClient.WorkspaceAPI(self, org_id, workspace_id)
return workspace_instance
def get_view_instance(self, org_id, workspace_id, view_id):
"""
Returns a new C{ViewAPI} instance.
@param org_id: The id of the organization.
@type org_id:string
@param workspace_id: The id of the workspace.
@type workspace_id:string
@param view_id: The id of the view.
@type view_id:string
"""
view_instance = AnalyticsClient.ViewAPI(self, org_id, workspace_id, view_id)
return view_instance
def get_bulk_instance(self, org_id, workspace_id):
"""
Returns a new C{BulkAPI} instance.
@param org_id: The id of the organization.
@type org_id:string
@param workspace_id: The id of the workspace.
@type workspace_id:string
"""
data_instance = AnalyticsClient.BulkAPI(self, org_id, workspace_id)
return data_instance
def get_orgs(self):
"""
Returns list of all accessible organizations.
@return: Organization list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/orgs"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["orgs"]
def get_workspaces(self):
"""
Returns list of all accessible workspaces.
@return: Workspace list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/workspaces"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]
def get_owned_workspaces(self):
"""
Returns list of owned workspaces.
@return: Workspace list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/workspaces/owned"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["workspaces"]
def get_shared_workspaces(self):
"""
Returns list of shared workspaces.
@return: Workspace list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/workspaces/shared"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["workspaces"]
def get_recent_views(self):
"""
Returns list of recently accessed views.
@return: View list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/recentviews"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["views"]
def get_dashboards(self):
"""
Returns list of all accessible dashboards.
@return: Dashboard list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/dashboards"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]
def get_owned_dashboards(self):
"""
Returns list of owned dashboards.
@return: Dashboard list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/dashboards/owned"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["views"]
def get_shared_dashboards(self):
"""
Returns list of shared dashboards.
@return: Dashboard list.
@rtype:list
@raise ServerError: If the server has received the request but did not process the request
due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = "/restapi/v2/dashboards/shared"
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["views"]
def get_workspace_details(self, workspace_id):
"""
Returns details of the specified workspace.
@param workspace_id: Id of the workspace.
@type workspace_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Workspace details.
@rtype:dictionary
"""
endpoint = "/restapi/v2/workspaces/" + workspace_id
response = self.send_api_request("GET", endpoint, None, None)
return response["data"]["workspaces"]
def get_view_details(self, view_id, config={}):
"""
Returns details of the specified view.
@param view_id: Id of the view.
@type view_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: View details.
@rtype:dictionary
"""
endpoint = "/restapi/v2/views/" + view_id
response = self.send_api_request("GET", endpoint, config, None)
return response["data"]["views"]
class OrgAPI:
"""
OrgAPI contains organization level operations.
"""
def __init__(self, ac, org_id):
self.ac = ac
self.request_headers = {}
self.request_headers["ZANALYTICS-ORGID"] = org_id
def create_workspace(self, workspace_name, config={}):
"""
Create a blank workspace in the specified organization.
@param workspace_name: The name of the workspace.
@type workspace_name:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created workspace id.
@rtype:integer
"""
config["workspaceName"] = workspace_name
endpoint = "/restapi/v2/workspaces/"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["workspaceId"])
def get_admins(self):
"""
Returns list of admins for a specified organization.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Organization admin list.
@rtype:list
"""
endpoint = "/restapi/v2/orgadmins"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["orgAdmins"]
def get_users(self):
"""
Returns list of users for the specified organization.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: User list.
@rtype:list
"""
endpoint = "/restapi/v2/users"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["users"]
def add_users(self, email_ids, config={}):
"""
Add users to the specified organization.
@param email_ids: The email address of the users to be added.
@type email_ids:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = "/restapi/v2/users"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def remove_users(self, email_ids, config={}):
"""
Remove users from the specified organization.
@param email_ids: The email address of the users to be removed.
@type email_ids:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = "/restapi/v2/users"
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def activate_users(self, email_ids, config={}):
"""
Activate users in the specified organization.
@param email_ids: The email address of the users to be activated.
@type email_ids:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = "/restapi/v2/users/active"
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def deactivate_users(self, email_ids, config={}):
"""
Deactivate users in the specified organization.
@param email_ids: The email address of the users to be deactivated.
@type email_ids:list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = "/restapi/v2/users/inactive"
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def change_user_role(self, email_ids, role, config={}):
"""
Change role for the specified users.
@param email_ids: The email address of the users whose role is to be changed.
@type email_ids:list
@param role: New role for the users.
@type role:string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
config["role"] = role
endpoint = "/restapi/v2/users/role"
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def get_subscription_details(self):
"""
Returns subscription details of the specified organization.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Subscription details.
@rtype:dictionary
"""
endpoint = "/restapi/v2/subscription"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["subscription"]
def get_meta_details(self, workspace_name, view_name):
"""
Returns details of the specified workspace/view.
@param workspace_name: Name of the workspace.
@type workspace_name:string
@param view_name: Name of the view.
@type view_name:string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Workspace (or) View meta details.
@rtype:dictionary
"""
config = {}
config["workspaceName"] = workspace_name
if view_name is not None:
config["viewName"] = view_name
endpoint = "/restapi/v2/metadetails"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]
class WorkspaceAPI:
"""
WorkspaceAPI contains workspace level operations.
"""
def __init__(self, ac, org_id, workspace_id):
self.ac = ac
self.endpoint = "/restapi/v2/workspaces/" + workspace_id
self.request_headers = {}
self.request_headers["ZANALYTICS-ORGID"] = org_id
def copy(self, new_workspace_name, config={}, dest_org_id=None):
"""
Copy the specified workspace from one organization to another or within the organization.
@param new_workspace_name: Name of the new workspace.
@type new_workspace_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@param dest_org_id: Id of the organization where the destination workspace is present. Can be C{None}.
@type dest_org_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Copied workspace id.
@rtype:integer
"""
config["newWorkspaceName"] = new_workspace_name
headers = self.request_headers.copy()
if dest_org_id:
headers["ZANALYTICS-DEST-ORGID"] = dest_org_id
response = self.ac.send_api_request("POST", self.endpoint, config, headers)
return int(response["data"]["workspaceId"])
def rename(self, workspace_name, config={}):
"""
Rename a specified workspace in the organization.
@param workspace_name: New name for the workspace.
@type workspace_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["workspaceName"] = workspace_name
response = self.ac.send_api_request("PUT", self.endpoint, config, self.request_headers)
def delete(self):
"""
Delete a specified workspace in the organization.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
response = self.ac.send_api_request("DELETE", self.endpoint, None, self.request_headers)
def get_secret_key(self, config={}):
"""
Returns the secret key of the specified workspace.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Workspace secret key.
@rtype:string
"""
endpoint = self.endpoint + "/secretkey"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["workspaceKey"]
def add_favorite(self):
"""
Adds the specified workspace to favorites.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/favorite"
response = self.ac.send_api_request("POST", endpoint, None, self.request_headers)
def remove_favorite(self):
"""
Removes the specified workspace from favorites.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/favorite"
response = self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def add_default(self):
"""
Marks the specified workspace as the default workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/default"
response = self.ac.send_api_request("POST", endpoint, None, self.request_headers)
def remove_default(self):
"""
Removes the specified workspace as the default workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/default"
response = self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def get_admins(self):
"""
Returns list of admins for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Workspace admin list.
@rtype:list
"""
endpoint = self.endpoint + "/admins"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["workspaceAdmins"]
def add_admins(self, email_ids, config={}):
"""
Add admins for the specified workspace.
@param email_ids: The email address of the admin users to be added.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = self.endpoint + "/admins"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def remove_admins(self, email_ids, config={}):
"""
Remove admins from the specified workspace.
@param email_ids: The email address of the admin users to be removed.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = self.endpoint + "/admins"
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def get_share_info(self):
"""
Returns shared details of the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Workspace share info.
@rtype:dictionary
"""
endpoint = self.endpoint + "/share"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]
def share_views(self, view_ids, email_ids, permissions, config={}):
"""
Share views to the specified users.
@param view_ids: View ids to be shared.
@type view_ids: list
@param email_ids: The email address of the users to whom the views need to be shared.
@type email_ids: list
@param permissions: Contains permission details.
@type permissions: dictionary
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["viewIds"] = view_ids
config["emailIds"] = email_ids
config["permissions"] = permissions
endpoint = self.endpoint + "/share"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def remove_share(self, view_ids, email_ids, config={}):
"""
Remove shared views for the specified users.
@param view_ids: View ids whose sharing needs to be removed.
@type view_ids: list
@param email_ids: The email address of the users for whom the sharing needs to be removed.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
if view_ids is not None:
config["viewIds"] = view_ids
endpoint = self.endpoint + "/share"
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def get_folders(self):
"""
Returns list of all accessible folders for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Folder list.
@rtype:list
"""
endpoint = self.endpoint + "/folders"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["folders"]
def create_folder(self, folder_name, config={}):
"""
Create a folder in the specified workspace.
@param folder_name: Name of the folder to be created.
@type folder_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created folder id.
@rtype:integer
"""
config["folderName"] = folder_name
endpoint = self.endpoint + "/folders"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["folderId"])
def get_views(self, config={}):
"""
Returns list of all accessible views for the specified workspace.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: View list.
@rtype:list
"""
endpoint = self.endpoint + "/views"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["views"]
def create_table(self, table_design):
"""
Create a table in the specified workspace.
@param table_design: Table structure.
@type table_design: dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created table id.
@rtype:integer
"""
config = {}
config["tableDesign"] = table_design
endpoint = self.endpoint + "/tables"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["viewId"])
def copy_views(self, view_ids, dest_workspace_id, config={}, dest_org_id=None):
"""
Copy the specified views from one workspace to another workspace.
@param view_ids: The id of the views to be copied.
@type view_ids: list
@param dest_workspace_id: The destination workspace id.
@type dest_workspace_id: string
@param dest_org_id: Id of the organization where the destination workspace is present. Can be C{None}.
@type dest_org_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: View list.
@rtype:list
"""
config["viewIds"] = view_ids
config["destWorkspaceId"] = dest_workspace_id
endpoint = self.endpoint + "/views/copy"
headers = self.request_headers.copy()
if dest_org_id:
headers["ZANALYTICS-DEST-ORGID"] = dest_org_id
response = self.ac.send_api_request("POST", endpoint, config, headers)
return response["data"]["views"]
def enable_domain_access(self):
"""
Enable access to the workspace from the specified white label domain.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/wlaccess"
response = self.ac.send_api_request("POST", endpoint, None, self.request_headers)
def disable_domain_access(self):
"""
Disable access to the workspace from the specified white label domain.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/wlaccess"
response = self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def rename_folder(self, folder_id, folder_name, config={}):
"""
Rename a specified folder in the workspace.
@param folder_id: Id of the folder.
@type folder_id: string
@param folder_name: New name for the folder.
@type folder_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["folderName"] = folder_name
endpoint = self.endpoint + "/folders/" + folder_id
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def delete_folder(self, folder_id):
"""
Delete a specified folder in the workspace.
@param folder_id: Id of the folder to be deleted.
@type folder_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/folders/" + folder_id
self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def get_groups(self):
"""
Returns list of groups for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Group list.
@rtype:list
"""
endpoint = self.endpoint + "/groups"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["groups"]
def create_group(self, group_name, email_ids, config={}):
"""
Create a group in the specified workspace.
@param group_name: Name of the group.
@type group_name: string
@param email_ids: The email address of the users to be added to the group.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created group id.
@rtype:integer
"""
config["groupName"] = group_name
config["emailIds"] = email_ids
endpoint = self.endpoint + "/groups"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["groupId"])
def get_group_details(self, group_id):
"""
Get the details of the specified group.
@param group_id: Id of the group.
@type group_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Details of the specified group.
@rtype:dictionary
"""
endpoint = self.endpoint + "/groups/" + group_id
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["groups"]
def rename_group(self, group_id, group_name, config={}):
"""
Rename a specified group.
@param group_id: Id of the group.
@type group_id: string
@param group_name: New name for the group.
@type group_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["groupName"] = group_name
endpoint = self.endpoint + "/groups/" + group_id
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def delete_group(self, group_id):
"""
Delete a specified group.
@param group_id: The id of the group.
@type group_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/groups/" + group_id
self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def add_group_members(self, group_id, email_ids, config={}):
"""
Add users to the specified group.
@param group_id: Id of the group.
@type group_id: string
@param email_ids: The email address of the users to be added to the group.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = self.endpoint + "/groups/" + group_id + "/members"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def remove_group_members(self, group_id, email_ids, config={}):
"""
Remove users from the specified group.
@param group_id: Id of the group.
@type group_id: string
@param email_ids: The email address of the users to be removed from the group.
@type email_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["emailIds"] = email_ids
endpoint = self.endpoint + "/groups/" + group_id + "/members"
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def create_slideshow(self, slide_name, view_ids, config={}):
"""
Create a slideshow in the specified workspace.
@param slide_name: Name of the slideshow to be created.
@type slide_name: string
@param view_ids: Ids of the view to be included in the slideshow.
@type view_ids: list
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Id of the created slideshow.
@rtype:integer
"""
endpoint = self.endpoint + "/slides"
config["slideName"] = slide_name
config["viewIds"] = view_ids
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["slideId"])
def update_slideshow(self, slide_id, config={}):
"""
Update details of the specified slideshow.
@param slide_id: The id of the slideshow.
@type slide_id: string
@param config: Contains the control configurations.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/slides/" + slide_id
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def delete_slideshow(self, slide_id):
"""
Delete a specified slideshow in the workspace.
@param slide_id: Id of the slideshow.
@type slide_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/slides/" + slide_id
self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def get_slideshows(self):
"""
Returns list of slideshows for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Slideshow list.
@rtype:list
"""
endpoint = self.endpoint + "/slides"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["slideshows"]
def get_slideshow_url(self, slide_id, config={}):
"""
Returns slide URL to access the specified slideshow.
@param slide_id: Id of the slideshow.
@type slide_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Slideshow URL.
@rtype:string
"""
endpoint = self.endpoint + "/slides/" + slide_id + "/publish"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["slideUrl"]
def get_slideshow_details(self, slide_id):
"""
Returns details of the specified slideshow.
@param slide_id: Id of the slideshow.
@type slide_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Slideshow details.
@rtype:dictionary
"""
endpoint = self.endpoint + "/slides/" + slide_id
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["slideInfo"]
def create_variable(self, variable_name, variable_datatype, variable_type, config={}):
"""
Create a variable in the workspace.
@param variable_name: Name of the variable to be created.
@type variable_name: string
@param variable_datatype: Datatype of the variable to be created.
@type variable_datatype: string
@param variable_type: Type of the variable to be created.
@type variable_type: string
@param config: Contains the control parameters.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Id of the created variable.
@rtype:integer
"""
endpoint = self.endpoint + "/variables"
config["variableName"] = variable_name
config["variableDataType"] = variable_datatype
config["variableType"] = variable_type
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["variableId"])
def update_variable(self, variable_id, variable_name, variable_datatype, variable_type, config={}):
"""
Update details of the specified variable in the workspace.
@param variable_id: Id of the variable.
@type variable_id: string
@param variable_name: New name for the variable.
@type variable_name: string
@param variable_datatype: New datatype for the variable.
@type variable_datatype: string
@param variable_type: New type for the variable.
@type variable_type: string
@param config: Contains the control parameters.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/variables/" + variable_id
config["variableName"] = variable_name
config["variableDataType"] = variable_datatype
config["variableType"] = variable_type
response = self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def delete_variable(self, variable_id):
"""
Delete the specified variable in the workspace.
@param variable_id: Id of the variable.
@type variable_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/variables/" + variable_id
response = self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def get_variables(self):
"""
Returns list of variables for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Variable list.
@rtype:list
"""
endpoint = self.endpoint + "/variables"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["variables"]
def get_variable_details(self, variable_id):
"""
Returns details of the specified variable.
@param variable_id: Id of the variable.
@type variable_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Variable details.
@rtype:dictionary
"""
endpoint = self.endpoint + "/variables/" + variable_id
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]
def make_default_folder(self, folder_id):
"""
Make the specified folder as default.
@param folder_id: Id of the folder.
@type folder_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/folders/" + folder_id + "/default"
response = self.ac.send_api_request("PUT", endpoint, None, self.request_headers)
def get_datasources(self):
"""
Returns list of datasources for the specified workspace.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Datasource list.
@rtype:list
"""
endpoint = self.endpoint + "/datasources"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["dataSources"]
def sync_data(self, datasource_id, config={}):
"""
Initiate data sync for the specified datasource.
@param datasource_id: Id of the datasource.
@type datasource_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/datasources/" + datasource_id + "/sync"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def update_datasource_connection(self, datasource_id, config={}):
"""
Update connection details for the specified datasource.
@param datasource_id: Id of the datasource.
@type datasource_id: string
@param config: Contains the control parameters.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/datasources/" + datasource_id
response = self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
class ViewAPI:
"""
ViewAPI contains view level operations.
"""
def __init__(self, ac, org_id, workspace_id, view_id):
self.ac = ac
self.endpoint = "/restapi/v2/workspaces/" + workspace_id + "/views/" + view_id
self.request_headers = {}
self.request_headers["ZANALYTICS-ORGID"] = org_id
def rename(self, view_name, config={}):
"""
Rename a specified view in the workspace.
@param view_name: New name of the view.
@type view_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["viewName"] = view_name
response = self.ac.send_api_request("PUT", self.endpoint, config, self.request_headers)
def delete(self, config={}):
"""
Delete a specified view in the workspace.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
response = self.ac.send_api_request("DELETE", self.endpoint, config, self.request_headers)
def save_as(self, new_view_name, config={}):
"""
Copy a specified view within the workspace.
@param new_view_name: The name of the new view.
@type new_view_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created view id.
@rtype:integer
"""
config["viewName"] = new_view_name
endpoint = self.endpoint + "/saveas"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["viewId"])
def copy_formulas(self, formula_names, dest_workspace_id, config={}, dest_org_id=None):
"""
Copy the specified formulas from one table to another within the workspace or across workspaces.
@param formula_names: The name of the formula columns to be copied.
@type formula_names: list
@param dest_workspace_id: The ID of the destination workspace.
@type dest_workspace_id: string
@param dest_org_id: Id of the organization where the destination workspace is present. Can be C{None}.
@type dest_org_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["formulaColumnNames"] = formula_names
config["destWorkspaceId"] = dest_workspace_id
endpoint = self.endpoint + "/formulas/copy"
headers = self.request_headers.copy()
if dest_org_id:
headers["ZANALYTICS-DEST-ORGID"] = dest_org_id
self.ac.send_api_request("POST", endpoint, config, headers)
def add_favorite(self):
"""
Adds the specified view to favorites.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/favorite"
response = self.ac.send_api_request("POST", endpoint, None, self.request_headers)
def remove_favorite(self):
"""
Removes the specified view from favorites.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/favorite"
response = self.ac.send_api_request("DELETE", endpoint, None, self.request_headers)
def create_similar_views(self, ref_view_id, folder_id, config={}):
"""
Create reports for the specified table based on the reference table.
@param ref_view_id: The ID of the reference view.
@type ref_view_id: string
@param folder_id: The folder id where the views are to be saved.
@type folder_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["referenceViewId"] = ref_view_id
config["folderId"] = folder_id
endpoint = self.endpoint + "/similarviews"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def auto_analyse(self, config={}):
"""
Auto generate reports for the specified table.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/autoanalyse"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def get_my_permissions(self):
"""
Returns permissions for the specified view.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Permission details.
@rtype:dictionary
"""
endpoint = self.endpoint + "/share/mypermissions"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]["permissions"]
def get_view_url(self, config={}):
"""
Returns the URL to access the specified view.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: View URL.
@rtype:string
"""
endpoint = self.endpoint + "/publish"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["viewUrl"]
def get_embed_url(self, config={}):
"""
Returns embed URL to access the specified view.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Embed URL.
@rtype:string
"""
endpoint = self.endpoint + "/publish/embed"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["embedUrl"]
def get_private_url(self, config={}):
"""
Returns private URL to access the specified view.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Private URL.
@rtype:string
"""
endpoint = self.endpoint + "/publish/privatelink"
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["privateUrl"]
def create_private_url(self, config={}):
"""
Create a private URL for the specified view.
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Private URL.
@rtype:string
"""
endpoint = self.endpoint + "/publish/privatelink"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return response["data"]["privateUrl"]
def add_column(self, column_name, data_type, config={}):
"""
Add a column in the specified table.
@param column_name: The name of the column.
@type column_name: string
@param data_type: The data-type of the column.
@type data_type: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Created column id.
@rtype:integer
"""
config["columnName"] = column_name
config["dataType"] = data_type
endpoint = self.endpoint + "/columns"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return int(response["data"]["columnId"])
def hide_columns(self, column_ids):
"""
Hide the specified columns in the table.
@param column_ids: Ids of the columns to be hidden.
@type column_ids: list
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config = {}
config["columnIds"] = column_ids
endpoint = self.endpoint + "/columns/hide"
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def show_columns(self, column_ids):
"""
Show the specified hidden columns in the table.
@param column_ids: Ids of the columns to be shown.
@type column_ids: list
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config = {}
config["columnIds"] = column_ids
endpoint = self.endpoint + "/columns/show"
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def add_row(self, column_values, config={}):
"""
Add a single row in the specified table.
@param column_values: Contains the values for the row. The column names are the key.
@type column_values: dictionary
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: The column names and the added row values.
@rtype:dictionary
"""
config["columns"] = column_values
endpoint = self.endpoint + "/rows"
response = self.ac.send_api_request("POST", endpoint, config, self.request_headers)
return response["data"]
def update_row(self, column_values, criteria, config={}):
"""
Update rows in the specified table.
@param column_values: Contains the values for the row. The column names are the key.
@type column_values: dictionary
@param criteria: The criteria to be applied for updating data. Only rows matching the criteria will be updated. Pass C{None} to update all rows.
@type criteria: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: The list of updated columns and the count of updated rows.
@rtype:dictionary
"""
config["columns"] = column_values
if criteria is not None:
config["criteria"] = criteria
endpoint = self.endpoint + "/rows"
response = self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
return response["data"]
def delete_row(self, criteria, config={}):
"""
Delete rows in the specified table.
@param criteria: The criteria to be applied for deleting data. Only rows matching the criteria will be deleted. Pass C{None} to delete all rows.
@type criteria: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Deleted rows details.
@rtype:string
"""
if criteria is not None:
config["criteria"] = criteria
endpoint = self.endpoint + "/rows"
response = self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
return response["data"]["deletedRows"]
def rename_column(self, column_id, column_name, config={}):
"""
Rename a specified column in the table.
@param column_id: Id of the column.
@type column_id: string
@param column_name: New name for the column.
@type column_name: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
config["columnName"] = column_name
endpoint = self.endpoint + "/columns/" + column_id
self.ac.send_api_request("PUT", endpoint, config, self.request_headers)
def delete_column(self, column_id, config={}):
"""
Delete a specified column in the table.
@param column_id: Id of the column.
@type column_id: string
@param config: Contains any additional control parameters. Can be C{None}.
@type config:dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/columns/" + column_id
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def add_lookup(self, column_id, ref_view_id, ref_column_id, config={}):
"""
Add a lookup in the specified child table.
@param column_id: Id of the column.
@type column_id: string
            @param ref_view_id: The id of the table that contains the parent column.
@type ref_view_id: string
@param ref_column_id: The id of the parent column.
@type ref_column_id: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
            config["referenceViewId"] = ref_view_id
config["referenceColumnId"] = ref_column_id
endpoint = self.endpoint + "/columns/" + column_id + "/lookup"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
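        # Usage sketch: links a child column to a parent column in another table.
        # All three ids below are hypothetical placeholders.
        #
        #   table.add_lookup("<child_column_id>", "<parent_table_id>", "<parent_column_id>")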
def remove_lookup(self, column_id, config={}):
"""
Remove the lookup for the specified column in the table.
@param column_id: Id of the column.
@type column_id: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/columns/" + column_id + "/lookup"
self.ac.send_api_request("DELETE", endpoint, config, self.request_headers)
def auto_analyse_column(self, column_id, config={}):
"""
Auto generate reports for the specified column.
@param column_id: Id of the column.
@type column_id: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/columns/" + column_id + "/autoanalyse"
self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def refetch_data(self, config={}):
"""
Sync data from available datasource for the specified view.
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            """
            endpoint = self.endpoint + "/sync"
            self.ac.send_api_request("POST", endpoint, config, self.request_headers)
def get_last_import_details(self):
"""
Returns last import details of the specified view.
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
@return: Last import details.
            @rtype: dictionary
"""
endpoint = self.endpoint + "/importdetails"
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]
class BulkAPI:
"""
BulkAPI contains data operations.
"""
def __init__(self, ac, org_id, workspace_id):
self.ac = ac
self.endpoint = "/restapi/v2/workspaces/" + workspace_id
self.bulk_endpoint = "/restapi/v2/bulk/workspaces/" + workspace_id
self.request_headers = {}
self.request_headers["ZANALYTICS-ORGID"] = org_id
def import_data_in_new_table(self, table_name, file_type, auto_identify, file_path, config={}):
"""
Create a new table and import the data contained in the mentioned file into the created table.
@param table_name: Name of the new table to be created.
@type table_name: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param file_path: Path of the file to be imported.
@type file_path: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import result
            @rtype: dictionary
"""
endpoint = self.endpoint + "/data"
config["tableName"] = table_name
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, file_path)
return response["data"]
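        # Usage sketch: `bulk` stands for a BulkAPI instance built with an
        # authenticated client; the table name and file path are hypothetical.
        # Note that auto_identify is passed as the string "true"/"false".
        #
        #   result = bulk.import_data_in_new_table("SalesRaw", "csv", "true", "/tmp/sales.csv")
        #   print(result)  # import summary returned by the server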
def import_raw_data_in_new_table(self, table_name, file_type, auto_identify, data, config={}):
"""
Create a new table and import the raw data provided into the created table.
@param table_name: Name of the new table to be created.
@type table_name: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param data: Raw data to be imported.
@type data: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import result
            @rtype: dictionary
"""
endpoint = self.endpoint + "/data"
config["tableName"] = table_name
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, None, data)
return response["data"]
def import_data(self, view_id, import_type, file_type, auto_identify, file_path, config={}):
"""
Import the data contained in the mentioned file into the table.
            @param view_id: Id of the view into which the data is to be imported.
@type view_id: string
@param import_type: The type of import. Can be one of - append, truncateadd, updateadd.
@type import_type: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param file_path: Path of the file to be imported.
@type file_path: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import result
            @rtype: dictionary
"""
endpoint = self.endpoint + "/views/" + view_id + "/data"
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
config["importType"] = import_type
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, file_path)
return response["data"]
def import_raw_data(self, view_id, import_type, file_type, auto_identify, data, config={}):
"""
Import the raw data provided into the table.
            @param view_id: Id of the view into which the data is to be imported.
@type view_id: string
@param import_type: The type of import. Can be one of - append, truncateadd, updateadd.
@type import_type: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param data: Raw data to be imported.
@type data: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import result
            @rtype: dictionary
"""
endpoint = self.endpoint + "/views/" + view_id + "/data"
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
config["importType"] = import_type
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, None, data)
return response["data"]
def import_bulk_data_in_new_table(self, table_name, file_type, auto_identify, file_path, config={}):
"""
Asynchronously create a new table and import the data contained in the mentioned file into the created table.
@param table_name: Name of the new table to be created.
@type table_name: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param file_path: Path of the file to be imported.
@type file_path: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import job id
            @rtype: string
"""
endpoint = self.bulk_endpoint + "/data"
config["tableName"] = table_name
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, file_path)
return response["data"]["jobId"]
def import_bulk_data(self, view_id, import_type, file_type, auto_identify, file_path, config={}):
"""
Asynchronously import the data contained in the mentioned file into the table.
            @param view_id: Id of the view into which the data is to be imported.
@type view_id: string
@param import_type: The type of import. Can be one of - append, truncateadd, updateadd.
@type import_type: string
@param file_type: Type of the file to be imported.
@type file_type: string
@param auto_identify: Used to specify whether to auto identify the CSV format. Allowable values - true/false.
@type auto_identify: string
@param file_path: Path of the file to be imported.
@type file_path: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import job id
            @rtype: string
"""
endpoint = self.bulk_endpoint + "/views/" + view_id + "/data"
config["fileType"] = file_type
config["autoIdentify"] = auto_identify
config["importType"] = import_type
response = self.ac.send_import_api_request(endpoint, config, self.request_headers, file_path)
return response["data"]["jobId"]
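        # Usage sketch of the asynchronous import flow: start the job, then poll
        # it via get_import_job_details() (defined below). The view id, file path,
        # and the status field checked here are assumptions for illustration only.
        #
        #   import time
        #   job_id = bulk.import_bulk_data("<view_id>", "append", "csv", "true", "/tmp/sales.csv")
        #   while bulk.get_import_job_details(job_id).get("status") == "IN PROGRESS":
        #       time.sleep(5)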
def get_import_job_details(self, job_id):
"""
Returns the details of the import job.
@param job_id: Id of the job.
@type job_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Import job details
            @rtype: dictionary
"""
endpoint = self.bulk_endpoint + "/importjobs/" + job_id
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]
def export_data(self, view_id, response_format, file_path, config={}):
"""
            Export the data of the mentioned table or view.
@param view_id: Id of the view to be exported.
@type view_id: string
@param response_format: The format in which the data is to be exported.
@type response_format: string
            @param file_path: Path of the file where the exported data is to be stored.
            @type file_path: string
            @param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.endpoint + "/views/" + view_id + "/data"
config["responseFormat"] = response_format
self.ac.send_export_api_request(endpoint, config, self.request_headers, file_path)
def initiate_bulk_export(self, view_id, response_format, config={}):
"""
            Initiate an asynchronous export of the mentioned table or view data.
@param view_id: Id of the view to be exported.
@type view_id: string
@param response_format: The format in which the data is to be exported.
@type response_format: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Export job id
            @rtype: string
"""
endpoint = self.bulk_endpoint + "/views/" + view_id + "/data"
config["responseFormat"] = response_format
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["jobId"]
def initiate_bulk_export_using_sql(self, sql_query, response_format, config={}):
"""
Initiate asynchronous export with the given SQL Query.
@param sql_query: The SQL Query whose output is exported.
@type sql_query: string
@param response_format: The format in which the data is to be exported.
@type response_format: string
@param config: Contains any additional control parameters. Can be C{None}.
            @type config: dictionary
            @raise ServerError: If the server has received the request but did not process the request due to some error.
            @raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Export job id
            @rtype: string
"""
endpoint = self.bulk_endpoint + "/data"
config["responseFormat"] = response_format
config["sqlQuery"] = sql_query
response = self.ac.send_api_request("GET", endpoint, config, self.request_headers)
return response["data"]["jobId"]
def get_export_job_details(self, job_id):
"""
Returns the details of the export job.
@param job_id: Id of the export job.
@type job_id: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
            @return: Export job details
            @rtype: dictionary
"""
endpoint = self.bulk_endpoint + "/exportjobs/" + job_id
response = self.ac.send_api_request("GET", endpoint, None, self.request_headers)
return response["data"]
def export_bulk_data(self, job_id, file_path):
"""
Download the exported data for the mentioned job id.
@param job_id: Id of the job to be exported.
@type job_id: string
            @param file_path: Path of the file where the exported data is to be stored.
@type file_path: string
@raise ServerError: If the server has received the request but did not process the request due to some error.
@raise ParseError: If the server has responded but client was not able to parse the response.
"""
endpoint = self.bulk_endpoint + "/exportjobs/" + job_id + "/data"
self.ac.send_export_api_request(endpoint, None, self.request_headers, file_path)
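        # Usage sketch of the asynchronous export flow: initiate, poll, then
        # download. The status field and value checked below are assumptions;
        # consult the job details returned by the server for the real field names.
        #
        #   import time
        #   job_id = bulk.initiate_bulk_export("<view_id>", "csv")
        #   while bulk.get_export_job_details(job_id).get("status") == "IN PROGRESS":
        #       time.sleep(5)
        #   bulk.export_bulk_data(job_id, "/tmp/export.csv")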
def set_proxy(self, proxy_host, proxy_port, proxy_user_name, proxy_password):
"""
Internal method to handle proxy details.
"""
self.proxy = True
self.proxy_host = proxy_host
self.proxy_port = proxy_port
self.proxy_user_name = proxy_user_name
self.proxy_password = proxy_password
def send_import_api_request(self, request_url, config, request_headers, file_path, data=None):
"""
Internal method to handle HTTP request.
"""
        if self.access_token is None:
self.regenerate_analytics_oauth_token()
request_url = self.analytics_server_url + request_url
config_data = None
if bool(config):
config_data = "CONFIG=" + urllib.parse.quote_plus(json.dumps(config))
if bool(data):
if (bool(config_data)):
config_data += "&"
else:
config_data = ""
config_data += "DATA=" + urllib.parse.quote_plus(json.dumps(data))
resp_obj = self.submit_import_request(request_url, config_data, request_headers, self.access_token)
else:
files = {'FILE': open(file_path, 'rb')}
resp_obj = self.submit_import_request(request_url, config_data, request_headers, self.access_token, files)
if not (str(resp_obj.status_code).startswith("2")):
if (self.is_oauth_expired(resp_obj)):
self.regenerate_analytics_oauth_token()
if bool(data):
resp_obj = self.submit_import_request(request_url, config_data, request_headers, self.access_token)
                else:
                    # Re-open the file for the retry: the first attempt consumed the stream.
                    files = {'FILE': open(file_path, 'rb')}
                    resp_obj = self.submit_import_request(request_url, config_data, request_headers, self.access_token,
                                                          files)
if not (str(resp_obj.status_code).startswith("2")):
raise ServerError(resp_obj.resp_content, False)
else:
raise ServerError(resp_obj.resp_content, False)
response = resp_obj.resp_content
response = json.loads(response)
return response
    def submit_import_request(self, request_url, parameters, request_headers=None, access_token=None, files=None):
        """
        Internal method to send request to server.
        """
        try:
            # Default to None (not a shared mutable {}): the headers are mutated below.
            if request_headers is None:
                request_headers = {}
            if access_token is not None:
                request_headers["Authorization"] = "Zoho-oauthtoken " + access_token
            request_headers["User-Agent"] = "Analytics Python Client v" + self.CLIENT_VERSION
            req_obj = requests.Session()
            if self.proxy:
                proxy_details = {
                    "http": "http://" + self.proxy_host + ":" + self.proxy_port,
                    "https": "http://" + self.proxy_host + ":" + self.proxy_port
                }
                req_obj.proxies = proxy_details
                if self.proxy_user_name is not None and self.proxy_password is not None:
                    proxy_auth_details = HTTPProxyDigestAuth(self.proxy_user_name, self.proxy_password)
                    req_obj.auth = proxy_auth_details
if bool(files):
resp_obj = req_obj.post(request_url, params=parameters, files=files, headers=request_headers)
else:
resp_obj = req_obj.post(request_url, params=parameters, headers=request_headers)
resp_obj = response_obj(resp_obj)
except Exception as ex:
resp_obj = response_obj(ex)
return resp_obj
def send_export_api_request(self, request_url, config, request_headers, file_path):
"""
Internal method to handle HTTP request.
"""
        if self.access_token is None:
            self.regenerate_analytics_oauth_token()
        request_url = self.analytics_server_url + request_url
        config_data = None
        if bool(config):
            config_data = "CONFIG=" + urllib.parse.quote_plus(json.dumps(config))
        resp_obj = self.submit_export_request(request_url, config_data, request_headers, self.access_token)
        if not (str(resp_obj.status_code).startswith("2")):
            wrapped = response_obj(resp_obj)
            if (self.is_oauth_expired(wrapped)):
                self.regenerate_analytics_oauth_token()
                resp_obj = self.submit_export_request(request_url, config_data, request_headers, self.access_token)
                if not (str(resp_obj.status_code).startswith("2")):
                    raise ServerError(response_obj(resp_obj).resp_content, False)
            else:
                raise ServerError(wrapped.resp_content, False)
        # Open the output file only after a successful response, so a failed
        # export does not leave behind an empty file.
        with open(file_path, "wb") as file:
            file.write(resp_obj.content)
        return
    def submit_export_request(self, request_url, parameters, request_headers=None, access_token=None):
        """
        Internal method to send request to server.
        """
        try:
            # Default to None (not a shared mutable {}): the headers are mutated below.
            if request_headers is None:
                request_headers = {}
            if access_token is not None:
                request_headers["Authorization"] = "Zoho-oauthtoken " + access_token
            request_headers["User-Agent"] = "Analytics Python Client v" + self.CLIENT_VERSION
            req_obj = requests.Session()
            if self.proxy:
                proxy_details = {
                    "http": "http://" + self.proxy_host + ":" + self.proxy_port,
                    "https": "http://" + self.proxy_host + ":" + self.proxy_port
                }
                req_obj.proxies = proxy_details
                if self.proxy_user_name is not None and self.proxy_password is not None:
                    proxy_auth_details = HTTPProxyDigestAuth(self.proxy_user_name, self.proxy_password)
                    req_obj.auth = proxy_auth_details
resp_obj = req_obj.get(request_url, params=parameters, headers=request_headers)
except Exception as ex:
resp_obj = response_obj(ex)
return resp_obj
def send_api_request(self, request_method, request_url, config, request_headers, is_json_response=True):
"""
Internal method to handle HTTP request.
"""
        if self.access_token is None:
self.regenerate_analytics_oauth_token()
request_url = self.analytics_server_url + request_url
config_data = None
if bool(config):
config_data = "CONFIG=" + urllib.parse.quote_plus(json.dumps(config))
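        # Wire-format sketch: with config={"fileType": "csv"}, the line above yields
        # config_data == 'CONFIG=%7B%22fileType%22%3A+%22csv%22%7D', i.e. the JSON
        # document URL-encoded into a single CONFIG parameter.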
resp_obj = self.submit_request(request_method, request_url, config_data, request_headers, self.access_token)
if not (str(resp_obj.status_code).startswith("2")):
if (self.is_oauth_expired(resp_obj)):
self.regenerate_analytics_oauth_token()
resp_obj = self.submit_request(request_method, request_url, config_data, request_headers,
self.access_token)
if not (str(resp_obj.status_code).startswith("2")):
raise ServerError(resp_obj.resp_content, False)
else:
raise ServerError(resp_obj.resp_content, False)
# API success - No response case
if (str(resp_obj.status_code) != "200"):
return
response = resp_obj.resp_content
if is_json_response:
response = json.loads(response)
return response
    def submit_request(self, request_method, request_url, parameters, request_headers=None, access_token=None):
        """
        Internal method to send request to server.
        """
        try:
            # Default to None (not a shared mutable {}): the headers are mutated below.
            if request_headers is None:
                request_headers = {}
            if access_token is not None:
                request_headers["Authorization"] = "Zoho-oauthtoken " + access_token
            request_headers["User-Agent"] = "Analytics Python Client v" + self.CLIENT_VERSION
            req_obj = requests.Session()
            if self.proxy:
                proxy_details = {
                    "http": "http://" + self.proxy_host + ":" + self.proxy_port,
                    "https": "http://" + self.proxy_host + ":" + self.proxy_port
                }
                req_obj.proxies = proxy_details
                if self.proxy_user_name is not None and self.proxy_password is not None:
                    proxy_auth_details = HTTPProxyDigestAuth(self.proxy_user_name, self.proxy_password)
                    req_obj.auth = proxy_auth_details
resp_obj = None
if request_method == "GET":
resp_obj = req_obj.get(request_url, params=parameters, headers=request_headers)
elif request_method == "POST":
resp_obj = req_obj.post(request_url, params=parameters, headers=request_headers)
elif request_method == "PUT":
resp_obj = req_obj.put(request_url, params=parameters, headers=request_headers)
elif request_method == "DELETE":
resp_obj = req_obj.delete(request_url, params=parameters, headers=request_headers)
resp_obj = response_obj(resp_obj)
except Exception as ex:
resp_obj = response_obj(ex)
return resp_obj
def get_request_obj(self):
"""
        Internal method to build a proxy-aware requests session.
"""
req_obj = requests.Session()
if self.proxy:
proxy_details = {
"http": "http://" + self.proxy_host + ":" + self.proxy_port,
"https": "http://" + self.proxy_host + ":" + self.proxy_port
}
req_obj.proxies = proxy_details
            if self.proxy_user_name is not None and self.proxy_password is not None:
                proxy_auth_details = HTTPProxyDigestAuth(self.proxy_user_name, self.proxy_password)
                req_obj.auth = proxy_auth_details
        return req_obj
def is_oauth_expired(self, resp_obj):
"""
        Internal method to check whether the access token has expired.
"""
try:
resp_content = json.loads(resp_obj.resp_content)
err_code = resp_content["data"]["errorCode"]
return err_code == 8535
except Exception:
return False
def regenerate_analytics_oauth_token(self):
"""
Internal method for getting OAuth token.
"""
oauth_params = {}
oauth_params["client_id"] = self.client_id
oauth_params["client_secret"] = self.client_secret
oauth_params["refresh_token"] = self.refresh_token
oauth_params["grant_type"] = "refresh_token"
oauth_params = urllib.parse.urlencode(oauth_params) # .encode(self.COMMON_ENCODE_CHAR)
req_url = self.accounts_server_url + "/oauth/v2/token"
oauth_resp_obj = self.submit_request("POST", req_url, oauth_params)
if (oauth_resp_obj.status_code == 200):
oauth_json_resp = json.loads(oauth_resp_obj.resp_content)
if ("access_token" in oauth_json_resp):
self.access_token = oauth_json_resp["access_token"]
return
raise ServerError(oauth_resp_obj.resp_content, True)
class response_obj:
"""
Internal class.
"""
    def __init__(self, resp_obj):
        # Be tolerant of what callers pass in: the except-branches hand over an
        # Exception, which has no text/status_code/headers attributes.
        self.resp_content = getattr(resp_obj, "text", getattr(resp_obj, "resp_content", str(resp_obj)))
        self.status_code = getattr(resp_obj, "status_code", -1)
        self.headers = getattr(resp_obj, "headers", {})
class ServerError(Exception):
"""
    ServerError is thrown if the analytics server has received the request but did not process the
    request due to some error, for example an authorization failure.
"""
def __init__(self, response, is_IAM_Error):
self.errorCode = 0
self.message = response
try:
error_data = json.loads(response)
if is_IAM_Error:
self.message = "Exception while generating oauth token. Response - " + response
else:
self.errorCode = error_data["data"]["errorCode"]
self.message = error_data["data"]["errorMessage"]
except Exception as inst:
print(inst)
self.parseError = inst
def __str__(self):
return repr(self.message)
class ParseError(Exception):
"""
ParseError is thrown if the server has responded but client was not able to parse the response.
    Possible reasons could be a version mismatch. The client might have to be updated to a newer version.
"""
def __init__(self, responseContent, message, origExcep):
self.responseContent = responseContent #: The complete response content as sent by the server.
self.message = message #: The message describing the error.
self.origExcep = origExcep #: The original exception that occurred during parsing(Can be C{None}).
def __str__(self):
        return repr(self.message)
import configparser
import errno
import json
import os
import re
import subprocess
import sys
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_root():
"""Get the project root directory.
We require that all commands are run from the project root, i.e. the
directory that contains setup.py, setup.cfg, and versioneer.py .
"""
root = os.path.realpath(os.path.abspath(os.getcwd()))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
# allow 'python path/to/setup.py COMMAND'
root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
        err = ("Versioneer was unable to find the project root directory. "
"Versioneer requires setup.py to be executed from "
"its immediate directory (like 'python setup.py COMMAND'), "
"or in a way that lets it use sys.argv[0] to find the root "
"(like 'python path/to/setup.py COMMAND').")
raise VersioneerBadRootError(err)
try:
# Certain runtime workflows (setup.py install/develop in a setuptools
# tree) execute all dependencies in a single python process, so
# "versioneer" may be imported multiple times, and python's shared
# module-import table will cache the first one. So we can't use
# os.path.dirname(__file__), as that will find whichever
# versioneer.py was first imported, even in later projects.
me = os.path.realpath(os.path.abspath(__file__))
me_dir = os.path.normcase(os.path.splitext(me)[0])
vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
if me_dir != vsr_dir:
print("Warning: build in %s is using versioneer.py from %s"
% (os.path.dirname(me), versioneer_py))
except NameError:
pass
return root
def get_config_from_root(root):
"""Read the project setup.cfg file to determine Versioneer config."""
# This might raise EnvironmentError (if setup.cfg is missing), or
# configparser.NoSectionError (if it lacks a [versioneer] section), or
# configparser.NoOptionError (if it lacks "VCS="). See the docstring at
# the top of versioneer.py for instructions on writing your setup.cfg .
setup_cfg = os.path.join(root, "setup.cfg")
parser = configparser.ConfigParser()
with open(setup_cfg, "r") as f:
parser.read_file(f)
VCS = parser.get("versioneer", "VCS") # mandatory
def get(parser, name):
if parser.has_option("versioneer", name):
return parser.get("versioneer", name)
return None
cfg = VersioneerConfig()
cfg.VCS = VCS
cfg.style = get(parser, "style") or ""
cfg.versionfile_source = get(parser, "versionfile_source")
cfg.versionfile_build = get(parser, "versionfile_build")
cfg.tag_prefix = get(parser, "tag_prefix")
if cfg.tag_prefix in ("''", '""'):
cfg.tag_prefix = ""
cfg.parentdir_prefix = get(parser, "parentdir_prefix")
cfg.verbose = get(parser, "verbose")
return cfg
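# Example [versioneer] section of setup.cfg, as consumed by get_config_from_root()
# (values are illustrative; only VCS is mandatory):
#
#   [versioneer]
#   VCS = git
#   style = pep440
#   versionfile_source = src/mypkg/_version.py
#   versionfile_build = mypkg/_version.py
#   tag_prefix = v
#   parentdir_prefix = mypkg-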
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
# these dictionaries contain VCS-specific tools
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip().decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
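# Usage sketch: run_command() returns (stdout, returncode); stdout is None when
# the executable is missing or the call could not be started, so callers check both.
#
#   out, rc = run_command(["git"], ["rev-parse", "HEAD"], cwd=".")
#   if out is not None and rc == 0:
#       print(out)  # full commit hash of HEAD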
LONG_VERSION_PY['git'] = r'''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by github's download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.19 (https://github.com/python-versioneer/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "%(STYLE)s"
cfg.tag_prefix = "%(TAG_PREFIX)s"
cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %%s" %% dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None, None
stdout = p.communicate()[0].strip().decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
print("stdout was %%s" %% stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %%s but none started with prefix %%s" %%
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %%d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
if verbose:
print("likely tags: %%s" %% ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %%s not under git control" %% root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%%s*" %% tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%%s' doesn't start with prefix '%%s'"
print(fmt %% (full_tag, tag_prefix))
pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
%% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"],
cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post0.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post0.dev%%d" %% pieces["distance"]
else:
# exception #1
rendered = "0.post0.dev%%d" %% pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
    Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%%s'" %% style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
'''
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%s*" % tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
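# Worked example: with tag_prefix "v" and a `git describe` output of
# "v1.2.3-14-g1a2b3c4-dirty", the parsing above produces
#   pieces["closest-tag"] == "1.2.3", pieces["distance"] == 14,
#   pieces["short"] == "1a2b3c4", pieces["dirty"] == True.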
def do_vcs_install(manifest_in, versionfile_source, ipy):
"""Git-specific installation logic for Versioneer.
For Git, this means creating/changing .gitattributes to mark _version.py
for export-subst keyword substitution.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
files = [manifest_in, versionfile_source]
if ipy:
files.append(ipy)
try:
me = __file__
if me.endswith(".pyc") or me.endswith(".pyo"):
me = os.path.splitext(me)[0] + ".py"
versioneer_file = os.path.relpath(me)
except NameError:
versioneer_file = "versioneer.py"
files.append(versioneer_file)
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
files.append(".gitattributes")
run_command(GITS, ["add", "--"] + files)
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
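# Worked example: with parentdir_prefix "mypkg-" and a tree unpacked at
# /tmp/mypkg-1.2.3, the loop above matches the directory name and returns
# {"version": "1.2.3", "full-revisionid": None, "dirty": False,
#  "error": None, "date": None}.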
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.19) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
%s
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
"""
def versions_from_file(filename):
"""Try to determine the version from _version.py if present."""
try:
with open(filename) as f:
contents = f.read()
except EnvironmentError:
raise NotThisMethod("unable to read _version.py")
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
raise NotThisMethod("no version_json in _version.py")
return json.loads(mo.group(1))
def write_to_version_file(filename, versions):
"""Write the given version number to the given _version.py file."""
os.unlink(filename)
contents = json.dumps(versions, sort_keys=True,
indent=1, separators=(",", ": "))
with open(filename, "w") as f:
f.write(SHORT_VERSION_PY % contents)
print("set %s to '%s'" % (filename, versions["version"]))
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
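# Worked examples for render_pep440():
#   {"closest-tag": "1.2", "distance": 0, "short": "abc1234", "dirty": False} -> "1.2"
#   {"closest-tag": "1.2", "distance": 3, "short": "abc1234", "dirty": True}  -> "1.2+3.gabc1234.dirty"
#   {"closest-tag": None,  "distance": 7, "short": "abc1234", "dirty": False} -> "0+untagged.7.gabc1234"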
def render_pep440_pre(pieces):
"""TAG[.post0.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post0.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post0.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
    Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
class VersioneerBadRootError(Exception):
"""The project root directory is unknown or missing key files."""
def get_versions(verbose=False):
"""Get the project version from whatever source is available.
    Returns a dict with 'version', 'full-revisionid', 'dirty', 'error', and 'date' keys.
"""
if "versioneer" in sys.modules:
# see the discussion in cmdclass.py:get_cmdclass()
del sys.modules["versioneer"]
root = get_root()
cfg = get_config_from_root(root)
assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
handlers = HANDLERS.get(cfg.VCS)
assert handlers, "unrecognized VCS '%s'" % cfg.VCS
verbose = verbose or cfg.verbose
assert cfg.versionfile_source is not None, \
"please set versioneer.versionfile_source"
assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
versionfile_abs = os.path.join(root, cfg.versionfile_source)
# extract version from first of: _version.py, VCS command (e.g. 'git
# describe'), parentdir. This is meant to work for developers using a
# source checkout, for users of a tarball created by 'setup.py sdist',
# and for users of a tarball/zipball created by 'git archive' or github's
# download-from-tag feature or the equivalent in other VCSes.
get_keywords_f = handlers.get("get_keywords")
from_keywords_f = handlers.get("keywords")
if get_keywords_f and from_keywords_f:
try:
keywords = get_keywords_f(versionfile_abs)
ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
if verbose:
print("got version from expanded keyword %s" % ver)
return ver
except NotThisMethod:
pass
try:
ver = versions_from_file(versionfile_abs)
if verbose:
print("got version from file %s %s" % (versionfile_abs, ver))
return ver
except NotThisMethod:
pass
from_vcs_f = handlers.get("pieces_from_vcs")
if from_vcs_f:
try:
pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
ver = render(pieces, cfg.style)
if verbose:
print("got version from VCS %s" % ver)
return ver
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
if verbose:
print("got version from parentdir %s" % ver)
return ver
except NotThisMethod:
pass
if verbose:
print("unable to compute version")
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None, "error": "unable to compute version",
"date": None}
def get_version():
"""Get the short version string for this project."""
return get_versions()["version"]
def get_cmdclass(cmdclass=None):
"""Get the custom setuptools/distutils subclasses used by Versioneer.
If the package uses a different cmdclass (e.g. one from numpy), it
    should be provided as an argument.
"""
if "versioneer" in sys.modules:
del sys.modules["versioneer"]
# this fixes the "python setup.py develop" case (also 'install' and
# 'easy_install .'), in which subdependencies of the main project are
# built (using setup.py bdist_egg) in the same python process. Assume
# a main project A and a dependency B, which use different versions
# of Versioneer. A's setup.py imports A's Versioneer, leaving it in
# sys.modules by the time B's setup.py is executed, causing B to run
# with the wrong versioneer. Setuptools wraps the sub-dep builds in a
    # sandbox that restores sys.modules to its pre-build state, so the
# parent is protected against the child's "import versioneer". By
# removing ourselves from sys.modules here, before the child build
# happens, we protect the child from the parent's versioneer too.
# Also see https://github.com/python-versioneer/python-versioneer/issues/52
cmds = {} if cmdclass is None else cmdclass.copy()
# we add "version" to both distutils and setuptools
from distutils.core import Command
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
vers = get_versions(verbose=True)
print("Version: %s" % vers["version"])
print(" full-revisionid: %s" % vers.get("full-revisionid"))
print(" dirty: %s" % vers.get("dirty"))
print(" date: %s" % vers.get("date"))
if vers["error"]:
print(" error: %s" % vers["error"])
cmds["version"] = cmd_version
# we override "build_py" in both distutils and setuptools
#
# most invocation pathways end up running build_py:
# distutils/build -> build_py
# distutils/install -> distutils/build ->..
# setuptools/bdist_wheel -> distutils/install ->..
# setuptools/bdist_egg -> distutils/install_lib -> build_py
# setuptools/install -> bdist_egg ->..
# setuptools/develop -> ?
# pip install:
# copies source tree to a tempdir before running egg_info/etc
# if .git isn't copied too, 'git describe' will fail
# then does setup.py bdist_wheel, or sometimes setup.py install
# setup.py egg_info -> ?
# we override different "build_py" commands for both environments
if 'build_py' in cmds:
_build_py = cmds['build_py']
elif "setuptools" in sys.modules:
from setuptools.command.build_py import build_py as _build_py
else:
from distutils.command.build_py import build_py as _build_py
class cmd_build_py(_build_py):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace
# it with an updated value
if cfg.versionfile_build:
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_py"] = cmd_build_py
if "setuptools" in sys.modules:
from setuptools.command.build_ext import build_ext as _build_ext
else:
from distutils.command.build_ext import build_ext as _build_ext
class cmd_build_ext(_build_ext):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_ext.run(self)
if self.inplace:
# build_ext --inplace will only build extensions in
# build/lib<..> dir with no _version.py to write to.
# As in place builds will already have a _version.py
# in the module dir, we do not need to write one.
return
# now locate _version.py in the new build/ directory and replace
# it with an updated value
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_ext"] = cmd_build_ext
if "cx_Freeze" in sys.modules: # cx_freeze enabled?
from cx_Freeze.dist import build_exe as _build_exe
# nczeczulin reports that py2exe won't like the pep440-style string
# as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
# setup(console=[{
# "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
# "product_version": versioneer.get_version(),
# ...
class cmd_build_exe(_build_exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_build_exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["build_exe"] = cmd_build_exe
del cmds["build_py"]
if 'py2exe' in sys.modules: # py2exe enabled?
from py2exe.distutils_buildexe import py2exe as _py2exe
class cmd_py2exe(_py2exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_py2exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["py2exe"] = cmd_py2exe
# we override different "sdist" commands for both environments
if 'sdist' in cmds:
_sdist = cmds['sdist']
elif "setuptools" in sys.modules:
from setuptools.command.sdist import sdist as _sdist
else:
from distutils.command.sdist import sdist as _sdist
class cmd_sdist(_sdist):
def run(self):
versions = get_versions()
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old
# version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
root = get_root()
cfg = get_config_from_root(root)
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory
# (remembering that it may be a hardlink) and replace it with an
# updated value
target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile,
self._versioneer_generated_versions)
cmds["sdist"] = cmd_sdist
return cmds
CONFIG_ERROR = """
setup.cfg is missing the necessary Versioneer configuration. You need
a section like:
[versioneer]
VCS = git
style = pep440
versionfile_source = src/myproject/_version.py
versionfile_build = myproject/_version.py
tag_prefix =
parentdir_prefix = myproject-
You will also need to edit your setup.py to use the results:
import versioneer
setup(version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(), ...)
Please read the docstring in ./versioneer.py for configuration instructions,
edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
"""
SAMPLE_CONFIG = """
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
#VCS = git
#style = pep440
#versionfile_source =
#versionfile_build =
#tag_prefix =
#parentdir_prefix =
"""
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
def do_setup():
"""Do main VCS-independent setup function for installing Versioneer."""
root = get_root()
try:
cfg = get_config_from_root(root)
except (EnvironmentError, configparser.NoSectionError,
configparser.NoOptionError) as e:
if isinstance(e, (EnvironmentError, configparser.NoSectionError)):
print("Adding sample versioneer config to setup.cfg",
file=sys.stderr)
with open(os.path.join(root, "setup.cfg"), "a") as f:
f.write(SAMPLE_CONFIG)
print(CONFIG_ERROR, file=sys.stderr)
return 1
print(" creating %s" % cfg.versionfile_source)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG % {"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
ipy = os.path.join(os.path.dirname(cfg.versionfile_source),
"__init__.py")
if os.path.exists(ipy):
try:
with open(ipy, "r") as f:
old = f.read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
with open(ipy, "a") as f:
f.write(INIT_PY_SNIPPET)
else:
print(" %s unmodified" % ipy)
else:
print(" %s doesn't exist, ok" % ipy)
ipy = None
# Make sure both the top-level "versioneer.py" and versionfile_source
# (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
# they'll be copied into source distributions. Pip won't be able to
# install the package without this.
manifest_in = os.path.join(root, "MANIFEST.in")
simple_includes = set()
try:
with open(manifest_in, "r") as f:
for line in f:
if line.startswith("include "):
for include in line.split()[1:]:
simple_includes.add(include)
except EnvironmentError:
pass
# That doesn't cover everything MANIFEST.in can do
# (http://docs.python.org/2/distutils/sourcedist.html#commands), so
# it might give some false negatives. Appending redundant 'include'
# lines is safe, though.
if "versioneer.py" not in simple_includes:
print(" appending 'versioneer.py' to MANIFEST.in")
with open(manifest_in, "a") as f:
f.write("include versioneer.py\n")
else:
print(" 'versioneer.py' already in MANIFEST.in")
if cfg.versionfile_source not in simple_includes:
print(" appending versionfile_source ('%s') to MANIFEST.in" %
cfg.versionfile_source)
with open(manifest_in, "a") as f:
f.write("include %s\n" % cfg.versionfile_source)
else:
print(" versionfile_source already in MANIFEST.in")
# Make VCS-specific changes. For git, this means creating/changing
# .gitattributes to mark _version.py for export-subst keyword
# substitution.
do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
return 0
def scan_setup_py():
"""Validate the contents of setup.py against Versioneer's expectations."""
found = set()
setters = False
errors = 0
with open("setup.py", "r") as f:
for line in f.readlines():
if "import versioneer" in line:
found.add("import")
if "versioneer.get_cmdclass()" in line:
found.add("cmdclass")
if "versioneer.get_version()" in line:
found.add("get_version")
if "versioneer.VCS" in line:
setters = True
if "versioneer.versionfile_source" in line:
setters = True
if len(found) != 3:
print("")
print("Your setup.py appears to be missing some important items")
print("(but I might be wrong). Please make sure it has something")
print("roughly like the following:")
print("")
print(" import versioneer")
print(" setup( version=versioneer.get_version(),")
print(" cmdclass=versioneer.get_cmdclass(), ...)")
print("")
errors += 1
if setters:
print("You should remove lines like 'versioneer.VCS = ' and")
print("'versioneer.versionfile_source = ' . This configuration")
print("now lives in setup.cfg, and should be removed from setup.py")
print("")
errors += 1
return errors
if __name__ == "__main__":
cmd = sys.argv[1]
if cmd == "setup":
errors = do_setup()
errors += scan_setup_py()
if errors:
sys.exit(1) | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/versioneer.py | versioneer.py |
from zoho_books_python_sdk.resources import Bills
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
bills = Bills()
response = bills.create(body=body, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
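# Usage sketch (Prefect 1.x flow API; the bill fields and IDs below are
# illustrative assumptions, not a guaranteed Zoho Books schema):
#   from prefect import Flow
#   with Flow("create-bill") as flow:
#       bill = Create()(body={"vendor_id": "460000000020029",
#                             "bill_number": "BILL-00001"})
#   flow.run()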
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
bills = Bills()
response = bills.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
bills = Bills()
response = bills.get(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
bills = Bills()
response = bills.update(id_=id_, path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
bills = Bills()
response = bills.delete(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/src/zoho_books_prefect_tasks/tasks/bills.py | bills.py |
from zoho_books_python_sdk.resources import ContactPersons
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.create(path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
contact_persons = ContactPersons()
response = contact_persons.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.get(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.update(id_=id_, path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.delete(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Primary(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.primary(id_=id_)
return response
except Exception as error:
print(error)
raise error | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/src/zoho_books_prefect_tasks/tasks/contact_persons.py | contact_persons.py |
from zoho_books_python_sdk.resources import SalesOrders
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.create(path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
sales_orders = SalesOrders()
response = sales_orders.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.get(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.update(id_=id_, path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.delete(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Void(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.void(id_=id_, body=body)
return response
except Exception as error:
print(error)
raise error
class Open(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.open(id_=id_)
return response
except Exception as error:
print(error)
raise error
class Submit(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.submit(id_=id_)
return response
except Exception as error:
print(error)
raise error
class Approve(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.approve(id_=id_)
return response
except Exception as error:
print(error)
raise error
class BulkExport(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.bulk_export(body=body)
return response
except Exception as error:
print(error)
raise error
class BulkPrint(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.bulk_print(body=body)
return response
except Exception as error:
print(error)
raise error
class UpdateBillingAddress(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.update_billing_address(id_=id_, body=body)
return response
except Exception as error:
print(error)
raise error
class UpdateShippingAddress(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.update_shipping_address(id_=id_, body=body)
return response
except Exception as error:
print(error)
raise error | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/src/zoho_books_prefect_tasks/tasks/sales_orders.py | sales_orders.py |
from zoho_books_python_sdk.resources import Organizations
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
organizations = Organizations()
response = organizations.create(body=body, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
organizations = Organizations()
response = organizations.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
organizations = Organizations()
response = organizations.get(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
organizations = Organizations()
response = organizations.update(id_=id_, path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
organizations = Organizations()
response = organizations.delete(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/src/zoho_books_prefect_tasks/tasks/organizations.py | organizations.py |
from zoho_books_python_sdk.resources import Contacts
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
contacts = Contacts()
response = contacts.create(body=body, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
contacts = Contacts()
response = contacts.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.get(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, body: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
contacts = Contacts()
response = contacts.update(id_=id_, path_params=path_params, query=query, body=body, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, query: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.delete(id_=id_, path_params=path_params, query=query, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Active(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.active(id_=id_)
return response
except Exception as error:
print(error)
raise error
class Inactive(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.inactive(id_=id_)
return response
except Exception as error:
print(error)
raise error
class ListComments(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.list_comments(id_=id_, query=query)
return response
except Exception as error:
print(error)
raise error
'''
class ListContactPersons(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.list_contact_persons(id_=id_, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
'''
class GetEmailStatement(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.get_email_statement(id_=id_, query=query)
return response
except Exception as error:
print(error)
raise error
class SendEmailStatement(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.send_email_statement(id_=id_, body=body, query=query)
return response
except Exception as error:
print(error)
raise error
class SendEmail(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.send_email(id_=id_, body=body, query=query)
return response
except Exception as error:
print(error)
raise error | zoho-books-prefect-tasks | /zoho_books_prefect_tasks-0.0.2.tar.gz/zoho_books_prefect_tasks-0.0.2/src/zoho_books_prefect_tasks/tasks/contacts.py | contacts.py |
from .base_client import BaseClient
from .oauth_manager import OAuthManager
class Client(BaseClient):
def __init__(self, **opts):
self.oauth_manager = OAuthManager(**opts)
super(Client, self).__init__(
organization_id=self.oauth_manager.client.storage.get('zoho_books', 'organization_id'),
region=self.oauth_manager.client.storage.get('zoho_books', 'region'), **opts)
def authenticated_fetch(self, path: str = None, method: str = None, path_params: dict = None, query: dict = None,
body: dict = None, headers: dict = None, mimetype: str = 'application/json',
encode_json_string: bool = False, **kwargs):
access_token = self.oauth_manager.get_access_token()
if headers:
headers_auth = {**headers, **{'authorization': 'Zoho-oauthtoken {}'.format(access_token)}}
else:
headers_auth = {**{'authorization': 'Zoho-oauthtoken {}'.format(access_token)}}
return self.fetch(path=path,
method=method or 'GET',
path_params=path_params or dict(),
query=query or dict(),
headers=headers_auth,
body=body or dict(),
mimetype=mimetype,
encode_json_string=encode_json_string,
**kwargs
)
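    # Usage sketch (assumes the AWS-backed Zoho credentials consumed by
    # OAuthManager are already configured; the resource and query values are
    # illustrative):
    #   invoices = Client(resource='invoices')
    #   page = invoices.list(query={'page': 1})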
def list(self, **options):
return self.authenticated_fetch(path='', **options)
def create(self, body: dict = None, path_params: dict = None, query: dict = None, **kwargs):
return self.authenticated_fetch(path='',
method='POST',
path_params=path_params or {},
query=query or {},
body=body or {},
**kwargs
)
def get(self, id_: str = '', path_params: dict = None, query: dict = None, **kwargs):
return self.authenticated_fetch(path=f'{id_}/',
path_params=path_params or {},
query=query or {},
**kwargs
)
def update(self, id_: str = '', body: dict = None, path_params: dict = None, query: dict = None, **kwargs):
return self.authenticated_fetch(path=f'{id_}/',
method='PUT',
path_params=path_params or {},
query=query or {},
body=body or {},
**kwargs
)
def delete(self, id_: str = '', path_params: dict = None, query: dict = None, **kwargs):
return self.authenticated_fetch(path=f'{id_}/',
method='DELETE',
path_params=path_params or {},
query=query or {},
**kwargs
) | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/client.py | client.py |
import re
import requests
import json
class BaseClient:
def __init__(self, resource: str = None, path: str = None, origin: str = None, organization_id: str = None,
region: str = 'com', **opts):
self.resource = resource
self.path = path or resource
self.origin = origin or "https://books.zoho.{}/api/v3".format(region)
self.url = re.sub(r"\/$", "", self.origin) + "/" + self.path + "/"
self.headers = {}
self.query = {
'organization_id': organization_id
}
def fetch(self, path: str = None, method: str = None, path_params: dict = None, query: dict = None,
body: dict = None, headers: dict = None, mimetype: str = "application/json",
encode_json_string: bool = False, **kwargs):
if not method:
method = 'GET'
if not path_params:
path_params = {}
if not query:
query = {}
if not body:
body = {}
if not headers:
headers = {}
        if encode_json_string:
            # form_encode() already JSON-encodes the body; dumping it here as
            # well would double-encode the JSONString payload.
            body = self.form_encode(body)
else:
if mimetype == 'application/json':
body = json.dumps(body)
query.update(self.query)
target = self._replace_path(self.url + path, path_params)
headers = {**self.headers, **{"content-type": mimetype}, **headers}
response = requests.request(method, target.rstrip("/"), headers=headers, params=query, data=body)
print(query)
print(response)
        if 'application/json' in response.headers.get('Content-Type', ''):
return response.json()
else:
return response
@staticmethod
def _replace_path(path: str = None, path_params: dict = None) -> str:
if path_params is None:
path_params = {}
new_path = path
for key in path_params:
new_path = new_path.replace(':' + key, path_params[key])
return new_path
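    # Illustration: _replace_path('https://books.zoho.com/api/v3/bills/:bill_id/',
    #                             {'bill_id': '123'})
    #   -> 'https://books.zoho.com/api/v3/bills/123/'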
@staticmethod
    def form_encode(body: dict = None) -> dict:
form = {'JSONString': json.dumps(body, separators=(',', ':'))}
return form | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/base_client.py | base_client.py |
import json
import os
import time
from datetime import datetime, timedelta
from .base_client import BaseClient
from .aws import SecretsManagerClient, SecretsManagerStorage, MissingSetting
class OAuth(BaseClient):
def __init__(self, **opts):
self.aws_access_key = opts.get('aws_access_key', os.getenv('AWS_ACCESS_KEY_ID'))
self.aws_secret_key = opts.get('aws_secret_key', os.getenv('AWS_SECRET_ACCESS_KEY'))
self.aws_secretsmanager_secret_name = opts.get('aws_secretsmanager_secret_name',
os.getenv('AWS_SM_ZOHO_BOOKS_SECRET_NAME'))
self.aws_secretsmanager_region = opts.get('aws_secretsmanager_region', os.getenv('AWS_SECRETS_MANAGER_REGION'))
self.secretsmanager_client = SecretsManagerClient.get_instance(
self.aws_access_key, self.aws_secret_key,
region_name=self.aws_secretsmanager_region,
)
self.secretsmanager_client.name = self.aws_secretsmanager_secret_name
self.storage = SecretsManagerStorage(secretsmanager_client=self.secretsmanager_client,
name=self.secretsmanager_client.name)
self.storage.read_config()
super(OAuth, self).__init__(resource="oauth", path="oauth/v2", origin="https://accounts.zoho.{}".format(
self.storage.get('zoho_books', 'region')))
self.client_id = self.storage.get('zoho_books', 'client_id')
self.client_secret = self.storage.get('zoho_books', 'client_secret')
self.refresh_token = self.storage.get('zoho_books', 'refresh_token')
try:
self.expiry_time = self.storage.get('zoho_books', 'expiry_time')
except MissingSetting:
self.storage.set('zoho_books', 'expiry_time', str(time.mktime(datetime(1970, 1, 1, 0, 0, 1).timetuple())))
self.expiry_time = self.storage.get('zoho_books', 'expiry_time')
try:
self.access_token = self.storage.get('zoho_books', 'access_token')
except MissingSetting:
self.refresh_access_token()
self.access_token = self.storage.get('zoho_books', 'access_token')
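    # The Secrets Manager secret is expected to hold the INI-style config
    # serialized as JSON; a hypothetical shape (values elided):
    #   {"zoho_books": {"region": "com", "organization_id": "...",
    #                   "client_id": "...", "client_secret": "...",
    #                   "refresh_token": "..."}}
    # refresh_access_token() writes 'access_token' and 'expiry_time' back.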
def refresh_access_token(self):
token = self.fetch(
path='token',
method='POST',
query={
'client_id': self.client_id,
'client_secret': self.client_secret,
'refresh_token': self.refresh_token,
'grant_type': 'refresh_token'
}
)
print(token)
self.access_token = token.get('access_token')
self.storage.set('zoho_books', 'access_token', self.access_token)
expiry_time = datetime.now() + timedelta(seconds=token.get("expires_in", 0))
self.expiry_time = time.mktime(datetime(expiry_time.year, expiry_time.month, expiry_time.day,
expiry_time.hour, expiry_time.minute, expiry_time.second).timetuple())
self.storage.set('zoho_books', 'expiry_time', str(self.expiry_time))
print(self.storage.get('zoho_books', 'expiry_time'))
print('Saving token: {}'.format({s: dict(self.storage.items(s)) for s in self.storage.sections()}))
self.secretsmanager_client.put_value(
secret_value=json.dumps({s: dict(self.storage.items(s)) for s in self.storage.sections()}))
return token
    def revoke_token(self):
        # fetch() takes no 'req' parameter; pass the HTTP method and payload
        # directly so the call matches BaseClient.fetch's signature.
        return self.fetch(
            path="token/revoke",
            method="POST",
            body={
                "refresh_token": self.refresh_token
            }
        ) | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/oauth.py | oauth.py
from ..client import Client
class Bills(Client):
def __init__(self, **opts):
super(Bills, self).__init__(**{**opts, **{'resource': 'bills'}})
def void(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/void/',
method='POST',
mimetype='application/json',
)
def open(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/open/',
method='POST',
mimetype='application/json',
)
def submit(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/submit/',
method='POST',
mimetype='application/json',
)
def approve(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/approve/',
method='POST',
mimetype='application/json',
)
def update_billing_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/billing/',
method='PUT',
body=body or {},
mimetype='application/json',
encode_json_string=False,
)
def list_payments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/payments/',
method='GET',
mimetype='application/json',
)
def delete_payment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/payments/:payment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
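    # Usage sketch (hypothetical IDs): BaseClient._replace_path fills the
    # ':payment_id' placeholder, so a call looks like:
    #   Bills().delete_payment(id_='460000000039129',
    #                          path_params={'payment_id': '460000000042059'})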
def apply_credits(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/credits/',
method='POST',
body=body or {},
mimetype='application/json',
encode_json_string=False,
)
def list_comments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
mimetype='application/json',
)
def add_comment(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='POST',
body=body or {},
mimetype='application/json',
encode_json_string=False,
)
def delete_comment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
# TODO: Missing /attachments endpoints | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/resources/bills.py | bills.py |
from ..client import Client
class Invoices(Client):
def __init__(self, **opts):
super(Invoices, self).__init__(**{**opts, **{'resource': 'invoices'}})
def sent(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/sent/',
method='POST',
mimetype='application/json',
)
def void(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/void/',
method='POST',
mimetype='application/json',
)
def draft(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/draft/',
method='POST',
mimetype='application/json',
)
def email(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def email_multiple(self, body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def submit(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/submit/',
method='POST',
mimetype='application/json',
)
def approve(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/approve/',
method='POST',
mimetype='application/json',
)
def get_email_content(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='GET',
query=query or {},
mimetype='application/json',
)
def send_payment_reminder(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def send_bulk_payment_reminder(self, query: dict = None):
return self.authenticated_fetch(path=f'paymentreminder/',
method='POST',
query=query or {},
mimetype='application/json',
)
def get_payment_reminder_email_content(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/',
method='GET',
mimetype='application/json',
)
def bulk_export(self, query: dict = None):
return self.authenticated_fetch(path=f'pdf/',
method='GET',
query=query or {},
mimetype='application/json',
)
def bulk_print(self, query: dict = None):
return self.authenticated_fetch(path=f'print/',
method='GET',
query=query or {},
mimetype='application/json',
)
def disable_payment_reminder(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/disable',
method='POST',
mimetype='application/json',
)
def enable_payment_reminder(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/enable',
method='POST',
mimetype='application/json',
)
def write_off(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/writeoff/',
method='POST',
mimetype='application/json',
)
def cancel_write_off(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/writeoff/cancel/',
method='POST',
mimetype='application/json',
)
def update_billing_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/billing/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def update_shipping_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/shipping/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def list_templates(self):
return self.authenticated_fetch(path=f'templates/',
method='GET',
mimetype='application/json',
)
def update_template(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/templates/:template_id/',
method='PUT',
path_params=path_params or {},
mimetype='application/json',
)
def list_payments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/payments/',
method='GET',
mimetype='application/json',
)
def delete_payment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/payments/:payment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
def list_credits_applied(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/creditsapplied/',
method='GET',
mimetype='application/json',
)
def apply_credits(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/credits/',
method='POST',
body=body or {},
mimetype='application/json',
encode_json_string=False,
)
def delete_credits_applied(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/creditsapplied/:credit_note_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
def get_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='GET',
query=query or {},
mimetype='application/json',
)
def add_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='POST',
query=query or {},
mimetype='application/json',
)
def update_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='PUT',
query=query or {},
mimetype='application/json',
)
def delete_attachment(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='DELETE',
mimetype='application/json',
)
def delete_expense_receipt(self, id_: str = ''):
return self.authenticated_fetch(path=f'expenses/{id_}/receipt',
method='DELETE',
mimetype='application/json',
)
def list_comments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
mimetype='application/json',
)
def add_comment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='POST',
query=query or {},
mimetype='application/json',
)
def update_comment(self, id_: str = '', query: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='PUT',
query=query or {},
path_params=path_params or {},
mimetype='application/json',
)
def delete_comment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
) | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/resources/invoices.py | invoices.py |
from ..client import Client
class Estimates(Client):
def __init__(self, **opts):
super(Estimates, self).__init__(**{**opts, **{'resource': 'estimates'}})
def sent(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/sent/',
method='POST',
mimetype='application/json',
)
def accepted(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/accepted/',
method='POST',
mimetype='application/json',
)
def declined(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/declined/',
method='POST',
mimetype='application/json',
)
def submit(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/submit/',
method='POST',
mimetype='application/json',
)
def approve(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/approve/',
method='POST',
mimetype='application/json',
)
def email(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def email_multiple(self, query: dict = None):
return self.authenticated_fetch(path=f'email/',
method='POST',
query=query or {},
mimetype='application/json',
)
def get_email_content(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='GET',
query=query or {},
mimetype='application/json',
)
def bulk_export(self, query: dict = None):
return self.authenticated_fetch(path=f'pdf/',
method='GET',
query=query or {},
mimetype='application/json',
)
def bulk_print(self, query: dict = None):
return self.authenticated_fetch(path=f'print/',
method='GET',
query=query or {},
mimetype='application/json',
)
def update_billing_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/billing/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def update_shipping_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/shipping/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def list_templates(self):
return self.authenticated_fetch(path=f'templates/',
method='GET',
mimetype='application/json',
)
def update_template(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/templates/:template_id/',
method='PUT',
path_params=path_params or {},
mimetype='application/json',
)
def list_comments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
mimetype='application/json',
)
def add_comment(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='POST',
body=body or {},
mimetype='application/json',
)
def update_comment(self, id_: str = '', body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='PUT',
body=body or {},
path_params=path_params or {},
mimetype='application/json',
)
def delete_comment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
) | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/resources/estimates.py | estimates.py |
from ..client import Client
class Contacts(Client):
def __init__(self, **opts):
super(Contacts, self).__init__(**{**opts, **{'resource': 'contacts'}})
def active(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/active/',
method='POST',
mimetype='application/json',
)
def inactive(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/inactive/',
method='POST',
mimetype='application/json',
)
def enable_payment_reminder(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/enable/',
method='POST',
mimetype='application/json',
)
def disable_payment_reminder(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/paymentreminder/disable/',
method='POST',
mimetype='application/json',
)
def enable_portal_access(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/portal/enable/',
method='POST',
body=body or {},
mimetype='application/json',
)
def send_email_statement(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/statements/email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def get_email_statement(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/statements/email/',
method='GET',
query=query or {},
mimetype='application/json',
)
def send_email(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def list_comments(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
query=query or {},
mimetype='application/json',
)
def list_addresses(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/address/',
method='GET',
mimetype='application/json',
)
def create_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/',
method='POST',
body=body or {},
mimetype='application/json',
)
def update_address(self, id_: str = '', body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/:address_id/',
method='PUT',
body=body or {},
path_params=path_params or {},
mimetype='application/json',
)
def delete_address(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/:address_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
def list_refunds(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/refunds/',
method='GET',
mimetype='application/json',
)
def track_1099(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/track1099/',
method='POST',
mimetype='application/json',
)
def untrack_1099(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/untrack1099/',
method='POST',
mimetype='application/json',
) | zoho-books-python-sdk | /zoho_books_python_sdk-0.0.3-py3-none-any.whl/zoho_books_python_sdk/resources/contacts.py | contacts.py |
from ..client import Client
class CreditNotes(Client):
def __init__(self, **opts):
super(CreditNotes, self).__init__(**{**opts, **{'resource': 'creditnotes'}})
def email(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='POST',
body=body or {},
query=query or {},
mimetype='application/json',
)
def get_email_history(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/emailhistory/',
method='GET',
mimetype='application/json',
)
def get_email_content(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='GET',
query=query or {},
mimetype='application/json',
)
def void(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/void/',
method='POST',
mimetype='application/json',
)
def draft(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/draft/',
method='POST',
mimetype='application/json',
)
def open(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/open/',
method='POST',
mimetype='application/json',
)
def submit(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/submit/',
method='POST',
mimetype='application/json',
)
def approve(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/approve/',
method='POST',
mimetype='application/json',
)
def update_billing_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/billing/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def update_shipping_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/shipping/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def list_templates(self):
return self.authenticated_fetch(path=f'templates/',
method='GET',
mimetype='application/json',
)
def update_template(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/templates/:template_id/',
method='PUT',
path_params=path_params or {},
mimetype='application/json',
)
def list_invoices(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/invoices/',
method='GET',
mimetype='application/json',
)
def apply_to_invoices(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/invoices/',
method='POST',
body=body or {},
mimetype='application/json',
)
def delete_applied_to_invoice(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/invoices/:invoice_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
def list_comments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
mimetype='application/json',
)
def add_comment(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='POST',
body=body or {},
mimetype='application/json',
)
def delete_comment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
def list_all_refunds(self, query: dict = None):
return self.authenticated_fetch(path=f'refunds/',
method='GET',
query=query or {},
mimetype='application/json',
)
def list_refunds(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/refunds/',
method='GET',
mimetype='application/json',
)
def get_refund(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/refunds/:refund_id/',
method='GET',
path_params=path_params or {},
mimetype='application/json',
)
def create_refund(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/refunds/',
method='POST',
body=body or {},
mimetype='application/json',
)
def update_refund(self, id_: str = '', body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/refunds/:refund_id/',
method='PUT',
body=body or {},
path_params=path_params or {},
mimetype='application/json',
)
def delete_refund(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/refunds/:refund_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
                                        )
from ..client import Client
class SalesOrders(Client):
def __init__(self, **opts):
super(SalesOrders, self).__init__(**{**opts, **{'resource': 'salesorders'}})
def open(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/status/open/',
method='POST',
mimetype='application/json',
)
def void(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/status/void/',
method='POST',
body=body or {},
mimetype='application/json',
)
def email(self, id_: str = '', body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='POST',
query=query or {},
body=body or {},
mimetype='application/json',
)
def submit(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/submit/',
method='POST',
mimetype='application/json',
)
def approve(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/approve/',
method='POST',
mimetype='application/json',
)
def get_email_content(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/email/',
method='GET',
query=query or {},
mimetype='application/json',
)
def bulk_export(self, body: dict = None):
return self.authenticated_fetch(path=f'pdf/',
method='GET',
body=body or {},
mimetype='application/json',
)
def bulk_print(self, body: dict = None):
return self.authenticated_fetch(path=f'print/',
method='GET',
body=body or {},
mimetype='application/json',
)
def update_billing_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/billing/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def update_shipping_address(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/address/shipping/',
method='PUT',
body=body or {},
mimetype='application/json',
)
def list_templates(self):
return self.authenticated_fetch(path=f'templates/',
method='GET',
mimetype='application/json',
)
def update_template(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/templates/:template_id/',
method='PUT',
path_params=path_params or {},
mimetype='application/json',
)
def get_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='GET',
query=query or {},
mimetype='application/json',
)
def add_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='POST',
query=query or {},
mimetype='application/json',
)
def update_attachment(self, id_: str = '', query: dict = None):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='PUT',
query=query or {},
mimetype='application/json',
)
def delete_attachment(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/attachment/',
method='DELETE',
mimetype='application/json',
)
def list_comments(self, id_: str = ''):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='GET',
mimetype='application/json',
)
def add_comment(self, id_: str = '', body: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/',
method='POST',
body=body or {},
mimetype='application/json',
)
def update_comment(self, id_: str = '', body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='PUT',
path_params=path_params or {},
body=body or {},
mimetype='application/json',
)
def delete_comment(self, id_: str = '', path_params: dict = None):
return self.authenticated_fetch(path=f'{id_}/comments/:comment_id/',
method='DELETE',
path_params=path_params or {},
mimetype='application/json',
)
    # TODO: Update substatus
import logging
import boto3
from botocore.exceptions import ClientError
logger = logging.getLogger(__name__)
class SecretsManagerClient:
"""Encapsulates Secrets Manager functions."""
_instance = None
def __init__(self, secretsmanager_client):
"""
:param secretsmanager_client: A Boto3 Secrets Manager client.
"""
self.secretsmanager_client = secretsmanager_client
self.name = None
@staticmethod
def get_instance(access_key, secret_key, region_name='us-west-2'):
if SecretsManagerClient._instance is None:
secretsmanager_client = boto3.client(
'secretsmanager',
region_name=region_name,
aws_access_key_id=access_key,
aws_secret_access_key=secret_key
)
SecretsManagerClient._instance = SecretsManagerClient(secretsmanager_client)
return SecretsManagerClient._instance
def _clear(self):
self.name = None
def create(self, name, secret_value):
"""
Creates a new secret. The secret value can be a string or bytes.
:param name: The name of the secret to create.
:param secret_value: The value of the secret.
:return: Metadata about the newly created secret.
"""
self._clear()
try:
kwargs = {'Name': name}
if isinstance(secret_value, str):
kwargs['SecretString'] = secret_value
elif isinstance(secret_value, bytes):
kwargs['SecretBinary'] = secret_value
response = self.secretsmanager_client.create_secret(**kwargs)
self.name = name
logger.info("Created secret %s.", name)
except ClientError:
logger.exception("Couldn't get secret %s.", name)
raise
else:
return response
def describe(self, name=None):
"""
Gets metadata about a secret.
:param name: The name of the secret to load. If `name` is None, metadata about
the current secret is retrieved.
:return: Metadata about the secret.
"""
if self.name is None and name is None:
raise ValueError
if name is None:
name = self.name
self._clear()
try:
response = self.secretsmanager_client.describe_secret(SecretId=name)
self.name = name
logger.info("Got secret metadata for %s.", name)
except ClientError:
logger.exception("Couldn't get secret metadata for %s.", name)
raise
else:
return response
def get_value(self, stage=None):
"""
Gets the value of a secret.
:param stage: The stage of the secret to retrieve. If this is None, the
current stage is retrieved.
:return: The value of the secret. When the secret is a string, the value is
contained in the `SecretString` field. When the secret is bytes,
it is contained in the `SecretBinary` field.
"""
if self.name is None:
raise ValueError
try:
kwargs = {'SecretId': self.name}
if stage is not None:
kwargs['VersionStage'] = stage
response = self.secretsmanager_client.get_secret_value(**kwargs)
logger.info("Got value for secret %s.", self.name)
except ClientError:
logger.exception("Couldn't get value for secret %s.", self.name)
raise
else:
return response
def get_random_password(self, pw_length):
"""
Gets a randomly generated password.
:param pw_length: The length of the password.
:return: The generated password.
"""
try:
response = self.secretsmanager_client.get_random_password(
PasswordLength=pw_length)
password = response['RandomPassword']
logger.info("Got random password.")
except ClientError:
logger.exception("Couldn't get random password.")
raise
else:
return password
def put_value(self, secret_value, stages=None):
"""
Puts a value into an existing secret. When no stages are specified, the
value is set as the current ('AWSCURRENT') stage and the previous value is
moved to the 'AWSPREVIOUS' stage. When a stage is specified that already
exists, the stage is associated with the new value and removed from the old
value.
:param secret_value: The value to add to the secret.
:param stages: The stages to associate with the secret.
:return: Metadata about the secret.
"""
if self.name is None:
raise ValueError
try:
kwargs = {'SecretId': self.name}
if isinstance(secret_value, str):
kwargs['SecretString'] = secret_value
elif isinstance(secret_value, bytes):
kwargs['SecretBinary'] = secret_value
if stages is not None:
kwargs['VersionStages'] = stages
response = self.secretsmanager_client.put_secret_value(**kwargs)
logger.info("Value put in secret %s.", self.name)
except ClientError:
logger.exception("Couldn't put value in secret %s.", self.name)
raise
else:
return response
def update_version_stage(self, stage, remove_from, move_to):
"""
Updates the stage associated with a version of the secret.
:param stage: The stage to update.
:param remove_from: The ID of the version to remove the stage from.
:param move_to: The ID of the version to add the stage to.
:return: Metadata about the secret.
"""
if self.name is None:
raise ValueError
try:
response = self.secretsmanager_client.update_secret_version_stage(
SecretId=self.name, VersionStage=stage, RemoveFromVersionId=remove_from,
MoveToVersionId=move_to)
logger.info("Updated version stage %s for secret %s.", stage, self.name)
except ClientError:
logger.exception(
"Couldn't update version stage %s for secret %s.", stage, self.name)
raise
else:
return response
def delete(self, without_recovery):
"""
Deletes the secret.
:param without_recovery: Permanently deletes the secret immediately when True;
otherwise, the deleted secret can be restored within
the recovery window. The default recovery window is
30 days.
"""
if self.name is None:
raise ValueError
try:
self.secretsmanager_client.delete_secret(
SecretId=self.name, ForceDeleteWithoutRecovery=without_recovery)
logger.info("Deleted secret %s.", self.name)
self._clear()
except ClientError:
logger.exception("Deleted secret %s.", self.name)
raise
def list(self, max_results):
"""
Lists secrets for the current account.
:param max_results: The maximum number of results to return.
:return: Yields secrets one at a time.
"""
try:
paginator = self.secretsmanager_client.get_paginator('list_secrets')
for page in paginator.paginate(
PaginationConfig={'MaxItems': max_results}):
for secret in page['SecretList']:
yield secret
except ClientError:
logger.exception("Couldn't list secrets.")
            raise
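
# --- Usage sketch (illustrative only) ---
# Hypothetical example of the wrapper above; credentials and secret names are
# placeholders, and get_instance builds the boto3 client for you.
#
#   secrets = SecretsManagerClient.get_instance(
#       access_key='AKIA...', secret_key='...', region_name='us-west-2')
#   secrets.create('demo/app-secret', '{"api_key": "s3cr3t"}')
#   value = secrets.get_value()            # uses the current secret name
#   print(value['SecretString'])
#   secrets.delete(without_recovery=True)  # permanent delete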
import os
from configparser import NoOptionError, NoSectionError, ConfigParser

# ExactOnlineConfig and MissingSetting are not defined in this module; they
# are assumed to come from the exactonline package's storage helpers.
from exactonline.storage import ExactOnlineConfig, MissingSetting


class S3Storage(ExactOnlineConfig, ConfigParser):
"""
Configuration based on the SafeConfigParser and the
ExactOnlineConfig.
Takes an S3 key as input and writes/reads that file
"""
def __init__(self, key, config_path, s3_client, **kwargs):
super(S3Storage, self).__init__(**kwargs)
self.key = key
self.config_path = config_path
self.s3_client = s3_client
os.makedirs(os.path.dirname(self.config_path), exist_ok=True)
if hasattr(self.config_path, 'read'):
self.overwrite = False
else:
self.overwrite = self.config_path
def read_config(self):
if hasattr(self.config_path, 'read'):
if hasattr(self, 'read_file'):
self.read_file(self.config_path)
else:
self.readfp(self.config_path)
else:
self.read([self.config_path])
def get(self, section, option, **kwargs):
"""
Get method that raises MissingSetting if the value was unset.
This differs from the SafeConfigParser which may raise either a
NoOptionError or a NoSectionError.
We take extra **kwargs because the Python 3.5 configparser extends the
get method signature and it calls self with those parameters.
def get(self, section, option, *, raw=False, vars=None,
fallback=_UNSET):
"""
try:
ret = super(ExactOnlineConfig, self).get(section, option, **kwargs)
except (NoOptionError, NoSectionError):
raise MissingSetting(option, section)
return ret
def set(self, section, option, value: str = None):
"""
Set method that (1) auto-saves if possible and (2) auto-creates
sections.
"""
try:
super(ExactOnlineConfig, self).set(section, option, value)
except NoSectionError:
self.add_section(section)
super(ExactOnlineConfig, self).set(section, option, value)
# Save automatically!
self.save()
def save(self):
if self.overwrite:
with open(self.overwrite, 'w') as output:
self.write(output)
            self.s3_client.upload_file(self.overwrite, self.key)
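
# --- Usage sketch (illustrative only) ---
# The s3_client here is duck-typed: this module's own convention is
# upload_file(local_path, key), which is not boto3's signature. The dummy
# client and paths below are hypothetical.
#
#   class DummyS3Client:
#       def upload_file(self, local_path, key):
#           print(f'would upload {local_path} to s3://bucket/{key}')
#
#   storage = S3Storage(key='config/exact.ini',
#                       config_path='/tmp/exact/exact.ini',
#                       s3_client=DummyS3Client())
#   storage.read_config()
#   storage.set('transient', 'access_token', 'abc123')  # auto-saves + uploads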
# zoho-client-django
A modest Zoho CRM API client which will do the OAuth2 for you.
## Usage
```python
from zoho_client.zoho import ZohoClient
client = ZohoClient()
# GET all Sales_Orders
res = client.make_request(api_endpoint="Sales_Orders")
# res is an instance of requests.Response
for sales_order in res.json()['data']:
print(f"sales order #{sales_order['id']}")
# find the first record
sales_order_id = res.json()['data'][0]['id']
# update the sales order's subject
payload = {'data': [ {'Subject': 'CHANGED'}]}
# make_request accepts any kwargs that the requests.request() method accepts
res = client.make_request(method='PUT', api_endpoint=f"Sales_Orders/{sales_order_id}", json=payload)
print(res.json()['data'][0]['status'])
# => success
# search for a record. the params are automatically encoded
res = client.make_request("GET", "Accounts/search", params= {"criteria": "(Account_Name:equals:Guy Moller)"})
print(f"found {resp.json()['info']['count']} records")
# create a record
account_data={'Account_Name': 'John Doe', 'Email': '[email protected]'}
res = client.make_request(method='POST', api_endpoint='Accounts',json={'data':[account_data]})
print(res.json()['data'][0]['details']['id'])
# => 5599334000006242002
```
# Setup
## ENV
The package expects to have its configuration in `settings.py`:
```python
# read it from .env
ZOHO_CLIENT_ID = env("ZOHO_CLIENT_ID")
ZOHO_CLIENT_SECRET = env("ZOHO_CLIENT_SECRET")
ZOHO_API_VERSION = "v2.1"
# sandbox
ZOHO_BASE_URL = "https://crmsandbox.zoho.com"
# production
# ZOHO_BASE_URL = "https://zohoapis.com"
```
And naturally, don't forget to include the app in your `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
...
"zoho_client",
]
```
## Initialization of the client (one-off)
Go to https://api-console.zoho.com/ :
Copy the client ID and secret and save them in your Django settings. The recommended way would be to save them as ENV variables.

Generate an authorization code with the scope:
ZohoCRM.modules.ALL,ZohoCRM.settings.ALL,ZohoCRM.users.ALL,ZohoCRM.bulk.ALL,ZohoCRM.notifications.ALL

choose either production or sandbox

Press generate, copy the code, and run this from the Django console:

Go to the Django admin at zoho_client/zoho_token and press the "regenerate zoho oauth tokens" button

paste the authorization code you have copied before

you are good to go!
### Programmatically:
```python
from zoho_client.zoho import ZohoClient
# the code you have just copied
code = "1000.12c557408797e20c8432216dca0bbb5f.f1896d4f9e2329136806637798859a99"
ZohoClient().fetch_tokens(code)
# -> '1000.03b32b6490d8573e242664710bbc4f2c.e009198b6ab4b89013485657409e4913'
```
from pathlib import Path
import environ
import os
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, ".env"))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = "django-insecure--@qx33)h1c34f0zc5$ll!rmbv-wbh^rhtd=7(g-6kkoqzf9l6-"
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"django_extensions",
"zoho_client",
]
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "zoho_test_project.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = "zoho_test_project.wsgi.application"
# Database
# https://docs.djangoproject.com/en/4.1/ref/settings/#databases
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": BASE_DIR / "db.sqlite3",
}
}
# Password validation
# https://docs.djangoproject.com/en/4.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.1/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.1/howto/static-files/
STATIC_URL = "static/"
# Default primary key field type
# https://docs.djangoproject.com/en/4.1/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
# read it from .env
ZOHO_CLIENT_ID = env("ZOHO_CLIENT_ID")
ZOHO_CLIENT_SECRET = env("ZOHO_CLIENT_SECRET")
ZOHO_API_VERSION = "v2.1"
# sandbox
ZOHO_BASE_URL = "https://crmsandbox.zoho.com"
# production
# ZOHO_BASE_URL = "https://zohoapis.com" | zoho-client-django | /zoho-client-django-1.2.2.tar.gz/zoho-client-django-1.2.2/zoho_test_project/settings.py | settings.py |
import requests
from requests import Response
from .models import ZohoToken
from django.conf import settings
class ZohoClient:
def __init__(self):
self.client_id = settings.ZOHO_CLIENT_ID
self.client_secret = settings.ZOHO_CLIENT_SECRET
self.token_url = "https://accounts.zoho.com/oauth/v2/token"
self.api_version = getattr(settings, "ZOHO_API_VERSION", "v2.1")
self.base_url = getattr(settings, "ZOHO_BASE_URL", "https://zohoapis.com")
        self.headers = {}
        try:
            token = ZohoToken.objects.latest("timestamp")
            self.refresh_token = token.refresh_token
            self.access_token = token.access_token
            # Reuse the stored access token instead of forcing a refresh
            # on the first request.
            self.headers = {"Authorization": f"Zoho-oauthtoken {self.access_token}"}
        except ZohoToken.DoesNotExist:
            self.refresh_token = None
            self.access_token = None
    def make_request(self, method="GET", api_endpoint=None, _retry=True, **kwargs) -> Response:
        if not self.access_token:
            self.refresh_access_token()
        url = f"{self.base_url}/crm/{self.api_version}/{api_endpoint}"
        response = requests.request(method, url, headers=self.headers, **kwargs)
        # If the access token has expired, refresh it and retry the request
        # once (the _retry flag prevents infinite recursion on repeated 401s)
        if response.status_code == 401 and _retry:
            self.refresh_access_token()
            return self.make_request(method, api_endpoint, _retry=False, **kwargs)
        return response
def refresh_access_token(self):
data = {
"refresh_token": self.refresh_token,
"client_id": self.client_id,
"client_secret": self.client_secret,
"grant_type": "refresh_token",
}
response = requests.post(self.token_url, data=data)
self.access_token = response.json().get("access_token")
# Save the new tokens to the database
ZohoToken.objects.create(
access_token=self.access_token, refresh_token=self.refresh_token
)
self.headers = {"Authorization": f"Zoho-oauthtoken {self.access_token}"}
def fetch_tokens(self, authorization_code):
data = {
"code": authorization_code,
"client_id": self.client_id,
"client_secret": self.client_secret,
"redirect_uri": "http://localhost", # Or your actual redirect URI
"grant_type": "authorization_code",
}
response = requests.post(self.token_url, data=data)
response_data = response.json()
if response.status_code == 200 and "refresh_token" in response_data:
self.refresh_token = response_data["refresh_token"]
self.access_token = response_data["access_token"]
# Save the new tokens to the database
ZohoToken.objects.create(
access_token=self.access_token, refresh_token=self.refresh_token
)
return self.refresh_token
else:
# Handle error
error = response_data.get("error", "Unknown error")
            raise Exception(f"Failed to fetch tokens: {error}")
import json
import logging
import time
import urllib.parse
from datetime import datetime
from pathlib import Path
from typing import Dict, Generator, List, Optional, Tuple
import requests
from requests.adapters import HTTPAdapter, Retry
logger = logging.getLogger()
class APIQuotaExceeded(Exception):
pass
def _requests_retry_session(
retries=10,
backoff_factor=2,
status_forcelist=(500, 502, 503, 504),
# remove 429 here, the CRM retry functionality is a 24 hour rolling limit and can't be recovered by waiting for a minute or so
session=None,
) -> requests.Session:
    session = session or requests.Session()
    # status_forcelist: the set of integer HTTP status codes that force a
    # retry. A retry is initiated if the request method is in
    # ``method_whitelist`` and the response status code is in
    # ``status_forcelist``.
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
# requests response hook to fetch a new token on token expiry
def __hook(self, res, *args, **kwargs):
    if res.status_code == requests.codes.unauthorized:
        logger.info('Token expired, refreshing')
        self.auth()  # sets the token on self.__session
        req = res.request
        logger.info('Resending request %s %s %s', req.method, req.url, req.headers)
        req.headers['Authorization'] = self.__session.headers['Authorization']  # why is it needed?
        return self.__session.send(res.request)
def escape_zoho_characters_v2(input_string) -> str:
""" Note: this is only needed for searching, as in the yield_from_page method.
This is an example
:param input_string:
:return:
"""
if r'\(' in input_string or r'\)' in input_string: # don't repeatedly escape
return input_string
else:
table = str.maketrans({'(': r'\(',
')': r'\)'})
return input_string.translate(table)
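

# Illustrative behaviour of escape_zoho_characters_v2 (hypothetical value):
#   escape_zoho_characters_v2('Acme (AU)')  ->  r'Acme \(AU\)'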
def convert_datetime_to_zoho_crm_time(dt: datetime) -> str:
# iso format but no fractional seconds
return datetime.strftime(dt, "%Y-%m-%dT%H:%M:%S%z")
class Zoho_crm:
""" An authenticated connection to zoho crm.
Initialise a Zoho CRM connection by providing authentication details including a refresh token.
Access tokens are obtained when needed.
The base_url defaults to the live API for US usage;
another base_url can be provided (for the sandbox API, for instance)"""
ACCOUNTS_HOST = {".COM": "accounts.zoho.com",
".AU": "accounts.zoho.com.au",
".EU": "accounts.zoho.eu",
".IN": "accounts.zoho.in",
".CN": "accounts.zoho.com.cn"
}
def __init__(self, refresh_token: str, client_id: str, client_secret: str, token_file_dir: Path,
base_url=None,
hosting=".COM",
default_zoho_user_name: str = None,
default_zoho_user_id: str = None,
):
""" Initialise a Zoho CRM connection by providing authentication details including a refresh token.
Access tokens are obtained when needed. The base_url defaults to the live API for US usage;
another base_url can be provided (for the sandbox API, for instance)
"""
token_file_name = 'access_token.json'
self.requests_session = _requests_retry_session()
self.refresh_token = refresh_token
self.client_id = client_id
self.client_secret = client_secret
self.base_url = base_url or "https://www.zohoapis.com/crm/v2/"
self.hosting = hosting.upper() or ".COM"
self.zoho_user_cache = None # type: Optional[dict]
self.default_zoho_user_name = default_zoho_user_name
self.default_zoho_user_id = default_zoho_user_id
self.token_file_path = token_file_dir / token_file_name
        self.token_timestamp = time.time()  # a safe default
self.__token = self._load_access_token()
@property
def current_token(self):
if time.time() - self.token_timestamp > 50 * 60 or not self.__token:
self.__token = self._load_access_token()
self.token_timestamp = time.time()
return self.__token
def _validate_response(self, r: requests.Response) -> Optional[dict]:
""" Called internally to deal with Zoho API responses. Will fetch a new access token if necessary.
Not all errors are explicity handled; errors not handled here have no recovery option anyway,
so an exception is raised."""
# https://www.zoho.com/crm/help/api/v2/#HTTP-Status-Codes
if r.status_code == 200:
return r.json()
elif r.status_code == 201:
return {'result': True} # insert succeeded
elif r.status_code == 202: # multiple insert succeeded
return {'result': True}
elif r.status_code == 204: # no content
return None
elif r.status_code == 304: # nothing changed since the requested modified-since timestamp
return None
elif r.status_code == 401:
# assume invalid token
self._refresh_access_token()
# retry the request somehow
# probably should use a 'retry' exception?
orig_request = r.request
orig_request.headers['Authorization'] = 'Zoho-oauthtoken ' + self.current_token['access_token']
new_resp = self.requests_session.send(orig_request)
return new_resp.json()
        elif r.status_code == 429:
            raise APIQuotaExceeded("API Quota exceeded, error 429")
else:
raise RuntimeError(
f"API failure trying: {r.reason} and status code: {r.status_code} and text {r.text}, attempted url was: {r.url}, unquoted is: {urllib.parse.unquote(r.url)}")
def yield_page_from_module(self, module_name: str, criteria: str = None,
parameters: dict = None, modified_since: datetime = None) -> Generator[
List[dict], None, None]:
""" Yields a page of results, each page being a list of dicts.
For use of the criteria parameter, please see search documentation: https://www.zoho.com/crm/help/api-diff/searchRecords.html
Parentheses must be escaped with a backspace.
A conversion function could be:
Performs search by the following shown criteria.
(({apiname}:{starts_with|equals}:{value}) and ({apiname}:{starts_with|equals}:{value}))
You can search a maximum of 10 criteria (with same or different columns) with equals and starts_with conditions as shown above.'
"""
page = 1
if not criteria:
url = self.base_url + module_name
else:
url = self.base_url + f'{module_name}/search'
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
parameters = parameters or {}
if criteria:
parameters['criteria'] = criteria
if modified_since:
# headers['If-Modified-Since'] = modified_since.isoformat()
headers['If-Modified-Since'] = convert_datetime_to_zoho_crm_time(
modified_since) # ensure no fractional seconds
while True:
parameters['page'] = page
r = self.requests_session.get(url=url, headers=headers, params=urllib.parse.urlencode(parameters))
r_json = self._validate_response(r)
if not r_json:
return None
if 'data' in r_json:
yield r_json['data']
else:
raise RuntimeError(
f"Did not receive the expected data format in the returned json when: url={url} parameters={parameters}")
if 'info' in r_json:
if not r_json['info']['more_records']:
break
else:
break
page += 1
def get_users(self, user_type: str = None) -> dict:
"""
Get zoho users, filtering by a Zoho CRM user type. The default value of None is mapped to 'AllUsers'
"""
        if self.zoho_user_cache is None:
            user_type = user_type or 'AllUsers'
url = self.base_url + f"users?type={user_type}"
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
r = self.requests_session.get(url=url, headers=headers)
self.zoho_user_cache = self._validate_response(r)
return self.zoho_user_cache
def finduser_by_name(self, full_name: str) -> Tuple[str, str]:
""" Tries to reutn the user as a tuple(full_name,Zoho user id), using the full full_name provided.
The user must be active. If no such user is found, return the default user provided
at initialisation of the Zoho_crm object."""
users = self.get_users()
default_user_name = self.default_zoho_user_name
default_user_id = self.default_zoho_user_id
for user in users['users']:
if user['full_name'] == full_name.strip():
if user['status'] == 'active':
return full_name, user['id']
else:
logger.debug(f"User is inactive in zoho crm: {full_name}")
return default_user_name, default_user_id
# not found
logger.info(f"User not found in zoho: {full_name}")
return default_user_name, default_user_id
def get_record_by_id(self, module_name, id) -> dict:
""" Call the get record endpoint with an id"""
url = self.base_url + f'{module_name}/{id}'
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
r = self.requests_session.get(url=url, headers=headers)
r_json = self._validate_response(r)
return r_json['data'][0]
def yield_deleted_records_from_module(self, module_name: str, type: str = 'all',
modified_since: datetime = None) -> Generator[List[dict], None, None]:
""" Yields a page of deleted record results.
Args:
module_name (str): The module API name.
type (str): Filter deleted records by the following types:
'all': To get the list of all deleted records.
'recycle': To get the list of deleted records from recycle bin.
'permanent': To get the list of permanently deleted records.
modified_since (datetime.datetime): Return records deleted after this date.
Returns:
A generator that yields pages of deleted records as a list of dictionaries.
"""
page = 1
url = self.base_url + f'{module_name}/deleted'
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
parameters = {'type': type}
if modified_since:
headers['If-Modified-Since'] = modified_since.isoformat()
while True:
parameters['page'] = page
r = self.requests_session.get(url=url, headers=headers, params=urllib.parse.urlencode(parameters))
r_json = self._validate_response(r)
if not r_json:
return None
if 'data' in r_json:
yield r_json['data']
else:
raise RuntimeError(
f"Did not receive the expected data format in the returned json when: url={url} parameters={parameters}")
if 'info' in r_json:
if not r_json['info']['more_records']:
break
else:
break
page += 1
def delete_from_module(self, module_name: str, record_id: str) -> Tuple[bool, dict]:
""" deletes from a named Zoho CRM module"""
url = self.base_url + f"{module_name}"
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
r = self.requests_session.delete(url=url, headers=headers, params={'ids': record_id})
if r.ok and r.status_code == 200:
return True, r.json()
else:
return False, r.json()
def update_zoho_module(self, module_name: str,
payload: Dict[str, List[Dict]]
) -> Tuple[bool, Dict]:
"""Update, modified from upsert
"""
url = self.base_url + module_name
headers = {
'Authorization':
'Zoho-oauthtoken ' + self.current_token['access_token']
}
if 'trigger' not in payload:
payload['trigger'] = []
r = self.requests_session.put(url=url,
headers=headers,
json=payload)
if r.ok:
return True, r.json()
else:
return False, r.json()
def upsert_zoho_module(self, module_name: str, payload: Dict[str, List[Dict]],
criteria: str = None, ) -> Tuple[bool, Dict]:
"""creation is done with the Record API and module "Accounts".
Zoho does not make mandatory fields such as Account_Name unique.
But here, a criteria string can be passed to identify a 'unique' record:
we will update the first record we find, and insert a new record without question if
there is no match (critera is None reverts to standard Zoho behaviour: it will always insert)
For notes on criteria string see yield_page_from_module()
Note: payload looks like this: payload={'data': [zoho_account]} where zoho_account is a dictionary
for one record. .
Returns a tuple with a success boolean, and the entire record if successful.
The Zoho API distinguishes between the record was already there and updated,
or it was not there and it was inserted: here, both are True.
If unsuccessful, it returns the json result in the API reply.
See https://www.zoho.com/crm/help/api/v2/#create-specify-records
"""
update_existing_record = False # by default, always insert
if criteria:
if len(payload['data']) != 1:
raise RuntimeError("Only pass one record when using criteria")
matches = []
for data_block in self.yield_page_from_module(module_name=module_name,
criteria=criteria):
matches += data_block
if len(matches) > 0:
payload['data'][0]['id'] = matches[0]['id'] # and need to do a put
update_existing_record = True
url = self.base_url + f'{module_name}'
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
if 'trigger' not in payload:
payload['trigger'] = []
if update_existing_record:
r = self.requests_session.put(url=url, headers=headers, json=payload)
else:
r = self.requests_session.post(url=url, headers=headers, json=payload)
if r.ok:
if r.status_code == 202: # could be duplicate
return False, r.json()
else:
try:
record_id = r.json()['data'][0]['details']['id']
return True, self.get_record_by_id(module_name=module_name, id=record_id)
except Exception as e:
raise e
else:
return False, r.json()
def get_related_records(self, parent_module_name: str, child_module_name: str, parent_id: str,
modified_since: datetime = None) \
-> Tuple[bool, Optional[List[Dict]]]:
url = self.base_url + f'{parent_module_name}/{parent_id}/{child_module_name}'
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
if modified_since:
headers['If-Modified-Since'] = modified_since.isoformat()
r = self.requests_session.get(url=url, headers=headers)
r_json = self._validate_response(r)
if r.ok and r_json is not None:
return True, r_json['data']
elif r.ok:
return True, r_json
else:
return False, r_json
def get_records_through_coql_query(self, query: str) -> List[Dict]:
url = self.base_url + "coql"
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
r = self.requests_session.post(url=url, headers=headers, json={"select_query": query})
r_json = self._validate_response(r)
if r.ok:
return r_json['data']
else:
return []
def get_module_field_api_names(self, module_name: str) -> List[str]:
""" uses Fields Meta Data but just returns a list of field API names """
url = self.base_url + f"settings/fields?module={module_name}"
headers = {'Authorization': 'Zoho-oauthtoken ' + self.current_token['access_token']}
r = self.requests_session.get(url=url, headers=headers)
r_json = self._validate_response(r)
if r.ok and r_json is not None:
field_list = [f["api_name"] for f in r_json["fields"]]
return field_list
else:
raise RuntimeError(f"did not receive valid data for get_module_field_names {module_name}")
def _load_access_token(self) -> dict:
try:
with self.token_file_path.open() as data_file:
data_loaded = json.load(data_file)
# validate it
url = self.base_url + f"users?type='AllUsers'"
headers = {'Authorization': 'Zoho-oauthtoken ' + data_loaded['access_token']}
r = self.requests_session.get(url=url, headers=headers)
r = self.requests_session.post(url=url)
if r.status_code == 401:
data_loaded = self._refresh_access_token()
return data_loaded
except (KeyError, FileNotFoundError, IOError) as e:
new_token = self._refresh_access_token()
return new_token
def _refresh_access_token(self) -> dict:
""" This forces a new token so it should only be called
after we know we need a new token.
Use load_access_token to get a token, it will call this if it needs to."""
auth_host = self.ACCOUNTS_HOST[self.hosting]
if not auth_host:
raise RuntimeError(f"Zoho hosting {self.hosting} is not implemented")
url = (f"https://{auth_host}/oauth/v2/token?refresh_token="
f"{self.refresh_token}&client_id={self.client_id}&"
f"client_secret={self.client_secret}&grant_type=refresh_token")
r = requests.post(url=url)
if r.status_code == 200:
new_token = r.json()
logger.info(f"New token: {new_token}")
if 'access_token' not in new_token:
logger.error(f"Token is not valid")
raise RuntimeError(f"Zoho refresh token is not valid: {new_token}")
else:
self.__token = new_token
with self.token_file_path.open('w') as outfile:
json.dump(new_token, outfile)
return new_token
else:
raise RuntimeError(f"API failure trying to get access token: {r.reason}") | zoho-crm-connector | /zoho_crm_connector-1.0.0-py3-none-any.whl/zoho_crm_connector/zoho_crm_api.py | zoho_crm_api.py |
import logging
import os
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple
import requests
logger = logging.getLogger(__name__)
class ZohoClientException(Exception):
"""handler errors from Zoho api"""
@dataclass
class ZohoAPIConfig:
api_version: str = "v3"
account_domain: str = "www.zohoapis.eu"
api_results_per_page: int = 200
api_discrete_pagination_limit: int = 2000
@property
def api_base_url(self) -> str:
return f"https://{self.account_domain}/crm/{self.api_version}/"
@dataclass
class ZohoAuthConfig:
client_id: str = os.environ.get("ZOHO_CLIENT_ID")
client_secret: str = os.environ.get("ZOHO_CLIENT_SECRET")
refresh_token: str = os.environ.get("ZOHO_REFRESH_TOKEN")
auth_api_version: str = "v2"
account_domain: str = "accounts.zoho.eu"
@property
def auth_base_url(self) -> str:
return f"https://{self.account_domain}/oauth/{self.auth_api_version}/token"
@property
def refresh_access_token_url(self) -> str:
return (
f"{self.auth_base_url}?"
f"refresh_token={self.refresh_token}&"
f"client_id={self.client_id}&"
f"client_secret={self.client_secret}&"
f"grant_type=refresh_token"
)
class FileTypes(Enum):
PHOTO = "photo"
ATTACHMENT = "Attachments"
@dataclass
class ZohoEndpoint:
module_name: str
method: str
response_data_key: str = "data"
file_type: FileTypes = None
params: dict = field(default_factory=dict)
data: dict = field(default_factory=dict)
files: dict = field(default_factory=dict)
@property
def url(self):
if "id" in self.params:
record_url = f"{self.module_name}/{self.params['id']}"
if self.file_type is not None:
record_url = f"{record_url}/{self.file_type}"
return record_url
if "criteria" in self.params:
return f"{self.module_name}/search"
return self.module_name
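

# Illustrative url values for the dataclass above (hypothetical module and id):
#   ZohoEndpoint('Leads', 'get').url                               -> 'Leads'
#   ZohoEndpoint('Leads', 'get', params={'id': '123'}).url         -> 'Leads/123'
#   ZohoEndpoint('Leads', 'get', params={'criteria': '...'}).url   -> 'Leads/search'
#   ZohoEndpoint('Leads', 'post', file_type=FileTypes.PHOTO,
#                params={'id': '123'}).url                         -> 'Leads/123/photo'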
class ZohoClient:
def __init__(
self,
zoho_api_config: ZohoAPIConfig = None,
zoho_auth_config: ZohoAuthConfig = None,
):
self.zoho_config = zoho_api_config if zoho_api_config else ZohoAPIConfig()
self.zoho_auth = zoho_auth_config if zoho_auth_config else ZohoAuthConfig()
def _validate_response(self, response: Dict) -> Dict:
if "error" in response or "error" == (
response.get("status") or response.get("data", [{}])[0].get("status")
):
raise ZohoClientException(f"The zoho server returned an error: {response}")
return response
def _get_new_access_token(self) -> str:
"""
Renew the client access token and return the new one
This action revoke the older ones
"""
response_dict = requests.post(self.zoho_auth.refresh_access_token_url).json()
response_dict = self._validate_response(response_dict)
logger.debug("new access token got it: %s", response_dict)
return response_dict["access_token"]
@property
    def request_headers(self) -> Dict[str, str]:
try:
return self._request_headers
except AttributeError:
token = self._get_new_access_token()
self._request_headers = {"Authorization": f"Zoho-oauthtoken {token}"}
return self._request_headers
def _get_next_page_url(self, page_info: Dict, params: Dict) -> bool:
if not page_info["more_records"]:
return False
amount = int(page_info["page"]) * page_info["per_page"]
if amount >= self.zoho_config.api_discrete_pagination_limit:
params["page_token"] = page_info["next_page_token"]
params.pop("page", None)
return True
params["page"] = page_info["page"] + 1
return True
    def _get_page(self, zoho_endpoint: ZohoEndpoint) -> Tuple[List[Dict], bool]:
next_url = False
try:
response_dict = requests.get(
self.zoho_config.api_base_url + zoho_endpoint.url,
headers=self.request_headers,
params=zoho_endpoint.params,
).json()
except requests.exceptions.JSONDecodeError:
return [], False
valid_response_dict = self._validate_response(response_dict)
if "info" in valid_response_dict:
next_url = self._get_next_page_url(
valid_response_dict["info"], zoho_endpoint.params
)
return valid_response_dict[zoho_endpoint.response_data_key], next_url
def _call_read_endpoint(self, zoho_endpoint: ZohoEndpoint) -> List[Dict]:
response_list = []
next_page = True
while next_page:
result_list, next_page = self._get_page(zoho_endpoint)
response_list.extend(result_list)
return response_list
def _call_write_endpoint(self, zoho_endpoint: ZohoEndpoint) -> Dict:
try:
response_dict = getattr(requests, zoho_endpoint.method)(
self.zoho_config.api_base_url + zoho_endpoint.url,
headers=self.request_headers,
json=zoho_endpoint.data,
files=zoho_endpoint.files,
).json()
except requests.exceptions.JSONDecodeError:
return {}
valid_response_dict = self._validate_response(response_dict)
return valid_response_dict
    def call(self, zoho_endpoint: ZohoEndpoint) -> List[Dict] | Dict:
if zoho_endpoint.method == "get":
return self._call_read_endpoint(zoho_endpoint)
        return self._call_write_endpoint(zoho_endpoint)
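
# --- Usage sketch (illustrative only) ---
# Assumes ZOHO_CLIENT_ID / ZOHO_CLIENT_SECRET / ZOHO_REFRESH_TOKEN are set in
# the environment; the module and criteria values are placeholders.
#
#   client = ZohoClient()
#   endpoint = ZohoEndpoint(
#       module_name='Contacts',
#       method='get',
#       params={'criteria': '(Last_Name:equals:Smith)'},
#   )
#   for contact in client.call(endpoint):   # 'get' endpoints return a list
#       print(contact['id'])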
import configparser
import errno
import json
import os
import re
import subprocess
import sys
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_root():
"""Get the project root directory.
We require that all commands are run from the project root, i.e. the
directory that contains setup.py, setup.cfg, and versioneer.py .
"""
root = os.path.realpath(os.path.abspath(os.getcwd()))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
# allow 'python path/to/setup.py COMMAND'
root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
err = ("Versioneer was unable to run the project root directory. "
"Versioneer requires setup.py to be executed from "
"its immediate directory (like 'python setup.py COMMAND'), "
"or in a way that lets it use sys.argv[0] to find the root "
"(like 'python path/to/setup.py COMMAND').")
raise VersioneerBadRootError(err)
try:
# Certain runtime workflows (setup.py install/develop in a setuptools
# tree) execute all dependencies in a single python process, so
# "versioneer" may be imported multiple times, and python's shared
# module-import table will cache the first one. So we can't use
# os.path.dirname(__file__), as that will find whichever
# versioneer.py was first imported, even in later projects.
me = os.path.realpath(os.path.abspath(__file__))
me_dir = os.path.normcase(os.path.splitext(me)[0])
vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
if me_dir != vsr_dir:
print("Warning: build in %s is using versioneer.py from %s"
% (os.path.dirname(me), versioneer_py))
except NameError:
pass
return root
def get_config_from_root(root):
"""Read the project setup.cfg file to determine Versioneer config."""
# This might raise EnvironmentError (if setup.cfg is missing), or
# configparser.NoSectionError (if it lacks a [versioneer] section), or
# configparser.NoOptionError (if it lacks "VCS="). See the docstring at
# the top of versioneer.py for instructions on writing your setup.cfg .
setup_cfg = os.path.join(root, "setup.cfg")
parser = configparser.ConfigParser()
with open(setup_cfg, "r") as f:
parser.read_file(f)
VCS = parser.get("versioneer", "VCS") # mandatory
def get(parser, name):
if parser.has_option("versioneer", name):
return parser.get("versioneer", name)
return None
cfg = VersioneerConfig()
cfg.VCS = VCS
cfg.style = get(parser, "style") or ""
cfg.versionfile_source = get(parser, "versionfile_source")
cfg.versionfile_build = get(parser, "versionfile_build")
cfg.tag_prefix = get(parser, "tag_prefix")
if cfg.tag_prefix in ("''", '""'):
cfg.tag_prefix = ""
cfg.parentdir_prefix = get(parser, "parentdir_prefix")
cfg.verbose = get(parser, "verbose")
return cfg
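

# A minimal setup.cfg [versioneer] section that the parser above expects
# (values are illustrative):
#
#   [versioneer]
#   VCS = git
#   style = pep440
#   versionfile_source = src/mypkg/_version.py
#   versionfile_build = mypkg/_version.py
#   tag_prefix =
#   parentdir_prefix = mypkg-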
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
# these dictionaries contain VCS-specific tools
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip().decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
LONG_VERSION_PY['git'] = r'''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.19 (https://github.com/python-versioneer/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "%(STYLE)s"
cfg.tag_prefix = "%(TAG_PREFIX)s"
cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %%s" %% dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None, None
stdout = p.communicate()[0].strip().decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
print("stdout was %%s" %% stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %%s but none started with prefix %%s" %%
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %%d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
if verbose:
print("likely tags: %%s" %% ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %%s not under git control" %% root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%%s*" %% tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%%s' doesn't start with prefix '%%s'"
print(fmt %% (full_tag, tag_prefix))
pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
%% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"],
cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post0.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post0.dev%%d" %% pieces["distance"]
else:
# exception #1
rendered = "0.post0.dev%%d" %% pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
    Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%%s'" %% style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
'''
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%s*" % tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def do_vcs_install(manifest_in, versionfile_source, ipy):
"""Git-specific installation logic for Versioneer.
For Git, this means creating/changing .gitattributes to mark _version.py
for export-subst keyword substitution.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
files = [manifest_in, versionfile_source]
if ipy:
files.append(ipy)
try:
me = __file__
if me.endswith(".pyc") or me.endswith(".pyo"):
me = os.path.splitext(me)[0] + ".py"
versioneer_file = os.path.relpath(me)
except NameError:
versioneer_file = "versioneer.py"
files.append(versioneer_file)
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
files.append(".gitattributes")
run_command(GITS, ["add", "--"] + files)
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
    two directory levels for an appropriately named parent directory.
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
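# Illustrative example (not from upstream docs): with parentdir_prefix set to
# "myproject-", an sdist unpacked at /tmp/myproject-1.2.3 yields
# {"version": "1.2.3", ...} because the directory name starts with the
# prefix; the loop above also tries up to two parent directories before
# raising NotThisMethod.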
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.19) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
%s
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
"""
def versions_from_file(filename):
"""Try to determine the version from _version.py if present."""
try:
with open(filename) as f:
contents = f.read()
except EnvironmentError:
raise NotThisMethod("unable to read _version.py")
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
raise NotThisMethod("no version_json in _version.py")
return json.loads(mo.group(1))
def write_to_version_file(filename, versions):
"""Write the given version number to the given _version.py file."""
os.unlink(filename)
contents = json.dumps(versions, sort_keys=True,
indent=1, separators=(",", ": "))
with open(filename, "w") as f:
f.write(SHORT_VERSION_PY % contents)
print("set %s to '%s'" % (filename, versions["version"]))
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post0.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post0.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post0.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
    Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
class VersioneerBadRootError(Exception):
"""The project root directory is unknown or missing key files."""
def get_versions(verbose=False):
"""Get the project version from whatever source is available.
Returns dict with two keys: 'version' and 'full'.
"""
if "versioneer" in sys.modules:
# see the discussion in cmdclass.py:get_cmdclass()
del sys.modules["versioneer"]
root = get_root()
cfg = get_config_from_root(root)
assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
handlers = HANDLERS.get(cfg.VCS)
assert handlers, "unrecognized VCS '%s'" % cfg.VCS
verbose = verbose or cfg.verbose
assert cfg.versionfile_source is not None, \
"please set versioneer.versionfile_source"
assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
versionfile_abs = os.path.join(root, cfg.versionfile_source)
# extract version from first of: _version.py, VCS command (e.g. 'git
# describe'), parentdir. This is meant to work for developers using a
# source checkout, for users of a tarball created by 'setup.py sdist',
# and for users of a tarball/zipball created by 'git archive' or github's
# download-from-tag feature or the equivalent in other VCSes.
get_keywords_f = handlers.get("get_keywords")
from_keywords_f = handlers.get("keywords")
if get_keywords_f and from_keywords_f:
try:
keywords = get_keywords_f(versionfile_abs)
ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
if verbose:
print("got version from expanded keyword %s" % ver)
return ver
except NotThisMethod:
pass
try:
ver = versions_from_file(versionfile_abs)
if verbose:
print("got version from file %s %s" % (versionfile_abs, ver))
return ver
except NotThisMethod:
pass
from_vcs_f = handlers.get("pieces_from_vcs")
if from_vcs_f:
try:
pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
ver = render(pieces, cfg.style)
if verbose:
print("got version from VCS %s" % ver)
return ver
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
if verbose:
print("got version from parentdir %s" % ver)
return ver
except NotThisMethod:
pass
if verbose:
print("unable to compute version")
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None, "error": "unable to compute version",
"date": None}
def get_version():
"""Get the short version string for this project."""
return get_versions()["version"]
def get_cmdclass(cmdclass=None):
"""Get the custom setuptools/distutils subclasses used by Versioneer.
If the package uses a different cmdclass (e.g. one from numpy), it
    should be provided as an argument.
"""
if "versioneer" in sys.modules:
del sys.modules["versioneer"]
# this fixes the "python setup.py develop" case (also 'install' and
# 'easy_install .'), in which subdependencies of the main project are
# built (using setup.py bdist_egg) in the same python process. Assume
# a main project A and a dependency B, which use different versions
# of Versioneer. A's setup.py imports A's Versioneer, leaving it in
# sys.modules by the time B's setup.py is executed, causing B to run
# with the wrong versioneer. Setuptools wraps the sub-dep builds in a
    # sandbox that restores sys.modules to its pre-build state, so the
# parent is protected against the child's "import versioneer". By
# removing ourselves from sys.modules here, before the child build
# happens, we protect the child from the parent's versioneer too.
# Also see https://github.com/python-versioneer/python-versioneer/issues/52
cmds = {} if cmdclass is None else cmdclass.copy()
# we add "version" to both distutils and setuptools
from distutils.core import Command
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
vers = get_versions(verbose=True)
print("Version: %s" % vers["version"])
print(" full-revisionid: %s" % vers.get("full-revisionid"))
print(" dirty: %s" % vers.get("dirty"))
print(" date: %s" % vers.get("date"))
if vers["error"]:
print(" error: %s" % vers["error"])
cmds["version"] = cmd_version
# we override "build_py" in both distutils and setuptools
#
# most invocation pathways end up running build_py:
# distutils/build -> build_py
# distutils/install -> distutils/build ->..
# setuptools/bdist_wheel -> distutils/install ->..
# setuptools/bdist_egg -> distutils/install_lib -> build_py
# setuptools/install -> bdist_egg ->..
# setuptools/develop -> ?
# pip install:
# copies source tree to a tempdir before running egg_info/etc
# if .git isn't copied too, 'git describe' will fail
# then does setup.py bdist_wheel, or sometimes setup.py install
# setup.py egg_info -> ?
# we override different "build_py" commands for both environments
if 'build_py' in cmds:
_build_py = cmds['build_py']
elif "setuptools" in sys.modules:
from setuptools.command.build_py import build_py as _build_py
else:
from distutils.command.build_py import build_py as _build_py
class cmd_build_py(_build_py):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace
# it with an updated value
if cfg.versionfile_build:
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_py"] = cmd_build_py
if "setuptools" in sys.modules:
from setuptools.command.build_ext import build_ext as _build_ext
else:
from distutils.command.build_ext import build_ext as _build_ext
class cmd_build_ext(_build_ext):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_ext.run(self)
if self.inplace:
# build_ext --inplace will only build extensions in
# build/lib<..> dir with no _version.py to write to.
# As in place builds will already have a _version.py
# in the module dir, we do not need to write one.
return
# now locate _version.py in the new build/ directory and replace
# it with an updated value
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_ext"] = cmd_build_ext
if "cx_Freeze" in sys.modules: # cx_freeze enabled?
from cx_Freeze.dist import build_exe as _build_exe
# nczeczulin reports that py2exe won't like the pep440-style string
# as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
# setup(console=[{
# "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
# "product_version": versioneer.get_version(),
# ...
class cmd_build_exe(_build_exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_build_exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["build_exe"] = cmd_build_exe
del cmds["build_py"]
if 'py2exe' in sys.modules: # py2exe enabled?
from py2exe.distutils_buildexe import py2exe as _py2exe
class cmd_py2exe(_py2exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_py2exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["py2exe"] = cmd_py2exe
# we override different "sdist" commands for both environments
if 'sdist' in cmds:
_sdist = cmds['sdist']
elif "setuptools" in sys.modules:
from setuptools.command.sdist import sdist as _sdist
else:
from distutils.command.sdist import sdist as _sdist
class cmd_sdist(_sdist):
def run(self):
versions = get_versions()
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old
# version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
root = get_root()
cfg = get_config_from_root(root)
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory
# (remembering that it may be a hardlink) and replace it with an
# updated value
target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile,
self._versioneer_generated_versions)
cmds["sdist"] = cmd_sdist
return cmds
CONFIG_ERROR = """
setup.cfg is missing the necessary Versioneer configuration. You need
a section like:
[versioneer]
VCS = git
style = pep440
versionfile_source = src/myproject/_version.py
versionfile_build = myproject/_version.py
tag_prefix =
parentdir_prefix = myproject-
You will also need to edit your setup.py to use the results:
import versioneer
setup(version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(), ...)
Please read the docstring in ./versioneer.py for configuration instructions,
edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
"""
SAMPLE_CONFIG = """
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
#VCS = git
#style = pep440
#versionfile_source =
#versionfile_build =
#tag_prefix =
#parentdir_prefix =
"""
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
def do_setup():
"""Do main VCS-independent setup function for installing Versioneer."""
root = get_root()
try:
cfg = get_config_from_root(root)
except (EnvironmentError, configparser.NoSectionError,
configparser.NoOptionError) as e:
if isinstance(e, (EnvironmentError, configparser.NoSectionError)):
print("Adding sample versioneer config to setup.cfg",
file=sys.stderr)
with open(os.path.join(root, "setup.cfg"), "a") as f:
f.write(SAMPLE_CONFIG)
print(CONFIG_ERROR, file=sys.stderr)
return 1
print(" creating %s" % cfg.versionfile_source)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG % {"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
ipy = os.path.join(os.path.dirname(cfg.versionfile_source),
"__init__.py")
if os.path.exists(ipy):
try:
with open(ipy, "r") as f:
old = f.read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
with open(ipy, "a") as f:
f.write(INIT_PY_SNIPPET)
else:
print(" %s unmodified" % ipy)
else:
print(" %s doesn't exist, ok" % ipy)
ipy = None
# Make sure both the top-level "versioneer.py" and versionfile_source
# (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
# they'll be copied into source distributions. Pip won't be able to
# install the package without this.
manifest_in = os.path.join(root, "MANIFEST.in")
simple_includes = set()
try:
with open(manifest_in, "r") as f:
for line in f:
if line.startswith("include "):
for include in line.split()[1:]:
simple_includes.add(include)
except EnvironmentError:
pass
# That doesn't cover everything MANIFEST.in can do
# (http://docs.python.org/2/distutils/sourcedist.html#commands), so
# it might give some false negatives. Appending redundant 'include'
# lines is safe, though.
if "versioneer.py" not in simple_includes:
print(" appending 'versioneer.py' to MANIFEST.in")
with open(manifest_in, "a") as f:
f.write("include versioneer.py\n")
else:
print(" 'versioneer.py' already in MANIFEST.in")
if cfg.versionfile_source not in simple_includes:
print(" appending versionfile_source ('%s') to MANIFEST.in" %
cfg.versionfile_source)
with open(manifest_in, "a") as f:
f.write("include %s\n" % cfg.versionfile_source)
else:
print(" versionfile_source already in MANIFEST.in")
# Make VCS-specific changes. For git, this means creating/changing
# .gitattributes to mark _version.py for export-subst keyword
# substitution.
do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
return 0
def scan_setup_py():
"""Validate the contents of setup.py against Versioneer's expectations."""
found = set()
setters = False
errors = 0
with open("setup.py", "r") as f:
for line in f.readlines():
if "import versioneer" in line:
found.add("import")
if "versioneer.get_cmdclass()" in line:
found.add("cmdclass")
if "versioneer.get_version()" in line:
found.add("get_version")
if "versioneer.VCS" in line:
setters = True
if "versioneer.versionfile_source" in line:
setters = True
if len(found) != 3:
print("")
print("Your setup.py appears to be missing some important items")
print("(but I might be wrong). Please make sure it has something")
print("roughly like the following:")
print("")
print(" import versioneer")
print(" setup( version=versioneer.get_version(),")
print(" cmdclass=versioneer.get_cmdclass(), ...)")
print("")
errors += 1
if setters:
print("You should remove lines like 'versioneer.VCS = ' and")
print("'versioneer.versionfile_source = ' . This configuration")
print("now lives in setup.cfg, and should be removed from setup.py")
print("")
errors += 1
return errors
if __name__ == "__main__":
cmd = sys.argv[1]
if cmd == "setup":
errors = do_setup()
errors += scan_setup_py()
if errors:
sys.exit(1) | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/versioneer.py | versioneer.py |
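# Typical invocation (run from the project root after filling in the
# [versioneer] section of setup.cfg):
#   python versioneer.py setup
# This writes PKG/_version.py, appends the snippet above to PKG/__init__.py,
# ensures both files are listed in MANIFEST.in, and marks _version.py for
# git's export-subst keyword substitution via .gitattributes.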
from zoho_inventory_python_sdk.resources import ContactPersons
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.create(body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
contact_persons = ContactPersons()
response = contact_persons.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.get(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.update(id_=id_, body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.delete(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class MarkAsPrimary(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contact_persons = ContactPersons()
response = contact_persons.mark_as_primary(id_=id_)
return response
except Exception as error:
print(error)
raise error | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/src/zoho_inventory_prefect_tasks/tasks/contact_persons.py | contact_persons.py |
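# Minimal usage sketch (illustrative, not shipped with the package): wiring
# these tasks into a Prefect 1.x flow. The ids and body below are
# placeholders; running this requires Zoho credentials configured for the
# underlying zoho_inventory_python_sdk client.
if __name__ == "__main__":
    from prefect import Flow

    with Flow("contact-persons-demo") as demo_flow:
        created = Create()(body={"contact_id": "460000000026049",
                                 "first_name": "Ada",
                                 "last_name": "Lovelace"})
        everyone = List()()
    # demo_flow.run()  # uncomment to execute against a live Zoho account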
from zoho_inventory_python_sdk.resources import ShipmentOrders
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
shipment_orders = ShipmentOrders()
response = shipment_orders.create(body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
shipment_orders = ShipmentOrders()
response = shipment_orders.get(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
shipment_orders = ShipmentOrders()
response = shipment_orders.delete(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class MarkAsDelivered(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
shipment_orders = ShipmentOrders()
response = shipment_orders.mark_as_delivered(id_=id_)
return response
except Exception as error:
print(error)
raise error | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/src/zoho_inventory_prefect_tasks/tasks/shipment_orders.py | shipment_orders.py |
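# Usage sketch (illustrative id; same flow-wiring pattern as the
# contact_persons module): fetch a shipment, then mark it delivered.
#
#   from prefect import Flow
#   with Flow("shipment-demo") as flow:
#       shipment = Fetch()(id_="460000000041013")
#       MarkAsDelivered()(id_="460000000041013")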
from zoho_inventory_python_sdk.resources import Items
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
items = Items()
response = items.create(body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
items = Items()
response = items.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
items = Items()
response = items.get(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
items = Items()
response = items.update(id_=id_, body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
items = Items()
response = items.delete(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class MarkAsActive(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
items = Items()
response = items.mark_as_active(id_=id_)
return response
except Exception as error:
print(error)
raise error
class MarkAsInactive(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
items = Items()
response = items.mark_as_inactive(id_=id_)
return response
except Exception as error:
print(error)
raise error
class FetchPriceBookRate(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, query: dict = None, **task_kwargs: Any):
try:
items = Items()
response = items.get_price_book_rate(query=query)
return response
except Exception as error:
print(error)
raise error
class FetchImage(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
try:
items = Items()
response = items.get_item_image(id_=id_)
return response.content
except Exception as error:
print(error)
raise error | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/src/zoho_inventory_prefect_tasks/tasks/items.py | items.py |
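# Usage sketch (illustrative id): FetchImage returns the raw bytes of the
# item image (response.content), so callers can persist it directly:
#
#   image_bytes = FetchImage().run(id_="460000000038080")
#   with open("item.png", "wb") as fh:
#       fh.write(image_bytes)
#
# FetchPriceBookRate takes a query dict; the keys below are hypothetical
# placeholders -- consult the Zoho Inventory API docs for the exact names:
#
#   FetchPriceBookRate().run(query={"item_id": "460000000038080",
#                                   "pricebook_id": "460000000026089"})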
from zoho_inventory_python_sdk.resources import SalesOrders
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.create(body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
sales_orders = SalesOrders()
response = sales_orders.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.get(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.update(id_=id_, body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.delete(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class MarkAsVoid(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.mark_as_void(id_=id_)
return response
except Exception as error:
print(error)
raise error
class MarkAsConfirmed(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An object must be provided")
try:
sales_orders = SalesOrders()
response = sales_orders.mark_as_confirmed(id_=id_)
return response
except Exception as error:
print(error)
raise error | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/src/zoho_inventory_prefect_tasks/tasks/sales_orders.py | sales_orders.py |
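# Usage sketch (illustrative): a create-then-confirm chain in a Prefect 1.x
# flow. The response key names below follow Zoho's usual response envelope
# and are an assumption here, as is the customer_id value.
#
#   from prefect import Flow, task
#   with Flow("sales-order-demo") as flow:
#       created = Create()(body={"customer_id": "460000000026049"})
#       order_id = task(lambda r: r["salesorder"]["salesorder_id"])(created)
#       MarkAsConfirmed()(id_=order_id)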
from zoho_inventory_python_sdk.resources import Contacts
from prefect import Task
from prefect.utilities.tasks import defaults_from_attrs
from typing import Any
class Create(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if body is None:
raise ValueError("An object must be provided")
try:
contacts = Contacts()
response = contacts.create(body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class List(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, **task_kwargs: Any):
try:
contacts = Contacts()
response = contacts.list(**task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Fetch(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.get(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Update(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
if body is None:
raise ValueError("An object must be provided")
try:
contacts = Contacts()
response = contacts.update(id_=id_, body=body, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class Delete(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, path_params: dict = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.delete(id_=id_, path_params=path_params, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class MarkAsActive(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.mark_as_active(id_=id_)
return response
except Exception as error:
print(error)
raise error
class MarkAsInactive(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.mark_as_inactive(id_=id_)
return response
except Exception as error:
print(error)
raise error
class ListComments(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.list_comments(id_=id_, query=query)
return response
except Exception as error:
print(error)
raise error
class ListContactPersons(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, **task_kwargs: Any):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.list_contact_persons(id_=id_, **task_kwargs)
return response
except Exception as error:
print(error)
raise error
class GetEmailStatement(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.get_email_statement(id_=id_, query=query)
return response
except Exception as error:
print(error)
raise error
class SendEmailStatement(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.send_email_statement(id_=id_, body=body, query=query)
return response
except Exception as error:
print(error)
raise error
class SendEmail(Task):
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
@defaults_from_attrs()
def run(self, id_: str = None, body: dict = None, query: dict = None):
if id_ is None:
raise ValueError("An id must be provided")
try:
contacts = Contacts()
response = contacts.send_email(id_=id_, body=body, query=query)
return response
except Exception as error:
print(error)
raise error | zoho-inventory-prefect-tasks | /zoho_inventory_prefect_tasks-0.0.5.tar.gz/zoho_inventory_prefect_tasks-0.0.5/src/zoho_inventory_prefect_tasks/tasks/contacts.py | contacts.py |
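# Usage sketch (illustrative id): the statement/email helpers take query and
# body dicts whose keys are passed straight through to Zoho; the keys shown
# here are hypothetical placeholders.
#
#   GetEmailStatement().run(id_="460000000026049",
#                           query={"start_date": "2021-01-01",
#                                  "end_date": "2021-01-31"})
#   SendEmail().run(id_="460000000026049",
#                   body={"subject": "Your statement",
#                         "body": "Please find your statement attached."})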
from .base_client import BaseClient
from .oauth_manager import OAuthManager
class Client(BaseClient):
def __init__(self, **opts):
self.oauth_manager = OAuthManager(**opts)
super(Client, self).__init__(organization_id=self.oauth_manager.client.storage.get('zoho_inventory',
'organization_id'),
region=self.oauth_manager.client.storage.get('zoho_inventory', 'region'), **opts)
def authenticated_fetch(self, path: str = None, req: dict = None, mimetype: str = "application/json",
encode_json_string: bool = False):
access_token = self.oauth_manager.get_access_token()
if req is None:
req = dict()
if "headers" in req:
headers_auth = {**req["headers"], **{"authorization": "Zoho-oauthtoken {}".format(access_token)}}
else:
headers_auth = {**{"authorization": "Zoho-oauthtoken {}".format(access_token)}}
return self.fetch(path=path,
req={
"method": req.get("method", "GET"),
"path_params": req.get("path_params", dict()),
"query": req.get("query", dict()),
"headers": headers_auth,
"body": req.get("body", dict()),
},
mimetype=mimetype,
encode_json_string=encode_json_string
)
def list(self, **options):
return self.authenticated_fetch(path="", **options)
def create(self, body: dict = None, path_params: dict = None, **kwargs):
return self.authenticated_fetch(path="",
req={
"method": "POST",
"path_params": path_params,
"body": body,
},
mimetype=kwargs.get("mimetype", "application/x-www-form-urlencoded"),
encode_json_string=kwargs.get("encode_json_string", True)
)
def get(self, id_: str = "", path_params: dict = None, **kwargs):
return self.authenticated_fetch(path=f"{id_}/", req={"path_params": path_params},
mimetype=kwargs.get("mimetype", "application/x-www-form-urlencoded"),
encode_json_string=kwargs.get("encode_json_string", False))
def update(self, id_: str = "", body: dict = None, path_params: dict = None, **kwargs):
return self.authenticated_fetch(path=f"{id_}/",
req={
"method": "PUT",
"path_params": path_params,
"body": body,
},
mimetype=kwargs.get("mimetype", "application/x-www-form-urlencoded"),
encode_json_string=kwargs.get("encode_json_string", True)
)
def delete(self, id_: str = "", path_params: dict = None, **kwargs):
return self.authenticated_fetch(path=f"{id_}/",
req={
"method": "DELETE",
"path_params": path_params,
},
mimetype=kwargs.get("mimetype", "application/x-www-form-urlencoded"),
encode_json_string=kwargs.get("encode_json_string", False)
) | zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/client.py | client.py |
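# Minimal sketch (illustrative): resource classes in this SDK are thin
# subclasses of Client that pin the "resource" path segment. A hypothetical
# "warehouses" resource would look like:
#
#   class Warehouses(Client):
#       def __init__(self, **opts):
#           super().__init__(**{**opts, **{"resource": "warehouses"}})
#
#   Warehouses().list()           # GET {origin}/warehouses?organization_id=...
#   Warehouses().get(id_="123")   # GET {origin}/warehouses/123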
import re
import requests
import json
class BaseClient:
def __init__(self, resource: str = None, path: str = None, origin: str = None, organization_id: str = None,
region: str = 'com', **opts):
self.resource = resource
self.path = path or resource
self.origin = origin or "https://inventory.zoho.{}/api/v1".format(region)
self.url = re.sub(r"\/$", "", self.origin) + "/" + self.path + "/"
self.headers = {}
self.query = {
'organization_id': organization_id
}
def fetch(self, path: str = None, req: dict = None, mimetype: str = "application/json",
encode_json_string: bool = False):
if "method" not in req:
req["method"] = "GET"
if "path_params" not in req:
req["path_params"] = {}
if "query" not in req:
req["query"] = {}
if "body" not in req:
req["body"] = {}
if "headers" not in req:
req["headers"] = {}
        if encode_json_string:
            # Zoho write endpoints expect the JSON body wrapped in a single
            # form field named "JSONString"; see form_encode() below.
            req["body"] = self.form_encode(req.get("body"))
        # Always carry the organization_id query parameter set in __init__.
        req.get('query', {}).update(self.query)
        # Substitute ":placeholder" URL segments (e.g. ":template_id").
        target = self._replace_path(self.url + path, req["path_params"])
        headers = {**self.headers, **{"content-type": mimetype}, **req["headers"]}
        response = requests.request(req["method"], target.rstrip("/"), headers=headers, params=req["query"],
data=req["body"])
if 'application/json' in response.headers['Content-Type']:
return response.json()
else:
return response
@staticmethod
def _replace_path(path: str = None, path_params: dict = None) -> str:
if path_params is None:
path_params = {}
new_path = path
for key in path_params:
new_path = new_path.replace(':' + key, path_params[key])
return new_path
@staticmethod
    def form_encode(body: dict = None) -> dict:
        # Zoho Inventory write endpoints expect the JSON payload wrapped in a
        # single form field named "JSONString".
        form = {'JSONString': json.dumps(body, separators=(',', ':'))}
return form | zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/base_client.py | base_client.py |
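# Behavior sketch (illustrative values):
#   BaseClient._replace_path("https://host/api/v1/items/:item_id/",
#                            {"item_id": "123"})
#       -> "https://host/api/v1/items/123/"
#   BaseClient.form_encode({"name": "Widget"})
#       -> {"JSONString": '{"name":"Widget"}'}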
import json
import os
import time
from datetime import datetime, timedelta
from .base_client import BaseClient
from .aws import SecretsManagerClient, SecretsManagerStorage, MissingSetting
class OAuth(BaseClient):
def __init__(self, **opts):
self.aws_access_key = os.getenv('AWS_ACCESS_KEY_ID')
self.aws_secret_key = os.getenv('AWS_SECRET_ACCESS_KEY')
self.aws_secretsmanager_secret_name = os.getenv('AWS_SM_ZOHO_INVENTORY_SECRET_NAME')
self.aws_secretsmanager_region = os.getenv('AWS_DEFAULT_REGION')
self.secretsmanager_client = SecretsManagerClient.get_instance(
self.aws_access_key, self.aws_secret_key,
region_name=self.aws_secretsmanager_region,
)
self.secretsmanager_client.name = self.aws_secretsmanager_secret_name
self.storage = SecretsManagerStorage(secretsmanager_client=self.secretsmanager_client,
name=self.secretsmanager_client.name)
self.storage.read_config()
super(OAuth, self).__init__(resource="oauth", path="oauth/v2", origin="https://accounts.zoho.{}".format(
self.storage.get('zoho_inventory', 'region')))
self.client_id = self.storage.get('zoho_inventory', 'client_id')
self.client_secret = self.storage.get('zoho_inventory', 'client_secret')
self.refresh_token = self.storage.get('zoho_inventory', 'refresh_token')
try:
self.expiry_time = self.storage.get('zoho_inventory', 'expiry_time')
except MissingSetting:
self.storage.set('zoho_inventory', 'expiry_time', str(time.mktime(datetime(1970, 1, 1, 0, 0,
1).timetuple())))
self.expiry_time = self.storage.get('zoho_inventory', 'expiry_time')
try:
self.access_token = self.storage.get('zoho_inventory', 'access_token')
except MissingSetting:
self.refresh_access_token()
self.access_token = self.storage.get('zoho_inventory', 'access_token')
def refresh_access_token(self):
token = self.fetch(
path="token",
req={
"method": "POST",
"query": {
"client_id": self.client_id,
"client_secret": self.client_secret,
"refresh_token": self.refresh_token,
"grant_type": "refresh_token"
}
}
)
self.access_token = token.get('access_token')
self.storage.set('zoho_inventory', 'access_token', self.access_token)
        expiry_time = datetime.now() + timedelta(seconds=token.get("expires_in", 0))
        # timetuple() already truncates sub-second precision, so the datetime
        # does not need to be rebuilt field by field
        self.expiry_time = time.mktime(expiry_time.timetuple())
self.storage.set('zoho_inventory', 'expiry_time', str(self.expiry_time))
        # Log at debug level instead of print(); the previous debug prints
        # dumped the entire credential store (client secret, refresh token)
        # to stdout.
        logger.debug('Token refreshed; new expiry_time=%s', self.expiry_time)
self.secretsmanager_client.put_value(
secret_value=json.dumps(
{s: dict(self.storage.items(s)) for s in self.storage.sections()}))
return token
def revoke_token(self):
return self.fetch(
path="token/revoke",
req={
"method": "POST",
"body": {
"refresh_token": self.refresh_token
}
}
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/oauth.py | oauth.py |
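A small sketch of the expiry bookkeeping used by `OAuth.refresh_access_token` above. The `expires_in` value is invented here; Zoho returns it with each token response.

import time
from datetime import datetime, timedelta

expires_in = 3600  # seconds, as reported by the token endpoint
expiry_ts = time.mktime((datetime.now() + timedelta(seconds=expires_in)).timetuple())
print(expiry_ts > time.time())  # True until the access token lapses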
from ..client import Client
'''
TODO:
- Override or refactor Client methods create and update to accept query parameters.
- Missing methods to manage file attachments.
'''
class RetainerInvoices(Client):
def __init__(self, **opts):
super(RetainerInvoices, self).__init__(**{**opts, **{"resource": "retainerinvoices"}})
def mark_as_sent(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/sent/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_void(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/void/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_draft(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/draft/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def submit(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/submit/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def approve(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/approve/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def update_template(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/templates/:template_id/",
req={
"method": "PUT",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def send_by_email(self, id_: str = "", body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "POST",
"body": body or {},
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def get_email_content(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "GET",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def update_billing_address(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/address/billing/",
req={
"method": "PUT",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def list_templates(self):
return self.authenticated_fetch(path=f"templates/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def list_comments(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def add_comment(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_comment(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/:comment_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def update_comment(self, id_: str = "", body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/:comment_id/",
req={
"method": "PUT",
"body": body or {},
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/retainerinvoices.py | retainerinvoices.py |
from ..client import Client
'''
TODO:
- Override or refactor Client methods create and update to accept query parameters.
'''
class VendorCredits(Client):
def __init__(self, **opts):
super(VendorCredits, self).__init__(**{**opts, **{"resource": "vendorcredits"}})
def mark_as_open(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/open/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_void(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/void/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def submit(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/submit/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def approve(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/approve/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def list_bills(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/bills/",
req={
"method": "GET"
},
mimetype="application/x-www-form-urlencoded",
)
def apply_credits_to_bill(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/bills/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_credits_applied_to_bill(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/bills/:bill_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def refund(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def update_refund(self, id_: str = "", body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "PUT",
"body": body or {},
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def get_refund(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "GET",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_vendor_credit_refunds(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def delete_refund(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_refunds(self, query: dict = None):
return self.authenticated_fetch(path=f"refunds/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_comments(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def add_comment(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_comment(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/:comment_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/vendorcredits.py | vendorcredits.py |
from ..client import Client
class Items(Client):
def __init__(self, **opts):
super(Items, self).__init__(**{**opts, **{"resource": "items"}})
def mark_as_active(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/active/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_inactive(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/inactive/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def get_item_image(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/image/",
req={
"method": "GET"
},
mimetype="application/x-www-form-urlencoded",
)
def delete_item_image(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/image/",
req={
"method": "DELETE"
},
mimetype="application/x-www-form-urlencoded",
)
def get_price_book_rate(self, query: dict = None):
return self.authenticated_fetch(path=f"pricebookrate/",
req={
"method": "POST",
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/items.py | items.py |
from ..client import Client
'''
TODO:
- Override or refactor Client methods create and update to accept query parameters.
- Missing methods to manage file attachments.
'''
class Invoices(Client):
def __init__(self, **opts):
super(Invoices, self).__init__(**{**opts, **{"resource": "invoices"}})
def mark_as_sent(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/sent/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_void(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/void/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_draft(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/status/draft/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def send_invoice_by_email(self, id_: str = "", body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "POST",
"body": body or {},
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def send_invoices(self, query: dict = None):
return self.authenticated_fetch(path=f"email/",
req={
"method": "POST",
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
)
def get_invoice_email_content(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "GET",
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
)
def get_payment_reminder_email_content(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/paymentreminder/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def disable_payment_reminder(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/paymentreminder/disable",
req={
"method": "POST",
},
mimetype="application/x-www-form-urlencoded",
)
def enable_payment_reminder(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/paymentreminder/enable",
req={
"method": "POST",
},
mimetype="application/x-www-form-urlencoded",
)
def bulk_export(self, query: dict = None):
return self.authenticated_fetch(path=f"pdf/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def bulk_print(self, query: dict = None):
return self.authenticated_fetch(path=f"print/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def write_off(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/writeoff/",
req={
"method": "POST",
},
mimetype="application/x-www-form-urlencoded",
)
def cancel_write_off(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/writeoff/cancel/",
req={
"method": "POST",
},
mimetype="application/x-www-form-urlencoded",
)
def update_billing_address(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/address/billing/",
req={
"method": "PUT",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def update_shipping_address(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/address/shipping/",
req={
"method": "PUT",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def list_templates(self):
return self.authenticated_fetch(path=f"templates/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def update_template(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/templates/:template_id/",
req={
"method": "PUT",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_payments(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/payments/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def delete_payment(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/payments/:payment_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_credits_applied(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/creditsapplied/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def apply_credits(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/credits/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_credits_applied(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/creditsapplied/:credit_note_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_comments(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def add_comment(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "POST",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/invoices.py | invoices.py |
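A hedged usage sketch for the `Invoices` wrapper above. The invoice id and the email payload are invented; the `to_mail_ids`/`subject` field names follow Zoho's documented email payload but should be treated as an assumption.

invoices = Invoices()
invoices.mark_as_sent(id_="2000001")  # POST /invoices/2000001/status/sent
invoices.send_invoice_by_email(
    id_="2000001",
    body={"to_mail_ids": ["[email protected]"], "subject": "Your invoice"},
)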
from ..client import Client
class Contacts(Client):
def __init__(self, **opts):
super(Contacts, self).__init__(**{**opts, **{"resource": "contacts"}})
def mark_as_active(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/active/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_inactive(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/inactive/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def send_email_statement(self, id_: str = "", body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f"{id_}/statements/email/",
req={
"method": "POST",
"body": body,
"query": query,
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True
)
def get_email_statement(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/statements/email/",
req={
"method": "GET",
"query": query,
},
mimetype="application/x-www-form-urlencoded",
)
def send_email(self, id_: str = "", body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "POST",
"body": body,
"query": query,
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True
)
def list_comments(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "GET",
"query": query,
},
mimetype="application/x-www-form-urlencoded",
)
def list_contact_persons(self, id_: str = '', **options):
return self.authenticated_fetch(path=f"{id_}/contactpersons/", **options) | zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/contacts.py | contacts.py |
from ..client import Client
'''
TODO:
- Override or refactor Client methods create and update to accept query parameters.
'''
class CreditNotes(Client):
def __init__(self, **opts):
super(CreditNotes, self).__init__(**{**opts, **{"resource": "creditnotes"}})
def send_credit_note_by_email(self, id_: str = "", body: dict = None, query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "POST",
"body": body or {},
"query": query or {}
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def get_credit_note_email_history(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/emailhistory/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def get_credit_note_email_content(self, id_: str = "", query: dict = None):
return self.authenticated_fetch(path=f"{id_}/email/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_void(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/void/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_draft(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/draft/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def mark_as_open(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/converttoopen/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def submit(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/submit/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def approve(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/approve/",
req={
"method": "POST"
},
mimetype="application/x-www-form-urlencoded",
)
def update_billing_address(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/address/billing/",
req={
"method": "PUT",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def update_shipping_address(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/address/shipping/",
req={
"method": "PUT",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def list_templates(self):
return self.authenticated_fetch(path=f"templates/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def update_template(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/templates/:template_id/",
req={
"method": "PUT",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_invoices_credited(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/invoices/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def apply_credits_to_invoices(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/invoices/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_credits_applied_to_invoice(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/invoices/:invoice_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_comments(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def add_comment(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_comment(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/comments/:comment_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_refunds(self, query: dict = None):
return self.authenticated_fetch(path=f"refunds/",
req={
"method": "GET",
"query": query or {},
},
mimetype="application/x-www-form-urlencoded",
)
def list_credit_note_refunds(self, id_: str = ""):
return self.authenticated_fetch(path=f"{id_}/refunds/",
req={
"method": "GET",
},
mimetype="application/x-www-form-urlencoded",
)
def get_credit_note_refund(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "GET",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)
def refund_credit_note(self, id_: str = "", body: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/",
req={
"method": "POST",
"body": body or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def update_credit_note_refund(self, id_: str = "", body: dict = None, path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "PUT",
"body": body or {},
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
encode_json_string=True,
)
def delete_credit_note_refund(self, id_: str = "", path_params: dict = None):
return self.authenticated_fetch(path=f"{id_}/refunds/:refund_id/",
req={
"method": "DELETE",
"path_params": path_params or {},
},
mimetype="application/x-www-form-urlencoded",
)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/resources/creditnotes.py | creditnotes.py |
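A hedged sketch of the refund flow exposed by `CreditNotes` above. The credit note id and the body fields are invented for illustration; consult Zoho's API reference for the exact refund payload.

notes = CreditNotes()
notes.refund_credit_note(id_="3000001", body={"date": "2023-01-15", "amount": 25.0})
notes.list_credit_note_refunds(id_="3000001")  # refunds recorded against that note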
import logging

import boto3
from botocore.exceptions import ClientError
logger = logging.getLogger(__name__)
class SecretsManagerClient:
"""Encapsulates Secrets Manager functions."""
_instance = None
def __init__(self, secretsmanager_client):
"""
:param secretsmanager_client: A Boto3 Secrets Manager client.
"""
self.secretsmanager_client = secretsmanager_client
self.name = None
@staticmethod
def get_instance(access_key, secret_key, region_name='us-west-2'):
if SecretsManagerClient._instance is None:
secretsmanager_client = boto3.client(
'secretsmanager',
region_name=region_name,
aws_access_key_id=access_key,
aws_secret_access_key=secret_key
)
SecretsManagerClient._instance = SecretsManagerClient(secretsmanager_client)
return SecretsManagerClient._instance
def _clear(self):
self.name = None
def create(self, name, secret_value):
"""
Creates a new secret. The secret value can be a string or bytes.
:param name: The name of the secret to create.
:param secret_value: The value of the secret.
:return: Metadata about the newly created secret.
"""
self._clear()
try:
kwargs = {'Name': name}
if isinstance(secret_value, str):
kwargs['SecretString'] = secret_value
elif isinstance(secret_value, bytes):
kwargs['SecretBinary'] = secret_value
response = self.secretsmanager_client.create_secret(**kwargs)
self.name = name
logger.info("Created secret %s.", name)
        except ClientError:
            logger.exception("Couldn't create secret %s.", name)
raise
else:
return response
def describe(self, name=None):
"""
Gets metadata about a secret.
:param name: The name of the secret to load. If `name` is None, metadata about
the current secret is retrieved.
:return: Metadata about the secret.
"""
if self.name is None and name is None:
raise ValueError
if name is None:
name = self.name
self._clear()
try:
response = self.secretsmanager_client.describe_secret(SecretId=name)
self.name = name
logger.info("Got secret metadata for %s.", name)
except ClientError:
logger.exception("Couldn't get secret metadata for %s.", name)
raise
else:
return response
def get_value(self, stage=None):
"""
Gets the value of a secret.
:param stage: The stage of the secret to retrieve. If this is None, the
current stage is retrieved.
:return: The value of the secret. When the secret is a string, the value is
contained in the `SecretString` field. When the secret is bytes,
it is contained in the `SecretBinary` field.
"""
if self.name is None:
raise ValueError
try:
kwargs = {'SecretId': self.name}
if stage is not None:
kwargs['VersionStage'] = stage
response = self.secretsmanager_client.get_secret_value(**kwargs)
logger.info("Got value for secret %s.", self.name)
except ClientError:
logger.exception("Couldn't get value for secret %s.", self.name)
raise
else:
return response
def get_random_password(self, pw_length):
"""
Gets a randomly generated password.
:param pw_length: The length of the password.
:return: The generated password.
"""
try:
response = self.secretsmanager_client.get_random_password(
PasswordLength=pw_length)
password = response['RandomPassword']
logger.info("Got random password.")
except ClientError:
logger.exception("Couldn't get random password.")
raise
else:
return password
def put_value(self, secret_value, stages=None):
"""
Puts a value into an existing secret. When no stages are specified, the
value is set as the current ('AWSCURRENT') stage and the previous value is
moved to the 'AWSPREVIOUS' stage. When a stage is specified that already
exists, the stage is associated with the new value and removed from the old
value.
:param secret_value: The value to add to the secret.
:param stages: The stages to associate with the secret.
:return: Metadata about the secret.
"""
if self.name is None:
raise ValueError
try:
kwargs = {'SecretId': self.name}
if isinstance(secret_value, str):
kwargs['SecretString'] = secret_value
elif isinstance(secret_value, bytes):
kwargs['SecretBinary'] = secret_value
if stages is not None:
kwargs['VersionStages'] = stages
response = self.secretsmanager_client.put_secret_value(**kwargs)
logger.info("Value put in secret %s.", self.name)
except ClientError:
logger.exception("Couldn't put value in secret %s.", self.name)
raise
else:
return response
def update_version_stage(self, stage, remove_from, move_to):
"""
Updates the stage associated with a version of the secret.
:param stage: The stage to update.
:param remove_from: The ID of the version to remove the stage from.
:param move_to: The ID of the version to add the stage to.
:return: Metadata about the secret.
"""
if self.name is None:
raise ValueError
try:
response = self.secretsmanager_client.update_secret_version_stage(
SecretId=self.name, VersionStage=stage, RemoveFromVersionId=remove_from,
MoveToVersionId=move_to)
logger.info("Updated version stage %s for secret %s.", stage, self.name)
except ClientError:
logger.exception(
"Couldn't update version stage %s for secret %s.", stage, self.name)
raise
else:
return response
def delete(self, without_recovery):
"""
Deletes the secret.
:param without_recovery: Permanently deletes the secret immediately when True;
otherwise, the deleted secret can be restored within
the recovery window. The default recovery window is
30 days.
"""
if self.name is None:
raise ValueError
try:
self.secretsmanager_client.delete_secret(
SecretId=self.name, ForceDeleteWithoutRecovery=without_recovery)
logger.info("Deleted secret %s.", self.name)
self._clear()
except ClientError:
logger.exception("Deleted secret %s.", self.name)
raise
def list(self, max_results):
"""
Lists secrets for the current account.
:param max_results: The maximum number of results to return.
:return: Yields secrets one at a time.
"""
try:
paginator = self.secretsmanager_client.get_paginator('list_secrets')
for page in paginator.paginate(
PaginationConfig={'MaxItems': max_results}):
for secret in page['SecretList']:
yield secret
except ClientError:
logger.exception("Couldn't list secrets.")
            raise

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/aws/secretsmanager_client.py | secretsmanager_client.py |
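A hedged usage sketch for `SecretsManagerClient` above. It assumes valid AWS credentials in the environment; the secret name is invented.

import os
from zoho_inventory_python_sdk.aws import SecretsManagerClient

sm = SecretsManagerClient.get_instance(
    os.getenv("AWS_ACCESS_KEY_ID"),
    os.getenv("AWS_SECRET_ACCESS_KEY"),
    region_name=os.getenv("AWS_DEFAULT_REGION", "us-west-2"),
)
sm.name = "zoho_inventory"  # hypothetical secret name
print(sm.get_value()["SecretString"])  # raises ClientError if the secret is missing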
import os
from configparser import NoOptionError, NoSectionError, ConfigParser

# `ExactOnlineConfig` and `MissingSetting` are referenced below but were not
# imported in the original file. They appear to come from a sibling storage
# module adapted from the `exactonline` project; the module name used here is
# an assumption.
from .base import ExactOnlineConfig, MissingSetting
class S3Storage(ExactOnlineConfig, ConfigParser):
"""
Configuration based on the SafeConfigParser and the
ExactOnlineConfig.
Takes an S3 key as input and writes/reads that file
"""
def __init__(self, key, config_path, s3_client, **kwargs):
super(S3Storage, self).__init__(**kwargs)
self.key = key
self.config_path = config_path
self.s3_client = s3_client
        if hasattr(self.config_path, 'read'):
            # An already-open file object: nothing to create on disk.
            self.overwrite = False
        else:
            # Only build the parent directory for real paths, and only when the
            # path has a directory component (os.path.dirname was previously
            # called even on file objects, and makedirs('') would raise).
            dirname = os.path.dirname(self.config_path)
            if dirname:
                os.makedirs(dirname, exist_ok=True)
            self.overwrite = self.config_path
def read_config(self):
if hasattr(self.config_path, 'read'):
if hasattr(self, 'read_file'):
self.read_file(self.config_path)
else:
self.readfp(self.config_path)
else:
self.read([self.config_path])
def get(self, section, option, **kwargs):
"""
Get method that raises MissingSetting if the value was unset.
This differs from the SafeConfigParser which may raise either a
NoOptionError or a NoSectionError.
We take extra **kwargs because the Python 3.5 configparser extends the
get method signature and it calls self with those parameters.
def get(self, section, option, *, raw=False, vars=None,
fallback=_UNSET):
"""
try:
ret = super(ExactOnlineConfig, self).get(section, option, **kwargs)
except (NoOptionError, NoSectionError):
raise MissingSetting(option, section)
return ret
def set(self, section, option, value: str = None):
"""
Set method that (1) auto-saves if possible and (2) auto-creates
sections.
"""
try:
super(ExactOnlineConfig, self).set(section, option, value)
except NoSectionError:
self.add_section(section)
super(ExactOnlineConfig, self).set(section, option, value)
# Save automatically!
self.save()
def save(self):
if self.overwrite:
with open(self.overwrite, 'w') as output:
self.write(output)
            self.s3_client.upload_file(self.overwrite, self.key)

| zoho-inventory-python-sdk | /zoho_inventory_python_sdk-0.0.9-py3-none-any.whl/zoho_inventory_python_sdk/aws/s3_storage.py | s3_storage.py |
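A hedged sketch of the auto-saving `set` / strict `get` behaviour of `S3Storage` above. `my_s3_client` is a hypothetical object exposing a boto3-style `upload_file(filename, key)` method; the key, path, and values are invented.

storage = S3Storage(key="config/zoho.ini", config_path="/tmp/zoho.ini",
                    s3_client=my_s3_client)
storage.read_config()
storage.set('zoho_inventory', 'region', 'com')  # creates the section, then saves to S3
print(storage.get('zoho_inventory', 'region'))  # 'com'; raises MissingSetting if unset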
# Zoho API - OAuth 2.0
Uses OAuth 2.0 for generating access_tokens for using Zoho APIs.
## Installation
Install the package using pip.

```bash
pip install zoho-oauth2
```
## Generate required credentials
- **CLIENT_ID**
- **CLIENT_SECRET**
>Generate the two values from [https://api-console.zoho.com/](https://api-console.zoho.com/).
Add a self client, and copy the CLIENT_ID and CLIENT_SECRET as seen in the example below.
- **redirect_uri**
>Add the redirect uri of your instance.
- **scope**
>You can find the list of scopes in the api documentation of the products you wish to use the API for.
- **region**
>Select the region if required. Defaults to https://accounts.zoho.com. The following regions are supported, as per the Zoho documentation.
>>EU : https://accounts.zoho.eu
>>CN : https://accounts.zoho.com.cn
>>IN : https://accounts.zoho.in
## API Documentation
The access tokens have only been tested against a ManageEngine ServiceDesk Plus instance in a demo environment, although they should work with most Zoho products.
Learn more about the APIs from the link below.
- [ZOHO People API Documentation](https://www.zoho.com/people/api/overview.html)
- [ManageEngine ServiceDesk Plus Cloud API Documentation](https://www.manageengine.com/products/service-desk/sdpod-v3-api/index.html)
## Available methods to use.
- To generate the access token for making the API Requests.
```Python
ZohoAPITokens(
    client_id=CLIENT_ID,          # Required
    client_secret=CLIENT_SECRET,  # Required
    redirect_uri=REDIRECT_URI,    # Required
    scope=SCOPES,                 # Required: comma-separated string of scopes
    region=REGION                 # Optional
)
```
- To revoke the refresh token. The method revokes the refresh token and then deletes the token.pickle file.
```Python
revokeRefreshToken()
```
## Example Usage
```Python
from zoho_oauth2 import ZohoAPITokens

if __name__ == '__main__':
    test = ZohoAPITokens(
        client_id=CLIENT_ID,
        client_secret=CLIENT_SECRET,
        redirect_uri=REDIRECT_URI,
        scope=SCOPES  # comma-separated string of scopes
    )
```
| zoho-oauth2 | /zoho_oauth2-1.0.7.tar.gz/zoho_oauth2-1.0.7/README.md | README.md |