zim2txt is a Python module that scrapes through a .zim file and creates .txt files from each article it contains. The tool is designed for Linux systems, but it works under WSL (Windows Subsystem for Linux) as well. You must install zim-tools (`sudo apt-get install zimtools`) in advance for this module to work. Here is how to use the module:
```
import zim2txt
# ZimTools.Export takes four arguments, in order:
#   1. the path to the .zim file
#   2. the path to a temporary folder that will be deleted later (the author
#      used /usr/games/newfolder with WSL, since folders outside the root
#      directory did not work; if they do for you, any other folder works too)
#   3. the path where the .txt files will be saved (do not use the same path
#      as the temporary folder)
#   4. the encoding method, default set to utf8
# Example
import zim2txt
zim2txt.ZimTools.Export("/data/articles.zim", "/usr/games/newfolder", "/data/articles_txt")  # you don't have to pass the encoding argument if you're cool with utf8
``` | zim2txt | /zim2txt-1.0.0.tar.gz/zim2txt-1.0.0/README.md | README.md |
# Zimbra Permissions Inspector
This CLI utility allows you to query sending permissions on a Zimbra server.
You can retrieve sending permissions for a particular Zimbra Distribution List
(ZDL) or get a complete list of 'Send as' permissions. The most basic query
lists all the existing ZDLs (both dynamic and static).
### Requirements
Make sure you meet the following requirements:
* [Set up the Zimbra LDAP admin user](https://wiki.zimbra.com/wiki/Setting_zimbra_admin_password_in_LDAP) on your Zimbra server.
* You need the [python-ldap](https://pypi.python.org/pypi/python-ldap/) package (see the install note below).
Tested on Zimbra v8.8.12
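If your distribution does not package it, `python-ldap` can usually be installed with pip (note that it builds a C extension, so the OpenLDAP and SASL development headers may need to be installed first):
```
pip install python-ldap
```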
### Installation
You can install it with `pip`:
```
pip install zimbra-permissions-inspector
```
### Usage
Help output:
```
usage: zimbra-permissions-inspector.py [-h] [-l ZDL] [-sa] [-v]
SERVER BASEDN LDAP_ADMIN
Query sending permissions on a Zimbra server
positional arguments:
SERVER URI formatted address of the Zimbra server
BASEDN Specify the searchbase or base DN of the Zimbra LDAP
server
LDAP_ADMIN Admin user of the Zimbra LDAP server
optional arguments:
-h, --help show this help message and exit
-l ZDL, --zdl ZDL Query which Zimbra accounts have permissions to send
mails to the given ZDL
-sa, --sendas Query 'send as' permissions on both Zimbra accounts and
ZDLs
-v, --version Show current version
```
Note that the positional arguments are mandatory.
##### Usage examples
If no optional arguments are given, it'll list all the existing ZDLs (both dynamic and static):
```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra
```
Query which Zimbra accounts have permissions to send mails to a ZDL (Zimbra Distribution List) named "my-zdl-list":
```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra -l my-zdl-list
```
Get a list of "send as" permissions:
```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra -sa
```
| zimbra-permissions-inspector | /zimbra-permissions-inspector-0.1.tar.gz/zimbra-permissions-inspector-0.1/README.md | README.md |
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import argparse
import getpass
import sys
import ldap
from _version import __version__
def main():
""" Call appropriate functions according to the invoked arguments """
m = menu_handler()
server = m.SERVER
basedn = m.BASEDN
adminuser = m.LDAP_ADMIN
try:
        creds = getpass.getpass('\nPlease enter your Zimbra credentials: ')
ldap_data = connect_to_zimbra_ldap(server, basedn, adminuser, creds)
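        # this call's result is discarded; each handler below re-parses ldap_data itself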
get_zdl_data(ldap_data, chosen_list=False)
    except (KeyboardInterrupt, ldap.SERVER_DOWN, ldap.UNWILLING_TO_PERFORM,
            ldap.INVALID_CREDENTIALS) as e:
sys.exit(e)
if m.zdl:
get_users(ldap_data)
elif m.sendas:
get_sendas_permissions(ldap_data)
    # If no optional arguments are given, show the existing ZDLs.
else:
get_lists(ldap_data)
def menu_handler():
""" Handle and return command line arguments """
parser = argparse.ArgumentParser(
description='Query sending permissions on a Zimbra server')
parser.add_argument('SERVER', help='URI formatted address of the Zimbra server')
parser.add_argument('BASEDN', help='Specify the searchbase or base DN of the Zimbra LDAP server')
parser.add_argument('LDAP_ADMIN', help='Admin user of the Zimbra LDAP server')
parser.add_argument('-l', '--zdl', required=False, action='store',
help='Query which Zimbra accounts have permissions to send mails to the given ZDL')
parser.add_argument('-sa', '--sendas', required=False, action='store_true',
help="Query 'send as' permissions on both Zimbra accounts and ZDLs")
parser.add_argument('-v', '--version', action='version',
version="%(prog)s {version}".format(version=__version__),
help='Show current version')
args = parser.parse_args()
return args
def connect_to_zimbra_ldap(server, basedn, adminuser, creds):
""" Connect to the LDAP Zimbra server """
l = ldap.initialize(server)
l.simple_bind_s(adminuser, creds)
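    # retrieve every entry under the base DN; objectClass filtering happens later in Python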
ldap_data = l.search_s(basedn, ldap.SCOPE_SUBTREE, 'objectClass=*')
return ldap_data
def get_dynamic_static_lists(attrs, objectclass, authorized_id, lists):
""" Collect both static and dynamic ZDL data """
    # Dynamic lists.
if objectclass == "zimbraGroup":
zdl_list_type = "dynamic"
key_value = "cn"
    # Static lists.
elif objectclass == "zimbraDistributionList":
zdl_list_type = "static"
key_value = "uid"
for attr in attrs:
# Contains a list with the different types of objectClass.
decoded_oc = [a.decode() for a in attr.get("objectClass")]
if objectclass in decoded_oc:
list_name = attr[key_value][0].decode()
idsauth = attr.get("zimbraACE", "")
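            # each zimbraACE value is typically "<grantee zimbraId> <grantee type> <right>",
            # e.g. "<uuid> usr sendToDistList"; record each (list name, raw ACE) pair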
for authorized in idsauth:
authorized_id.append((list_name, authorized.decode()))
lists.append((list_name, zdl_list_type))
def get_zdl_data(ldap_data, chosen_list=False):
"""Retrieve and return ZDL and related accounts data.
chosen_list=<bool> argument identifies if the -l CLI argument is invoked or not.
"""
attrs = [i[1] for i in ldap_data]
authorized_id = []
lists = []
authorized_accounts = []
get_dynamic_static_lists(attrs, "zimbraGroup", authorized_id, lists)
get_dynamic_static_lists(attrs, "zimbraDistributionList", authorized_id, lists)
if chosen_list:
for authorized in authorized_id:
if chosen_list in authorized:
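                # skip deny ACEs (right name prefixed with "-") and keep only explicit grants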
                if "-sendToDistList" not in authorized[1] and "sendToDistList" in authorized[1]:
permission = authorized[1].split(" ")
authorized = permission[0]
authorized_accounts.append(authorized)
return (lists, authorized_id, authorized_accounts)
def get_users(ldap_data):
""" Identify which Zimbra accounts have sending permissions to provided ZDL """
m = menu_handler()
chosen_list = m.zdl
zdl_data = get_zdl_data(ldap_data, chosen_list)
authorized_accounts = zdl_data[2]
attrs = [i[1] for i in ldap_data]
    # Check that chosen_list exists.
    zdl_lists = [l[0] for l in zdl_data[0]]
    if chosen_list not in zdl_lists:
print("\nSorry, list not found: {}\n".format(chosen_list))
sys.exit()
# Check that authorized_accounts is not empty.
if not authorized_accounts:
print("\nSorry, no permissions were found in list: {}!\n".format(chosen_list))
sys.exit()
    print("\nAuthorized accounts to send mails to %s:\n" % chosen_list)
for attr in attrs:
zimbraid = attr.get("zimbraId")
uid = attr.get("uid")
if zimbraid and uid:
for authorized in authorized_accounts:
if authorized in zimbraid[0].decode():
print(uid[0].decode())
def zimbraid_to_uid(ldap_data, target_zimbraid):
""" Given a ZimbraId value, returns its corresponding UID """
attrs = [i[1] for i in ldap_data]
for attr in attrs:
decoded_oc = [a.decode() for a in attr.get("objectClass")]
if "inetOrgPerson" in decoded_oc and attr.get("uid"):
zimbraid = attr.get("zimbraId")[0].decode()
uid = attr.get("uid")[0].decode()
if target_zimbraid == zimbraid:
return uid
def get_sendas_permissions(ldap_data):
""" Outputs 'Send as' permissions """
attrs = [i[1] for i in ldap_data]
sendas_auth_accounts = []
    # 'Send as' permissions for ZDLs.
zdl_data = get_zdl_data(ldap_data, chosen_list=False)
authorized_id = zdl_data[1]
for props in authorized_id:
if "sendAsDistList" in props[1]:
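            # each entry is (list name, raw ACE); token 0 of the ACE is the grantee's zimbraId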
sendasperms = props[1].split(" ")
authorized_id = sendasperms[0]
zdl = props[0]
sendas_auth_accounts.append((authorized_id, zdl))
# 'Send as' permissions for Zimbra accounts!
for props in attrs:
decoded_oc = [a.decode() for a in props.get("objectClass")]
if "inetOrgPerson" in decoded_oc and props.get("zimbraACE"):
zimbraace = props["zimbraACE"][0].decode().split(" ")
#zimbraid = props.get("zimbraId")[0].decode()
uid = props.get("uid")[0].decode()
if "sendAs" in zimbraace:
sendas_auth_accounts.append((zimbraace[0], uid))
print("\nZimbra 'Send as' permissions: \n\nPERMISSION OWNER\tTARGET\n")
for sendas_permission in sendas_auth_accounts:
user_with_permission = zimbraid_to_uid(ldap_data, sendas_permission[0])
print("{:<24}{:<20}".format(user_with_permission, sendas_permission[1]))
def get_lists(ldap_data):
""" Print all existing ZDLs """
zdl_data = get_zdl_data(ldap_data, chosen_list=False)
zdl_lists = zdl_data[0]
# Print each list and its type (static or dynamic).
for l in zdl_lists:
print("%s ( %s )" % (l[0], l[1]))
if __name__ == "__main__":
main() | zimbra-permissions-inspector | /zimbra-permissions-inspector-0.1.tar.gz/zimbra-permissions-inspector-0.1/zimbra_permissions_inspector/zimbra_permissions_inspector.py | zimbra_permissions_inspector.py |
# Python Zimbra Web
| branch | status |
|-----------|------------------|
| main | [](https://github.com/cirosec-studis/python-zimbra-web/actions/workflows/tests.yml) |
| develop | [](https://github.com/cirosec-studis/python-zimbra-web/actions/workflows/tests.yml) |
## Usage
For the entire documentation please see [https://cirosec-studis.github.io/python-zimbra-web](https://cirosec-studis.github.io/python-zimbra-web).
The documentation for the develop branch can be found here: [https://cirosec-studis.github.io/python-zimbra-web/develop/](https://cirosec-studis.github.io/python-zimbra-web/develop)
You can use `ZimbraUser` to send E-mails. You can send multiple E-mails within a single session.
```python
from zimbraweb import ZimbraUser
user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")
user.send_mail(to="[email protected]", subject="subject", body="body", cc="[email protected]")
user.logout()
```
### Sending EMLs
Please note the [Limitations](#known-limitations) when trying to parse EML.
```python
from zimbraweb import ZimbraUser
user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")
emlstr = open("myemlfile.eml").read()
user.send_eml(emlstr)
```
### Sending raw WebkitPayloads
If you don't want to rely on us to generate the payload, you can generate a payload yourself and send it using
```python
from zimbraweb import ZimbraUser
user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")
# you could also generate the payload yourself or use our library
raw_payload, boundary = user.generate_webkit_payload(to="[email protected]", subject="hello world!", body="this is a raw payload.")
# then send the raw_payload bytes
user.send_raw_payload(raw_payload, boundary)
user.logout()
```
### Attachments
You can generate attachments using the `WebkitAttachment` class:
```python
from zimbraweb import ZimbraUser, WebkitAttachment
user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")
attachments = []
with open("myfile.jpg", "rb") as f:
attachments.append(WebkitAttachment(content=f.read(), filename="attachment.jpg"))
user.send_mail(to="[email protected]", subject="subject", body="body", attachments=attachments)
user.logout()
```
## Known Limitations
* Emoji are not supported, even though other UTF-8 characters are. See [Issue #3](https://github.com/cirosec-studis/python-zimbra-web/issues/3)
* This package is made with German UIs in mind. If your UI is in a different language, feel free to fork and adjust the language-specific strings as needed. [Issue #43](https://github.com/cirosec-studis/python-zimbra-web/issues/43)
* The EML parsing can strictly only parse plaintext emails, optionally with attachments. Any emails with a Content-Type other than `text/plain` or `multipart/mixed` will be rejected. This is because the zimbra web interface does not allow HTML emails. Parsing `multipart/mixed` will only succeed if there is exactly one `text/plain` part and, optionally, attachments with the `Content-Disposition: attachment` header. If there are any `multipart/alternative` parts, the parsing will fail because we cannot deliver them to the Zimbra web interface.
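For reference, here is a minimal sketch of building a compliant EML with Python's standard `email` library (the recipient and file names are placeholders):
```python
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "[email protected]"
msg["Subject"] = "subject"
msg.set_content("plaintext body")  # the single text/plain part
with open("report.pdf", "rb") as f:
    # add_attachment() turns the message into multipart/mixed and sets
    # Content-Disposition: attachment, matching the constraints above
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="report.pdf")
emlstr = msg.as_string()  # can now be passed to user.send_eml(emlstr)
```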
## Install
```
pip install zimbraweb
```
## Contributing
1. Best practice is to develop in a python3.8 virtual env: `python3.8 -m venv env`, `source env/bin/activate` (Unix) or `env\Scripts\activate.ps1` (Windows)
2. Install dev-requirements `pip install -r requirements_dev.txt`
3. When working on a new feature, create a branch with `git checkout -b feature_myfeaturename`. We are using [this branching model](https://nvie.com/posts/a-successful-git-branching-model/)
4. Before committing, check
1. `mypy src` returns no failures.
2. `flake8 src tests` returns no problems.
3. `pytest` has no unexpected failed tests.
    4. Optionally, test with `tox`. This might take a few minutes, so you may want to run it only before pushing.
### Development Install
```bash
$ git clone https://github.com/cirosec-studis/python-zimbra-web/
$ cd python-zimbra-web
$ pip install -e .
```
This installs the package with a symlink, so the package is automatically updated when files are changed.
It can then be used in a Python console.
| zimbraweb | /zimbraweb-2.1.0.tar.gz/zimbraweb-2.1.0/README.md | README.md |
**ZIMply** is an easy-to-use, offline reader for `Wikipedia <https://www.wikipedia.org>`__ - or any other ZIM file - which provides access through any ordinary browser. **ZIMply** is written entirely in `Python <https://www.python.org>`__ and, as the name implies, relies on `ZIM files <http://www.openzim.org/wiki/OpenZIM>`__. Each ZIM file is a bundle containing thousands of articles, images, etc. as found on websites such as `Wikipedia <https://www.wikipedia.org>`__. The format was made popular by `Kiwix <http://www.kiwix.org>`__, a program for reading such files offline on your device. As indicated, **ZIMply** differs from `Kiwix <http://www.kiwix.org>`__ in that it provides access through the browser. It accomplishes this by running its own HTTP server, which makes it easy to install **ZIMply** on one device (a *server*, such as a `Raspberry Pi <https://www.raspberrypi.org/products/>`__) and access it on others (the *clients*). To install Python 3 on a `Raspbian lite distribution <https://www.raspberrypi.org/downloads/raspbian/>`__ it suffices to install the following packages:
.. code:: bash
    sudo apt-get install -qq python3 python3-setuptools python3-dev python3-pip
Once you have Python 3 up and running, the easiest way to install **ZIMply** is through pip:
.. code:: bash
pip install zimply
When you have both Python 2.* and Python 3.* installed on your system, you may need to replace ``pip`` with ``pip3`` depending on your setup. All you need to do then is download a ZIM file from `this site <https://www.mirrorservice.org/sites/download.kiwix.org/zim/wikipedia/>`__ and use a command such as: **(Be careful! Executing the next command downloads the full English Wikipedia, which is a massive file. Instead, replace the URL with your desired ZIM file!)**
.. code:: bash
curl -o wiki.zim https://www.mirrorservice.org/sites/download.kiwix.org/zim/wikipedia/wikipedia_en_all_novid_2017-08.zim
All that's left is for you to create your own Python file to start the server:
.. code:: python
from zimply import ZIMServer
ZIMServer("wiki.zim")
That is all there is to it. Using the default settings, you can now access your offline Wiki from http://localhost:9454 - spelling out as :WIKI on a keypad. To access **ZIMply** from another system, you need to know the IP address of the system that is running **ZIMply**. You can access it by visiting http://ip_address:9454 where you replace ip_address with the actual IP address.
*Note:* the first time you run **ZIMply**, it will take care of creating the index to enable searching. **This can take some time**. Unless you see error messages, you can assume that it all works as planned and **ZIMply** will notify you as soon as the index is fully created. The largest ZIM file (the full English Wikipedia) takes about half an hour to index on a core i7 processor, and can take over half a day on a Raspberry Pi Zero. Creating the index is a step that only needs to be done once though, and subsequent restarts of **ZIMply** will only take a matter of seconds. **ZIMply** is heavily optimised, and *will* run comfortably on a Raspberry Pi Zero.
To modify the default settings, simply call ZIMServer with your desired options:
.. code:: python
from zimply import ZIMServer
ZIMServer("wiki.zim", index_file="index.idx", template="template.html", ip_address="192.168.1.200", port=9454, encoding="utf-8")
    # set ip_address to an empty string ("") to serve on localhost only, or replace '192.168.1.200' with your machine's actual IP address
Want to tinker with the code yourself? **ZIMply** depends on `gevent <http://www.gevent.org>`__ (for networking), `falcon <https://falconframework.org>`__ (for the web service), and `mako <http://www.makotemplates.org>`__ (for templating). The easiest way to install these dependencies is by using:
.. code:: bash
sudo pip install gevent falcon mako
As before, when you have both Python 2.* and Python 3.* installed on your system, you may need to replace ``pip`` with ``pip3`` depending on your setup. | zimply | /zimply-1.1.4.tar.gz/zimply-1.1.4/README.rst | README.rst |
========
zimports
========
Reformats Python imports so that they can pass flake8-import-order. This is
roughly:
* one import per line
* alphabetically sorted, with stylistic options for how dots, case sensitivity,
and dotted names are sorted
* grouped by builtin / external library / current application (also
stylistically controllable)
* unused imports removed, using pyflakes to match "unused import" warnings
to actual lines of code
* duplicate imports removed (note this does not yet include duplicate symbol
names against different imports)
* no star imports (e.g. ``from <foo> import *``); these are rewritten as
explicit names, by importing all the names from each target module and then
removing all the unused names
* support for TYPE_CHECKING import blocks.
The program currently bolts itself on top of `flake8-import-order
<https://github.com/PyCQA/flake8-import-order/>`_, in order to reuse the import
classification and sorting styles that tool provides. Without options given,
the script will look directly for a ``setup.cfg`` file with a ``[flake8]``
section and will consume flake8-import-order parameters ``"application-import-
names"``, ``"application-package-names"``, and ``"import-order-style"``, to
sort imports exactly as this linter then expects to find them. All of the
single-line import styles, e.g. google, cryptography, pycharm, should just
work.
Special classifications can be given to imports: a ``# noqa`` comment
indicates the import should not be removed, and a ``# noqa nosort`` comment
additionally places the import into a special "don't sort" category, placing
all of the "nosort" imports in the order they originally appeared, grouped
after all the sorted imports. This can be used for special situations where a
few imports have to be in a certain order against each other (SQLAlchemy has
two lines like this at the moment).
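For example (illustrative names)::

    from acme import util  # noqa
    import zzz_must_come_first  # noqa nosort
    import aaa_second  # noqa nosort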
The application also does not affect imports that are inside of conditionals
or defs, or otherwise indented in any way, with the exception of
TYPE_CHECKING imports. This is also the behavior of flake8-import-order;
only imports in column zero of the source file are counted. Imports that
appear on lines below other definitions are still counted, however, and are
moved up to the top section of the source file.
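Imports inside an ``if TYPE_CHECKING:`` block are the exception and are
sorted in place within the block, e.g. (illustrative names)::

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        from myapp.models import Role
        from myapp.models import User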
.. note:: This application runs in **Python 3 only**. It can reformat
imports for Python 2 code as well but internally it uses library
and language features only available in Python 3.
zzzeek why are you writing one of these, there are a dozen pep8 import fixers
=============================================================================
I've just gone through a whole bunch. I need one that:
* works directly with flake8-import-order so we are guaranteed to have a match
* has shell capability, not only a plugin for vim or sublime text (Python Fix
Imports, gratis)
* Removes unused imports, not just reformats them (importanize)
* Reformats imports, not just removes unused ones (autoflake)
* Doesn't miss removing an import that isn't used just because it's on a
multiline import (autoflake)
* Breaks up *all* imports into individual lines, not just if the line is >80 char
(importanize)
* Is still pretty simple (we're a bit beyond our original "extremely" simple
baseline, because all problems are ultimately not that simple) because (since
pyflakes and now flake8-import-order do most of the hard work) this is an
extremely simple job, there's (still) no need for a giant application here.
Usage
=====
The script can run without any configuration; options are as follows::
$ zimports --help
usage: zimports [-h] [-m APPLICATION_IMPORT_NAMES]
[-p APPLICATION_PACKAGE_NAMES] [--style STYLE] [--multi-imports]
[-k] [-kt] [--heuristic-unused HEURISTIC_UNUSED] [--statsonly]
[-e] [--diff] [--stdout]
filename [filename ...]
positional arguments:
filename Python filename(s) or directories
optional arguments:
-h, --help show this help message and exit
-m APPLICATION_IMPORT_NAMES, --application-import-names APPLICATION_IMPORT_NAMES
comma separated list of names that should be
considered local to the application. reads from
[flake8] application-import-names by default.
-p APPLICATION_PACKAGE_NAMES, --application-package-names APPLICATION_PACKAGE_NAMES
comma separated list of names that should be
considered local to the organization. reads from
[flake8] application-package-names by default.
--style STYLE import order styling, reads from [flake8] import-
order-style by default, or defaults to 'google'
--multi-imports If set, multiple imports can exist on one line
-k, --keep-unused keep unused imports even though detected as unused.
Implies keep-unused-type-checking
-kt, --keep-unused-type-checking
keep unused imports even though detected as unused in
type checking blocks. zimports does not detect type usage
in comments or when used as string
--heuristic-unused HEURISTIC_UNUSED
Remove unused imports only if number of imports is
less than <HEURISTIC_UNUSED> percent of the total
lines of code. Ignored in type checking blocks
--statsonly don't write or display anything except the file stats
-e, --expand-stars Expand star imports into the names in the actual
module, which can then have unused names removed.
Requires modules can be imported
--diff don't modify files, just dump out diffs
--stdout dump file output to stdout
Configuration is currently split between flake8 parameters consumed from
``setup.cfg`` and additional zimports parameters in ``pyproject.toml`` (as of
version 0.5.0); unification of these two files may come in a future release,
possibly when flake8 adds toml support::
# setup.cfg
[flake8]
enable-extensions = G
ignore =
A003,
E203,E305,E711,E712,E721,E722,E741,
F841,
N801,N802,N806,
W503,W504
import-order-style = google
application-import-names = sqlalchemy,test
# pyproject.toml, integrated with black
[tool.black]
line-length = 79
target-version = ['py37']
[tool.zimports]
black-line-length = 79
keep-unused-type-checking = true
# other options:
# multi-imports = true
# keep-unused = true
Then, a typical run on a mostly clean source tree looks like::
$ zimports lib/
[Unchanged] lib/sqlalchemy/inspection.py (in 0.0058 sec)
[Unchanged] lib/sqlalchemy/log.py (in 0.0221 sec)
...
[Unchanged] lib/sqlalchemy/orm/attributes.py (in 0.2152 sec)
[Unchanged] lib/sqlalchemy/orm/base.py (in 0.0363 sec)
[Writing] lib/sqlalchemy/orm/relationships.py ([2% of lines are imports] [source +0L/-2L] [3 imports removed in 0.3287 sec])
[Unchanged] lib/sqlalchemy/orm/strategies.py (in 0.2237 sec)
The program has two general modes of usage. One is that of day-to-day usage
for an application that already has clean imports. Running zimports on the
source files of such an application should produce no changes, except for
whatever source files were recently edited, and may have some changes to
imports that need to be placed into the correct order. This usage model is
similar to that of `Black <https://github.com/ambv/black>`_, where you can run
"zimports ." and it will find whatever files need adjusting and leave the rest
alone.
The other mode of usage is the up-front cleanup of an application that has
unorganized imports. In this mode, the goal is to get the source files to a
state where ``zimports`` can be run without requiring any further
modifications to the files, meaning that all necessary imports are either
used locally or marked as not to be removed.
Problems that can occur during this phase are that some imports are unused and
should be removed, while other imports that are apparently unused are still in
fact imported by other parts of the program. Another issue is that changing
the ordering of imports in complex cases may cause the application to no longer
run due to the creation of unresolvable import cycles. Finally, some
programs have use of ``import *``, pulling in a large list of names for which
an unknown portion of them are needed by the application. The options
``--keep-unused``, ``--heuristic-unused`` and ``--expand-stars`` are
provided to assist in working through these issues until the code can be
fully reformatted such that running ``zimports`` no longer produces changes.
The issue of apparently unused imports that are externally imported can be
prominent in some applications. In order to allow imports that aren't
locally used to remain in the source file, symbols that are part of
``__all__`` will not be removed, and neither will imports that are followed
by a ``# noqa`` comment. Either of these techniques should be applied to
imports that
are used from other modules but not otherwise referenced within the immediate
source file. For the less common case that a few imports really need a very
specific import order for things to work, those imports can be followed by a ``
# noqa nosort`` comment that will add these lines to a special group at the end
of all imports, where they will not be removed and their order relative to each
other will be maintained.
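For instance, both of these imports survive removal even if they are unused
locally (illustrative names)::

    from ._helpers import util_fn  # noqa
    from .core import PublicThing

    __all__ = ["PublicThing"]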
The program does currently require that you pass it at least one file or
directory name as an argument. It also does not have the file caching feature
that Black has, which can allow it to only look at files that have changed
since the last run. The plan is to have it check that it's inside a git
repository where it will run through files to be committed if no filenames are
given.
Usage as a ``git`` hook
=======================
``zimports`` can be used with the pre-commit_ git hooks framework. To add
the plugin, add the following to your ``.pre-commit-config.yaml``. Note
the ``rev:`` attribute refers to a git tag or revision number of
zimports to be used, such as ``"master"`` or ``"0.1.3"``:
.. code-block:: yaml
repos:
- repo: https://github.com/sqlalchemyorg/zimports/
rev: v0.4.5
hooks:
- id: zimports
.. _pre-commit: https://pre-commit.com/
| zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/README.rst | README.rst |
from sqlalchemy import Column
from sqlalchemy import exc as sa_exc
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import testing
from sqlalchemy import Unicode
from sqlalchemy.orm import backref
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import create_session
from sqlalchemy.orm import mapper
from sqlalchemy.orm import relationship
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures
class CompileTest(fixtures.ORMTest):
"""test various mapper compilation scenarios"""
def teardown(self):
clear_mappers()
def test_with_polymorphic(self):
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('employee_id', Integer, ForeignKey(
'employees.id'), nullable=False),
Column('type', Unicode(16)))
employee = Table('employees', metadata,
Column('id', Integer, primary_key=True),
Column('name', Unicode(16), unique=True,
nullable=False))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer, ForeignKey(
"orders.id"), nullable=False),
Column('product_id', Integer, ForeignKey(
"products.id"), nullable=False))
class Order(object):
pass
class Employee(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='order')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(Employee, employee,
properties={
'orders': relationship(Order, lazy='select',
backref='employee')})
mapper(OrderProduct, orderproduct)
# this requires that the compilation of order_mapper's "surrogate
# mapper" occur after the initial setup of MapperProperty objects on
# the mapper.
configure_mappers()
def test_conflicting_backref_one(self):
"""test that conflicting backrefs raises an exception"""
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('type', Unicode(16)))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer,
ForeignKey("orders.id"), nullable=False),
Column('product_id', Integer,
ForeignKey("products.id"),
nullable=False))
class Order(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='product')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(OrderProduct, orderproduct)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_misc_one(self):
metadata = MetaData(testing.db)
node_table = Table("node", metadata,
Column('node_id', Integer, primary_key=True),
Column('name_index', Integer, nullable=True))
node_name_table = Table("node_name", metadata,
Column('node_name_id', Integer,
primary_key=True),
Column('node_id', Integer,
ForeignKey('node.node_id')),
Column('host_id', Integer,
ForeignKey('host.host_id')),
Column('name', String(64), nullable=False))
host_table = Table("host", metadata,
Column('host_id', Integer, primary_key=True),
Column('hostname', String(64), nullable=False,
unique=True))
metadata.create_all()
try:
node_table.insert().execute(node_id=1, node_index=5)
class Node(object):
pass
class NodeName(object):
pass
class Host(object):
pass
node_mapper = mapper(Node, node_table)
host_mapper = mapper(Host, host_table)
node_name_mapper = mapper(NodeName, node_name_table,
properties={
'node': relationship(
Node, backref=backref('names')),
'host': relationship(Host),
})
sess = create_session()
assert sess.query(Node).get(1).names == []
finally:
metadata.drop_all()
def test_conflicting_backref_two(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a')
})
mapper(B, b, properties={
'a': relationship(A, backref='b')
})
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_conflicting_backref_subclass(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
class C(B):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a'),
'c': relationship(C, backref='a')
})
mapper(B, b)
mapper(C, None, inherits=B)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/star_imports_two.expected.py | star_imports_two.expected.py |
import collections
import contextlib
import operator
import sys
import time
py36 = sys.version_info >= (3, 6)
py33 = sys.version_info >= (3, 3)
py35 = sys.version_info >= (3, 5)
py32 = sys.version_info >= (3, 2)
py3k = sys.version_info >= (3, 0)
py2k = sys.version_info < (3, 0)
py265 = sys.version_info >= (2, 6, 5)
jython = sys.platform.startswith("java")
pypy = hasattr(sys, "pypy_version_info")
win32 = sys.platform.startswith("win")
cpython = not pypy and not jython # TODO: something better for this ?
contextmanager = contextlib.contextmanager
dottedgetter = operator.attrgetter
namedtuple = collections.namedtuple
next = next # noqa
ArgSpec = collections.namedtuple(
"ArgSpec", ["args", "varargs", "keywords", "defaults"]
)
try:
import threading
except ImportError:
import dummy_threading as threading # noqa
# work around http://bugs.python.org/issue2646
if py265:
safe_kwarg = lambda arg: arg # noqa
else:
safe_kwarg = str
if py3k:
import base64
import builtins
import configparser
import itertools
import pickle
from functools import reduce
from inspect import getfullargspec as inspect_getfullargspec
from io import BytesIO as byte_buffer
from io import StringIO
from itertools import zip_longest
from urllib.parse import (
quote_plus,
unquote_plus,
parse_qsl,
quote,
unquote,
)
string_types = (str,)
binary_types = (bytes,)
binary_type = bytes
text_type = str
int_types = (int,)
iterbytes = iter
itertools_filterfalse = itertools.filterfalse
itertools_filter = filter
itertools_imap = map
exec_ = getattr(builtins, "exec")
import_ = getattr(builtins, "__import__")
print_ = getattr(builtins, "print")
def b(s):
return s.encode("latin-1")
def b64decode(x):
return base64.b64decode(x.encode("ascii"))
def b64encode(x):
return base64.b64encode(x).decode("ascii")
def cmp(a, b):
return (a > b) - (a < b)
def inspect_getargspec(func):
return ArgSpec(*inspect_getfullargspec(func)[0:4])
def reraise(tp, value, tb=None, cause=None):
if cause is not None:
assert cause is not value, "Same cause emitted"
value.__cause__ = cause
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
def u(s):
return s
def ue(s):
return s
if py32:
callable = callable # noqa
else:
def callable(fn): # noqa
return hasattr(fn, "__call__")
else:
import base64
import ConfigParser as configparser # noqa
import itertools
from StringIO import StringIO # noqa
from cStringIO import StringIO as byte_buffer # noqa
from inspect import getargspec as inspect_getfullargspec # noqa
from itertools import izip_longest as zip_longest # noqa
from urllib import quote # noqa
from urllib import quote_plus # noqa
from urllib import unquote # noqa
from urllib import unquote_plus # noqa
from urlparse import parse_qsl # noqa
try:
import cPickle as pickle
except ImportError:
import pickle # noqa
string_types = (basestring,) # noqa
binary_types = (bytes,)
binary_type = str
text_type = unicode # noqa
int_types = int, long # noqa
inspect_getargspec = inspect_getfullargspec
callable = callable # noqa
cmp = cmp # noqa
reduce = reduce # noqa
b64encode = base64.b64encode
b64decode = base64.b64decode
itertools_filterfalse = itertools.ifilterfalse
itertools_filter = itertools.ifilter
itertools_imap = itertools.imap
def b(s):
return s
def exec_(func_text, globals_, lcl=None):
if lcl is None:
exec("exec func_text in globals_")
else:
exec("exec func_text in globals_, lcl")
def iterbytes(buf):
return (ord(byte) for byte in buf)
def import_(*args):
if len(args) == 4:
args = args[0:3] + ([str(arg) for arg in args[3]],)
return __import__(*args)
def print_(*args, **kwargs):
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
for arg in enumerate(args):
if not isinstance(arg, basestring): # noqa
arg = str(arg)
fp.write(arg)
def u(s):
# this differs from what six does, which doesn't support non-ASCII
# strings - we only use u() with
# literal source strings, and all our source files with non-ascii
# in them (all are tests) are utf-8 encoded.
return unicode(s, "utf-8") # noqa
def ue(s):
return unicode(s, "unicode_escape") # noqa
# not as nice as that of Py3K, but at least preserves
# the code line where the issue occurred
exec(
"def reraise(tp, value, tb=None, cause=None):\n"
" if cause is not None:\n"
" assert cause is not value, 'Same cause emitted'\n"
" raise tp, value, tb\n"
)
if py35:
from inspect import formatannotation
def inspect_formatargspec(
args,
varargs=None,
varkw=None,
defaults=None,
kwonlyargs=(),
kwonlydefaults={},
annotations={},
formatarg=str,
formatvarargs=lambda name: "*" + name,
formatvarkw=lambda name: "**" + name,
formatvalue=lambda value: "=" + repr(value),
formatreturns=lambda text: " -> " + text,
formatannotation=formatannotation,
):
"""Copy formatargspec from python 3.7 standard library.
Python 3 has deprecated formatargspec and requested that Signature
be used instead, however this requires a full reimplementation
of formatargspec() in terms of creating Parameter objects and such.
Instead of introducing all the object-creation overhead and having
to reinvent from scratch, just copy their compatibility routine.
        Ultimately we would need to rewrite our "decorator" routine completely
which is not really worth it right now, until all Python 2.x support
is dropped.
"""
def formatargandannotation(arg):
result = formatarg(arg)
if arg in annotations:
result += ": " + formatannotation(annotations[arg])
return result
specs = []
if defaults:
firstdefault = len(args) - len(defaults)
for i, arg in enumerate(args):
spec = formatargandannotation(arg)
if defaults and i >= firstdefault:
spec = spec + formatvalue(defaults[i - firstdefault])
specs.append(spec)
if varargs is not None:
specs.append(formatvarargs(formatargandannotation(varargs)))
else:
if kwonlyargs:
specs.append("*")
if kwonlyargs:
for kwonlyarg in kwonlyargs:
spec = formatargandannotation(kwonlyarg)
if kwonlydefaults and kwonlyarg in kwonlydefaults:
spec += formatvalue(kwonlydefaults[kwonlyarg])
specs.append(spec)
if varkw is not None:
specs.append(formatvarkw(formatargandannotation(varkw)))
result = "(" + ", ".join(specs) + ")"
if "return" in annotations:
result += formatreturns(formatannotation(annotations["return"]))
return result
else:
from inspect import formatargspec as inspect_formatargspec # noqa
if win32 or jython:
time_func = time.clock
else:
time_func = time.time
# Fix deprecation of accessing ABCs straight from collections module
# (which will stop working in 3.8).
if py33:
import collections.abc as collections_abc
else:
import collections as collections_abc # noqa
@contextlib.contextmanager
def nested(*managers):
"""Implement contextlib.nested, mostly for unit tests.
As tests still need to run on py2.6 we can't use multiple-with yet.
Function is removed in py3k but also emits deprecation warning in 2.7
so just roll it here for everyone.
"""
exits = []
vars_ = []
exc = (None, None, None)
try:
for mgr in managers:
exit_ = mgr.__exit__
enter = mgr.__enter__
vars_.append(enter())
exits.append(exit_)
yield vars_
except:
exc = sys.exc_info()
finally:
while exits:
exit_ = exits.pop() # noqa
try:
if exit_(*exc):
exc = (None, None, None)
except:
exc = sys.exc_info()
if exc != (None, None, None):
reraise(exc[0], exc[1], exc[2])
def raise_from_cause(exception, exc_info=None):
if exc_info is None:
exc_info = sys.exc_info()
exc_type, exc_value, exc_tb = exc_info
cause = exc_value if exc_value is not exception else None
reraise(type(exception), exception, tb=exc_tb, cause=cause)
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass.
Drops the middle class upon creation.
Source: http://lucumr.pocoo.org/2013/5/21/porting-to-python-3-redux/
"""
class metaclass(meta):
__call__ = type.__call__
__init__ = type.__init__
def __new__(cls, name, this_bases, d):
if this_bases is None:
return type.__new__(cls, name, (), d)
return meta(name, bases, d)
return metaclass("temporary_class", None, {}) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/conditional_imports.expected.py | conditional_imports.expected.py |
from types import TYPE_CHECKING
from sqlalchemy import testing
if TYPE_CHECKING:
from sqlalchemy import alias
from sqlalchemy import all_
from sqlalchemy import and_
from sqlalchemy import any_
from sqlalchemy import ARRAY
from sqlalchemy import asc
from sqlalchemy import between
from sqlalchemy import BIGINT
from sqlalchemy import BigInteger
from sqlalchemy import BINARY
from sqlalchemy import Binary
from sqlalchemy import bindparam
from sqlalchemy import BLANK_SCHEMA
from sqlalchemy import BLOB
from sqlalchemy import BOOLEAN
from sqlalchemy import Boolean
from sqlalchemy import case
from sqlalchemy import cast
from sqlalchemy import CHAR
from sqlalchemy import CheckConstraint
from sqlalchemy import CLOB
from sqlalchemy import collate
from sqlalchemy import Column
from sqlalchemy import column
from sqlalchemy import ColumnDefault
from sqlalchemy import Constraint
from sqlalchemy import create_engine
from sqlalchemy import DATE
from sqlalchemy import Date
from sqlalchemy import DATETIME
from sqlalchemy import DateTime
from sqlalchemy import DDL
from sqlalchemy import DECIMAL
from sqlalchemy import DefaultClause
from sqlalchemy import delete
from sqlalchemy import desc
from sqlalchemy import distinct
from sqlalchemy import engine_from_config
from sqlalchemy import Enum
from sqlalchemy import exc as sa_exc
from sqlalchemy import except_
from sqlalchemy import except_all
from sqlalchemy import exists
from sqlalchemy import extract
from sqlalchemy import false
from sqlalchemy import FetchedValue
from sqlalchemy import FLOAT
from sqlalchemy import Float
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import funcfilter
from sqlalchemy import Index
from sqlalchemy import insert
from sqlalchemy import inspect
from sqlalchemy import INT
from sqlalchemy import INTEGER
from sqlalchemy import Integer
from sqlalchemy import intersect
from sqlalchemy import intersect_all
from sqlalchemy import Interval
from sqlalchemy import join
from sqlalchemy import JSON
from sqlalchemy import LargeBinary
from sqlalchemy import lateral
from sqlalchemy import literal
from sqlalchemy import literal_column
from sqlalchemy import MetaData
from sqlalchemy import modifier
from sqlalchemy import NCHAR
from sqlalchemy import not_
from sqlalchemy import null
from sqlalchemy import nullsfirst
from sqlalchemy import nullslast
from sqlalchemy import NUMERIC
from sqlalchemy import Numeric
from sqlalchemy import NVARCHAR
from sqlalchemy import or_
from sqlalchemy import outerjoin
from sqlalchemy import outparam
from sqlalchemy import over
from sqlalchemy import PassiveDefault
from sqlalchemy import PickleType
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import REAL
from sqlalchemy import select
from sqlalchemy import Sequence
from sqlalchemy import SMALLINT
from sqlalchemy import SmallInteger
from sqlalchemy import String
from sqlalchemy import subquery
from sqlalchemy import Table
from sqlalchemy import table
from sqlalchemy import tablesample
from sqlalchemy import TEXT
from sqlalchemy import Text
from sqlalchemy import text
from sqlalchemy import ThreadLocalMetaData
from sqlalchemy import TIME
from sqlalchemy import Time
from sqlalchemy import TIMESTAMP
from sqlalchemy import true
from sqlalchemy import tuple_
from sqlalchemy import type_coerce
from sqlalchemy import TypeDecorator
from sqlalchemy import Unicode
from sqlalchemy import UnicodeText
from sqlalchemy import union
from sqlalchemy import union_all
from sqlalchemy import UniqueConstraint
from sqlalchemy import update
from sqlalchemy import VARBINARY
from sqlalchemy import VARCHAR
from sqlalchemy import within_group
from sqlalchemy.orm import aliased
from sqlalchemy.orm import AliasOption
from sqlalchemy.orm import AttributeExtension
from sqlalchemy.orm import backref
from sqlalchemy.orm import Bundle
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import column_property
from sqlalchemy.orm import ColumnProperty
from sqlalchemy.orm import comparable_property
from sqlalchemy.orm import ComparableProperty
from sqlalchemy.orm import compile_mappers
from sqlalchemy.orm import composite
from sqlalchemy.orm import CompositeProperty
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import contains_alias
from sqlalchemy.orm import contains_eager
from sqlalchemy.orm import create_session
from sqlalchemy.orm import defaultload
from sqlalchemy.orm import defer
from sqlalchemy.orm import deferred
from sqlalchemy.orm import dynamic_loader
from sqlalchemy.orm import eagerload
from sqlalchemy.orm import eagerload_all
from sqlalchemy.orm import EXT_CONTINUE
from sqlalchemy.orm import EXT_SKIP
from sqlalchemy.orm import EXT_STOP
from sqlalchemy.orm import foreign
from sqlalchemy.orm import immediateload
from sqlalchemy.orm import join
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import joinedload_all
from sqlalchemy.orm import lazyload
from sqlalchemy.orm import lazyload_all
from sqlalchemy.orm import Load
from sqlalchemy.orm import load_only
from sqlalchemy.orm import make_transient
from sqlalchemy.orm import make_transient_to_detached
from sqlalchemy.orm import Mapper
from sqlalchemy.orm import mapper
from sqlalchemy.orm import MapperExtension
from sqlalchemy.orm import noload
from sqlalchemy.orm import object_mapper
from sqlalchemy.orm import object_session
from sqlalchemy.orm import outerjoin
from sqlalchemy.orm import polymorphic_union
from sqlalchemy.orm import PropComparator
from sqlalchemy.orm import public_factory
from sqlalchemy.orm import Query
from sqlalchemy.orm import query_expression
from sqlalchemy.orm import raiseload
from sqlalchemy.orm import reconstructor
from sqlalchemy.orm import relation
from sqlalchemy.orm import relationship
from sqlalchemy.orm import RelationshipProperty
from sqlalchemy.orm import remote
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import selectin_polymorphic
from sqlalchemy.orm import selectinload
from sqlalchemy.orm import selectinload_all
from sqlalchemy.orm import Session
from sqlalchemy.orm import SessionExtension
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import subqueryload
from sqlalchemy.orm import subqueryload_all
from sqlalchemy.orm import synonym
from sqlalchemy.orm import SynonymProperty
from sqlalchemy.orm import undefer
from sqlalchemy.orm import undefer_group
from sqlalchemy.orm import validates
from sqlalchemy.orm import was_deleted
from sqlalchemy.orm import with_expression
from sqlalchemy.orm import with_parent
from sqlalchemy.orm import with_polymorphic
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/type_checking3.expected.py | type_checking3.expected.py |
import collections
import contextlib
import operator
import sys
import time
py36 = sys.version_info >= (3, 6)
py33 = sys.version_info >= (3, 3)
py35 = sys.version_info >= (3, 5)
py32 = sys.version_info >= (3, 2)
py3k = sys.version_info >= (3, 0)
py2k = sys.version_info < (3, 0)
py265 = sys.version_info >= (2, 6, 5)
jython = sys.platform.startswith("java")
pypy = hasattr(sys, "pypy_version_info")
win32 = sys.platform.startswith("win")
cpython = not pypy and not jython # TODO: something better for this ?
contextmanager = contextlib.contextmanager
dottedgetter = operator.attrgetter
namedtuple = collections.namedtuple
next = next # noqa
ArgSpec = collections.namedtuple(
"ArgSpec", ["args", "varargs", "keywords", "defaults"]
)
try:
import threading
except ImportError:
import dummy_threading as threading # noqa
# work around http://bugs.python.org/issue2646
if py265:
safe_kwarg = lambda arg: arg # noqa
else:
safe_kwarg = str
if py3k:
import base64
import builtins
import configparser
import itertools
import pickle
from functools import reduce
from inspect import getfullargspec as inspect_getfullargspec
from io import BytesIO as byte_buffer
from io import StringIO
from itertools import zip_longest
from urllib.parse import (
quote_plus,
unquote_plus,
parse_qsl,
quote,
unquote,
)
string_types = (str,)
binary_types = (bytes,)
binary_type = bytes
text_type = str
int_types = (int,)
iterbytes = iter
itertools_filterfalse = itertools.filterfalse
itertools_filter = filter
itertools_imap = map
exec_ = getattr(builtins, "exec")
import_ = getattr(builtins, "__import__")
print_ = getattr(builtins, "print")
def b(s):
return s.encode("latin-1")
def b64decode(x):
return base64.b64decode(x.encode("ascii"))
def b64encode(x):
return base64.b64encode(x).decode("ascii")
def cmp(a, b):
return (a > b) - (a < b)
def inspect_getargspec(func):
return ArgSpec(*inspect_getfullargspec(func)[0:4])
def reraise(tp, value, tb=None, cause=None):
if cause is not None:
assert cause is not value, "Same cause emitted"
value.__cause__ = cause
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
def u(s):
return s
def ue(s):
return s
if py32:
callable = callable # noqa
else:
def callable(fn): # noqa
return hasattr(fn, "__call__")
else:
import base64
import ConfigParser as configparser # noqa
import itertools
from StringIO import StringIO # noqa
from cStringIO import StringIO as byte_buffer # noqa
from inspect import getargspec as inspect_getfullargspec # noqa
from itertools import izip_longest as zip_longest # noqa
from urllib import quote # noqa
from urllib import quote_plus # noqa
from urllib import unquote # noqa
from urllib import unquote_plus # noqa
from urlparse import parse_qsl # noqa
try:
import cPickle as pickle
except ImportError:
import pickle # noqa
string_types = (basestring,) # noqa
binary_types = (bytes,)
binary_type = str
text_type = unicode # noqa
int_types = int, long # noqa
inspect_getargspec = inspect_getfullargspec
callable = callable # noqa
cmp = cmp # noqa
reduce = reduce # noqa
b64encode = base64.b64encode
b64decode = base64.b64decode
itertools_filterfalse = itertools.ifilterfalse
itertools_filter = itertools.ifilter
itertools_imap = itertools.imap
def b(s):
return s
def exec_(func_text, globals_, lcl=None):
if lcl is None:
exec("exec func_text in globals_")
else:
exec("exec func_text in globals_, lcl")
def iterbytes(buf):
return (ord(byte) for byte in buf)
def import_(*args):
if len(args) == 4:
args = args[0:3] + ([str(arg) for arg in args[3]],)
return __import__(*args)
def print_(*args, **kwargs):
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
for arg in enumerate(args):
if not isinstance(arg, basestring): # noqa
arg = str(arg)
fp.write(arg)
def u(s):
# this differs from what six does, which doesn't support non-ASCII
# strings - we only use u() with
# literal source strings, and all our source files with non-ascii
# in them (all are tests) are utf-8 encoded.
return unicode(s, "utf-8") # noqa
def ue(s):
return unicode(s, "unicode_escape") # noqa
# not as nice as that of Py3K, but at least preserves
# the code line where the issue occurred
exec(
"def reraise(tp, value, tb=None, cause=None):\n"
" if cause is not None:\n"
" assert cause is not value, 'Same cause emitted'\n"
" raise tp, value, tb\n"
)
if py35:
from inspect import formatannotation
def inspect_formatargspec(
args,
varargs=None,
varkw=None,
defaults=None,
kwonlyargs=(),
kwonlydefaults={},
annotations={},
formatarg=str,
formatvarargs=lambda name: "*" + name,
formatvarkw=lambda name: "**" + name,
formatvalue=lambda value: "=" + repr(value),
formatreturns=lambda text: " -> " + text,
formatannotation=formatannotation,
):
"""Copy formatargspec from python 3.7 standard library.
Python 3 has deprecated formatargspec and requested that Signature
be used instead, however this requires a full reimplementation
of formatargspec() in terms of creating Parameter objects and such.
Instead of introducing all the object-creation overhead and having
to reinvent from scratch, just copy their compatibility routine.
        Ultimately we would need to rewrite our "decorator" routine completely
which is not really worth it right now, until all Python 2.x support
is dropped.
"""
def formatargandannotation(arg):
result = formatarg(arg)
if arg in annotations:
result += ": " + formatannotation(annotations[arg])
return result
specs = []
if defaults:
firstdefault = len(args) - len(defaults)
for i, arg in enumerate(args):
spec = formatargandannotation(arg)
if defaults and i >= firstdefault:
spec = spec + formatvalue(defaults[i - firstdefault])
specs.append(spec)
if varargs is not None:
specs.append(formatvarargs(formatargandannotation(varargs)))
else:
if kwonlyargs:
specs.append("*")
if kwonlyargs:
for kwonlyarg in kwonlyargs:
spec = formatargandannotation(kwonlyarg)
if kwonlydefaults and kwonlyarg in kwonlydefaults:
spec += formatvalue(kwonlydefaults[kwonlyarg])
specs.append(spec)
if varkw is not None:
specs.append(formatvarkw(formatargandannotation(varkw)))
result = "(" + ", ".join(specs) + ")"
if "return" in annotations:
result += formatreturns(formatannotation(annotations["return"]))
return result
else:
from inspect import formatargspec as inspect_formatargspec # noqa
if win32 or jython:
time_func = time.clock
else:
time_func = time.time
# Fix deprecation of accessing ABCs straight from collections module
# (which will stop working in 3.8).
if py33:
import collections.abc as collections_abc
else:
import collections as collections_abc # noqa
@contextlib.contextmanager
def nested(*managers):
"""Implement contextlib.nested, mostly for unit tests.
As tests still need to run on py2.6 we can't use multiple-with yet.
Function is removed in py3k but also emits deprecation warning in 2.7
so just roll it here for everyone.
"""
exits = []
vars_ = []
exc = (None, None, None)
try:
for mgr in managers:
exit_ = mgr.__exit__
enter = mgr.__enter__
vars_.append(enter())
exits.append(exit_)
yield vars_
except:
exc = sys.exc_info()
finally:
while exits:
exit_ = exits.pop() # noqa
try:
if exit_(*exc):
exc = (None, None, None)
except:
exc = sys.exc_info()
if exc != (None, None, None):
reraise(exc[0], exc[1], exc[2])
def raise_from_cause(exception, exc_info=None):
if exc_info is None:
exc_info = sys.exc_info()
exc_type, exc_value, exc_tb = exc_info
cause = exc_value if exc_value is not exception else None
reraise(type(exception), exception, tb=exc_tb, cause=cause)
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass.
Drops the middle class upon creation.
Source: http://lucumr.pocoo.org/2013/5/21/porting-to-python-3-redux/
"""
class metaclass(meta):
__call__ = type.__call__
__init__ = type.__init__
def __new__(cls, name, this_bases, d):
if this_bases is None:
return type.__new__(cls, name, (), d)
return meta(name, bases, d)
return metaclass("temporary_class", None, {}) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/conditional_imports.py | conditional_imports.py |
from typing import TYPE_CHECKING
from .sql.base import SchemaVisitor
from .sql.ddl import _CreateDropBase
from .sql.ddl import _DDLCompiles
from .sql.ddl import _DropView
from .sql.ddl import AddConstraint
from .sql.ddl import CreateColumn
from .sql.ddl import CreateIndex
from .sql.ddl import CreateSchema
from .sql.ddl import CreateSequence
from .sql.ddl import CreateTable
from .sql.ddl import DDL
from .sql.ddl import DDLBase
from .sql.ddl import DDLElement
from .sql.ddl import DropColumnComment
from .sql.ddl import DropConstraint
from .sql.ddl import DropIndex
from .sql.ddl import DropSchema
from .sql.ddl import DropSequence
from .sql.ddl import DropTable
from .sql.ddl import DropTableComment
from .sql.ddl import SetColumnComment
from .sql.ddl import SetTableComment
from .sql.ddl import sort_tables
from .sql.ddl import sort_tables_and_constraints
if TYPE_CHECKING:
from .sql.naming import conv
from .sql.schema import _get_table_key
from .sql.schema import BLANK_SCHEMA
from .sql.schema import CheckConstraint
from .sql.schema import Column
from .sql.schema import ColumnCollectionConstraint
from .sql.schema import ColumnCollectionMixin
from .sql.schema import ColumnDefault
from .sql.schema import Constraint
from .sql.schema import DefaultClause
from .sql.schema import DefaultGenerator
from .sql.schema import FetchedValue
from .sql.schema import ForeignKey
from .sql.schema import ForeignKeyConstraint
from .sql.schema import Index
from .sql.schema import MetaData
from .sql.schema import PassiveDefault
from .sql.schema import PrimaryKeyConstraint
from .sql.schema import SchemaItem
from .sql.schema import Sequence
from .sql.schema import Table
from .sql.schema import ThreadLocalMetaData
from .sql.schema import UniqueConstraint
else:
from .sql.c_naming import conv
from .sql.c_schema import _get_table_key
from .sql.c_schema import BLANK_SCHEMA
from .sql.c_schema import CheckConstraint
from .sql.c_schema import Column
from .sql.c_schema import ColumnCollectionConstraint
from .sql.c_schema import ColumnCollectionMixin
from .sql.c_schema import ColumnDefault
from .sql.c_schema import Constraint
from .sql.c_schema import DefaultClause
from .sql.c_schema import DefaultGenerator
from .sql.c_schema import FetchedValue
from .sql.c_schema import ForeignKey
from .sql.c_schema import ForeignKeyConstraint
from .sql.c_schema import Index
from .sql.c_schema import MetaData
from .sql.c_schema import PassiveDefault
from .sql.c_schema import PrimaryKeyConstraint
from .sql.c_schema import SchemaItem
from .sql.c_schema import Sequence
from .sql.c_schema import Table
from .sql.c_schema import ThreadLocalMetaData
from .sql.c_schema import UniqueConstraint | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/type_checking6.expected.py | type_checking6.expected.py |
from sqlalchemy import *
from sqlalchemy import exc as sa_exc
from sqlalchemy.orm import *
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures
from sqlalchemy import testing
class CompileTest(fixtures.ORMTest):
"""test various mapper compilation scenarios"""
def teardown(self):
clear_mappers()
def test_with_polymorphic(self):
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('employee_id', Integer, ForeignKey(
'employees.id'), nullable=False),
Column('type', Unicode(16)))
employee = Table('employees', metadata,
Column('id', Integer, primary_key=True),
Column('name', Unicode(16), unique=True,
nullable=False))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer, ForeignKey(
"orders.id"), nullable=False),
Column('product_id', Integer, ForeignKey(
"products.id"), nullable=False))
class Order(object):
pass
class Employee(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='order')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(Employee, employee,
properties={
'orders': relationship(Order, lazy='select',
backref='employee')})
mapper(OrderProduct, orderproduct)
# this requires that the compilation of order_mapper's "surrogate
# mapper" occur after the initial setup of MapperProperty objects on
# the mapper.
configure_mappers()
def test_conflicting_backref_one(self):
"""test that conflicting backrefs raises an exception"""
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('type', Unicode(16)))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer,
ForeignKey("orders.id"), nullable=False),
Column('product_id', Integer,
ForeignKey("products.id"),
nullable=False))
class Order(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='product')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(OrderProduct, orderproduct)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_misc_one(self):
metadata = MetaData(testing.db)
node_table = Table("node", metadata,
Column('node_id', Integer, primary_key=True),
Column('name_index', Integer, nullable=True))
node_name_table = Table("node_name", metadata,
Column('node_name_id', Integer,
primary_key=True),
Column('node_id', Integer,
ForeignKey('node.node_id')),
Column('host_id', Integer,
ForeignKey('host.host_id')),
Column('name', String(64), nullable=False))
host_table = Table("host", metadata,
Column('host_id', Integer, primary_key=True),
Column('hostname', String(64), nullable=False,
unique=True))
metadata.create_all()
try:
node_table.insert().execute(node_id=1, node_index=5)
class Node(object):
pass
class NodeName(object):
pass
class Host(object):
pass
node_mapper = mapper(Node, node_table)
host_mapper = mapper(Host, host_table)
node_name_mapper = mapper(NodeName, node_name_table,
properties={
'node': relationship(
Node, backref=backref('names')),
'host': relationship(Host),
})
sess = create_session()
assert sess.query(Node).get(1).names == []
finally:
metadata.drop_all()
def test_conflicting_backref_two(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a')
})
mapper(B, b, properties={
'a': relationship(A, backref='b')
})
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_conflicting_backref_subclass(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
class C(B):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a'),
'c': relationship(C, backref='a')
})
mapper(B, b)
mapper(C, None, inherits=B)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/star_imports_two.py | star_imports_two.py |
from sqlalchemy import Column
from sqlalchemy import exc as sa_exc
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import testing
from sqlalchemy import Unicode
from sqlalchemy.orm import backref
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import create_session
from sqlalchemy.orm import mapper
from sqlalchemy.orm import relationship
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures
class CompileTest(fixtures.ORMTest):
"""test various mapper compilation scenarios"""
def teardown(self):
clear_mappers()
def test_with_polymorphic(self):
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('employee_id', Integer, ForeignKey(
'employees.id'), nullable=False),
Column('type', Unicode(16)))
employee = Table('employees', metadata,
Column('id', Integer, primary_key=True),
Column('name', Unicode(16), unique=True,
nullable=False))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer, ForeignKey(
"orders.id"), nullable=False),
Column('product_id', Integer, ForeignKey(
"products.id"), nullable=False))
class Order(object):
pass
class Employee(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='order')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(Employee, employee,
properties={
'orders': relationship(Order, lazy='select',
backref='employee')})
mapper(OrderProduct, orderproduct)
# this requires that the compilation of order_mapper's "surrogate
# mapper" occur after the initial setup of MapperProperty objects on
# the mapper.
configure_mappers()
def test_conflicting_backref_one(self):
"""test that conflicting backrefs raises an exception"""
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('type', Unicode(16)))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer,
ForeignKey("orders.id"), nullable=False),
Column('product_id', Integer,
ForeignKey("products.id"),
nullable=False))
class Order(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='product')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(OrderProduct, orderproduct)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_misc_one(self):
metadata = MetaData(testing.db)
node_table = Table("node", metadata,
Column('node_id', Integer, primary_key=True),
Column('name_index', Integer, nullable=True))
node_name_table = Table("node_name", metadata,
Column('node_name_id', Integer,
primary_key=True),
Column('node_id', Integer,
ForeignKey('node.node_id')),
Column('host_id', Integer,
ForeignKey('host.host_id')),
Column('name', String(64), nullable=False))
host_table = Table("host", metadata,
Column('host_id', Integer, primary_key=True),
Column('hostname', String(64), nullable=False,
unique=True))
metadata.create_all()
try:
node_table.insert().execute(node_id=1, node_index=5)
class Node(object):
pass
class NodeName(object):
pass
class Host(object):
pass
node_mapper = mapper(Node, node_table)
host_mapper = mapper(Host, host_table)
node_name_mapper = mapper(NodeName, node_name_table,
properties={
'node': relationship(
Node, backref=backref('names')),
'host': relationship(Host),
})
sess = create_session()
assert sess.query(Node).get(1).names == []
finally:
metadata.drop_all()
def test_conflicting_backref_two(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a')
})
mapper(B, b, properties={
'a': relationship(A, backref='b')
})
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_conflicting_backref_subclass(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
class C(B):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a'),
'c': relationship(C, backref='a')
})
mapper(B, b)
mapper(C, None, inherits=B)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/very_long_import.expected.py | very_long_import.expected.py |
from types import TYPE_CHECKING
if TYPE_CHECKING:
from sqlalchemy import alias
from sqlalchemy import all_
from sqlalchemy import and_
from sqlalchemy import any_
from sqlalchemy import ARRAY
from sqlalchemy import asc
from sqlalchemy import between
from sqlalchemy import BIGINT
from sqlalchemy import BigInteger
from sqlalchemy import BINARY
from sqlalchemy import Binary
from sqlalchemy import bindparam
from sqlalchemy import BLANK_SCHEMA
from sqlalchemy import BLOB
from sqlalchemy import BOOLEAN
from sqlalchemy import Boolean
from sqlalchemy import case
from sqlalchemy import cast
from sqlalchemy import CHAR
from sqlalchemy import CheckConstraint
from sqlalchemy import CLOB
from sqlalchemy import collate
from sqlalchemy import Column
from sqlalchemy import column
from sqlalchemy import ColumnDefault
from sqlalchemy import Constraint
from sqlalchemy import create_engine
from sqlalchemy import DATE
from sqlalchemy import Date
from sqlalchemy import DATETIME
from sqlalchemy import DateTime
from sqlalchemy import DDL
from sqlalchemy import DECIMAL
from sqlalchemy import DefaultClause
from sqlalchemy import delete
from sqlalchemy import desc
from sqlalchemy import distinct
from sqlalchemy import engine_from_config
from sqlalchemy import Enum
from sqlalchemy import exc as sa_exc
from sqlalchemy import except_
from sqlalchemy import except_all
from sqlalchemy import exists
from sqlalchemy import extract
from sqlalchemy import false
from sqlalchemy import FetchedValue
from sqlalchemy import FLOAT
from sqlalchemy import Float
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import funcfilter
from sqlalchemy import Index
from sqlalchemy import insert
from sqlalchemy import inspect
from sqlalchemy import INT
from sqlalchemy import INTEGER
from sqlalchemy import Integer
from sqlalchemy import intersect
from sqlalchemy import intersect_all
from sqlalchemy import Interval
from sqlalchemy import join
from sqlalchemy import JSON
from sqlalchemy import LargeBinary
from sqlalchemy import lateral
from sqlalchemy import literal
from sqlalchemy import literal_column
from sqlalchemy import MetaData
from sqlalchemy import modifier
from sqlalchemy import NCHAR
from sqlalchemy import not_
from sqlalchemy import null
from sqlalchemy import nullsfirst
from sqlalchemy import nullslast
from sqlalchemy import NUMERIC
from sqlalchemy import Numeric
from sqlalchemy import NVARCHAR
from sqlalchemy import or_
from sqlalchemy import outerjoin
from sqlalchemy import outparam
from sqlalchemy import over
from sqlalchemy import PassiveDefault
from sqlalchemy import PickleType
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import REAL
from sqlalchemy import select
from sqlalchemy import Sequence
from sqlalchemy import SMALLINT
from sqlalchemy import SmallInteger
from sqlalchemy import String
from sqlalchemy import subquery
from sqlalchemy import Table
from sqlalchemy import table
from sqlalchemy import tablesample
from sqlalchemy import TEXT
from sqlalchemy import Text
from sqlalchemy import text
from sqlalchemy import ThreadLocalMetaData
from sqlalchemy import TIME
from sqlalchemy import Time
from sqlalchemy import TIMESTAMP
from sqlalchemy import true
from sqlalchemy import tuple_
from sqlalchemy import type_coerce
from sqlalchemy import TypeDecorator
from sqlalchemy import Unicode
from sqlalchemy import UnicodeText
from sqlalchemy import union
from sqlalchemy import union_all
from sqlalchemy import UniqueConstraint
from sqlalchemy import update
from sqlalchemy import VARBINARY
from sqlalchemy import VARCHAR
from sqlalchemy import within_group
from sqlalchemy.orm import aliased
from sqlalchemy.orm import AliasOption
from sqlalchemy.orm import AttributeExtension
from sqlalchemy.orm import backref
from sqlalchemy.orm import Bundle
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm import clear_mappers
from sqlalchemy.orm import column_property
from sqlalchemy.orm import ColumnProperty
from sqlalchemy.orm import comparable_property
from sqlalchemy.orm import ComparableProperty
from sqlalchemy.orm import compile_mappers
from sqlalchemy.orm import composite
from sqlalchemy.orm import CompositeProperty
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import contains_alias
from sqlalchemy.orm import contains_eager
from sqlalchemy.orm import create_session
from sqlalchemy.orm import defaultload
from sqlalchemy.orm import defer
from sqlalchemy.orm import deferred
from sqlalchemy.orm import dynamic_loader
from sqlalchemy.orm import eagerload
from sqlalchemy.orm import eagerload_all
from sqlalchemy.orm import EXT_CONTINUE
from sqlalchemy.orm import EXT_SKIP
from sqlalchemy.orm import EXT_STOP
from sqlalchemy.orm import foreign
from sqlalchemy.orm import immediateload
from sqlalchemy.orm import join
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import joinedload_all
from sqlalchemy.orm import lazyload
from sqlalchemy.orm import lazyload_all
from sqlalchemy.orm import Load
from sqlalchemy.orm import load_only
from sqlalchemy.orm import make_transient
from sqlalchemy.orm import make_transient_to_detached
from sqlalchemy.orm import Mapper
from sqlalchemy.orm import mapper
from sqlalchemy.orm import MapperExtension
from sqlalchemy.orm import noload
from sqlalchemy.orm import object_mapper
from sqlalchemy.orm import object_session
from sqlalchemy.orm import outerjoin
from sqlalchemy.orm import polymorphic_union
from sqlalchemy.orm import PropComparator
from sqlalchemy.orm import public_factory
from sqlalchemy.orm import Query
from sqlalchemy.orm import query_expression
from sqlalchemy.orm import raiseload
from sqlalchemy.orm import reconstructor
from sqlalchemy.orm import relation
from sqlalchemy.orm import relationship
from sqlalchemy.orm import RelationshipProperty
from sqlalchemy.orm import remote
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import selectin_polymorphic
from sqlalchemy.orm import selectinload
from sqlalchemy.orm import selectinload_all
from sqlalchemy.orm import Session
from sqlalchemy.orm import SessionExtension
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import subqueryload
from sqlalchemy.orm import subqueryload_all
from sqlalchemy.orm import synonym
from sqlalchemy.orm import SynonymProperty
from sqlalchemy.orm import undefer
from sqlalchemy.orm import undefer_group
from sqlalchemy.orm import validates
from sqlalchemy.orm import was_deleted
from sqlalchemy.orm import with_expression
from sqlalchemy.orm import with_parent
from sqlalchemy.orm import with_polymorphic
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/type_checking3.no_unused_types.py | type_checking3.no_unused_types.py |
from sqlalchemy import ARRAY, BIGINT, BINARY, BLANK_SCHEMA, BLOB, BOOLEAN, BigInteger, Binary, Boolean, CHAR, CLOB, CheckConstraint, Column, ColumnDefault, Constraint, DATE, DATETIME, DDL, DECIMAL, Date, DateTime, DefaultClause, Enum, FLOAT, FetchedValue, Float, ForeignKey, ForeignKeyConstraint, INT, INTEGER, Index, Integer, Interval, JSON, LargeBinary, MetaData, NCHAR, NUMERIC, NVARCHAR, Numeric, PassiveDefault, PickleType, PrimaryKeyConstraint, REAL, SMALLINT, Sequence, SmallInteger, String, TEXT, TIME, TIMESTAMP, Table, Text, ThreadLocalMetaData, Time, TypeDecorator, Unicode, UnicodeText, UniqueConstraint, VARBINARY, VARCHAR, alias, all_, and_, any_, asc, between, bindparam, case, cast, collate, column, create_engine, delete, desc, distinct, engine_from_config, except_, except_all, exists, extract, false, func, funcfilter, insert, inspect, intersect, intersect_all, join, lateral, literal, literal_column, modifier, not_, null, nullsfirst, nullslast, or_, outerjoin, outparam, over, select, subquery, table, tablesample, text, true, tuple_, type_coerce, union, union_all, update, within_group
from sqlalchemy import exc as sa_exc
from sqlalchemy.orm import AliasOption, AttributeExtension, Bundle, ColumnProperty, ComparableProperty, CompositeProperty, EXT_CONTINUE, EXT_SKIP, EXT_STOP, Load, Mapper, MapperExtension, PropComparator, Query, RelationshipProperty, Session, SessionExtension, SynonymProperty, aliased, backref, class_mapper, clear_mappers, column_property, comparable_property, compile_mappers, composite, configure_mappers, contains_alias, contains_eager, create_session, defaultload, defer, deferred, dynamic_loader, eagerload, eagerload_all, foreign, immediateload, join, joinedload, joinedload_all, lazyload, lazyload_all, load_only, make_transient, make_transient_to_detached, mapper, noload, object_mapper, object_session, outerjoin, polymorphic_union, public_factory, query_expression, raiseload, reconstructor, relation, relationship, remote, scoped_session, selectin_polymorphic, selectinload, selectinload_all, sessionmaker, subqueryload, subqueryload_all, synonym, undefer, undefer_group, validates, was_deleted, with_expression, with_parent, with_polymorphic
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures
from sqlalchemy import testing
class CompileTest(fixtures.ORMTest):
"""test various mapper compilation scenarios"""
def teardown(self):
clear_mappers()
def test_with_polymorphic(self):
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('employee_id', Integer, ForeignKey(
'employees.id'), nullable=False),
Column('type', Unicode(16)))
employee = Table('employees', metadata,
Column('id', Integer, primary_key=True),
Column('name', Unicode(16), unique=True,
nullable=False))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer, ForeignKey(
"orders.id"), nullable=False),
Column('product_id', Integer, ForeignKey(
"products.id"), nullable=False))
class Order(object):
pass
class Employee(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='order')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(Employee, employee,
properties={
'orders': relationship(Order, lazy='select',
backref='employee')})
mapper(OrderProduct, orderproduct)
# this requires that the compilation of order_mapper's "surrogate
# mapper" occur after the initial setup of MapperProperty objects on
# the mapper.
configure_mappers()
def test_conflicting_backref_one(self):
"""test that conflicting backrefs raises an exception"""
metadata = MetaData(testing.db)
order = Table('orders', metadata,
Column('id', Integer, primary_key=True),
Column('type', Unicode(16)))
product = Table('products', metadata,
Column('id', Integer, primary_key=True))
orderproduct = Table('orderproducts', metadata,
Column('id', Integer, primary_key=True),
Column('order_id', Integer,
ForeignKey("orders.id"), nullable=False),
Column('product_id', Integer,
ForeignKey("products.id"),
nullable=False))
class Order(object):
pass
class Product(object):
pass
class OrderProduct(object):
pass
order_join = order.select().alias('pjoin')
order_mapper = mapper(Order, order,
with_polymorphic=('*', order_join),
polymorphic_on=order_join.c.type,
polymorphic_identity='order',
properties={
'orderproducts': relationship(
OrderProduct, lazy='select',
backref='product')}
)
mapper(Product, product,
properties={
'orderproducts': relationship(OrderProduct, lazy='select',
backref='product')}
)
mapper(OrderProduct, orderproduct)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_misc_one(self):
metadata = MetaData(testing.db)
node_table = Table("node", metadata,
Column('node_id', Integer, primary_key=True),
Column('name_index', Integer, nullable=True))
node_name_table = Table("node_name", metadata,
Column('node_name_id', Integer,
primary_key=True),
Column('node_id', Integer,
ForeignKey('node.node_id')),
Column('host_id', Integer,
ForeignKey('host.host_id')),
Column('name', String(64), nullable=False))
host_table = Table("host", metadata,
Column('host_id', Integer, primary_key=True),
Column('hostname', String(64), nullable=False,
unique=True))
metadata.create_all()
try:
node_table.insert().execute(node_id=1, node_index=5)
class Node(object):
pass
class NodeName(object):
pass
class Host(object):
pass
node_mapper = mapper(Node, node_table)
host_mapper = mapper(Host, host_table)
node_name_mapper = mapper(NodeName, node_name_table,
properties={
'node': relationship(
Node, backref=backref('names')),
'host': relationship(Host),
})
sess = create_session()
assert sess.query(Node).get(1).names == []
finally:
metadata.drop_all()
def test_conflicting_backref_two(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a')
})
mapper(B, b, properties={
'a': relationship(A, backref='b')
})
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
)
def test_conflicting_backref_subclass(self):
meta = MetaData()
a = Table('a', meta, Column('id', Integer, primary_key=True))
b = Table('b', meta, Column('id', Integer, primary_key=True),
Column('a_id', Integer, ForeignKey('a.id')))
class A(object):
pass
class B(object):
pass
class C(B):
pass
mapper(A, a, properties={
'b': relationship(B, backref='a'),
'c': relationship(C, backref='a')
})
mapper(B, b)
mapper(C, None, inherits=B)
assert_raises_message(
sa_exc.ArgumentError,
"Error creating backref",
configure_mappers
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/very_long_import.py | very_long_import.py |
from sqlalchemy.testing import eq_, assert_raises, assert_raises_message
import operator
from sqlalchemy import *
from sqlalchemy import exc as sa_exc, util
from sqlalchemy.sql import compiler, table, column
from sqlalchemy.engine import default
from sqlalchemy.orm import *
from sqlalchemy.orm import attributes
from sqlalchemy.testing import eq_
import sqlalchemy as sa
from sqlalchemy import testing
from sqlalchemy.testing import AssertsCompiledSQL, engines
from sqlalchemy.testing.schema import Column
from test.orm import _fixtures
from sqlalchemy.testing import fixtures
from sqlalchemy.orm.util import join, outerjoin, with_parent
class QueryTest(_fixtures.FixtureTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def setup_mappers(cls):
Node, composite_pk_table, users, Keyword, items, Dingaling, \
order_items, item_keywords, Item, User, dingalings, \
Address, keywords, CompositePk, nodes, Order, orders, \
addresses = cls.classes.Node, \
cls.tables.composite_pk_table, cls.tables.users, \
cls.classes.Keyword, cls.tables.items, \
cls.classes.Dingaling, cls.tables.order_items, \
cls.tables.item_keywords, cls.classes.Item, \
cls.classes.User, cls.tables.dingalings, \
cls.classes.Address, cls.tables.keywords, \
cls.classes.CompositePk, cls.tables.nodes, \
cls.classes.Order, cls.tables.orders, cls.tables.addresses
mapper(User, users, properties={
'addresses': relationship(Address, backref='user',
order_by=addresses.c.id),
# o2m, m2o
'orders': relationship(Order, backref='user', order_by=orders.c.id)
})
mapper(Address, addresses, properties={
# o2o
'dingaling': relationship(Dingaling, uselist=False,
backref="address")
})
mapper(Dingaling, dingalings)
mapper(Order, orders, properties={
# m2m
'items': relationship(Item, secondary=order_items,
order_by=items.c.id),
'address': relationship(Address), # m2o
})
mapper(Item, items, properties={
'keywords': relationship(Keyword, secondary=item_keywords) # m2m
})
mapper(Keyword, keywords)
mapper(Node, nodes, properties={
'children': relationship(Node,
backref=backref(
'parent', remote_side=[nodes.c.id]))
})
mapper(CompositePk, composite_pk_table)
configure_mappers()
class InheritedJoinTest(fixtures.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
Table('companies', metadata,
Column('company_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)))
Table('people', metadata,
Column('person_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('company_id', Integer,
ForeignKey('companies.company_id')),
Column('name', String(50)),
Column('type', String(30)))
Table('engineers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('engineer_name', String(50)),
Column('primary_language', String(50)))
Table('machines', metadata,
Column('machine_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('engineer_id', Integer,
ForeignKey('engineers.person_id')))
Table('managers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('manager_name', String(50)))
Table('boss', metadata,
Column('boss_id', Integer, ForeignKey(
'managers.person_id'), primary_key=True),
Column('golf_swing', String(30)),
)
Table('paperwork', metadata,
Column('paperwork_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('description', String(50)),
Column('person_id', Integer, ForeignKey('people.person_id')))
@classmethod
def setup_classes(cls):
paperwork, people, companies, boss, managers, machines, engineers = (
cls.tables.paperwork,
cls.tables.people,
cls.tables.companies,
cls.tables.boss,
cls.tables.managers,
cls.tables.machines,
cls.tables.engineers)
class Company(cls.Comparable):
pass
class Person(cls.Comparable):
pass
class Engineer(Person):
pass
class Manager(Person):
pass
class Boss(Manager):
pass
class Machine(cls.Comparable):
pass
class Paperwork(cls.Comparable):
pass
mapper(Company, companies, properties={
'employees': relationship(Person, order_by=people.c.person_id)
})
mapper(Machine, machines)
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person',
properties={
'paperwork': relationship(Paperwork,
order_by=paperwork.c.paperwork_id)
})
mapper(Engineer, engineers, inherits=Person,
polymorphic_identity='engineer',
properties={'machines': relationship(
Machine, order_by=machines.c.machine_id)})
mapper(Manager, managers,
inherits=Person, polymorphic_identity='manager')
mapper(Boss, boss, inherits=Manager, polymorphic_identity='boss')
mapper(Paperwork, paperwork)
def test_single_prop(self):
Company = self.classes.Company
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN people "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_force_via_select_from(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people, engineers "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language "
"= :primary_language_1", use_default_dialect=True)
self.assert_compile(
sess.query(Company).select_from(Company, Engineer)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people JOIN engineers "
"ON people.person_id = engineers.person_id "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language ="
" :primary_language_1", use_default_dialect=True)
def test_single_prop_of_type(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees.of_type(Engineer)),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_prop_with_polymorphic_1(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork')
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, people.company_id AS"
" people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, managers.manager_name AS "
"managers_manager_name FROM people "
"LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork "
"ON people.person_id = paperwork.person_id "
"WHERE paperwork.description LIKE :description_1 "
"ORDER BY people.person_id", use_default_dialect=True)
def test_prop_with_polymorphic_2(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork', aliased=True)
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, "
"people.company_id AS people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, "
"managers.manager_name AS managers_manager_name "
"FROM people LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork AS paperwork_1 "
"ON people.person_id = paperwork_1.person_id "
"WHERE paperwork_1.description "
"LIKE :description_1 ORDER BY people.person_id",
use_default_dialect=True)
def test_explicit_polymorphic_join_one(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Engineer)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_explicit_polymorphic_join_two(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(Engineer, Company.company_id == Engineer.company_id)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_multiple_adaption(self):
"""test that multiple filter() adapters get chained together "
and work correctly within a multiple-entry join()."""
people, Company, Machine, engineers, machines, Engineer = (
self.tables.people,
self.classes.Company,
self.classes.Machine,
self.tables.engineers,
self.tables.machines,
self.classes.Engineer)
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(people.join(engineers), Company.employees)
.filter(Engineer.name == 'dilbert'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id WHERE people.name = :name_1",
use_default_dialect=True
)
mach_alias = machines.select()
self.assert_compile(
sess.query(Company).join(people.join(engineers), Company.employees)
.join(mach_alias, Engineer.machines, from_joinpoint=True).
filter(Engineer.name == 'dilbert').filter(Machine.name == 'foo'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id JOIN "
"(SELECT machines.machine_id AS machine_id, "
"machines.name AS name, "
"machines.engineer_id AS engineer_id "
"FROM machines) AS anon_1 "
"ON engineers.person_id = anon_1.engineer_id "
"WHERE people.name = :name_1 AND anon_1.name = :name_2",
use_default_dialect=True
)
def test_auto_aliasing_multi_link(self):
# test [ticket:2903]
sess = create_session()
Company, Engineer, Manager, Boss = self.classes.Company, \
self.classes.Engineer, \
self.classes.Manager, self.classes.Boss
q = sess.query(Company).\
join(Company.employees.of_type(Engineer)).\
join(Company.employees.of_type(Manager)).\
join(Company.employees.of_type(Boss))
self.assert_compile(
q,
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name FROM companies "
"JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id "
"JOIN (people AS people_1 JOIN managers AS managers_1 "
"ON people_1.person_id = managers_1.person_id) "
"ON companies.company_id = people_1.company_id "
"JOIN (people AS people_2 JOIN managers AS managers_2 "
"ON people_2.person_id = managers_2.person_id JOIN boss AS boss_1 "
"ON managers_2.person_id = boss_1.boss_id) "
"ON companies.company_id = people_2.company_id",
use_default_dialect=True)
class JoinOnSynonymTest(_fixtures.FixtureTest, AssertsCompiledSQL):
__dialect__ = 'default'
@classmethod
def setup_mappers(cls):
User = cls.classes.User
Address = cls.classes.Address
users, addresses = (cls.tables.users, cls.tables.addresses)
mapper(User, users, properties={
'addresses': relationship(Address),
'ad_syn': synonym("addresses")
})
mapper(Address, addresses)
def test_join_on_synonym(self):
User = self.classes.User
self.assert_compile(
Session().query(User).join(User.ad_syn),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
class JoinTest(QueryTest, AssertsCompiledSQL):
__dialect__ = 'default'
def test_single_name(self):
User = self.classes.User
sess = create_session()
self.assert_compile(
sess.query(User).join("orders"),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
assert_raises(
sa_exc.InvalidRequestError,
sess.query(User).join, "user",
)
self.assert_compile(
sess.query(User).join("orders", "items"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
# test overlapping paths. User->orders is used by both joins, but
# rendered once.
self.assert_compile(
sess.query(User).join("orders", "items").join(
"orders", "address"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders "
"ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id JOIN addresses "
"ON addresses.id = orders.address_id")
def test_invalid_kwarg_join(self):
User = self.classes.User
sess = create_session()
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).join, "address", foob="bar", bar="bat"
)
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).outerjoin, "address", foob="bar", bar="bat"
)
def test_left_w_no_entity(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
self.assert_compile(
sess.query(User, literal_column('x'), ).join(Address),
"SELECT users.id AS users_id, users.name AS users_name, x "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
self.assert_compile(
sess.query(literal_column('x'), User).join(Address),
"SELECT x, users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
def test_left_is_none_and_query_has_no_entities(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
assert_raises_message(
sa_exc.InvalidRequestError,
r"No entities to join from; please use select_from\(\) to "
r"establish the left entity/selectable of this join",
sess.query().join, Address
)
def test_isouter_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).join('orders', isouter=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users LEFT OUTER JOIN orders ON users.id = orders.user_id"
)
def test_full_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).outerjoin('orders', full=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users FULL OUTER JOIN orders ON users.id = orders.user_id"
)
def test_multi_tuple_form(self):
"""test the 'tuple' form of join, now superseded
by the two-element join() form.
Not deprecating this style as of yet.
"""
Item, Order, User = (self.classes.Item,
self.classes.Order,
self.classes.User)
sess = create_session()
# assert_raises(
# sa.exc.SADeprecationWarning,
# sess.query(User).join, (Order, User.id==Order.user_id)
# )
self.assert_compile(
sess.query(User).join((Order, User.id == Order.user_id)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
self.assert_compile(
sess.query(User).join(
(Order, User.id == Order.user_id),
(Item, Order.items)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id JOIN items ON items.id = "
"order_items_1.item_id",
)
# the old "backwards" form
self.assert_compile(
sess.query(User).join(("orders", Order)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
def test_single_prop_1(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
def test_single_prop_2(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(Order.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders JOIN users ON users.id = orders.user_id"
)
def test_single_prop_3(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
self.assert_compile(
sess.query(User).join(oalias1.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users ON users.id = orders_1.user_id"
)
def test_single_prop_4(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
oalias2 = aliased(Order)
# another nonsensical query. (from [ticket:1537]).
# in this case, the contract of "left to right" is honored
self.assert_compile(
sess.query(User).join(oalias1.user).join(oalias2.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users "
"ON users.id = orders_1.user_id, "
"orders AS orders_2 JOIN users ON users.id = orders_2.user_id")
def test_single_prop_5(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
def test_single_prop_6(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).join(ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 JOIN orders ON users_1.id = orders.user_id"
)
def test_single_prop_7(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# this query is somewhat nonsensical. the old system didn't render a
# correct query for this. In this case its the most faithful to what
# was asked - there's no linkage between User.orders and "oalias",
# so two FROM elements are generated.
oalias = aliased(Order)
self.assert_compile(
sess.query(User).join(User.orders, oalias.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_8(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# same as before using an aliased() for User as well
ualias = aliased(User)
oalias = aliased(Order)
self.assert_compile(
sess.query(ualias).join(ualias.orders, oalias.items),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 "
"JOIN orders ON users_1.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_9(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).filter(User.name == 'ed').from_self().
join(User.orders),
"SELECT anon_1.users_id AS anon_1_users_id, "
"anon_1.users_name AS anon_1_users_name "
"FROM (SELECT users.id AS users_id, users.name AS users_name "
"FROM users "
"WHERE users.name = :name_1) AS anon_1 JOIN orders "
"ON anon_1.users_id = orders.user_id"
)
def test_single_prop_10(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.addresses, aliased=True).
filter(Address.email_address == 'foo'),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses AS addresses_1 "
"ON users.id = addresses_1.user_id "
"WHERE addresses_1.email_address = :email_address_1"
)
def test_single_prop_11(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items, aliased=True).
filter(Item.id == 10),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders AS orders_1 "
"ON users.id = orders_1.user_id "
"JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items AS items_1 ON items_1.id = order_items_1.item_id "
"WHERE items_1.id = :id_1")
def test_single_prop_12(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
# test #1 for [ticket:1706]
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).
join(oalias1, ualias.orders).
join(Address, ualias.addresses),
"SELECT users_1.id AS users_1_id, users_1.name AS "
"users_1_name FROM users AS users_1 JOIN orders AS orders_1 "
"ON users_1.id = orders_1.user_id JOIN addresses ON users_1.id "
"= addresses.user_id"
)
def test_single_prop_13(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# test #2 for [ticket:1706]
ualias = aliased(User)
ualias2 = aliased(User)
self.assert_compile(
sess.query(ualias).
join(Address, ualias.addresses).
join(ualias2, Address.user).
join(Order, ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users "
"AS users_1 JOIN addresses ON users_1.id = addresses.user_id "
"JOIN users AS users_2 "
"ON users_2.id = addresses.user_id JOIN orders "
"ON users_1.id = orders.user_id"
)
def test_overlapping_paths(self):
User = self.classes.User
for aliased in (True, False):
# load a user who has an order that contains item id 3 and address
# id 1 (order 3, owned by jack)
result = create_session().query(User) \
.join('orders', 'items', aliased=aliased) \
.filter_by(id=3) \
.join('orders', 'address', aliased=aliased) \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_overlapping_paths_multilevel(self):
User = self.classes.User
s = Session()
q = s.query(User).\
join('orders').\
join('addresses').\
join('orders', 'items').\
join('addresses', 'dingaling')
self.assert_compile(
q,
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN addresses ON users.id = addresses.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id "
"JOIN dingalings ON addresses.id = dingalings.address_id"
)
def test_overlapping_paths_outerjoin(self):
User = self.classes.User
result = create_session().query(User).outerjoin('orders', 'items') \
.filter_by(id=3).outerjoin('orders', 'address') \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_raises_on_dupe_target_rel(self):
User = self.classes.User
assert_raises_message(
sa.exc.SAWarning,
"Pathed join target Order.items has already been joined to; "
"skipping",
lambda: create_session().query(User).outerjoin('orders', 'items').
outerjoin('orders', 'items')
)
def test_from_joinpoint(self):
Item, User, Order = (self.classes.Item,
self.classes.User,
self.classes.Order)
sess = create_session()
for oalias, ialias in [
(True, True),
(False, False),
(True, False),
(False, True)]:
eq_(
sess.query(User).join('orders', aliased=oalias)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[User(name='jack')]
)
# use middle criterion
eq_(
sess.query(User).join('orders', aliased=oalias)
.filter(Order.user_id == 9)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[]
)
orderalias = aliased(Order)
itemalias = aliased(Item)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(itemalias.description == 'item 4').all(),
[User(name='jack')]
)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(orderalias.user_id == 9)
.filter(itemalias.description == 'item 4').all(),
[]
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/star_imports.py | star_imports.py |
from types import TYPE_CHECKING
if TYPE_CHECKING:
from sqlalchemy import ARRAY, BIGINT, BINARY, BLANK_SCHEMA, BLOB, BOOLEAN, BigInteger, Binary, Boolean, CHAR, CLOB, CheckConstraint, Column, ColumnDefault, Constraint, DATE, DATETIME, DDL, DECIMAL, Date, DateTime, DefaultClause, Enum, FLOAT, FetchedValue, Float, ForeignKey, ForeignKeyConstraint, INT, INTEGER, Index, Integer, Interval, JSON, LargeBinary, MetaData, NCHAR, NUMERIC, NVARCHAR, Numeric, PassiveDefault, PickleType, PrimaryKeyConstraint, REAL, SMALLINT, Sequence, SmallInteger, String, TEXT, TIME, TIMESTAMP, Table, Text, ThreadLocalMetaData, Time, TypeDecorator, Unicode, UnicodeText, UniqueConstraint, VARBINARY, VARCHAR, alias, all_, and_, any_, asc, between, bindparam, case, cast, collate, column, create_engine, delete, desc, distinct, engine_from_config, except_, except_all, exists, extract, false, func, funcfilter, insert, inspect, intersect, intersect_all, join, lateral, literal, literal_column, modifier, not_, null, nullsfirst, nullslast, or_, outerjoin, outparam, over, select, subquery, table, tablesample, text, true, tuple_, type_coerce, union, union_all, update, within_group
from sqlalchemy import exc as sa_exc
from sqlalchemy.orm import AliasOption, AttributeExtension, Bundle, ColumnProperty, ComparableProperty, CompositeProperty, EXT_CONTINUE, EXT_SKIP, EXT_STOP, Load, Mapper, MapperExtension, PropComparator, Query, RelationshipProperty, Session, SessionExtension, SynonymProperty, aliased, backref, class_mapper, clear_mappers, column_property, comparable_property, compile_mappers, composite, configure_mappers, contains_alias, contains_eager, create_session, defaultload, defer, deferred, dynamic_loader, eagerload, eagerload_all, foreign, immediateload, join, joinedload, joinedload_all, lazyload, lazyload_all, load_only, make_transient, make_transient_to_detached, mapper, noload, object_mapper, object_session, outerjoin, polymorphic_union, public_factory, query_expression, raiseload, reconstructor, relation, relationship, remote, scoped_session, selectin_polymorphic, selectinload, selectinload_all, sessionmaker, subqueryload, subqueryload_all, synonym, undefer, undefer_group, validates, was_deleted, with_expression, with_parent, with_polymorphic
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import fixtures
from sqlalchemy import testing | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/type_checking3.py | type_checking3.py |
import sqlalchemy as sa
from sqlalchemy import exc as sa_exc
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import literal_column
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy.orm import aliased
from sqlalchemy.orm import backref
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import create_session
from sqlalchemy.orm import mapper
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import synonym
from sqlalchemy.testing import assert_raises
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import AssertsCompiledSQL
from sqlalchemy.testing import eq_
from sqlalchemy.testing import fixtures
from sqlalchemy.testing.schema import Column
from test.orm import _fixtures
class QueryTest(_fixtures.FixtureTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
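    # Mappers and fixture rows are created once per class; no per-test deletes.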
@classmethod
def setup_mappers(cls):
        (Node, composite_pk_table, users, Keyword, items, Dingaling,
         order_items, item_keywords, Item, User, dingalings,
         Address, keywords, CompositePk, nodes, Order, orders,
         addresses) = (
            cls.classes.Node,
            cls.tables.composite_pk_table, cls.tables.users,
            cls.classes.Keyword, cls.tables.items,
            cls.classes.Dingaling, cls.tables.order_items,
            cls.tables.item_keywords, cls.classes.Item,
            cls.classes.User, cls.tables.dingalings,
            cls.classes.Address, cls.tables.keywords,
            cls.classes.CompositePk, cls.tables.nodes,
            cls.classes.Order, cls.tables.orders, cls.tables.addresses)
mapper(User, users, properties={
'addresses': relationship(Address, backref='user',
order_by=addresses.c.id),
# o2m, m2o
'orders': relationship(Order, backref='user', order_by=orders.c.id)
})
mapper(Address, addresses, properties={
# o2o
'dingaling': relationship(Dingaling, uselist=False,
backref="address")
})
mapper(Dingaling, dingalings)
mapper(Order, orders, properties={
# m2m
'items': relationship(Item, secondary=order_items,
order_by=items.c.id),
'address': relationship(Address), # m2o
})
mapper(Item, items, properties={
'keywords': relationship(Keyword, secondary=item_keywords) # m2m
})
mapper(Keyword, keywords)
mapper(Node, nodes, properties={
'children': relationship(Node,
backref=backref(
'parent', remote_side=[nodes.c.id]))
})
mapper(CompositePk, composite_pk_table)
configure_mappers()
class InheritedJoinTest(fixtures.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
Table('companies', metadata,
Column('company_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)))
Table('people', metadata,
Column('person_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('company_id', Integer,
ForeignKey('companies.company_id')),
Column('name', String(50)),
Column('type', String(30)))
Table('engineers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('engineer_name', String(50)),
Column('primary_language', String(50)))
Table('machines', metadata,
Column('machine_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('engineer_id', Integer,
ForeignKey('engineers.person_id')))
Table('managers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('manager_name', String(50)))
Table('boss', metadata,
Column('boss_id', Integer, ForeignKey(
'managers.person_id'), primary_key=True),
Column('golf_swing', String(30)),
)
Table('paperwork', metadata,
Column('paperwork_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('description', String(50)),
Column('person_id', Integer, ForeignKey('people.person_id')))
@classmethod
def setup_classes(cls):
paperwork, people, companies, boss, managers, machines, engineers = (
cls.tables.paperwork,
cls.tables.people,
cls.tables.companies,
cls.tables.boss,
cls.tables.managers,
cls.tables.machines,
cls.tables.engineers)
class Company(cls.Comparable):
pass
class Person(cls.Comparable):
pass
class Engineer(Person):
pass
class Manager(Person):
pass
class Boss(Manager):
pass
class Machine(cls.Comparable):
pass
class Paperwork(cls.Comparable):
pass
mapper(Company, companies, properties={
'employees': relationship(Person, order_by=people.c.person_id)
})
mapper(Machine, machines)
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person',
properties={
'paperwork': relationship(Paperwork,
order_by=paperwork.c.paperwork_id)
})
mapper(Engineer, engineers, inherits=Person,
polymorphic_identity='engineer',
properties={'machines': relationship(
Machine, order_by=machines.c.machine_id)})
mapper(Manager, managers,
inherits=Person, polymorphic_identity='manager')
mapper(Boss, boss, inherits=Manager, polymorphic_identity='boss')
mapper(Paperwork, paperwork)
def test_single_prop(self):
Company = self.classes.Company
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN people "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_force_via_select_from(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people, engineers "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language "
"= :primary_language_1", use_default_dialect=True)
self.assert_compile(
sess.query(Company).select_from(Company, Engineer)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people JOIN engineers "
"ON people.person_id = engineers.person_id "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language ="
" :primary_language_1", use_default_dialect=True)
def test_single_prop_of_type(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees.of_type(Engineer)),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_prop_with_polymorphic_1(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork')
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, people.company_id AS"
" people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, managers.manager_name AS "
"managers_manager_name FROM people "
"LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork "
"ON people.person_id = paperwork.person_id "
"WHERE paperwork.description LIKE :description_1 "
"ORDER BY people.person_id", use_default_dialect=True)
def test_prop_with_polymorphic_2(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork', aliased=True)
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, "
"people.company_id AS people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, "
"managers.manager_name AS managers_manager_name "
"FROM people LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork AS paperwork_1 "
"ON people.person_id = paperwork_1.person_id "
"WHERE paperwork_1.description "
"LIKE :description_1 ORDER BY people.person_id",
use_default_dialect=True)
def test_explicit_polymorphic_join_one(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Engineer)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_explicit_polymorphic_join_two(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(Engineer, Company.company_id == Engineer.company_id)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_multiple_adaption(self):
"""test that multiple filter() adapters get chained together "
and work correctly within a multiple-entry join()."""
people, Company, Machine, engineers, machines, Engineer = (
self.tables.people,
self.classes.Company,
self.classes.Machine,
self.tables.engineers,
self.tables.machines,
self.classes.Engineer)
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(people.join(engineers), Company.employees)
.filter(Engineer.name == 'dilbert'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id WHERE people.name = :name_1",
use_default_dialect=True
)
mach_alias = machines.select()
self.assert_compile(
sess.query(Company).join(people.join(engineers), Company.employees)
.join(mach_alias, Engineer.machines, from_joinpoint=True).
filter(Engineer.name == 'dilbert').filter(Machine.name == 'foo'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id JOIN "
"(SELECT machines.machine_id AS machine_id, "
"machines.name AS name, "
"machines.engineer_id AS engineer_id "
"FROM machines) AS anon_1 "
"ON engineers.person_id = anon_1.engineer_id "
"WHERE people.name = :name_1 AND anon_1.name = :name_2",
use_default_dialect=True
)
def test_auto_aliasing_multi_link(self):
# test [ticket:2903]
sess = create_session()
Company, Engineer, Manager, Boss = self.classes.Company, \
self.classes.Engineer, \
self.classes.Manager, self.classes.Boss
q = sess.query(Company).\
join(Company.employees.of_type(Engineer)).\
join(Company.employees.of_type(Manager)).\
join(Company.employees.of_type(Boss))
self.assert_compile(
q,
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name FROM companies "
"JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id "
"JOIN (people AS people_1 JOIN managers AS managers_1 "
"ON people_1.person_id = managers_1.person_id) "
"ON companies.company_id = people_1.company_id "
"JOIN (people AS people_2 JOIN managers AS managers_2 "
"ON people_2.person_id = managers_2.person_id JOIN boss AS boss_1 "
"ON managers_2.person_id = boss_1.boss_id) "
"ON companies.company_id = people_2.company_id",
use_default_dialect=True)
class JoinOnSynonymTest(_fixtures.FixtureTest, AssertsCompiledSQL):
__dialect__ = 'default'
@classmethod
def setup_mappers(cls):
User = cls.classes.User
Address = cls.classes.Address
users, addresses = (cls.tables.users, cls.tables.addresses)
mapper(User, users, properties={
'addresses': relationship(Address),
'ad_syn': synonym("addresses")
})
mapper(Address, addresses)
def test_join_on_synonym(self):
User = self.classes.User
self.assert_compile(
Session().query(User).join(User.ad_syn),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
class JoinTest(QueryTest, AssertsCompiledSQL):
__dialect__ = 'default'
def test_single_name(self):
User = self.classes.User
sess = create_session()
self.assert_compile(
sess.query(User).join("orders"),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
assert_raises(
sa_exc.InvalidRequestError,
sess.query(User).join, "user",
)
self.assert_compile(
sess.query(User).join("orders", "items"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
# test overlapping paths. User->orders is used by both joins, but
# rendered once.
self.assert_compile(
sess.query(User).join("orders", "items").join(
"orders", "address"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders "
"ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id JOIN addresses "
"ON addresses.id = orders.address_id")
def test_invalid_kwarg_join(self):
User = self.classes.User
sess = create_session()
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).join, "address", foob="bar", bar="bat"
)
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).outerjoin, "address", foob="bar", bar="bat"
)
def test_left_w_no_entity(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
self.assert_compile(
sess.query(User, literal_column('x'), ).join(Address),
"SELECT users.id AS users_id, users.name AS users_name, x "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
self.assert_compile(
sess.query(literal_column('x'), User).join(Address),
"SELECT x, users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
def test_left_is_none_and_query_has_no_entities(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
assert_raises_message(
sa_exc.InvalidRequestError,
r"No entities to join from; please use select_from\(\) to "
r"establish the left entity/selectable of this join",
sess.query().join, Address
)
def test_isouter_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).join('orders', isouter=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users LEFT OUTER JOIN orders ON users.id = orders.user_id"
)
def test_full_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).outerjoin('orders', full=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users FULL OUTER JOIN orders ON users.id = orders.user_id"
)
def test_multi_tuple_form(self):
"""test the 'tuple' form of join, now superseded
by the two-element join() form.
Not deprecating this style as of yet.
"""
Item, Order, User = (self.classes.Item,
self.classes.Order,
self.classes.User)
sess = create_session()
# assert_raises(
# sa.exc.SADeprecationWarning,
# sess.query(User).join, (Order, User.id==Order.user_id)
# )
self.assert_compile(
sess.query(User).join((Order, User.id == Order.user_id)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
self.assert_compile(
sess.query(User).join(
(Order, User.id == Order.user_id),
(Item, Order.items)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id JOIN items ON items.id = "
"order_items_1.item_id",
)
# the old "backwards" form
self.assert_compile(
sess.query(User).join(("orders", Order)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
def test_single_prop_1(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
def test_single_prop_2(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(Order.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders JOIN users ON users.id = orders.user_id"
)
def test_single_prop_3(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
self.assert_compile(
sess.query(User).join(oalias1.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users ON users.id = orders_1.user_id"
)
def test_single_prop_4(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
oalias2 = aliased(Order)
# another nonsensical query. (from [ticket:1537]).
# in this case, the contract of "left to right" is honored
self.assert_compile(
sess.query(User).join(oalias1.user).join(oalias2.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users "
"ON users.id = orders_1.user_id, "
"orders AS orders_2 JOIN users ON users.id = orders_2.user_id")
def test_single_prop_5(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
def test_single_prop_6(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).join(ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 JOIN orders ON users_1.id = orders.user_id"
)
def test_single_prop_7(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# this query is somewhat nonsensical. the old system didn't render a
        # correct query for this. In this case it's the most faithful to what
# was asked - there's no linkage between User.orders and "oalias",
# so two FROM elements are generated.
oalias = aliased(Order)
self.assert_compile(
sess.query(User).join(User.orders, oalias.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_8(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# same as before using an aliased() for User as well
ualias = aliased(User)
oalias = aliased(Order)
self.assert_compile(
sess.query(ualias).join(ualias.orders, oalias.items),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 "
"JOIN orders ON users_1.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_9(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).filter(User.name == 'ed').from_self().
join(User.orders),
"SELECT anon_1.users_id AS anon_1_users_id, "
"anon_1.users_name AS anon_1_users_name "
"FROM (SELECT users.id AS users_id, users.name AS users_name "
"FROM users "
"WHERE users.name = :name_1) AS anon_1 JOIN orders "
"ON anon_1.users_id = orders.user_id"
)
def test_single_prop_10(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.addresses, aliased=True).
filter(Address.email_address == 'foo'),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses AS addresses_1 "
"ON users.id = addresses_1.user_id "
"WHERE addresses_1.email_address = :email_address_1"
)
def test_single_prop_11(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items, aliased=True).
filter(Item.id == 10),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders AS orders_1 "
"ON users.id = orders_1.user_id "
"JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items AS items_1 ON items_1.id = order_items_1.item_id "
"WHERE items_1.id = :id_1")
def test_single_prop_12(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
# test #1 for [ticket:1706]
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).
join(oalias1, ualias.orders).
join(Address, ualias.addresses),
"SELECT users_1.id AS users_1_id, users_1.name AS "
"users_1_name FROM users AS users_1 JOIN orders AS orders_1 "
"ON users_1.id = orders_1.user_id JOIN addresses ON users_1.id "
"= addresses.user_id"
)
def test_single_prop_13(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# test #2 for [ticket:1706]
ualias = aliased(User)
ualias2 = aliased(User)
self.assert_compile(
sess.query(ualias).
join(Address, ualias.addresses).
join(ualias2, Address.user).
join(Order, ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users "
"AS users_1 JOIN addresses ON users_1.id = addresses.user_id "
"JOIN users AS users_2 "
"ON users_2.id = addresses.user_id JOIN orders "
"ON users_1.id = orders.user_id"
)
def test_overlapping_paths(self):
User = self.classes.User
for aliased in (True, False):
# load a user who has an order that contains item id 3 and address
# id 1 (order 3, owned by jack)
result = create_session().query(User) \
.join('orders', 'items', aliased=aliased) \
.filter_by(id=3) \
.join('orders', 'address', aliased=aliased) \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_overlapping_paths_multilevel(self):
User = self.classes.User
s = Session()
q = s.query(User).\
join('orders').\
join('addresses').\
join('orders', 'items').\
join('addresses', 'dingaling')
self.assert_compile(
q,
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN addresses ON users.id = addresses.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id "
"JOIN dingalings ON addresses.id = dingalings.address_id"
)
def test_overlapping_paths_outerjoin(self):
User = self.classes.User
result = create_session().query(User).outerjoin('orders', 'items') \
.filter_by(id=3).outerjoin('orders', 'address') \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_raises_on_dupe_target_rel(self):
User = self.classes.User
assert_raises_message(
sa.exc.SAWarning,
"Pathed join target Order.items has already been joined to; "
"skipping",
lambda: create_session().query(User).outerjoin('orders', 'items').
outerjoin('orders', 'items')
)
def test_from_joinpoint(self):
Item, User, Order = (self.classes.Item,
self.classes.User,
self.classes.Order)
sess = create_session()
for oalias, ialias in [
(True, True),
(False, False),
(True, False),
(False, True)]:
eq_(
sess.query(User).join('orders', aliased=oalias)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[User(name='jack')]
)
# use middle criterion
eq_(
sess.query(User).join('orders', aliased=oalias)
.filter(Order.user_id == 9)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[]
)
orderalias = aliased(Order)
itemalias = aliased(Item)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(itemalias.description == 'item 4').all(),
[User(name='jack')]
)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(orderalias.user_id == 9)
.filter(itemalias.description == 'item 4').all(),
[]
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/star_imports.cryptography.expected.py | star_imports.cryptography.expected.py |
from test.orm import _fixtures
import sqlalchemy as sa
from sqlalchemy import Column
from sqlalchemy import exc as sa_exc
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import literal_column
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy.orm import aliased
from sqlalchemy.orm import backref
from sqlalchemy.orm import configure_mappers
from sqlalchemy.orm import create_session
from sqlalchemy.orm import mapper
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import synonym
from sqlalchemy.testing import assert_raises
from sqlalchemy.testing import assert_raises_message
from sqlalchemy.testing import AssertsCompiledSQL
from sqlalchemy.testing import eq_
from sqlalchemy.testing import fixtures
from sqlalchemy.testing.schema import Column
class QueryTest(_fixtures.FixtureTest):
run_setup_mappers = 'once'
run_inserts = 'once'
run_deletes = None
@classmethod
def setup_mappers(cls):
Node, composite_pk_table, users, Keyword, items, Dingaling, \
order_items, item_keywords, Item, User, dingalings, \
Address, keywords, CompositePk, nodes, Order, orders, \
addresses = cls.classes.Node, \
cls.tables.composite_pk_table, cls.tables.users, \
cls.classes.Keyword, cls.tables.items, \
cls.classes.Dingaling, cls.tables.order_items, \
cls.tables.item_keywords, cls.classes.Item, \
cls.classes.User, cls.tables.dingalings, \
cls.classes.Address, cls.tables.keywords, \
cls.classes.CompositePk, cls.tables.nodes, \
cls.classes.Order, cls.tables.orders, cls.tables.addresses
mapper(User, users, properties={
'addresses': relationship(Address, backref='user',
order_by=addresses.c.id),
# o2m, m2o
'orders': relationship(Order, backref='user', order_by=orders.c.id)
})
mapper(Address, addresses, properties={
# o2o
'dingaling': relationship(Dingaling, uselist=False,
backref="address")
})
mapper(Dingaling, dingalings)
mapper(Order, orders, properties={
# m2m
'items': relationship(Item, secondary=order_items,
order_by=items.c.id),
'address': relationship(Address), # m2o
})
mapper(Item, items, properties={
'keywords': relationship(Keyword, secondary=item_keywords) # m2m
})
mapper(Keyword, keywords)
mapper(Node, nodes, properties={
'children': relationship(Node,
backref=backref(
'parent', remote_side=[nodes.c.id]))
})
mapper(CompositePk, composite_pk_table)
configure_mappers()
class InheritedJoinTest(fixtures.MappedTest, AssertsCompiledSQL):
run_setup_mappers = 'once'
@classmethod
def define_tables(cls, metadata):
Table('companies', metadata,
Column('company_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)))
Table('people', metadata,
Column('person_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('company_id', Integer,
ForeignKey('companies.company_id')),
Column('name', String(50)),
Column('type', String(30)))
Table('engineers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('engineer_name', String(50)),
Column('primary_language', String(50)))
Table('machines', metadata,
Column('machine_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('name', String(50)),
Column('engineer_id', Integer,
ForeignKey('engineers.person_id')))
Table('managers', metadata,
Column('person_id', Integer, ForeignKey(
'people.person_id'), primary_key=True),
Column('status', String(30)),
Column('manager_name', String(50)))
Table('boss', metadata,
Column('boss_id', Integer, ForeignKey(
'managers.person_id'), primary_key=True),
Column('golf_swing', String(30)),
)
Table('paperwork', metadata,
Column('paperwork_id', Integer, primary_key=True,
test_needs_autoincrement=True),
Column('description', String(50)),
Column('person_id', Integer, ForeignKey('people.person_id')))
@classmethod
def setup_classes(cls):
paperwork, people, companies, boss, managers, machines, engineers = (
cls.tables.paperwork,
cls.tables.people,
cls.tables.companies,
cls.tables.boss,
cls.tables.managers,
cls.tables.machines,
cls.tables.engineers)
class Company(cls.Comparable):
pass
class Person(cls.Comparable):
pass
class Engineer(Person):
pass
class Manager(Person):
pass
class Boss(Manager):
pass
class Machine(cls.Comparable):
pass
class Paperwork(cls.Comparable):
pass
mapper(Company, companies, properties={
'employees': relationship(Person, order_by=people.c.person_id)
})
mapper(Machine, machines)
mapper(Person, people,
polymorphic_on=people.c.type,
polymorphic_identity='person',
properties={
'paperwork': relationship(Paperwork,
order_by=paperwork.c.paperwork_id)
})
mapper(Engineer, engineers, inherits=Person,
polymorphic_identity='engineer',
properties={'machines': relationship(
Machine, order_by=machines.c.machine_id)})
mapper(Manager, managers,
inherits=Person, polymorphic_identity='manager')
mapper(Boss, boss, inherits=Manager, polymorphic_identity='boss')
mapper(Paperwork, paperwork)
def test_single_prop(self):
Company = self.classes.Company
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN people "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_force_via_select_from(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people, engineers "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language "
"= :primary_language_1", use_default_dialect=True)
self.assert_compile(
sess.query(Company).select_from(Company, Engineer)
.filter(Company.company_id == Engineer.company_id)
.filter(Engineer.primary_language == 'java'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies, people JOIN engineers "
"ON people.person_id = engineers.person_id "
"WHERE companies.company_id = people.company_id "
"AND engineers.primary_language ="
" :primary_language_1", use_default_dialect=True)
def test_single_prop_of_type(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Company.employees.of_type(Engineer)),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id",
use_default_dialect=True)
def test_prop_with_polymorphic_1(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork')
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, people.company_id AS"
" people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, managers.manager_name AS "
"managers_manager_name FROM people "
"LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork "
"ON people.person_id = paperwork.person_id "
"WHERE paperwork.description LIKE :description_1 "
"ORDER BY people.person_id", use_default_dialect=True)
def test_prop_with_polymorphic_2(self):
Person, Manager, Paperwork = (self.classes.Person,
self.classes.Manager,
self.classes.Paperwork)
sess = create_session()
self.assert_compile(
sess.query(Person).with_polymorphic(Manager).
order_by(Person.person_id).join('paperwork', aliased=True)
.filter(Paperwork.description.like('%review%')),
"SELECT people.person_id AS people_person_id, "
"people.company_id AS people_company_id, "
"people.name AS people_name, people.type AS people_type, "
"managers.person_id AS managers_person_id, "
"managers.status AS managers_status, "
"managers.manager_name AS managers_manager_name "
"FROM people LEFT OUTER JOIN managers "
"ON people.person_id = managers.person_id "
"JOIN paperwork AS paperwork_1 "
"ON people.person_id = paperwork_1.person_id "
"WHERE paperwork_1.description "
"LIKE :description_1 ORDER BY people.person_id",
use_default_dialect=True)
def test_explicit_polymorphic_join_one(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company).join(Engineer)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_explicit_polymorphic_join_two(self):
Company, Engineer = self.classes.Company, self.classes.Engineer
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(Engineer, Company.company_id == Engineer.company_id)
.filter(Engineer.engineer_name == 'vlad'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN "
"(people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON "
"companies.company_id = people.company_id "
"WHERE engineers.engineer_name = :engineer_name_1",
use_default_dialect=True)
def test_multiple_adaption(self):
"""test that multiple filter() adapters get chained together "
and work correctly within a multiple-entry join()."""
people, Company, Machine, engineers, machines, Engineer = (
self.tables.people,
self.classes.Company,
self.classes.Machine,
self.tables.engineers,
self.tables.machines,
self.classes.Engineer)
sess = create_session()
self.assert_compile(
sess.query(Company)
.join(people.join(engineers), Company.employees)
.filter(Engineer.name == 'dilbert'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id WHERE people.name = :name_1",
use_default_dialect=True
)
mach_alias = machines.select()
self.assert_compile(
sess.query(Company).join(people.join(engineers), Company.employees)
.join(mach_alias, Engineer.machines, from_joinpoint=True).
filter(Engineer.name == 'dilbert').filter(Machine.name == 'foo'),
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name "
"FROM companies JOIN (people "
"JOIN engineers ON people.person_id = "
"engineers.person_id) ON companies.company_id = "
"people.company_id JOIN "
"(SELECT machines.machine_id AS machine_id, "
"machines.name AS name, "
"machines.engineer_id AS engineer_id "
"FROM machines) AS anon_1 "
"ON engineers.person_id = anon_1.engineer_id "
"WHERE people.name = :name_1 AND anon_1.name = :name_2",
use_default_dialect=True
)
def test_auto_aliasing_multi_link(self):
# test [ticket:2903]
sess = create_session()
Company, Engineer, Manager, Boss = self.classes.Company, \
self.classes.Engineer, \
self.classes.Manager, self.classes.Boss
q = sess.query(Company).\
join(Company.employees.of_type(Engineer)).\
join(Company.employees.of_type(Manager)).\
join(Company.employees.of_type(Boss))
self.assert_compile(
q,
"SELECT companies.company_id AS companies_company_id, "
"companies.name AS companies_name FROM companies "
"JOIN (people JOIN engineers "
"ON people.person_id = engineers.person_id) "
"ON companies.company_id = people.company_id "
"JOIN (people AS people_1 JOIN managers AS managers_1 "
"ON people_1.person_id = managers_1.person_id) "
"ON companies.company_id = people_1.company_id "
"JOIN (people AS people_2 JOIN managers AS managers_2 "
"ON people_2.person_id = managers_2.person_id JOIN boss AS boss_1 "
"ON managers_2.person_id = boss_1.boss_id) "
"ON companies.company_id = people_2.company_id",
use_default_dialect=True)
class JoinOnSynonymTest(_fixtures.FixtureTest, AssertsCompiledSQL):
__dialect__ = 'default'
@classmethod
def setup_mappers(cls):
User = cls.classes.User
Address = cls.classes.Address
users, addresses = (cls.tables.users, cls.tables.addresses)
mapper(User, users, properties={
'addresses': relationship(Address),
'ad_syn': synonym("addresses")
})
mapper(Address, addresses)
def test_join_on_synonym(self):
User = self.classes.User
self.assert_compile(
Session().query(User).join(User.ad_syn),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
class JoinTest(QueryTest, AssertsCompiledSQL):
__dialect__ = 'default'
def test_single_name(self):
User = self.classes.User
sess = create_session()
self.assert_compile(
sess.query(User).join("orders"),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
assert_raises(
sa_exc.InvalidRequestError,
sess.query(User).join, "user",
)
self.assert_compile(
sess.query(User).join("orders", "items"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
# test overlapping paths. User->orders is used by both joins, but
# rendered once.
self.assert_compile(
sess.query(User).join("orders", "items").join(
"orders", "address"),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders "
"ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id JOIN addresses "
"ON addresses.id = orders.address_id")
def test_invalid_kwarg_join(self):
User = self.classes.User
sess = create_session()
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).join, "address", foob="bar", bar="bat"
)
assert_raises_message(
TypeError,
"unknown arguments: bar, foob",
sess.query(User).outerjoin, "address", foob="bar", bar="bat"
)
def test_left_w_no_entity(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
self.assert_compile(
sess.query(User, literal_column('x'), ).join(Address),
"SELECT users.id AS users_id, users.name AS users_name, x "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
self.assert_compile(
sess.query(literal_column('x'), User).join(Address),
"SELECT x, users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses ON users.id = addresses.user_id"
)
def test_left_is_none_and_query_has_no_entities(self):
User = self.classes.User
Address = self.classes.Address
sess = create_session()
assert_raises_message(
sa_exc.InvalidRequestError,
r"No entities to join from; please use select_from\(\) to "
r"establish the left entity/selectable of this join",
sess.query().join, Address
)
def test_isouter_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).join('orders', isouter=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users LEFT OUTER JOIN orders ON users.id = orders.user_id"
)
def test_full_flag(self):
User = self.classes.User
self.assert_compile(
create_session().query(User).outerjoin('orders', full=True),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users FULL OUTER JOIN orders ON users.id = orders.user_id"
)
def test_multi_tuple_form(self):
"""test the 'tuple' form of join, now superseded
by the two-element join() form.
Not deprecating this style as of yet.
"""
Item, Order, User = (self.classes.Item,
self.classes.Order,
self.classes.User)
sess = create_session()
# assert_raises(
# sa.exc.SADeprecationWarning,
# sess.query(User).join, (Order, User.id==Order.user_id)
# )
self.assert_compile(
sess.query(User).join((Order, User.id == Order.user_id)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
self.assert_compile(
sess.query(User).join(
(Order, User.id == Order.user_id),
(Item, Order.items)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id JOIN items ON items.id = "
"order_items_1.item_id",
)
# the old "backwards" form
self.assert_compile(
sess.query(User).join(("orders", Order)),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id",
)
def test_single_prop_1(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id"
)
def test_single_prop_2(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(Order.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders JOIN users ON users.id = orders.user_id"
)
def test_single_prop_3(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
self.assert_compile(
sess.query(User).join(oalias1.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users ON users.id = orders_1.user_id"
)
def test_single_prop_4(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
oalias2 = aliased(Order)
# another nonsensical query. (from [ticket:1537]).
# in this case, the contract of "left to right" is honored
self.assert_compile(
sess.query(User).join(oalias1.user).join(oalias2.user),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM orders AS orders_1 JOIN users "
"ON users.id = orders_1.user_id, "
"orders AS orders_2 JOIN users ON users.id = orders_2.user_id")
def test_single_prop_5(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id "
"JOIN order_items AS order_items_1 "
"ON orders.id = order_items_1.order_id JOIN items "
"ON items.id = order_items_1.item_id"
)
def test_single_prop_6(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).join(ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 JOIN orders ON users_1.id = orders.user_id"
)
def test_single_prop_7(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# this query is somewhat nonsensical. the old system didn't render a
        # correct query for this. In this case it's the most faithful to what
# was asked - there's no linkage between User.orders and "oalias",
# so two FROM elements are generated.
oalias = aliased(Order)
self.assert_compile(
sess.query(User).join(User.orders, oalias.items),
"SELECT users.id AS users_id, users.name AS users_name FROM users "
"JOIN orders ON users.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_8(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# same as before using an aliased() for User as well
ualias = aliased(User)
oalias = aliased(Order)
self.assert_compile(
sess.query(ualias).join(ualias.orders, oalias.items),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users AS users_1 "
"JOIN orders ON users_1.id = orders.user_id, "
"orders AS orders_1 JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id")
def test_single_prop_9(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).filter(User.name == 'ed').from_self().
join(User.orders),
"SELECT anon_1.users_id AS anon_1_users_id, "
"anon_1.users_name AS anon_1_users_name "
"FROM (SELECT users.id AS users_id, users.name AS users_name "
"FROM users "
"WHERE users.name = :name_1) AS anon_1 JOIN orders "
"ON anon_1.users_id = orders.user_id"
)
def test_single_prop_10(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.addresses, aliased=True).
filter(Address.email_address == 'foo'),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN addresses AS addresses_1 "
"ON users.id = addresses_1.user_id "
"WHERE addresses_1.email_address = :email_address_1"
)
def test_single_prop_11(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
self.assert_compile(
sess.query(User).join(User.orders, Order.items, aliased=True).
filter(Item.id == 10),
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders AS orders_1 "
"ON users.id = orders_1.user_id "
"JOIN order_items AS order_items_1 "
"ON orders_1.id = order_items_1.order_id "
"JOIN items AS items_1 ON items_1.id = order_items_1.item_id "
"WHERE items_1.id = :id_1")
def test_single_prop_12(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
oalias1 = aliased(Order)
# test #1 for [ticket:1706]
ualias = aliased(User)
self.assert_compile(
sess.query(ualias).
join(oalias1, ualias.orders).
join(Address, ualias.addresses),
"SELECT users_1.id AS users_1_id, users_1.name AS "
"users_1_name FROM users AS users_1 JOIN orders AS orders_1 "
"ON users_1.id = orders_1.user_id JOIN addresses ON users_1.id "
"= addresses.user_id"
)
def test_single_prop_13(self):
Item, Order, User, Address = (self.classes.Item,
self.classes.Order,
self.classes.User,
self.classes.Address)
sess = create_session()
# test #2 for [ticket:1706]
ualias = aliased(User)
ualias2 = aliased(User)
self.assert_compile(
sess.query(ualias).
join(Address, ualias.addresses).
join(ualias2, Address.user).
join(Order, ualias.orders),
"SELECT users_1.id AS users_1_id, users_1.name AS users_1_name "
"FROM users "
"AS users_1 JOIN addresses ON users_1.id = addresses.user_id "
"JOIN users AS users_2 "
"ON users_2.id = addresses.user_id JOIN orders "
"ON users_1.id = orders.user_id"
)
def test_overlapping_paths(self):
User = self.classes.User
for aliased in (True, False):
# load a user who has an order that contains item id 3 and address
# id 1 (order 3, owned by jack)
result = create_session().query(User) \
.join('orders', 'items', aliased=aliased) \
.filter_by(id=3) \
.join('orders', 'address', aliased=aliased) \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_overlapping_paths_multilevel(self):
User = self.classes.User
s = Session()
q = s.query(User).\
join('orders').\
join('addresses').\
join('orders', 'items').\
join('addresses', 'dingaling')
self.assert_compile(
q,
"SELECT users.id AS users_id, users.name AS users_name "
"FROM users JOIN orders ON users.id = orders.user_id "
"JOIN addresses ON users.id = addresses.user_id "
"JOIN order_items AS order_items_1 ON orders.id = "
"order_items_1.order_id "
"JOIN items ON items.id = order_items_1.item_id "
"JOIN dingalings ON addresses.id = dingalings.address_id"
)
def test_overlapping_paths_outerjoin(self):
User = self.classes.User
result = create_session().query(User).outerjoin('orders', 'items') \
.filter_by(id=3).outerjoin('orders', 'address') \
.filter_by(id=1).all()
assert [User(id=7, name='jack')] == result
def test_raises_on_dupe_target_rel(self):
User = self.classes.User
assert_raises_message(
sa.exc.SAWarning,
"Pathed join target Order.items has already been joined to; "
"skipping",
lambda: create_session().query(User).outerjoin('orders', 'items').
outerjoin('orders', 'items')
)
def test_from_joinpoint(self):
Item, User, Order = (self.classes.Item,
self.classes.User,
self.classes.Order)
sess = create_session()
for oalias, ialias in [
(True, True),
(False, False),
(True, False),
(False, True)]:
eq_(
sess.query(User).join('orders', aliased=oalias)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[User(name='jack')]
)
# use middle criterion
eq_(
sess.query(User).join('orders', aliased=oalias)
.filter(Order.user_id == 9)
.join('items', from_joinpoint=True, aliased=ialias)
.filter(Item.description == 'item 4').all(),
[]
)
orderalias = aliased(Order)
itemalias = aliased(Item)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(itemalias.description == 'item 4').all(),
[User(name='jack')]
)
eq_(
sess.query(User).join(orderalias, 'orders')
.join(itemalias, 'items', from_joinpoint=True)
.filter(orderalias.user_id == 9)
.filter(itemalias.description == 'item 4').all(),
[]
) | zimports | /zimports-0.6.0.tar.gz/zimports-0.6.0/test_files/star_imports.expected.py | star_imports.expected.py |
zimpute: a scRNA-seq imputation method based on low-rank completion
Author: ZHOUSIHAN
Structure and function composition:
Zimpute:
|——Command(): command-line interactive function
|——Load_Matrix(infile_path): imports the observation matrix (no row or column names); returns a two-dimensional matrix
|——Data_Filtering(Data_matrix_M): deletes genes that are expressed in fewer than three cells or whose expression value is less than 3; returns the filtered matrix and the deleted genes
|——Data_Normlization(Filtering_M): normalizes the observation matrix; returns the normalized matrix, the median of the rows (int), and the sum of the rows (list)
|——Select_r(Data_matrix_M): calculates the most suitable r value; returns the truncation value (int)
|——Truncated_QR(X, r): an improved QR-based singular value decomposition. X is a two-dimensional matrix and r is the truncation value; returns L, S, R, where L and R are the left and right singular vectors and S is a list of singular values
|——Impute(M, r=1, lamda=0.01, F_flag="F", N_flag="F"): the scRNA-seq imputation method (see the usage sketch below). M is the observation matrix and r is the truncation value; F_flag is the filtering flag and N_flag is the normalization flag, both defaulting to "F"; returns the imputed matrix
|——Save_result(outfile_path, W): saves the result W to outfile_path; no return value
|——Example_lambda_pic(): shows the relative error on the example dataset for different lambda values; no return value
|——Example_mu_pic(): shows the relative error on the example dataset for different mu values; no return value
|——Relative_error(M_pred, M_obes): computes the relative error. M_pred is the prediction matrix and M_obes is the observation matrix; returns the relative error (float)
|——tSNE_Visualize(Matrix_raw, Matrix_impute, Target_group, celltype_list, n_components=2): visualizes the results. Target_group is a single-column list such as [0 0 1 1 2 2 1 0], where different numbers denote different cell types; celltype_list gives their names, e.g. ["cell A", "cell B", "cell C"] means 0=cell A, 1=cell B, 2=cell C
|——Example_sigma_pic(Matrix): draws the trend of the singular values; no return value
|——Sample(M, sample_rate): random sampling function; takes the raw matrix and a dropout rate, and returns the sampled matrix
|——Show_error_plot(): draws the relative error of the different methods; no return value
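
Example usage: a minimal sketch based only on the signatures above. The file paths are placeholders, it assumes the functions are exposed at the top level of the zimpute module, and it assumes "T" is the value that enables the filter/normalize flags:

```python
import zimpute

M = zimpute.Load_Matrix("raw_matrix.txt")   # observation matrix, no row/column names
r = zimpute.Select_r(M)                     # pick a suitable truncation value
W = zimpute.Impute(M, r=r, lamda=0.01, F_flag="T", N_flag="T")  # filter + normalize, then impute
zimpute.Save_result("imputed_matrix.txt", W)
```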
| zimpute | /zimpute-1.7.tar.gz/zimpute-1.7/README.md | README.md |
# zimpy
[](https://pypi.python.org/pypi/zimpy)
[](https://anaconda.org/conda-forge/zimpy)
**Python Boilerplate contains all the boilerplate you need to create a Python package.**
- Free software: MIT license
- Documentation: https://statenandrea33.github.io/zimpy
## Features
- TODO
## Credits
This package was created with [Cookiecutter](https://github.com/cookiecutter/cookiecutter) and the [giswqs/pypackage](https://github.com/giswqs/pypackage) project template.
| zimpy | /zimpy-0.0.1.tar.gz/zimpy-0.0.1/README.md | README.md |
import aio_pika
import pika
from aioretry import retry
from pika.adapters.blocking_connection import BlockingChannel
from zimran.events.constants import (
DEAD_LETTER_QUEUE_NAME,
DEFAULT_DEAD_LETTER_EXCHANGE_NAME,
UNROUTABLE_EXCHANGE_NAME,
UNROUTABLE_QUEUE_NAME,
)
from zimran.events.utils import retry_policy
try:
from loguru import logger
except ImportError:
import logging
logger = logging.getLogger(__name__)
class Connection:
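    """Blocking AMQP connection manager.

    Lazily (re)opens a pika connection and channel, and supports use as a context manager.
    """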
def __init__(self, *, broker_url: str, channel_number: int = 1):
self._url = broker_url
self._connection = None
self._channel = None
self._channel_number = channel_number
def __enter__(self):
self.connect()
return self
def __exit__(self, exc_type, exc_val, exc_tb): # noqa: U100
self.disconnect()
@property
def connection(self):
if self._connection is None or self._connection.is_closed:
self._connection = pika.BlockingConnection(parameters=pika.URLParameters(self._url))
logger.info('AMQP connection established')
return self._connection
@property
def channel(self) -> BlockingChannel:
if self._channel is None or self._channel.is_closed:
self._channel = self.connection.channel(channel_number=self._channel_number)
logger.info('Channel connection established')
return self._channel
def connect(self):
self._channel = self.connection.channel(channel_number=self._channel_number)
logger.info('Channel connection established')
def disconnect(self):
if self._channel is not None and self._channel.is_open:
self._channel.close()
if self._connection is not None and self._connection.is_open:
self._connection.close()
logger.info('AMQP Connection disconnected')
def _declare_unroutable_queue(self, channel: BlockingChannel):
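        # Fanout exchange plus a bound queue where unroutable messages can be collected.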
channel.exchange_declare(exchange=UNROUTABLE_EXCHANGE_NAME, exchange_type='fanout', durable=True)
channel.queue_declare(queue=UNROUTABLE_QUEUE_NAME, durable=True)
channel.queue_bind(queue=UNROUTABLE_QUEUE_NAME, exchange=UNROUTABLE_EXCHANGE_NAME, routing_key='')
def _declare_default_dead_letter_exchange(self, channel: BlockingChannel):
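        # Fanout dead-letter exchange plus a queue for messages rejected or expired downstream.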
channel.exchange_declare(exchange=DEFAULT_DEAD_LETTER_EXCHANGE_NAME, exchange_type='fanout', durable=True)
channel.queue_declare(queue=DEAD_LETTER_QUEUE_NAME, durable=True)
channel.queue_bind(queue=DEAD_LETTER_QUEUE_NAME, exchange=DEFAULT_DEAD_LETTER_EXCHANGE_NAME, routing_key='')
class AsyncConnection:
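    """Async AMQP connection manager.

    Mirrors ``Connection`` on top of aio-pika's robust connection; ``connect`` is retried per ``retry_policy``.
    """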
def __init__(self, *, broker_url: str, channel_number: int = 1):
self._url = broker_url
self._connection = None
self._channel = None
self._channel_number = channel_number
async def __aenter__(self):
await self.connect()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb): # noqa: U100
await self.disconnect()
@property
async def connection(self) -> aio_pika.abc.AbstractRobustConnection:
if self._connection is None or self._connection.is_closed:
self._connection = await aio_pika.connect_robust(url=self._url)
logger.info('AMQP connection established')
return self._connection
@property
async def channel(self):
if self._channel is None or self._channel.is_closed:
self._channel = await (await self.connection).channel(channel_number=self._channel_number)
logger.info('Channel connection established')
return self._channel
@retry(retry_policy)
async def connect(self):
self._channel = await (await self.connection).channel(channel_number=self._channel_number)
logger.info('Channel connection established')
async def disconnect(self):
if self._channel is not None and not self._channel.is_closed:
await self._channel.close()
if self._connection is not None and not self._connection.is_closed:
await self._connection.close()
logger.info('AMQP Connection disconnected')
async def _declare_unroutable_queue(self, channel: aio_pika.abc.AbstractRobustChannel):
exchange = await channel.declare_exchange(name=UNROUTABLE_EXCHANGE_NAME, type='fanout', durable=True)
queue = await channel.declare_queue(name=UNROUTABLE_QUEUE_NAME, durable=True)
await queue.bind(exchange=exchange, routing_key='')
async def _declare_default_dead_letter_exchange(self, channel: aio_pika.abc.AbstractRobustChannel):
exchange = await channel.declare_exchange(
name=DEFAULT_DEAD_LETTER_EXCHANGE_NAME,
type='fanout',
durable=True,
)
queue = await channel.declare_queue(name=DEAD_LETTER_QUEUE_NAME, durable=True)
await queue.bind(exchange=exchange, routing_key='') | zimran-events | /zimran_events-0.3.2-py3-none-any.whl/zimran/events/connection.py | connection.py |
import asyncio
import json
import aio_pika
import pika
from aioretry import retry
try:
from loguru import logger
except ImportError:
import logging
logger = logging.getLogger(__name__)
from .connection import AsyncConnection, Connection
from .dto import ChannelProperties, Exchange
from .utils import retry_policy, validate_channel_properties, validate_exchange
class Producer(Connection):
def __init__(self, *, broker_url: str, channel_number: int = 1):
super().__init__(broker_url=broker_url, channel_number=channel_number)
def publish(
self,
routing_key: str,
*,
payload: dict,
exchange: Exchange | None = None,
properties: ChannelProperties | None = None,
):
if properties is None:
properties = ChannelProperties()
else:
validate_channel_properties(properties)
basic_properties = pika.BasicProperties(**properties.as_dict(exclude_none=True))
body = json.dumps(payload, default=str)
if exchange is None:
self.channel.basic_publish(exchange='', routing_key=routing_key, body=body, properties=basic_properties)
logger.info(f'Message published to basic exchange | routing_key: {routing_key}')
return
validate_exchange(exchange)
self._declare_unroutable_queue(channel=self.channel)
self._declare_default_dead_letter_exchange(channel=self.channel)
self.channel.exchange_declare(
exchange=exchange.name,
exchange_type=exchange.type,
**exchange.as_dict(exclude=['name', 'type', 'timeout'], exclude_none=True),
)
self.channel.basic_publish(
exchange=exchange.name,
routing_key=routing_key,
body=body,
properties=basic_properties,
)
logger.info(f'Message published to {exchange.name} exchange | routing_key: {routing_key}')
class AsyncProducer(AsyncConnection):
def __init__(self, *, broker_url: str, channel_number: int = 1):
super().__init__(broker_url=broker_url, channel_number=channel_number)
@retry(retry_policy)
async def publish(
self,
routing_key: str,
*,
payload: dict,
exchange: Exchange | None = None,
properties: ChannelProperties | None = None,
):
if properties is None:
properties = ChannelProperties()
else:
validate_channel_properties(properties)
message = self._get_message(properties=properties, payload=payload)
channel = await self.channel
if exchange is None:
await channel.default_exchange.publish(message=message, routing_key=routing_key)
logger.info(f'Message published to basic exchange | routing_key: {routing_key}')
return
validate_exchange(exchange)
declared_exchange, *_ = await asyncio.gather(
channel.declare_exchange(**exchange.as_dict(exclude_none=True)),
self._declare_unroutable_queue(channel=channel),
self._declare_default_dead_letter_exchange(channel=channel),
return_exceptions=True,
)
await declared_exchange.publish(message=message, routing_key=routing_key)
logger.info(f'Message published to {exchange.name} exchange | routing_key: {routing_key}')
@staticmethod
def _get_message(properties: ChannelProperties, payload: dict):
return aio_pika.Message(body=json.dumps(payload, default=str).encode(), **properties.as_dict(exclude_none=True)) | zimran-events | /zimran_events-0.3.2-py3-none-any.whl/zimran/events/producer.py | producer.py |
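

# Usage sketch (assumed wiring; the broker URL is illustrative, and `Exchange` /
# `ChannelProperties` come from `zimran.events.dto` as imported above):
#
#   with Producer(broker_url='amqp://guest:guest@localhost/') as producer:
#       producer.publish('user.created', payload={'id': 1})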
import asyncio
from aioretry import retry
try:
from loguru import logger
except ImportError:
import logging
logger = logging.getLogger(__name__)
from zimran.events.connection import AsyncConnection, Connection
from zimran.events.constants import DEFAULT_DEAD_LETTER_EXCHANGE_NAME
from zimran.events.schemas import ExchangeScheme
from zimran.events.utils import cleanup_and_normalize_queue_name, retry_policy, validate_exchange
class ConsumerMixin:
def handle_event(self, name: str, *, exchange: ExchangeScheme | None = None):
if exchange is not None:
validate_exchange(exchange)
        def wrapper(func):
            self._event_handlers[name] = {
                'exchange': exchange,
                'handler': func,
            }
            return func

        return wrapper
def add_event_handler(
self,
name: str,
handler: callable,
*,
exchange: ExchangeScheme | None = None,
):
if exchange is not None:
validate_exchange(exchange)
self._event_handlers[name] = {
'exchange': exchange,
'handler': handler,
}
class Consumer(Connection, ConsumerMixin):
def __init__(self, *, service_name: str, broker_url: str, channel_number: int = 1, prefetch_count: int = 10):
super().__init__(broker_url=broker_url, channel_number=channel_number)
self._service_name = service_name.replace('-', '_').lower()
self._prefetch_count = prefetch_count
self._event_handlers = {}
def run(self):
try:
channel = self.channel
channel.basic_qos(prefetch_count=self._prefetch_count)
self._declare_unroutable_queue(channel)
self._declare_default_dead_letter_exchange(channel)
consumer_amount = 0
for event_name, data in self._event_handlers.items():
queue_name = cleanup_and_normalize_queue_name(f'{self._service_name}.{event_name}')
channel.queue_declare(
queue_name,
durable=True,
arguments={
'x-dead-letter-exchange': DEFAULT_DEAD_LETTER_EXCHANGE_NAME,
},
)
if exchange := data['exchange']:
channel.exchange_declare(
exchange=exchange.name,
exchange_type=exchange.type,
**exchange.as_dict(exclude=['name', 'type', 'timeout']),
)
channel.queue_bind(queue=queue_name, exchange=exchange.name, routing_key=event_name)
channel.basic_consume(queue_name, data['handler'])
logger.info(f'Registering consumer | queue: {queue_name} | routing_key: {event_name}')
consumer_amount += 1
logger.info(f'Registered {consumer_amount} consumers')
channel.start_consuming()
except Exception as exc:
            logger.error(f'Exception occurred | error: {exc} | type: {type(exc)}')
finally:
self.disconnect()
class AsyncConsumer(AsyncConnection, ConsumerMixin):
def __init__(
self,
*,
service_name: str,
broker_url: str,
channel_number: int = 1,
prefetch_count: int = 10,
):
super().__init__(broker_url=broker_url, channel_number=channel_number)
self._service_name = service_name.replace('-', '_').lower()
self._prefetch_count = prefetch_count
self._event_handlers = {}
@retry(retry_policy)
async def run(self):
try:
channel = await self.channel
await channel.set_qos(prefetch_count=self._prefetch_count)
await asyncio.gather(
self._declare_unroutable_queue(channel),
self._declare_default_dead_letter_exchange(channel),
return_exceptions=True,
)
consumer_amount = 0
for event_name, data in self._event_handlers.items():
queue_name = cleanup_and_normalize_queue_name(f'{self._service_name}.{event_name}')
queue = await channel.declare_queue(
queue_name,
durable=True,
arguments={
'x-dead-letter-exchange': DEFAULT_DEAD_LETTER_EXCHANGE_NAME,
},
)
if _exchange := data['exchange']:
exchange = await channel.declare_exchange(**_exchange.as_dict(exclude_none=True))
await queue.bind(exchange=exchange, routing_key=event_name)
await queue.consume(data['handler'])
logger.info(f'Registering consumer | queue: {queue_name} | routing_key: {event_name}')
consumer_amount += 1
logger.info(f'Registered {consumer_amount} consumers')
await asyncio.Future()
except Exception as exc:
            logger.error(f'Exception occurred | error: {exc}')
raise exc
finally:
await self.disconnect() | zimran-events | /zimran_events-0.3.2-py3-none-any.whl/zimran/events/consumer.py | consumer.py |
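

# Usage sketch (assumed wiring; the broker URL is illustrative, and the handler
# signature follows pika's on_message_callback convention):
#
#   consumer = Consumer(service_name='some-service', broker_url='amqp://guest:guest@localhost/')
#
#   @consumer.handle_event('user.created')
#   def handle_user_created(channel, method, properties, body):
#       ...
#
#   consumer.run()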
# ZIM Scan
Minimal ZIM file reader, designed for article streaming.
## Getting Started
Install using pip:
```
pip install zimscan
```
Or from the Git repository, for the latest version:
```
pip install -U git+https://gitlab.com/jojolebarjos/zimscan.git
```
Iterate over records, which are binary file-like objects:
```python
from zimscan import Reader
with Reader(open('wikipedia_en_all_nopic_2019-10.zim', 'rb')) as reader:
for record in reader:
data = record.read()
...
```
## Links
* [ZIM file format](https://openzim.org/wiki/ZIM_file_format), official documentation
* [Kiwix ZIM repository](http://download.kiwix.org/zim/), to download official ZIM files
* [Wikipedia ZIM dumps](https://dumps.wikimedia.org/other/kiwix/zim/wikipedia/), to download Wikipedia ZIM files
* [ZIMply](https://github.com/kimbauters/ZIMply), a ZIM file reader in the browser, in Python
* [libzim](https://github.com/openzim/libzim), the reference implementation, in C++
* [pyzim](https://github.com/pediapress/pyzim), Python wrapper for libzim
* [pyzim](https://framagit.org/mgautierfr/pyzim), another Python wrapper for libzim
* [Internet In A Box](https://github.com/iiab/internet-in-a-box), a project to bundle open knowledge locally
| zimscan | /zimscan-0.1.0.tar.gz/zimscan-0.1.0/README.md | README.md |
zimscraperlib
=============
[](https://github.com/openzim/python-scraperlib/actions?query=branch%3Amain)
[](https://www.codefactor.io/repository/github/openzim/python-scraperlib)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://pypi.org/project/zimscraperlib/)
[](https://codecov.io/gh/openzim/python-scraperlib)
Collection of python code to re-use across python-based scrapers
# Usage
* This library is meant to be installed via PyPI ([`zimscraperlib`](https://pypi.org/project/zimscraperlib/)).
* Make sure to reference it using a pinned version range, as the API is subject to frequent changes.
* The API is only guaranteed to remain stable within the same *minor* version.
Example usage:
``` pip
zimscraperlib>=1.1,<1.2
```
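
A minimal sketch of using the library (API names as referenced in this README
and the changelog below; exact signatures may vary between versions):

```python
import pathlib

from zimscraperlib.download import stream_file

# stream a remote file to disk
stream_file(
    url="https://example.com/favicon.png",
    fpath=pathlib.Path("favicon.png"),
)
```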
# Dependencies
* libmagic
* wget
* libzim (auto-installed, not available on Windows)
* Pillow
* FFmpeg
* gifsicle (>=1.92)
## macOS
```sh
brew install libmagic wget libtiff libjpeg webp little-cms2 ffmpeg gifsicle
```
## Linux
```sh
sudo apt install libmagic1 wget ffmpeg \
libtiff5-dev libjpeg8-dev libopenjp2-7-dev zlib1g-dev \
libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk \
libharfbuzz-dev libfribidi-dev libxcb1-dev gifsicle
```
# Contribution
```shell
pip install -r requirements.txt
pip install tox pre-commit
pre-commit install
# For tests
tox
```
# Users
Non-exhaustive list of scrapers using it (check status when updating API):
* [openzim/youtube](https://github.com/openzim/youtube)
* [openzim/nautilus](https://github.com/openzim/nautilus)
# Releasing
* Update your dependencies: `pip install -U setuptools wheel twine`
* Make sure CHANGELOG.md is up-to-date
* Bump version on `src/zimscraperlib/VERSION`
* Build packages `python ./setup.py sdist bdist_wheel`
* Upload to PyPI `twine upload dist/zimscraperlib-2.0.0*`.
* Commit your Changelog + version bump changes
* Tag version on git `git tag -a v2.0.0`
| zimscraperlib | /zimscraperlib-3.1.1.tar.gz/zimscraperlib-3.1.1/README.md | README.md |
## Changelog
All notable changes to this project are documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) (as of version 1.5.0).
## [3.1.1]
### Changed
- Fixed declared (hint) return type of `download.stream_file` #104
- Fixed declared (hint) type of `content` param for `Creator.add_item_for` #107
## [3.1.0] - 2023-05-05
### Changed
- Using pylibzim `3.1.0`
- ZIM metadata check now allows multiple values (comma-separated) for `Language`
- Using `yt_dlp` instead of `youtube_dl`
### Removed
- Dropped support for Python 3.6
## [3.0.0] - 2023-03-31
⚠️ Warning: this release introduces several API changes to `zim.creator.Creator` and `zim.filesystem.make_zim_file`
### Added
- `zim.creator.Creator.config_metadata` method (returning Self) exposing all mandatory Metadata, all standard ones and allowing extra text metadata.
- `zim.creator.Creator.config_dev_metadata` method setting stub metadata for all mandatory ones (allowing overrides)
- `zim.metadata` module with a list of per-metadata validation functions
- `zim.creator.Creator.validate_metadata` (called on `start`) to verify metadata respects the spec (and its recommendations)
- `zim.filesystem.make_zim_file` accepts a new optional `long_description` param.
- `i18n.is_valid_iso_639_3` to check ISO-639-3 codes
- `image.probing.is_valid_image` to check Image format and size
### Changed
- `zim.creator.Creator` `main_path` argument now mandatory
- `zim.creator.Creator.start` now fails on missing required or invalid metadata
- `zim.creator.Creator.add_metadata` nows enforces validation checks
- `zim.filesystem.make_zim_file` renamed its `favicon_path` param to `illustration_path`
- `zim.creator.Creator.config_indexing` `language` argument now optional when `indexing=False`
- `zim.creator.Creator.config_indexing` now validates `language` is ISO-639-3 when `indexing=True`
### Removed
- `zim.creator.Creator.update_metadata`. See `.config_metadata()` instead
- `zim.creator.Creator` `language` argument. See `.config_metadata()` instead
- `zim.creator.Creator` keyword arguments. See `.config_metadata()` instead
- `zim.creator.Creator.add_default_illustration`. See `.config_metadata()` instead
- `zim.archibe.Archive.media_counter` (deprecated in `2.0.0`)
## [2.1.0] - 2023-03-06
### Added
- `zim.creator.Creator(language=)` can be specified as `List[str]`. `["eng", "fra"]`, `["eng"]`, `"eng,fra"` and `"eng"` are all valid values.
### Changed
- Fixed `zim.providers.URLProvider` returning incomplete streams under certain circumstances (from https://github.com/openzim/kolibri/issues/40)
- Fixed `zim.creator.Creator` not supporting multiple values for Language metadata, as required by the spec
## [2.0.0] - 2022-12-06
- Using pylibzim v2.1.0 (using libzim 8.1.0)
### Added
- [libzim] `Entry.get_redirect_entry()`
- [libzim] `Item.get_indexdata()` to implement custom IndexData per entry (writer)
- [libzim] `Archive.media_count`
### Changed
- [libzim] `Archive.article_count` updated to match scraperlib's version
- `Archive.article_counter` now deprecated. Now returns `Archive.article_count`
- `Archive.media_counter` now deprecated. Now returns `Archive.media_count`
### Removed
- [libzim] `lzma` compression algorithm
## [1.8.0] - 2022-08-05
### Added
- `download.get_session()` to build a new requests Session
### Changed
- `download.stream_file()` accepts a `session` param to use instead of creating one
## [1.7.0] - 2022-08-02
### Added
- `zim.Creator` now supports an `ignore_duplicates: bool` parameter to
  prevent duplicates from raising exceptions
- `zim.Creator.add_item`, `zim.Creator.add_redirect` and `zim.Creator.add_item_for`
  now support a `duplicate_ok: bool` parameter to prevent an exception
  should this item/redirect be a duplicate
## [1.6.3] - 2022-08-02
### Added
- `download.stream_file()` supports passing `headers` (scrapers were already using it)
## [1.6.2] - 2022-07-29
### Changed
- Fixed `filesystem.get_content_mimetype()` crashing on non-guessable byte stream
## [1.6.1] - 2022-07-26
### Changed
- Wider range of accepted lxml dependency version as 4.9.1 fixes a security issue
## [1.6.0] - 2022-05-23
### Added
- `Archive.get_metadata_item()` to retrieve full item instead of just value
### Changed
- Using pylibzim v1.1.0 (using libzim 7.2.1)
- Adding duplicate entries now raises RuntimeError
- filesize is fixed for larger ZIMs
## [1.5.0] - 2022-05-09
### Added
- `zim.Archive.tags` and `zim.Archive.get_tags()` to retrieve parsed Tags
  with optional `libkiwix` param to include libkiwix's hints
- [tests] Counter tests now also uses a libzim6 file.
### Changed
- `zim.Archive.article_counter` follows libkiwix's new behavior of
  returning libzim's `article_count` for libzim 7+ ZIMs and
  returning the previously returned (parsed) value for older ZIMs.
### Removed
- Unreachable code removed in `imaging` module.
- [tests] “Sanskrit” removed from tests as output is not predictable depending on platform.
## [1.4.3]
* `zim.Archive.counters` won't fail on missing `Counter` metadata
## [1.4.2]
* Fixed leak in `zim.Archive`'s `.counters`
* New `.get_text_metadata()` method on `zim.Archive` to save UTF-8 decoding
## [1.4.1]
* New `Counter` metadata based properties for Archive:
  * `.counters`: parsed dict of the Counter metadata
  * `.article_counter`: libkiwix's calculation for number of articles
  * `.media_counter`: libkiwix's calculation for number of media
* Fixed `i18n.find_language_names()` failing on some languages
* Added `uri` module with `rebuild_uri()`
## [1.4.0]
* Using new python-libzim based on libzim v7
* New Creator API
* Removed all namespace references
* Renamed `url` mentions to `path`
* Removed all links rewriting
  * Removed Article/CSS/Binary segregation
* Kept zimwriterfs mode (except it doesn't rewrite for namespaces)
* New `html` module for HTML document manipulations
* New callback system on `add_item_for()` and `add_item()`
* New Archive API with easier search/suggestions and content access
* Changed download log level to DEBUG (was INFO)
* `filesystem.get_file_mimetype` now passes bytes to libmagic instead of filename due to release issue in libmagic
* safer `inputs.handle_user_provided_file` regarding input as str instead of Path
* `image.presets` and `video.presets` now all includes `ext` and `mimetype` properties
* Video convert log now DEBUG instead of INFO
* Fixed `image.save_image()` saving to disk even when using a bytes stream
* Fixed `image.transformation.resize_image()` when resizing a byte stream without a dst
## [1.3.6 (internal)]
Intermediate release using unreleased libzim to support development of libzim7.
Don't use it.
* requesting newer libzim version (not released ATM)
* New ZIM API for non-namespace libzim (v7)
* updated all requirements
* Fixed download test inconsistency
* fix_ogvjs mostly useless: only allows webm types
* exposing retry_adapter for refactoring
* Changed download log level to DEBUG (was INFO)
* guess more-defined mime from filename if magic says it's text
* get_file_mimetype now passes bytes to libmagic
* safer regarding input as str instead of Path
* fixed static item for empty content
* ext and mimetype properties for all presets
* Video convert log now DEBUG instead of INFO
* Added delete_fpath to add_item_for() and fixed StaticItem's auto remove
* Updated badges for new repo name
## [1.3.5]
* add `stream_file()` to stream content from a URL into a file or a `BytesIO` object
* deprecated `save_file()`
* fixed `add_binary` when used without an fpath (#69)
* deprecated `make_grayscale` option in image optimization
* Added support for in-memory optimization for PNG, JPEG, and WebP images
* allows enabling debug logs via ZIMSCRAPERLIB_DEBUG environ
## [1.3.4]
* added `wait` option in `YoutubeDownloader` to allow parallelism while using context manager
* do not use extension for finding format in `ensure_matches()` in `image.optimization` module
* added `VideoWebmHigh` and `VideoMp4High` presets for high quality WebM and Mp4 conversion respectively
* updated presets `WebpHigh`, `JpegMedium`, `JpegLow` and `PngMedium` in `image.presets`
* `save_image` moved from `image` to `image.utils`
* added `convert_image` `optimize_image` `resize_image` functions to `image` module
## [1.3.3]
* added `YoutubeDownloader` to `download` to download YT videos using a capped nb of threads
## [1.3.2]
* fixed rewriting of links with empty target
* added support for image optimization using `zimscraperlib.image.optimization` for webp, gif, jpeg and png formats
* added `format_for()` in `zimscraperlib.image.probing` to get PIL image format from the suffix
## [1.3.1]
* replaced BeautifulSoup parser in rewriting (`html.parser` –> `lxml`)
## [1.3.0]
* detect mimetypes from filenames for all text files
* fixed non-filename based StaticArticle
* enable rewriting of links in poster attribute of audio element
* added find_language_in() and find_language_in_file() to get language from HTML content and HTML file respectively
* add a mime mapping to deal with inconsistencies in mimetypes detected by magic on different platforms
* convert_image signature changed:
  * `target_format` positional argument removed. Replaced with optional `fmt` key of keyword arguments.
  * `colorspace` optional positional argument removed. Replaced with optional `colorspace` key of keyword arguments.
* prevent rewriting of links with special schemes `mailto`, 'tel', etc. in HTML links rewriting
* replaced `imaging` module with exploded `image` module (`convertion`, `probing`, `transformation`)
* changed `create_favicon()` param names (`source_image` -> `src`, `dest_ico` -> `dst`)
* changed `save_image()` param names (`image` -> `src`)
* changed `get_colors()` param names (`image_path` -> `src`)
* changed `resize_image()` param names (`fpath` -> `src`)
## [1.2.1]
* fixed URL rewriting when running from /
* added support for link rewriting in `<object>` element
* prevent raising an error if an element doesn't have the attribute with the url
* use non greedy match for CSS URL links (shortest string matching `url()` format)
* fix namespace of target only if link doesn't have a netloc
## [1.2.0]
* added UTF8 to constants
* added mime_type discovery via magic (filesystem)
* Added types: mime types guessing from file names
* Revamped zim API
  * Removed ZimInfo, whose role was to hold metadata for the zimwriterfs call
* Removed calling zimwriterfs binary but kept function name
* Added zim.filesystem: zimwriterfs-like creation from a build folder
* Added zim.creator: create files by manually adding each article
* Added zim.rewriting: tools to rewrite links/urls in HTML/CSS
* add timeout and retries to save_file() and make it return headers
## [1.1.2]
* fixed `convert_image()` which tried to use a closed file
## [1.1.1]
* exposed reencode, Config and get_media_info in zimscraperlib.video
* added save_image() and convert_image() in zimscraperlib.imaging
* added support for upscaling in resize_image() via allow_upscaling
* resize_image() now supports params given by user and preserves image colorspace
* fixed tests for zimscraperlib.imaging
## [1.1.0]
* added video module with reencode, presets, config builder and video file probing
* `make_zim_file()` accepts extra kwargs for zimwriterfs
## [1.0.6]
* added translation support to i18n
## [1.0.5]
* added s3transfer to verbose dependencies list
* changed default log format to include module name
## [1.0.4]
* verbose dependencies (urllib3, boto3) now logged at WARNING level by default
* ability to set verbose dependencies log level and add modules to the list
* zimscraperlib's logging level now aligned with scraper's requested one
## [1.0.3]
* fix_ogvjs_dist script more generic (#1)
* updated zim to support other zimwriterfs params (#10)
* more flexible requirements for requests dependency
## [1.0.2]
* fixed return value of `get_language_details` on non-existent language
* fixed crash on `resize_image` with method `height`
* fixed root logger level (now DEBUG)
* removed useless `console=True` `getLogger` param
* completed tests (100% coverage)
* added `./test` script for quick local testing
* improved tox.ini
* added `create_favicon` to generate a squared favicon
* added `handle_user_provided_file` to handle user file/URL from param
## [1.0.1]
* fixed fix_ogvjs_dist
## [1.0.0]
* initial version providing
* download: save_file, save_large_file
* fix_ogvjs_dist
* i18n: setlocale, get_language_details
* imaging: get_colors, resize_image, is_hex_color
* zim: ZimInfo, make_zim_file
| zimscraperlib | /zimscraperlib-3.1.1.tar.gz/zimscraperlib-3.1.1/CHANGELOG.md | CHANGELOG.md |
# Zimuzu
[![Latest Version][1]][2]
A command line tool for doing the signing work for zimuzu(<http://www.zimuzu.tv/>).
## Install
$ (sudo) pip install -U zimuzu
## Usage
Create a new JSON configuration file named `zimuzu_config.json` under
`/usr/local/bin`:
{
"account": "Your username",
"password": "Your password"
}
do the sign:
$ zimuzu sign
## Contribute
Contributions are always welcome! :sparkles: :cake: :sparkles: Please file an
issue with detailed information if you run into problems. Feel free to send me
a pull request, I'll be happy to review and merge it!
## Details
### Login:
* url: http://www.zimuzu.tv/User/Login/ajaxLogin
* method: post
* post data:
- account
- password
- remember: 0 no; 1 yes
- url_back: http://www.zimuzu.tv/user/sign
* response(json):
{
"status":1, # 1 means ok.
"info":"\u767b\u5f55\u6210\u529f\uff01",
"data":{"url_back":"http:\/\/www.zimuzu.tv\/user\/sign"}
}
### Visit sign page
* url: http://www.zimuzu.tv/user/sign
* method: get
This step is essential, or you'll get a 4002 status when you do the sign next.
### Do sign:
* url: http://www.zimuzu.tv/user/sign/dosign
* method: get
* response(json):
{
"status":1, # 1 means ok.
"info":"",
"data":1 # 1 means you keep signing for 1 days.
}
## License
MIT.
[1]: http://img.shields.io/pypi/v/zimuzu.svg
[2]: https://pypi.python.org/pypi/zimuzu | zimuzu | /zimuzu-0.1.0.tar.gz/zimuzu-0.1.0/README.md | README.md |
# Zin
zin is a lightweight command management tool that simplifies scripting and task automation. It provides a simple interface for defining and executing custom commands, helping you streamline your workflows and boost productivity.
## Features ✨
- Command Definition: Easily define custom commands using a YAML configuration file.
- Command Execution: Use a straightforward command-line interface to carry out the commands you've specified.
- Modeled after [Rav](https://github.com/jmitchel3/rav): zin takes inspiration from the flexibility and ease of use of the command-line program [Rav](https://github.com/jmitchel3/rav).
## Usage
To install `zin`, you can use pip:
```bash
pip install zin
```
## Configuration
Make sure that 'zin.yaml' is present in the root working directory of your project before using 'zin'. But running 'zin' for the first time will automatically create the file if it doesn't already exist.
The `zin.yaml` file follows a simple and easy-to-use format. Start with the `scripts:` section, followed by the command names and their corresponding commands. You can include multiple commands for a single command name using YAML collections.
Example `zin.yaml` configuration:<br>
(this is just an example for demonstration purposes.)
```yaml
scripts:
tests: python -m pytest --cov=my_module tests/
build:
- rm -rf dist/
- venv/bin/python3 -m build
- venv/bin/pip uninstall rav
- venv/bin/pip install -e .
```
Feel free to add more commands as needed.
**To run a command** using `zin`, you can use the following syntax:
```bash
zin run <command_name>
```
**To list all the commands** using `zin`, you can use the following syntax:
```bash
zin list
```
## Contributing 🤝
Contributions are encouraged! Please open an issue or submit a pull request if you want to share thoughts, ideas, or problem reports. Be sure to stick to the guidelines stated in the [CONTRIBUTING.md](CONTRIBUTING.md) file.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more information.
---
**Note:** zin is highly inspired by [Rav](https://github.com/jmitchel3/rav), a command-line tool developed by [Justin Mitchel](https://github.com/jmitchel3).
**Happy coding with zin! 😁**
---
**Project Renamed: pep is now zin. We apologize for any confusion caused.**
| zin | /zin-1.0.1.tar.gz/zin-1.0.1/README.md | README.md |
from __future__ import annotations
import dataclasses
import datetime
import decimal
import re
import uuid
from collections.abc import Callable
from typing import Any, Generic, Literal, NewType, TypeVar, Union, cast
FieldName = NewType("FieldName", str)
ObjectName = NewType("ObjectName", str)
@dataclasses.dataclass(frozen=True)
class SerDeError:
locator: FieldName | ObjectName
problem: Problem
Problem = Union[str, frozenset[SerDeError]]
T = TypeVar("T")
V = TypeVar("V")
E = TypeVar("E")
@dataclasses.dataclass(frozen=True)
class Ok(Generic[T]):
value: T
type: Literal["ok"] = "ok"
@dataclasses.dataclass(frozen=True)
class Error(Generic[E]):
error: E
type: Literal["error"] = "error"
Result = Union[Ok[T], Error[E]]
def map_result(result: Result[T, E], mapper: Callable[[T], V]) -> Result[V, E]:
if result.type == "ok":
return Ok(mapper(result.value))
return cast(Result[V, E], result)
def get_field(
data: dict[str, Any],
field: str,
delegate: Callable[[Any], Result[T, Problem]],
*,
nullable: bool = False,
) -> Result[T, SerDeError]:
raw_value = data.get(field, None)
if not nullable and raw_value is None:
return Error(SerDeError(locator=FieldName(field), problem="Missing required field"))
result = delegate(raw_value)
if result.type == "ok":
return cast(Result[T, SerDeError], result)
return Error(SerDeError(FieldName(field), problem=result.error))
def get_optional_field(
data: dict[str, Any], field: str, delegate: Callable[[Any], Result[T, Problem]]
) -> Result[T | None, SerDeError]:
raw_value = data.get(field, None)
if raw_value is None:
return Ok(None)
result = delegate(raw_value)
if result.type == "ok":
return cast(Result[Union[T, None], SerDeError], result)
return Error(SerDeError(FieldName(field), problem=result.error))
def get_enum(
raw_value: Any, enum_factory: Callable[..., Result[T, SerDeError]]
) -> Result[T, Problem]:
if not isinstance(raw_value, str):
return _type_mismatch_error("string", raw_value)
result = enum_factory(raw_value)
if result.type == "ok":
return cast(Result[T, Problem], result)
return Error(frozenset((result.error,)))
def get_object(
raw_value: Any,
object_factory: Callable[..., Result[T, SerDeError]],
*,
ignore_unknown_properties: bool,
) -> Result[T, Problem]:
if not isinstance(raw_value, dict):
return _type_mismatch_error("object", raw_value)
result = object_factory(raw_value, ignore_unknown_properties=ignore_unknown_properties)
if result.type == "ok":
return cast(Result[T, Problem], result)
return Error(frozenset((result.error,)))
def _as_int32(value: int) -> Result[int, Problem]:
if -(2**31) <= value <= (2**31 - 1):
return Ok(value)
return Error(f"{value} must be in int32 range [{-2**31}, {2**31 - 1}]")
def _as_int(value: Any) -> Result[int, Problem]:
if isinstance(value, int):
return _as_int32(value)
else:
if isinstance(value, str) and re.fullmatch("-?\\d+", value):
try:
return _as_int32(int(value))
except Exception:
pass
return _type_mismatch_error("int", value)
def get_int(
raw_value: Any,
*,
allow_number: bool = False,
inclusive_min: int | None = None,
inclusive_max: int | None = None,
exclusive_min: int | None = None,
exclusive_max: int | None = None,
) -> Result[int, Problem]:
if allow_number:
float_value = get_number(raw_value)
if float_value.type == "ok":
raw_value = int(float_value.value)
int_result = _as_int(raw_value)
if int_result.type == "error":
return int_result
int_value = int_result.value
if inclusive_min is not None and int_value < inclusive_min:
return Error(f"{int_value} must be at least {inclusive_min} inclusive")
elif exclusive_min is not None and int_value <= exclusive_min:
return Error(f"{int_value} must be at least {exclusive_min} exclusive")
if inclusive_max is not None and int_value > inclusive_max:
return Error(f"{int_value} must be at most {inclusive_max} inclusive")
elif exclusive_max is not None and int_value >= exclusive_max:
return Error(f"{int_value} must be at most {exclusive_max} exclusive")
return Ok(int_value)
# TODO(forozco): reconcile decimal
def get_number(
value: Any,
*,
inclusive_min: float | None = None,
inclusive_max: float | None = None,
exclusive_min: float | None = None,
exclusive_max: float | None = None,
) -> Result[float, Problem]:
if isinstance(value, float):
float_value = value
else:
try:
float_value = float(value)
        except Exception:
return _type_mismatch_error("float", value)
if inclusive_min is not None and float_value < inclusive_min:
return Error(f"{float_value} must be at least {inclusive_min} inclusive")
elif exclusive_min is not None and float_value <= exclusive_min:
return Error(f"{float_value} must be at least {exclusive_min} exclusive")
if inclusive_max is not None and float_value > inclusive_max:
return Error(f"{float_value} must be at most {inclusive_max} inclusive")
elif exclusive_max is not None and float_value >= exclusive_max:
return Error(f"{float_value} must be at most {exclusive_max} exclusive")
return Ok(float_value)
def get_string(value: Any) -> Result[str, Problem]:
if isinstance(value, str):
return Ok(value)
elif isinstance(value, int) or isinstance(value, float):
return Ok(str(value))
return _type_mismatch_error("string", value)
def get_uuid(value: Any) -> Result[uuid.UUID, Problem]:
if isinstance(value, str):
try:
return Ok(uuid.UUID(value))
        except Exception:
return Error(f"{value} is not a valid UUID")
return _type_mismatch_error("uuid", value)
def get_date(value: Any) -> Result[datetime.date, Problem]:
if isinstance(value, str):
try:
return Ok(datetime.date.fromisoformat(value))
        except Exception:
return Error(f"{value} is not a valid date (dates must be in format yyyy-mm-dd)")
return _type_mismatch_error("date", value)
def get_datetime(value: Any) -> Result[datetime.datetime, Problem]:
if isinstance(value, str):
try:
return Ok(datetime.datetime.fromisoformat(value))
        except Exception:
return Error(
f"{value} is not a valid datetime (dates must be in format yyyy-mm-dd'T'HH:mm:ss.sss)"
)
return _type_mismatch_error("datetime", value)
def get_boolean(value: Any) -> Result[bool, Problem]:
if isinstance(value, bool):
return Ok(value)
return _type_mismatch_error("boolean", value)
def check_string_literal(value: Any, *, literal: str) -> Result[None, Problem]:
    result = get_string(value)
    if result.type == "ok":
        if not result.value == literal:
            return Error(f'Expected string literal "{literal}" but found "{result.value}"')
    return cast(Result[None, Problem], result)
def get_list(
raw_value: Any | None,
*,
element_deser: Callable[[Any], Result[T, Problem]],
allow_single_value: bool = False,
min_count: int | None = None,
max_count: int | None = None,
) -> Result[list[T], Problem]:
elements: list[T] | None = None
if raw_value is None:
elements = []
elif isinstance(raw_value, list):
elements = []
errors: set[SerDeError] = set()
for i, element in enumerate(raw_value):
element_value = element_deser(element)
if element_value.type == "error":
errors.add(
SerDeError(locator=FieldName(f"Element {i}"), problem=element_value.error)
)
else:
elements.append(element_value.value)
if len(errors) > 0:
return Error(frozenset(errors))
elif allow_single_value:
single_element = element_deser(raw_value)
if single_element.type == "ok":
elements = [single_element.value]
else:
return Error(single_element.error)
if elements is None:
return _type_mismatch_error("list", raw_value)
if min_count is not None and len(elements) < min_count:
return Error(
f"Must contain at least {min_count} item{_pluralize(min_count)} (found {len(elements)} item{_pluralize(len(elements))})"
)
if max_count is not None and len(elements) > max_count:
return Error(
f"May contain at most {max_count} item{_pluralize(max_count)} (found {len(elements)} item{_pluralize(len(elements))})"
)
return Ok(elements)
def _pluralize(value: int | float) -> str:
if value == 1:
return ""
return "s"
def get_dict(
raw_value: Any | None,
*,
value_deser: Callable[[Any], Result[T, Problem]],
min_count: int | None = None,
max_count: int | None = None,
) -> Result[dict[str, T], Problem]:
if raw_value is None:
return Ok(dict())
if isinstance(raw_value, dict):
elements: dict[str, T] = {}
errors: set[SerDeError] = set()
for key, value in raw_value.items():
parsed_value = value_deser(value)
if parsed_value.type == "error":
errors.add(SerDeError(locator=FieldName(f"Key {key}"), problem=parsed_value.error))
else:
elements[key] = parsed_value.value
if len(errors) > 0:
return Error(frozenset(errors))
if min_count is not None and len(elements) < min_count:
return Error(f"Dict with size {len(elements)} must have at least {min_count} elements")
if max_count is not None and len(elements) > max_count:
return Error(f"Dict with size {len(elements)} must have at most {max_count} elements")
return Ok(elements)
return _type_mismatch_error("dict", raw_value)
def normalize_error_details(serde_error: SerDeError) -> list[str]:
if isinstance(serde_error.problem, str):
return [serde_error.locator + ": " + serde_error.problem]
result = []
for problem in serde_error.problem:
for inner_problem in normalize_error_details(problem):
result.append(serde_error.locator + "." + inner_problem)
return result
def _type_mismatch_error(target_type: str, value: Any) -> Error[Problem]:
return Error(
f"Unable to interpret value of type '{_to_human_readable_type(value)}' as a '{target_type}'"
)
def _to_human_readable_type(value: Any) -> str:
if isinstance(value, str):
return "string"
if isinstance(value, int):
return "int"
elif isinstance(value, float) or isinstance(value, decimal.Decimal):
return "float"
elif isinstance(value, bool):
return "boolean"
elif isinstance(value, list):
return "list"
elif isinstance(value, dict):
return "object"
return str(type(value)) | zinc-api-runtime | /zinc_api_runtime-0.0.3-py3-none-any.whl/zinc/runtime/codegen_runtime.py | codegen_runtime.py |
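

# Example (sketch) of how generated code composes the helpers above; the field
# names here are hypothetical:
#
#   name = get_field(data, "name", get_string)
#   count = get_optional_field(data, "count", lambda v: get_int(v, inclusive_min=0))
#
# Each call yields an Ok or an Error carrying a SerDeError, so callers can
# collect every problem instead of failing on the first one.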
from __future__ import annotations
import dataclasses
import os
import string
import textwrap
from typing import Literal, Union
import black
import inflection
import isort.settings
from zinc.api import codegen_api as api
@dataclasses.dataclass(frozen=True)
class ModuleImport:
module: str
type: Literal["module"] = "module"
@dataclasses.dataclass(frozen=True)
class FromImport:
module: str
field: str
type: Literal["from"] = "from"
Import = Union[ModuleImport, FromImport]
def get_isort_config() -> isort.Config:
return isort.settings.Config(profile="black", line_length=100)
def generate_objects(specs: list[api.ClassSpec]) -> str:
imports: set[Import] = set()
after_import_output = ""
specs_by_name = {spec.name: spec for spec in specs}
# Need to put alias and union types at the end of the file since they may reference other types in the global scope
for spec in sorted(
specs, key=lambda spec: (spec.type == "union" or spec.type == "alias", spec.name)
):
if spec.type == "object":
output, more_imports = generate_complex_object(spec, specs_by_name)
elif spec.type == "enum":
output, more_imports = generate_enum(spec)
elif spec.type == "union":
output, more_imports = generate_union(spec)
elif spec.type == "alias":
output, more_imports = generate_alias(spec)
else:
raise Exception(f"Unexpected class spec type: {spec.type}")
imports.update(more_imports)
after_import_output += output + "\n\n"
output = "# Generated by codegen.py\n"
output += "from __future__ import annotations\n\n"
output += "from zinc.runtime import codegen_runtime as cr\n\n"
for import_ in sorted(imports, key=lambda key: (key.type == "module", key.module)):
if import_.type == "module":
output += f"import {import_.module}\n"
else:
output += f"from {import_.module} import {import_.field}\n"
output += "\n\n"
output += after_import_output
return isort.api.sort_code_string(
black.format_str(output, mode=black.Mode(line_length=100)), config=get_isort_config()
)
def generate_complex_object(
spec: api.ObjectSpec, specs_by_name: dict[api.ObjectTypeName, api.ClassSpec]
) -> tuple[str, set[Import]]:
imports: set[Import] = set()
output = fix_indent(
f"""\
@dataclasses.dataclass(frozen=True)
class {spec.name}:
""",
0,
)
output += _description_as_docstring(spec.description, True, 4)
imports.add(ModuleImport("dataclasses"))
sorted_fields = sorted(
spec.fields.items(), key=lambda entry: compare_field_spec(entry[0], entry[1])
)
for field_name, field_spec in sorted_fields:
# TODO(markelliot): we don't allow lists and dicts to contain optionals,
# but we should probably enforce that somewhere reasonable rather than
# here in the type system.
if field_spec.field_type == "literal":
imports.add(FromImport("typing", "Literal"))
output += fix_indent(
f"""{to_python(field_name)}: Literal["{field_spec.value}"] = "{field_spec.value}"\n""",
4,
)
else:
real_type, more_imports = _as_real_type(field_spec)
imports.update(more_imports)
output += fix_indent(
f"""{to_python(field_name)}: {real_type}{"" if field_spec.required else " | None"}{generate_field_initializer(field_spec)}\n""",
4,
)
output += _description_as_docstring(field_spec.description, False, 4)
output += "\n"
output += fix_indent(
f"""\
def to_dict(self) -> dict[str, Any]:
result: dict[str, Any] = {"{"}{"}"}
""",
4,
)
imports.add(FromImport("typing", "Any"))
for field_name, field_spec in sorted_fields:
if field_spec.required:
output += fix_indent(
f"""\
result["{field_name}"] = {generate_field_serializer("self." + to_python(field_name), field_spec, specs_by_name)}
""",
8,
)
else:
output += fix_indent(
f"""\
if self.{to_python(field_name)} is not None:
result["{field_name}"] = {generate_field_serializer("self." + to_python(field_name), field_spec, specs_by_name)}
""",
8,
)
output += fix_indent("return result\n", 8)
output += fix_indent(
f"""\
@classmethod
def from_dict(
cls, data: dict[str, Any], *, ignore_unknown_properties: bool = False
) -> cr.Result[{spec.name}, cr.SerDeError]:
errors = set()
""",
4,
)
for field_name, field_spec in sorted_fields:
output += fix_indent(
f"""
__{to_python(field_name)} = {generate_field_deserializer(field_name, field_spec, specs_by_name).lstrip()}
if __{to_python(field_name)}.type == "error":
errors.add(__{to_python(field_name)}.error)
""",
8,
)
props_list = ",".join(['"' + k + '"' for k, v in sorted_fields])
assignments = ",".join(
[
f"{to_python(k)}=cast(cr.Ok, __{to_python(k)}).value"
for k, v in sorted_fields
if v.field_type != "literal"
]
)
imports.add(FromImport("typing", "cast"))
output += fix_indent(
f"""
if len(errors) > 0:
return cr.Error(cr.SerDeError(locator=cr.ObjectName("{spec.name}"), problem=frozenset(errors)))
if not ignore_unknown_properties:
unknown_props = data.keys() - {"{"}
{props_list}
{"}"}
if len(unknown_props) > 0:
return cr.Error(cr.SerDeError(
locator=cr.ObjectName("{spec.name}"),
problem=f"Unexpected extra properties {"{"}unknown_props{"}"}",
))
return cr.Ok(
{spec.name}(
{assignments}
))
""",
8,
)
return output, imports
def generate_enum(spec: api.EnumSpec) -> tuple[str, set[Import]]:
imports: set[Import] = set()
output = fix_indent(
f"""\
class {spec.name}(enum.Enum):
""",
0,
)
output += _description_as_docstring(spec.description, True, 4)
imports.add(ModuleImport("enum"))
for value in spec.values:
output += fix_indent(
f"""\
{to_python_identifier(value)} = "{value}"
""",
4,
)
output += "\n"
output += fix_indent(
f"""\
@classmethod
def from_str(cls, data: str) -> cr.Result[{spec.name}, cr.SerDeError]:
normalized_data = data.upper()
for variant in {spec.name}.__members__.values():
if variant.value.upper() == normalized_data:
return cr.Ok(variant)
return cr.Error(cr.SerDeError(
locator=cr.ObjectName("{spec.name}"),
problem=f"Unexpected value {"{"}data{"}"}. Value must be one of '{"{"}{spec.name}.__members__.values(){"}"}",
))
""",
4,
)
return output, imports
def generate_union(spec: api.UnionSpec) -> tuple[str, set[Import]]:
imports: set[Import] = set()
output = fix_indent(
f"""\
{spec.name}Type = Union[{",".join(sorted(spec.union.values()))}]
@dataclasses.dataclass(frozen=True)
class {spec.name}:
value: {spec.name}Type
""",
0,
)
imports.add(ModuleImport("dataclasses"))
imports.add(FromImport("typing", "Union"))
imports.add(FromImport("typing", "Any"))
output += fix_indent(
"""\
def to_dict(self) -> dict[str, Any]:
return self.value.to_dict()
""",
4,
)
output += fix_indent(
f"""
@classmethod
def from_dict(
cls, data: dict[str, Any], *, ignore_unknown_properties: bool = False
) -> cr.Result[{spec.name}, cr.SerDeError]:
__{spec.discriminator_field} = cr.get_field(data, "{spec.discriminator_field}", cr.get_string)
if __{spec.discriminator_field}.type == "error":
return cr.Error(cr.SerDeError(locator=cr.ObjectName("{spec.name}"), problem=frozenset((__{spec.discriminator_field}.error,))))
""",
4,
)
for key, variant in sorted(spec.union.items(), key=lambda entry: entry[0]):
output += fix_indent(
f"""\
elif __{spec.discriminator_field}.value == "{key}":
return cr.map_result({variant}.from_dict(data, ignore_unknown_properties=ignore_unknown_properties), {spec.name})
""",
8,
)
escape = '"""'
output += fix_indent(
f"""\
else:
return cr.Error(
cr.SerDeError(
locator=cr.ObjectName("{spec.name}"),
problem=f{escape}Unexpected value "{"{"}__{spec.discriminator_field}.value{"}"}" for "type\\"{escape}
)
)
""",
8,
)
return output, imports
def generate_alias(alias: api.AliasSpec) -> tuple[str, set[Import]]:
type_name, imports = _as_real_type(alias.field)
output = f"""{alias.name} = NewType("{alias.name}", {type_name})"""
imports.add(FromImport("typing", "NewType"))
# TODO: how do we do docstrings on root fields?
# output += _description_as_docstring(alias.description, True, 0)
return output, imports
def generate_field_initializer(field_spec: api.FieldSpec) -> str:
if field_spec.field_type == "list":
return " = dataclasses.field(default_factory=lambda: [])"
elif field_spec.field_type == "dict":
return " = dataclasses.field(default_factory=lambda: {})"
elif not field_spec.required:
return " = None"
return ""
def generate_field_serializer(
locator: str,
field_spec: api.FieldSpec,
specs_by_name: dict[api.ObjectTypeName, api.ClassSpec],
) -> str:
if field_spec.field_type == "object":
object_spec = specs_by_name[field_spec.object_type]
if object_spec.type == "object" or object_spec.type == "union":
return f"{locator}.to_dict()"
elif object_spec.type == "enum":
return f"{locator}.value"
elif object_spec.type == "alias":
return generate_field_serializer(locator, object_spec.field, specs_by_name)
elif field_spec.field_type == "list":
return f"""[{generate_field_serializer("value", field_spec.element_type, specs_by_name)} for value in {locator}]"""
elif field_spec.field_type == "dict":
return f"""{"{"}key: {generate_field_serializer("value", field_spec.value_type, specs_by_name)} for key, value in {locator}.items(){"}"}"""
elif field_spec.field_type == "uuid":
return f"str({locator})"
elif field_spec.field_type == "date" or field_spec.field_type == "datetime":
return f"{locator}.isoformat()"
return locator
def generate_field_deserializer(
field_name: str,
field_spec: api.FieldSpec,
specs_by_name: dict[api.ObjectTypeName, api.ClassSpec],
) -> str:
if field_spec.required:
output = f"""cr.get_field(data, "{field_name}", """
if field_spec.field_type == "list" or field_spec.field_type == "dict":
output += "nullable=True, "
else:
output = f"""cr.get_optional_field(data, "{field_name}", """
return output + "delegate=" + generate_value_deserializer(0, field_spec, specs_by_name) + ")"
def generate_value_deserializer(
depth: int,
field_spec: api.FieldSpec,
specs_by_name: dict[api.ObjectTypeName, api.ClassSpec],
) -> str:
cur_var = f"var{depth}"
# TODO(forozco): use arg list instead of string concatenation
if field_spec.field_type == "object":
object_spec = specs_by_name[field_spec.object_type]
if object_spec.type == "object" or object_spec.type == "union":
return f"lambda {cur_var}: cr.get_object({cur_var}, object_factory={field_spec.object_type}.from_dict, ignore_unknown_properties=ignore_unknown_properties)"
elif object_spec.type == "enum":
return f"lambda {cur_var}: cr.get_enum({cur_var}, enum_factory={field_spec.object_type}.from_str)"
elif object_spec.type == "alias":
return generate_value_deserializer(depth, object_spec.field, specs_by_name)
elif field_spec.field_type == "list":
args = ""
if field_spec.min_count is not None:
args += f", min_count={field_spec.min_count}"
if field_spec.max_count is not None:
args += f", max_count={field_spec.max_count}"
if field_spec.allow_single_value_as_array:
args += ", allow_single_value=True"
return f"lambda {cur_var}: cr.get_list({cur_var}{args}, element_deser={generate_value_deserializer(depth + 1, field_spec.element_type, specs_by_name)})"
elif field_spec.field_type == "dict":
args = ""
if field_spec.min_count is not None:
args += f", min_count={field_spec.min_count}"
if field_spec.max_count is not None:
args += f", max_count={field_spec.max_count}"
return f"lambda {cur_var}: cr.get_dict({cur_var}{args}, value_deser={generate_value_deserializer(depth + 1, field_spec.value_type, specs_by_name)})"
elif field_spec.field_type == "literal":
return f"""lambda {cur_var}: cr.check_string_literal({cur_var}, literal="{field_spec.value}")"""
elif field_spec.field_type == "int" or field_spec.field_type == "number":
args = ""
if field_spec.minimum is not None:
if field_spec.exclusive_min:
args += f", exclusive_min={field_spec.minimum}"
else:
args += f", inclusive_min={field_spec.minimum}"
if field_spec.maximum is not None:
if field_spec.exclusive_max:
args += f", exclusive_max={field_spec.maximum}"
else:
args += f", inclusive_max={field_spec.maximum}"
if field_spec.field_type == "int" and field_spec.allow_number:
args += ", allow_number=True"
if args:
return f"lambda {cur_var}: cr.get_{field_spec.field_type}({cur_var}{args})"
return f"cr.get_{field_spec.field_type}"
def _as_real_type(v: api.FieldSpec) -> tuple[str, set[Import]]:
if v.field_type == "int":
return "int", set()
elif v.field_type == "number":
return "decimal.Decimal", {ModuleImport("decimal")}
elif v.field_type == "datetime":
return "datetime.datetime", {ModuleImport("datetime")}
elif v.field_type == "date":
return "datetime.date", {ModuleImport("datetime")}
elif v.field_type == "string":
return "str", set()
elif v.field_type == "uuid":
return "uuid.UUID", {ModuleImport("uuid")}
elif v.field_type == "boolean":
return "bool", set()
elif v.field_type == "list":
inner, imports = _as_real_type(v.element_type)
return f"list[{inner}]", imports
elif v.field_type == "dict":
inner, imports = _as_real_type(v.value_type)
return f"dict[str, {inner}]", imports
elif v.field_type == "object":
return v.object_type, set()
raise Exception("Should never happen")
def to_python(field_name: str) -> str:
return inflection.underscore(field_name)
def to_python_identifier(value: str) -> str:
for special_char in string.punctuation + string.whitespace:
value = value.replace(special_char, "_")
if value[0] in string.digits:
value = "_" + value
return value
def fix_indent(lines: str, indent: int) -> str:
return textwrap.indent(textwrap.dedent(lines), " " * indent)
def compare_field_spec(field_name: str, field_spec: api.FieldSpec) -> tuple[bool, str]:
has_default_value = (
not field_spec.required
or field_spec.field_type == "literal"
or field_spec.field_type == "dict"
or field_spec.field_type == "list"
)
return has_default_value, field_name
def _description_as_docstring(
description: str | None, quotes_on_own_line: bool, indent: int
) -> str:
if description is None or len(description) == 0:
return ""
quote_lb = "\n" if quotes_on_own_line else ""
output = '"""' + quote_lb
if description.endswith('"'):
output += description[:-1] + '\\"'
else:
output += description
output += quote_lb + '"""\n'
return fix_indent(output, indent) | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/codegen.py | codegen.py |
from __future__ import annotations
import decimal
from typing import Union
import yaml
import zinc.openapi_schema_pydantic as opi
from zinc.api import codegen_api
def load_spec(path: str) -> opi.Components:
with open(path, "r") as f:
load = yaml.full_load(f)
return opi.Components(**load["components"])
def to_class_specs(components: dict[str, opi.Components]) -> list[codegen_api.ClassSpec]:
specs: list[codegen_api.ClassSpec] = []
for path, component in components.items():
if component.schemas:
local_components = {"": component, **components}
for name, schema in component.schemas.items():
specs.append(
to_class_spec(
codegen_api.ObjectTypeName(name),
schema,
local_components,
)
)
return specs
def to_class_spec(
name: codegen_api.ObjectTypeName, schema: opi.Schema, components: dict[str, opi.Components]
) -> codegen_api.ClassSpec:
required_fields = schema.required if schema.required is not None else []
if schema.allOf is not None:
fields = {}
for inner_schemaish in schema.allOf:
inner_schema = resolve_schema(inner_schemaish, components)
inner_required_fields = required_fields + (
inner_schema.required if inner_schema.required is not None else []
)
properties = inner_schema.properties if inner_schema.properties is not None else {}
fields.update(
{
key: to_field_spec(value, required=key in inner_required_fields)
for key, value in properties.items()
}
)
return codegen_api.ObjectSpec(name=name, fields=fields)
if schema.oneOf is not None:
if schema.discriminator is None:
raise Exception("Invalid oneOf definition. Expecting 'discriminator' to exist")
union = {}
discriminator_field = schema.discriminator.propertyName
for inner_schemaish in schema.oneOf:
field = resolve_field(inner_schemaish, discriminator_field, components)
if field is None or field.enum is None or len(field.enum) != 1:
raise Exception(f"Union variant {inner_schemaish} is missing valid discriminator")
union[field.enum[0]] = to_object_name(inner_schemaish)
return codegen_api.UnionSpec(
name=name,
discriminator_field=discriminator_field,
union=union,
)
if schema.type is not None:
if schema.type == "object" and schema.additionalProperties is None:
properties = schema.properties if schema.properties is not None else {}
return codegen_api.ObjectSpec(
name=name,
fields={
key: to_field_spec(value, required=key in required_fields)
for key, value in properties.items()
},
description=schema.description,
)
elif schema.type == "string" and schema.enum is not None:
return codegen_api.EnumSpec(
name=name,
values=schema.enum,
description=schema.description,
)
else:
return codegen_api.AliasSpec(
name=name,
field=to_field_spec(schema, required=True),
description=schema.description,
)
raise Exception(f"Encountered unexpected schema. type: {schema.type}")
def to_field_spec(
property: Union[opi.Reference, opi.Schema], required: bool
) -> codegen_api.FieldSpec:
if isinstance(property, opi.Reference):
return codegen_api.ObjectFieldSpec(
object_type=to_object_name(property),
required=required,
description=property.description,
)
if property.type == "string":
if property.enum:
if len(property.enum) != 1:
raise Exception(
"Unexpected definition. Inline enum fields can only have a single value"
)
return codegen_api.StringLiteralFieldSpec(
value=property.enum[0],
required=required,
description=property.description,
)
if property.schema_format == "date":
return codegen_api.DateFieldSpec(required=required, description=property.description)
elif property.schema_format == "datetime":
return codegen_api.DatetimeFieldSpec(
required=required, description=property.description
)
elif property.schema_format == "uuid":
return codegen_api.UUIDFieldSpec(required=required, description=property.description)
return codegen_api.StringFieldSpec(
required=required,
description=property.description,
)
elif property.type == "array":
if property.items is None:
raise Exception("Invalid array definition. Expecting 'items' to exist")
return codegen_api.ListFieldSpec(
element_type=to_field_spec(property.items, True),
min_count=property.minItems,
max_count=property.maxItems,
allow_single_value_as_array=property.allow_single_value_as_array,
required=True,
description=property.description,
)
elif property.type == "number" or property.type == "integer":
        minimum: float | None = None
        exclusive_min = False
        if property.minimum is not None:
            minimum = property.minimum
        if property.exclusiveMinimum is not None:
            # assumes OpenAPI 3.1 semantics, where exclusiveMinimum carries the bound itself
            minimum = property.exclusiveMinimum
            exclusive_min = True
        maximum: float | None = None
        exclusive_max = False
        if property.maximum is not None:
            maximum = property.maximum
        if property.exclusiveMaximum is not None:
            # assumes OpenAPI 3.1 semantics, where exclusiveMaximum carries the bound itself
            maximum = property.exclusiveMaximum
            exclusive_max = True
if property.type == "number":
if (
property.schema_format == "float"
or property.schema_format == "double"
or property.schema_format is None
):
return codegen_api.NumberFieldSpec(
minimum=decimal.Decimal(minimum) if minimum is not None else minimum,
exclusive_min=exclusive_min,
maximum=decimal.Decimal(maximum) if maximum is not None else maximum,
exclusive_max=exclusive_max,
required=required,
description=property.description,
)
else:
raise Exception(
f"Property with type number has invalid format='{property.schema_format}'"
" (valid formats are 'float', 'double' and absent)"
)
elif property.type == "integer":
if property.schema_format == "int32" or property.schema_format is None:
return codegen_api.IntFieldSpec(
minimum=int(minimum) if minimum is not None else minimum,
exclusive_min=exclusive_min,
maximum=int(maximum) if maximum is not None else maximum,
exclusive_max=exclusive_max,
allow_number=property.allow_number_as_int,
required=required,
description=property.description,
)
else:
raise Exception(
f"Property with type integer has invalid format='{property.schema_format}'"
" (valid formats are 'int32' and absent)"
)
elif property.type == "boolean":
return codegen_api.BooleanFieldSpec(
required=required,
description=property.description,
)
elif property.type == "object":
if property.additionalProperties is not None:
if isinstance(property.additionalProperties, bool):
raise Exception("Invalid map definition")
return codegen_api.DictFieldSpec(
value_type=to_field_spec(property.additionalProperties, True),
min_count=property.minProperties,
max_count=property.maxProperties,
required=True,
description=property.description,
)
raise Exception(f"Encountered unexpected property: {property}")
def to_object_name(schemaish: Union[opi.Reference, opi.Schema]) -> codegen_api.ObjectTypeName:
if isinstance(schemaish, opi.Reference):
return codegen_api.ObjectTypeName(schemaish.ref[schemaish.ref.rindex("/") + 1 :])
    raise Exception(f"Cannot derive an object name from inline schema: {schemaish}")
def resolve_field(
schemaish: Union[opi.Reference, opi.Schema], field: str, components: dict[str, opi.Components]
) -> opi.Schema | None:
if isinstance(schemaish, opi.Reference):
schema = resolve_schema(schemaish, components)
else:
schema = schemaish
    if schema.type == "object" and schema.properties is not None:
if inner_schema := schema.properties.get(field):
return resolve_schema(inner_schema, components)
if schema.allOf:
for inner_schema in schema.allOf:
if current_field := resolve_field(inner_schema, field, components):
return current_field
return None
def resolve_schema(
schemaish: Union[opi.Reference, opi.Schema], components: dict[str, opi.Components]
) -> opi.Schema:
if isinstance(schemaish, opi.Reference):
file, json_path = schemaish.ref.split("#")
reference_type = json_path[json_path.rindex("/") + 1 :]
component = components.get(file)
if component is None or component.schemas is None:
raise Exception(f"Missing definition for reference {schemaish}")
return component.schemas[reference_type]
return schemaish | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_conversion.py | openapi_conversion.py |
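
# A hedged usage sketch: `components` is keyed by the file part of each
# "$ref" (e.g. {"models.yaml": opi.Components(...)}), so for a Reference to
# "models.yaml#/components/schemas/Pet":
#
#     pet = resolve_schema(reference, components)                 # an opi.Schema
#     pet_type = resolve_field(reference, "petType", components)  # Schema | None
#
# resolve_field also walks allOf chains, so properties inherited through
# composition are found.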
from __future__ import annotations
import dataclasses
import decimal
from typing import Literal, NewType, Union
PrimitiveTypeName = Literal[
    "boolean", "int", "number", "date", "datetime", "uuid", "enum", "string", "literal", "list", "dict"
]
ObjectTypeName = NewType("ObjectTypeName", str)
FieldType = Union[PrimitiveTypeName, Literal["object"]]
TypeName = Union[ObjectTypeName, PrimitiveTypeName]
@dataclasses.dataclass(frozen=True)
class BooleanFieldSpec:
required: bool = True
description: str | None = None
field_type: Literal["boolean"] = "boolean"
@dataclasses.dataclass(frozen=True)
class IntFieldSpec:
required: bool = True
minimum: int | None = None
maximum: int | None = None
exclusive_min: bool = False
exclusive_max: bool = False
allow_number: bool = False
description: str | None = None
field_type: Literal["int"] = "int"
@dataclasses.dataclass(frozen=True)
class NumberFieldSpec:
required: bool = True
minimum: decimal.Decimal | None = None
maximum: decimal.Decimal | None = None
exclusive_min: bool = False
exclusive_max: bool = False
description: str | None = None
field_type: Literal["number"] = "number"
@dataclasses.dataclass(frozen=True)
class DateFieldSpec:
required: bool = True
description: str | None = None
field_type: Literal["date"] = "date"
@dataclasses.dataclass(frozen=True)
class DatetimeFieldSpec:
required: bool = True
description: str | None = None
field_type: Literal["datetime"] = "datetime"
@dataclasses.dataclass(frozen=True)
class StringFieldSpec:
required: bool = True
description: str | None = None
field_type: Literal["string"] = "string"
@dataclasses.dataclass(frozen=True)
class UUIDFieldSpec:
required: bool = True
description: str | None = None
field_type: Literal["uuid"] = "uuid"
@dataclasses.dataclass(frozen=True)
class StringLiteralFieldSpec:
value: str
required: bool = True
description: str | None = None
field_type: Literal["literal"] = "literal"
@dataclasses.dataclass(frozen=True)
class ListFieldSpec:
element_type: FieldSpec
min_count: int | None = None
max_count: int | None = None
allow_single_value_as_array: bool = False
required: bool = True
description: str | None = None
field_type: Literal["list"] = "list"
@dataclasses.dataclass(frozen=True)
class DictFieldSpec:
# keys have to be strings
value_type: FieldSpec
min_count: int | None = None
max_count: int | None = None
required: bool = True
description: str | None = None
field_type: Literal["dict"] = "dict"
@dataclasses.dataclass(frozen=True)
class ObjectFieldSpec:
object_type: ObjectTypeName
required: bool = True
description: str | None = None
field_type: Literal["object"] = "object"
FieldSpec = Union[
BooleanFieldSpec,
DateFieldSpec,
DatetimeFieldSpec,
DictFieldSpec,
IntFieldSpec,
ListFieldSpec,
NumberFieldSpec,
ObjectFieldSpec,
StringFieldSpec,
StringLiteralFieldSpec,
UUIDFieldSpec,
]
@dataclasses.dataclass(frozen=True)
class ObjectSpec:
name: ObjectTypeName
fields: dict[str, FieldSpec]
description: str | None = None
type: Literal["object"] = "object"
@dataclasses.dataclass(frozen=True)
class EnumSpec:
name: ObjectTypeName
values: list[str]
description: str | None = None
type: Literal["enum"] = "enum"
@dataclasses.dataclass(frozen=True)
class UnionSpec:
name: ObjectTypeName
discriminator_field: str
union: dict[str, ObjectTypeName]
description: str | None = None
type: Literal["union"] = "union"
@dataclasses.dataclass(frozen=True)
class AliasSpec:
name: ObjectTypeName
field: FieldSpec
description: str | None = None
type: Literal["alias"] = "alias"
ClassSpec = Union[ObjectSpec, EnumSpec, UnionSpec, AliasSpec] | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/api/codegen_api.py | codegen_api.py |
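
# A minimal sketch of how these specs compose (names are illustrative):
#
#     pet = ObjectSpec(
#         name=ObjectTypeName("Pet"),
#         fields={
#             "name": StringFieldSpec(),
#             "tags": ListFieldSpec(element_type=StringFieldSpec(), required=False),
#         },
#     )
#     maybe_pet = AliasSpec(
#         name=ObjectTypeName("MaybePet"),
#         field=ObjectFieldSpec(object_type=pet.name, required=False),
#     )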
from typing import Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from .link import Link
from .media_type import MediaType
from .reference import Reference
class Response(BaseModel):
"""
Describes a single response from an API Operation, including design-time,
static `links` to operations based on the response.
"""
description: str
"""
**REQUIRED**. A short description of the response.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
content: Optional[dict[str, MediaType]] = None
"""
A map containing descriptions of potential response payloads.
The key is a media type or [media type range](https://tools.ietf.org/html/rfc7231#appendix-D)
and the value describes it.
For responses that match multiple keys, only the most specific key is applicable. e.g. text/plain overrides text/*
"""
links: Optional[dict[str, Union[Link, Reference]]] = None
"""
A map of operations links that can be followed from the response.
The key of the map is a short name for the link,
following the naming constraints of the names for [Component Objects](#componentsObject).
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"description": "A complex object array response",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {"$ref": "#/components/schemas/VeryComplexType"},
}
}
},
},
{
"description": "A simple string response",
"content": {"text/plain": {"schema": {"type": "string"}}},
},
{
"description": "A simple string response",
"content": {"text/plain": {"schema": {"type": "string", "example": "whoa!"}}},
"headers": {
"X-Rate-Limit-Limit": {
"description": "The number of allowed requests in the current period",
"schema": {"type": "integer"},
},
"X-Rate-Limit-Remaining": {
"description": "The number of remaining requests in the current period",
"schema": {"type": "integer"},
},
"X-Rate-Limit-Reset": {
"description": "The number of seconds left in the current period",
"schema": {"type": "integer"},
},
},
},
{"description": "object created"},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/response.py | response.py |
from typing import Optional
from pydantic import BaseModel
from pydantic import Extra
from .media_type import MediaType
class RequestBody(BaseModel):
"""Describes a single request body."""
description: Optional[str] = None
"""
A brief description of the request body.
This could contain examples of use.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
content: dict[str, MediaType]
"""
**REQUIRED**. The content of the request body.
The key is a media type or [media type range](https://tools.ietf.org/html/rfc7231#appendix-D)
and the value describes it.
For requests that match multiple keys, only the most specific key is applicable. e.g. text/plain overrides text/*
"""
required: bool = False
"""
Determines if the request body is required in the request. Defaults to `false`.
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"description": "user to add to the system",
"content": {
"application/json": {
"schema": {"$ref": "#/components/schemas/User"},
"examples": {
"user": {
"summary": "User Example",
"externalValue": "http://foo.bar/examples/user-example.json",
}
},
},
"application/xml": {
"schema": {"$ref": "#/components/schemas/User"},
"examples": {
"user": {
"summary": "User example in XML",
"externalValue": "http://foo.bar/examples/user-example.xml",
}
},
},
"text/plain": {
"examples": {
"user": {
"summary": "User example in Plain text",
"externalValue": "http://foo.bar/examples/user-example.txt",
}
}
},
"*/*": {
"examples": {
"user": {
"summary": "User example in other format",
"externalValue": "http://foo.bar/examples/user-example.whatever",
}
}
},
},
},
{
"description": "user to add to the system",
"content": {
"text/plain": {"schema": {"type": "array", "items": {"type": "string"}}}
},
},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/request_body.py | request_body.py |
from typing import Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from .components import Components
from .external_documentation import ExternalDocumentation
from .info import Info
from .path_item import PathItem
from .paths import Paths
from .reference import Reference
from .security_requirement import SecurityRequirement
from .server import Server
from .tag import Tag
class OpenAPI(BaseModel):
"""This is the root document object of the OpenAPI document."""
openapi: str = "3.1.0"
"""
**REQUIRED**. This string MUST be the [version number](#versions)
of the OpenAPI Specification that the OpenAPI document uses.
The `openapi` field SHOULD be used by tooling to interpret the OpenAPI document.
This is *not* related to the API [`info.version`](#infoVersion) string.
"""
info: Info
"""
**REQUIRED**. Provides metadata about the API. The metadata MAY be used by tooling as required.
"""
jsonSchemaDialect: Optional[str] = None
"""
The default value for the `$schema` keyword within [Schema Objects](#schemaObject)
contained within this OAS document. This MUST be in the form of a URI.
"""
servers: list[Server] = [Server(url="/")]
"""
An array of Server Objects, which provide connectivity information to a target server.
If the `servers` property is not provided, or is an empty array,
the default value would be a [Server Object](#serverObject) with a [url](#serverUrl) value of `/`.
"""
paths: Optional[Paths] = None
"""
The available paths and operations for the API.
"""
webhooks: Optional[dict[str, Union[PathItem, Reference]]] = None
"""
The incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement.
Closely related to the `callbacks` feature, this section describes requests initiated other than by an API call,
for example by an out of band registration.
The key name is a unique string to refer to each webhook,
while the (optionally referenced) Path Item Object describes a request
that may be initiated by the API provider and the expected responses.
An [example](../examples/v3.1/webhook-example.yaml) is available.
"""
components: Optional[Components] = None
"""
An element to hold various schemas for the document.
"""
security: Optional[list[SecurityRequirement]] = None
"""
A declaration of which security mechanisms can be used across the API.
The list of values includes alternative security requirement objects that can be used.
Only one of the security requirement objects need to be satisfied to authorize a request.
Individual operations can override this definition.
To make security optional, an empty security requirement (`{}`) can be included in the array.
"""
tags: Optional[list[Tag]] = None
"""
A list of tags used by the document with additional metadata.
The order of the tags can be used to reflect on their order by the parsing tools.
Not all tags that are used by the [Operation Object](#operationObject) must be declared.
The tags that are not declared MAY be organized randomly or based on the tools' logic.
Each tag name in the list MUST be unique.
"""
externalDocs: Optional[ExternalDocumentation] = None
"""
Additional external documentation.
"""
class Config:
extra = Extra.ignore | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/open_api.py | open_api.py |
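
# A minimal sketch of building a document programmatically (values are
# illustrative; pydantic v1 API):
#
#     doc = OpenAPI(info=Info(title="Pet Store", version="1.0.0"))
#     print(doc.json(by_alias=True, exclude_none=True))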
from typing import Optional
from pydantic import AnyUrl
from pydantic import BaseModel
from pydantic import Extra
from .contact import Contact
from .license import License
class Info(BaseModel):
"""
The object provides metadata about the API.
The metadata MAY be used by the clients if needed,
and MAY be presented in editing or documentation generation tools for convenience.
"""
title: str
"""
**REQUIRED**. The title of the API.
"""
summary: Optional[str] = None
"""
A short summary of the API.
"""
description: Optional[str] = None
"""
A description of the API.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
termsOfService: Optional[AnyUrl] = None
"""
A URL to the Terms of Service for the API.
MUST be in the form of a URL.
"""
contact: Optional[Contact] = None
"""
The contact information for the exposed API.
"""
license: Optional[License] = None
"""
The license information for the exposed API.
"""
version: str
"""
**REQUIRED**. The version of the OpenAPI document
(which is distinct from the [OpenAPI Specification version](#oasVersion) or the API implementation version).
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"title": "Sample Pet Store App",
"summary": "A pet store manager.",
"description": "This is a sample server for a pet store.",
"termsOfService": "http://example.com/terms/",
"contact": {
"name": "API Support",
"url": "http://www.example.com/support",
"email": "[email protected]",
},
"license": {
"name": "Apache 2.0",
"url": "https://www.apache.org/licenses/LICENSE-2.0.html",
},
"version": "1.0.1",
}
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/info.py | info.py |
from typing import Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from .external_documentation import ExternalDocumentation
from .parameter import Parameter
from .reference import Reference
from .request_body import RequestBody
from .responses import Responses
from .security_requirement import SecurityRequirement
from .server import Server
class Operation(BaseModel):
"""Describes a single API operation on a path."""
tags: Optional[list[str]] = None
"""
A list of tags for API documentation control.
Tags can be used for logical grouping of operations by resources or any other qualifier.
"""
summary: Optional[str] = None
"""
A short summary of what the operation does.
"""
description: Optional[str] = None
"""
A verbose explanation of the operation behavior.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
externalDocs: Optional[ExternalDocumentation] = None
"""
Additional external documentation for this operation.
"""
operationId: Optional[str] = None
"""
Unique string used to identify the operation.
The id MUST be unique among all operations described in the API.
The operationId value is **case-sensitive**.
Tools and libraries MAY use the operationId to uniquely identify an operation,
therefore, it is RECOMMENDED to follow common programming naming conventions.
"""
parameters: Optional[list[Union[Parameter, Reference]]] = None
"""
A list of parameters that are applicable for this operation.
If a parameter is already defined at the [Path Item](#pathItemParameters),
the new definition will override it but can never remove it.
The list MUST NOT include duplicated parameters.
A unique parameter is defined by a combination of a [name](#parameterName) and [location](#parameterIn).
The list can use the [Reference Object](#referenceObject) to link to parameters
that are defined at the [OpenAPI Object's components/parameters](#componentsParameters).
"""
requestBody: Optional[Union[RequestBody, Reference]] = None
"""
The request body applicable for this operation.
The `requestBody` is fully supported in HTTP methods where the HTTP 1.1 specification
[RFC7231](https://tools.ietf.org/html/rfc7231#section-4.3.1) has explicitly defined semantics for request bodies.
In other cases where the HTTP spec is vague (such as [GET](https://tools.ietf.org/html/rfc7231#section-4.3.1),
[HEAD](https://tools.ietf.org/html/rfc7231#section-4.3.2)
and [DELETE](https://tools.ietf.org/html/rfc7231#section-4.3.5)),
`requestBody` is permitted but does not have well-defined semantics and SHOULD be avoided if possible.
"""
responses: Optional[Responses] = None
"""
The list of possible responses as they are returned from executing this operation.
"""
deprecated: bool = False
"""
Declares this operation to be deprecated.
Consumers SHOULD refrain from usage of the declared operation.
Default value is `false`.
"""
security: Optional[list[SecurityRequirement]] = None
"""
A declaration of which security mechanisms can be used for this operation.
The list of values includes alternative security requirement objects that can be used.
Only one of the security requirement objects need to be satisfied to authorize a request.
To make security optional, an empty security requirement (`{}`) can be included in the array.
This definition overrides any declared top-level [`security`](#oasSecurity).
To remove a top-level security declaration, an empty array can be used.
"""
servers: Optional[list[Server]] = None
"""
An alternative `server` array to service this operation.
If an alternative `server` object is specified at the Path Item Object or Root level,
it will be overridden by this value.
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"tags": ["pet"],
"summary": "Updates a pet in the store with form data",
"operationId": "updatePetWithForm",
"parameters": [
{
"name": "petId",
"in": "path",
"description": "ID of pet that needs to be updated",
"required": True,
"schema": {"type": "string"},
}
],
"requestBody": {
"content": {
"application/x-www-form-urlencoded": {
"schema": {
"type": "object",
"properties": {
"name": {
"description": "Updated name of the pet",
"type": "string",
},
"status": {
"description": "Updated status of the pet",
"type": "string",
},
},
"required": ["status"],
}
}
}
},
"responses": {
"200": {
"description": "Pet updated.",
"content": {"application/json": {}, "application/xml": {}},
},
"405": {
"description": "Method Not Allowed",
"content": {"application/json": {}, "application/xml": {}},
},
},
"security": [{"petstore_auth": ["write:pets", "read:pets"]}],
}
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/operation.py | operation.py |
from typing import Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from .example import Example
from .link import Link
from .parameter import Parameter
from .path_item import PathItem
from .reference import Reference
from .request_body import RequestBody
from .response import Response
from .schema import Schema
from .security_scheme import SecurityScheme
class Components(BaseModel):
"""
Holds a set of reusable objects for different aspects of the OAS.
All objects defined within the components object will have no effect on the API
unless they are explicitly referenced from properties outside the components object.
"""
schemas: Optional[dict[str, Schema]] = None
"""An object to hold reusable [Schema Objects](#schemaObject)."""
responses: Optional[dict[str, Union[Response, Reference]]] = None
"""An object to hold reusable [Response Objects](#responseObject)."""
parameters: Optional[dict[str, Union[Parameter, Reference]]] = None
"""An object to hold reusable [Parameter Objects](#parameterObject)."""
examples: Optional[dict[str, Union[Example, Reference]]] = None
"""An object to hold reusable [Example Objects](#exampleObject)."""
requestBodies: Optional[dict[str, Union[RequestBody, Reference]]] = None
"""An object to hold reusable [Request Body Objects](#requestBodyObject)."""
securitySchemes: Optional[dict[str, Union[SecurityScheme, Reference]]] = None
"""An object to hold reusable [Security Scheme Objects](#securitySchemeObject)."""
links: Optional[dict[str, Union[Link, Reference]]] = None
"""An object to hold reusable [Link Objects](#linkObject)."""
pathItems: Optional[dict[str, Union[PathItem, Reference]]] = None
"""An object to hold reusable [Path Item Object](#pathItemObject)."""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"schemas": {
"GeneralError": {
"type": "object",
"properties": {
"code": {"type": "integer", "format": "int32"},
"message": {"type": "string"},
},
},
"Category": {
"type": "object",
"properties": {
"id": {"type": "integer", "format": "int64"},
"name": {"type": "string"},
},
},
"Tag": {
"type": "object",
"properties": {
"id": {"type": "integer", "format": "int64"},
"name": {"type": "string"},
},
},
},
"parameters": {
"skipParam": {
"name": "skip",
"in": "query",
"description": "number of items to skip",
"required": True,
"schema": {"type": "integer", "format": "int32"},
},
"limitParam": {
"name": "limit",
"in": "query",
"description": "max records to return",
"required": True,
"schema": {"type": "integer", "format": "int32"},
},
},
"responses": {
"NotFound": {"description": "Entity not found."},
"IllegalInput": {"description": "Illegal input for operation."},
"GeneralError": {
"description": "General Error",
"content": {
"application/json": {
"schema": {"$ref": "#/components/schemas/GeneralError"}
}
},
},
},
"securitySchemes": {
"api_key": {"type": "apiKey", "name": "api_key", "in": "header"},
"petstore_auth": {
"type": "oauth2",
"flows": {
"implicit": {
"authorizationUrl": "http://example.org/api/oauth/dialog",
"scopes": {
"write:pets": "modify pets in your account",
"read:pets": "read your pets",
},
}
},
},
},
}
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/components.py | components.py |
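
# A hedged sketch (pydantic v1): parsing a components section from a loaded
# spec dict, e.g. the value of the top-level "components" key:
#
#     comps = Components.parse_obj({"schemas": {"Pet": {"type": "object"}}})
#     assert comps.schemas is not None and "Pet" in comps.schemas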
from typing import Any, Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from pydantic import Field
from .discriminator import Discriminator
from .external_documentation import ExternalDocumentation
from .reference import Reference
from .xml import XML
class Schema(BaseModel):
"""
The Schema Object allows the definition of input and output data types.
These types can be objects, but also primitives and arrays.
This object is a superset of
the [JSON Schema Specification Draft 2020-12](https://tools.ietf.org/html/draft-bhutton-json-schema-00).
For more information about the properties,
see [JSON Schema Core](https://tools.ietf.org/html/draft-wright-json-schema-00)
and [JSON Schema Validation](https://tools.ietf.org/html/draft-wright-json-schema-validation-00).
Unless stated otherwise, the property definitions follow those of JSON Schema
and do not add any additional semantics.
Where JSON Schema indicates that behavior is defined by the application (e.g. for annotations),
OAS also defers the definition of semantics to the application consuming the OpenAPI document.
"""
"""
The following properties are taken directly from the
[JSON Schema Core](https://tools.ietf.org/html/draft-wright-json-schema-00)
and follow the same specifications:
"""
allOf: Optional[list[Union[Reference, "Schema"]]] = None
"""
This keyword's value MUST be a non-empty array. Each item of the
array MUST be a valid JSON Schema.
An instance validates successfully against this keyword if it
validates successfully against all schemas defined by this keyword's
value.
"""
anyOf: Optional[list[Union[Reference, "Schema"]]] = None
"""
This keyword's value MUST be a non-empty array. Each item of the
array MUST be a valid JSON Schema.
An instance validates successfully against this keyword if it
validates successfully against at least one schema defined by this
keyword's value. Note that when annotations are being collected, all
subschemas MUST be examined so that annotations are collected from
each subschema that validates successfully.
"""
oneOf: Optional[list[Union[Reference, "Schema"]]] = None
"""
This keyword's value MUST be a non-empty array. Each item of the
array MUST be a valid JSON Schema.
An instance validates successfully against this keyword if it
validates successfully against exactly one schema defined by this
keyword's value.
"""
schema_not: Optional[Union[Reference, "Schema"]] = Field(default=None, alias="not")
"""
This keyword's value MUST be a valid JSON Schema.
An instance is valid against this keyword if it fails to validate
successfully against the schema defined by this keyword.
"""
schema_if: Optional[Union[Reference, "Schema"]] = Field(default=None, alias="if")
"""
This keyword's value MUST be a valid JSON Schema.
This validation outcome of this keyword's subschema has no direct
effect on the overall validation result. Rather, it controls which
of the "then" or "else" keywords are evaluated.
Instances that successfully validate against this keyword's subschema
MUST also be valid against the subschema value of the "then" keyword,
if present.
Instances that fail to validate against this keyword's subschema MUST
also be valid against the subschema value of the "else" keyword, if
present.
If annotations (Section 7.7) are being collected, they are collected
from this keyword's subschema in the usual way, including when the
keyword is present without either "then" or "else".
"""
then: Optional[Union[Reference, "Schema"]] = None
"""
This keyword's value MUST be a valid JSON Schema.
When "if" is present, and the instance successfully validates against
its subschema, then validation succeeds against this keyword if the
instance also successfully validates against this keyword's
subschema.
This keyword has no effect when "if" is absent, or when the instance
fails to validate against its subschema. Implementations MUST NOT
evaluate the instance against this keyword, for either validation or
annotation collection purposes, in such cases.
"""
schema_else: Optional[Union[Reference, "Schema"]] = Field(default=None, alias="else")
"""
This keyword's value MUST be a valid JSON Schema.
When "if" is present, and the instance fails to validate against its
subschema, then validation succeeds against this keyword if the
instance successfully validates against this keyword's subschema.
This keyword has no effect when "if" is absent, or when the instance
successfully validates against its subschema. Implementations MUST
NOT evaluate the instance against this keyword, for either validation
or annotation collection purposes, in such cases.
"""
dependentSchemas: Optional[dict[str, Union[Reference, "Schema"]]] = None
"""
This keyword specifies subschemas that are evaluated if the instance
is an object and contains a certain property.
This keyword's value MUST be an object. Each value in the object
MUST be a valid JSON Schema.
If the object key is a property in the instance, the entire instance
must validate against the subschema. Its use is dependent on the
presence of the property.
Omitting this keyword has the same behavior as an empty object.
"""
prefixItems: Optional[list[Union[Reference, "Schema"]]] = None
"""
The value of "prefixItems" MUST be a non-empty array of valid JSON
Schemas.
Validation succeeds if each element of the instance validates against
the schema at the same position, if any. This keyword does not
constrain the length of the array. If the array is longer than this
keyword's value, this keyword validates only the prefix of matching
length.
This keyword produces an annotation value which is the largest index
to which this keyword applied a subschema. The value MAY be a
boolean true if a subschema was applied to every index of the
instance, such as is produced by the "items" keyword. This
annotation affects the behavior of "items" and "unevaluatedItems".
Omitting this keyword has the same assertion behavior as an empty
array.
"""
items: Optional[Union[Reference, "Schema"]] = None
"""
The value of "items" MUST be a valid JSON Schema.
This keyword applies its subschema to all instance elements at
indexes greater than the length of the "prefixItems" array in the
same schema object, as reported by the annotation result of that
"prefixItems" keyword. If no such annotation result exists, "items"
applies its subschema to all instance array elements. [[CREF11: Note
that the behavior of "items" without "prefixItems" is identical to
that of the schema form of "items" in prior drafts. When
"prefixItems" is present, the behavior of "items" is identical to the
former "additionalItems" keyword. ]]
If the "items" subschema is applied to any positions within the
instance array, it produces an annotation result of boolean true,
indicating that all remaining array elements have been evaluated
against this keyword's subschema.
Omitting this keyword has the same assertion behavior as an empty
schema.
Implementations MAY choose to implement or optimize this keyword in
another way that produces the same effect, such as by directly
checking for the presence and size of a "prefixItems" array.
Implementations that do not support annotation collection MUST do so.
"""
contains: Optional[Union[Reference, "Schema"]] = None
"""
The value of this keyword MUST be a valid JSON Schema.
An array instance is valid against "contains" if at least one of its
elements is valid against the given schema. The subschema MUST be
applied to every array element even after the first match has been
found, in order to collect annotations for use by other keywords.
This is to ensure that all possible annotations are collected.
Logically, the validation result of applying the value subschema to
each item in the array MUST be ORed with "false", resulting in an
overall validation result.
This keyword produces an annotation value which is an array of the
indexes to which this keyword validates successfully when applying
its subschema, in ascending order. The value MAY be a boolean "true"
if the subschema validates successfully when applied to every index
of the instance. The annotation MUST be present if the instance
array to which this keyword's schema applies is empty.
"""
properties: Optional[dict[str, Union[Reference, "Schema"]]] = None
"""
The value of "properties" MUST be an object. Each value of this
object MUST be a valid JSON Schema.
Validation succeeds if, for each name that appears in both the
instance and as a name within this keyword's value, the child
instance for that name successfully validates against the
corresponding schema.
The annotation result of this keyword is the set of instance property
names matched by this keyword.
Omitting this keyword has the same assertion behavior as an empty
object.
"""
patternProperties: Optional[dict[str, Union[Reference, "Schema"]]] = None
"""
The value of "patternProperties" MUST be an object. Each property
name of this object SHOULD be a valid regular expression, according
to the ECMA-262 regular expression dialect. Each property value of
this object MUST be a valid JSON Schema.
Validation succeeds if, for each instance name that matches any
regular expressions that appear as a property name in this keyword's
value, the child instance for that name successfully validates
against each schema that corresponds to a matching regular
expression.
The annotation result of this keyword is the set of instance property
names matched by this keyword.
Omitting this keyword has the same assertion behavior as an empty
object.
"""
additionalProperties: Optional[Union[Reference, "Schema", bool]] = None
"""
The value of "additionalProperties" MUST be a valid JSON Schema.
The behavior of this keyword depends on the presence and annotation
results of "properties" and "patternProperties" within the same
schema object. Validation with "additionalProperties" applies only
to the child values of instance names that do not appear in the
annotation results of either "properties" or "patternProperties".
For all such properties, validation succeeds if the child instance
validates against the "additionalProperties" schema.
The annotation result of this keyword is the set of instance property
names validated by this keyword's subschema.
Omitting this keyword has the same assertion behavior as an empty
schema.
Implementations MAY choose to implement or optimize this keyword in
another way that produces the same effect, such as by directly
checking the names in "properties" and the patterns in
"patternProperties" against the instance property set.
Implementations that do not support annotation collection MUST do so.
"""
propertyNames: Optional[Union[Reference, "Schema"]] = None
"""
The value of "propertyNames" MUST be a valid JSON Schema.
If the instance is an object, this keyword validates if every
property name in the instance validates against the provided schema.
Note the property name that the schema is testing will always be a
string.
Omitting this keyword has the same behavior as an empty schema.
"""
unevaluatedItems: Optional[Union[Reference, "Schema"]] = None
"""
The value of "unevaluatedItems" MUST be a valid JSON Schema.
The behavior of this keyword depends on the annotation results of
adjacent keywords that apply to the instance location being
validated. Specifically, the annotations from "prefixItems",
"items", and "contains", which can come from those keywords when they
are adjacent to the "unevaluatedItems" keyword. Those three
annotations, as well as "unevaluatedItems", can also result from any
and all adjacent in-place applicator (Section 10.2) keywords. This
includes but is not limited to the in-place applicators defined in
this document.
If no relevant annotations are present, the "unevaluatedItems"
subschema MUST be applied to all locations in the array. If a
boolean true value is present from any of the relevant annotations,
"unevaluatedItems" MUST be ignored. Otherwise, the subschema MUST be
applied to any index greater than the largest annotation value for
"prefixItems", which does not appear in any annotation value for
"contains".
This means that "prefixItems", "items", "contains", and all in-place
applicators MUST be evaluated before this keyword can be evaluated.
Authors of extension keywords MUST NOT define an in-place applicator
that would need to be evaluated after this keyword.
If the "unevaluatedItems" subschema is applied to any positions
within the instance array, it produces an annotation result of
boolean true, analogous to the behavior of "items".
Omitting this keyword has the same assertion behavior as an empty
schema.
"""
unevaluatedProperties: Optional[Union[Reference, "Schema"]] = None
"""
The value of "unevaluatedProperties" MUST be a valid JSON Schema.
The behavior of this keyword depends on the annotation results of
adjacent keywords that apply to the instance location being
validated. Specifically, the annotations from "properties",
"patternProperties", and "additionalProperties", which can come from
those keywords when they are adjacent to the "unevaluatedProperties"
keyword. Those three annotations, as well as
"unevaluatedProperties", can also result from any and all adjacent
in-place applicator (Section 10.2) keywords. This includes but is
not limited to the in-place applicators defined in this document.
Validation with "unevaluatedProperties" applies only to the child
values of instance names that do not appear in the "properties",
"patternProperties", "additionalProperties", or
"unevaluatedProperties" annotation results that apply to the instance
location being validated.
For all such properties, validation succeeds if the child instance
validates against the "unevaluatedProperties" schema.
This means that "properties", "patternProperties",
"additionalProperties", and all in-place applicators MUST be
evaluated before this keyword can be evaluated. Authors of extension
keywords MUST NOT define an in-place applicator that would need to be
evaluated after this keyword.
The annotation result of this keyword is the set of instance property
names validated by this keyword's subschema.
Omitting this keyword has the same assertion behavior as an empty
schema.
"""
"""
The following properties are taken directly from the
[JSON Schema Validation](https://tools.ietf.org/html/draft-wright-json-schema-validation-00)
and follow the same specifications:
"""
type: Optional[Union[str, list[str]]] = None
"""
The value of this keyword MUST be either a string or an array. If it
is an array, elements of the array MUST be strings and MUST be
unique.
String values MUST be one of the six primitive types ("null",
"boolean", "object", "array", "number", or "string"), or "integer"
which matches any number with a zero fractional part.
An instance validates if and only if the instance is in any of the
sets listed for this keyword.
"""
enum: Optional[list[Any]] = Field(default=None, min_items=1)
"""
The value of this keyword MUST be an array. This array SHOULD have
at least one element. Elements in the array SHOULD be unique.
An instance validates successfully against this keyword if its value
is equal to one of the elements in this keyword's array value.
Elements in the array might be of any type, including null.
"""
const: Optional[Any] = None
"""
The value of this keyword MAY be of any type, including null.
Use of this keyword is functionally equivalent to an "enum"
(Section 6.1.2) with a single value.
An instance validates successfully against this keyword if its value
is equal to the value of the keyword.
"""
multipleOf: Optional[float] = Field(default=None, gt=0.0)
"""
The value of "multipleOf" MUST be a number, strictly greater than 0.
A numeric instance is only valid if division by this keyword's value
results in an integer.
"""
maximum: Optional[float] = None
"""
The value of "maximum" MUST be a number, representing an inclusive
upper limit for a numeric instance.
If the instance is a number, then this keyword validates only if the
instance is less than or exactly equal to "maximum".
"""
exclusiveMaximum: Optional[float] = None
"""
The value of "exclusiveMaximum" MUST be a number, representing an
exclusive upper limit for a numeric instance.
If the instance is a number, then the instance is valid only if it
has a value strictly less than (not equal to) "exclusiveMaximum".
"""
minimum: Optional[float] = None
"""
The value of "minimum" MUST be a number, representing an inclusive
lower limit for a numeric instance.
If the instance is a number, then this keyword validates only if the
instance is greater than or exactly equal to "minimum".
"""
exclusiveMinimum: Optional[float] = None
"""
The value of "exclusiveMinimum" MUST be a number, representing an
exclusive lower limit for a numeric instance.
If the instance is a number, then the instance is valid only if it
has a value strictly greater than (not equal to) "exclusiveMinimum".
"""
maxLength: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
A string instance is valid against this keyword if its length is less
than, or equal to, the value of this keyword.
The length of a string instance is defined as the number of its
characters as defined by RFC 8259 [RFC8259].
"""
minLength: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
A string instance is valid against this keyword if its length is
greater than, or equal to, the value of this keyword.
The length of a string instance is defined as the number of its
characters as defined by RFC 8259 [RFC8259].
Omitting this keyword has the same behavior as a value of 0.
"""
pattern: Optional[str] = None
"""
The value of this keyword MUST be a string. This string SHOULD be a
valid regular expression, according to the ECMA-262 regular
expression dialect.
A string instance is considered valid if the regular expression
matches the instance successfully. Recall: regular expressions are
not implicitly anchored.
"""
maxItems: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
An array instance is valid against "maxItems" if its size is less
than, or equal to, the value of this keyword.
"""
minItems: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
An array instance is valid against "minItems" if its size is greater
than, or equal to, the value of this keyword.
Omitting this keyword has the same behavior as a value of 0.
"""
uniqueItems: Optional[bool] = None
"""
The value of this keyword MUST be a boolean.
If this keyword has boolean value false, the instance validates
successfully. If it has boolean value true, the instance validates
successfully if all of its elements are unique.
Omitting this keyword has the same behavior as a value of false.
"""
maxContains: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
If "contains" is not present within the same schema object, then this
keyword has no effect.
An instance array is valid against "maxContains" in two ways,
depending on the form of the annotation result of an adjacent
"contains" [json-schema] keyword. The first way is if the annotation
result is an array and the length of that array is less than or equal
to the "maxContains" value. The second way is if the annotation
result is a boolean "true" and the instance array length is less than
or equal to the "maxContains" value.
"""
minContains: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
If "contains" is not present within the same schema object, then this
keyword has no effect.
An instance array is valid against "minContains" in two ways,
depending on the form of the annotation result of an adjacent
"contains" [json-schema] keyword. The first way is if the annotation
result is an array and the length of that array is greater than or
equal to the "minContains" value. The second way is if the
annotation result is a boolean "true" and the instance array length
is greater than or equal to the "minContains" value.
A value of 0 is allowed, but is only useful for setting a range of
occurrences from 0 to the value of "maxContains". A value of 0 with
no "maxContains" causes "contains" to always pass validation.
Omitting this keyword has the same behavior as a value of 1.
"""
maxProperties: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
An object instance is valid against "maxProperties" if its number of
properties is less than, or equal to, the value of this keyword.
"""
minProperties: Optional[int] = Field(default=None, ge=0)
"""
The value of this keyword MUST be a non-negative integer.
An object instance is valid against "minProperties" if its number of
properties is greater than, or equal to, the value of this keyword.
Omitting this keyword has the same behavior as a value of 0.
"""
required: Optional[list[str]] = None
"""
The value of this keyword MUST be an array. Elements of this array,
if any, MUST be strings, and MUST be unique.
An object instance is valid against this keyword if every item in the
array is the name of a property in the instance.
Omitting this keyword has the same behavior as an empty array.
"""
dependentRequired: Optional[dict[str, list[str]]] = None
"""
The value of this keyword MUST be an object. Properties in this
object, if any, MUST be arrays. Elements in each array, if any, MUST
be strings, and MUST be unique.
This keyword specifies properties that are required if a specific
other property is present. Their requirement is dependent on the
presence of the other property.
Validation succeeds if, for each name that appears in both the
instance and as a name within this keyword's value, every item in the
corresponding array is also the name of a property in the instance.
Omitting this keyword has the same behavior as an empty object.
"""
schema_format: Optional[str] = Field(default=None, alias="format")
"""
From OpenAPI:
See [Data Type Formats](#dataTypeFormat) for further details.
While relying on JSON Schema's defined formats, the OAS offers a few additional predefined formats.
From JSON Schema:
Structural validation alone may be insufficient to allow an
application to correctly utilize certain values. The "format"
annotation keyword is defined to allow schema authors to convey
semantic information for a fixed subset of values which are
accurately described by authoritative resources, be they RFCs or
other external specifications.
The value of this keyword is called a format attribute. It MUST be a
string. A format attribute can generally only validate a given set
of instance types. If the type of the instance to validate is not in
this set, validation for this format attribute and instance SHOULD
succeed. All format attributes defined in this section apply to
strings, but a format attribute can be specified to apply to any
instance types defined in the data model defined in the core JSON
Schema. [json-schema] [[CREF1: Note that the "type" keyword in this
specification defines an "integer" type which is not part of the data
model. Therefore a format attribute can be limited to numbers, but
not specifically to integers. However, a numeric format can be used
alongside the "type" keyword with a value of "integer", or could be
explicitly defined to always pass if the number is not an integer,
which produces essentially the same behavior as only applying to
integers. ]]
"""
contentEncoding: Optional[str] = None
"""
If the instance value is a string, this property defines that the
string SHOULD be interpreted as binary data and decoded using the
encoding named by this property.
Possible values indicating base 16, 32, and 64 encodings with several
variations are listed in RFC 4648 [RFC4648]. Additionally, sections
6.7 and 6.8 of RFC 2045 [RFC2045] provide encodings used in MIME. As
"base64" is defined in both RFCs, the definition from RFC 4648 SHOULD
be assumed unless the string is specifically intended for use in a
MIME context. Note that all of these encodings result in strings
consisting only of 7-bit ASCII characters. Therefore, this keyword
has no meaning for strings containing characters outside of that
range.
If this keyword is absent, but "contentMediaType" is present, this
indicates that the encoding is the identity encoding, meaning that no
transformation was needed in order to represent the content in a
UTF-8 string.
"""
contentMediaType: Optional[str] = None
"""
If the instance is a string, this property indicates the media type
of the contents of the string. If "contentEncoding" is present, this
property describes the decoded string.
The value of this property MUST be a string, which MUST be a media
type, as defined by RFC 2046 [RFC2046].
"""
contentSchema: Optional[Union[Reference, "Schema"]] = None
"""
If the instance is a string, and if "contentMediaType" is present,
this property contains a schema which describes the structure of the
string.
This keyword MAY be used with any media type that can be mapped into
JSON Schema's data model.
The value of this property MUST be a valid JSON schema. It SHOULD be
ignored if "contentMediaType" is not present.
"""
title: Optional[str] = None
"""
The value of "title" MUST be a string.
The title can be used to decorate a user interface with
information about the data produced by this user interface.
A title will preferably be short.
"""
description: Optional[str] = None
"""
From OpenAPI:
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
From JSON Schema:
The value "description" MUST be a string.
The description can be used to decorate a user interface with
information about the data produced by this user interface.
A description will provide explanation about the purpose of
the instance described by this schema.
"""
default: Optional[Any] = None
"""
There are no restrictions placed on the value of this keyword. When
multiple occurrences of this keyword are applicable to a single sub-
instance, implementations SHOULD remove duplicates.
This keyword can be used to supply a default JSON value associated
with a particular schema. It is RECOMMENDED that a default value be
valid against the associated schema.
"""
deprecated: Optional[bool] = None
"""
The value of this keyword MUST be a boolean. When multiple
occurrences of this keyword are applicable to a single sub-instance,
applications SHOULD consider the instance location to be deprecated
if any occurrence specifies a true value.
If "deprecated" has a value of boolean true, it indicates that
applications SHOULD refrain from usage of the declared property. It
MAY mean the property is going to be removed in the future.
A root schema containing "deprecated" with a value of true indicates
that the entire resource being described MAY be removed in the
future.
The "deprecated" keyword applies to each instance location to which
the schema object containing the keyword successfully applies. This
can result in scenarios where every array item or object property is
deprecated even though the containing array or object is not.
Omitting this keyword has the same behavior as a value of false.
"""
readOnly: Optional[bool] = None
"""
The value of "readOnly" MUST be a boolean. When multiple
occurrences of this keyword are applicable to a single sub-instance,
the resulting behavior SHOULD be as for a true value if any
occurrence specifies a true value, and SHOULD be as for a false value
otherwise.
If "readOnly" has a value of boolean true, it indicates that the
value of the instance is managed exclusively by the owning authority,
and attempts by an application to modify the value of this property
are expected to be ignored or rejected by that owning authority.
An instance document that is marked as "readOnly" for the entire
document MAY be ignored if sent to the owning authority, or MAY
result in an error, at the authority's discretion.
For example, "readOnly" would be used to mark a database-generated
serial number as read-only, while "writeOnly" would be used to mark a
password input field.
This keyword can be used to assist in user interface instance
generation. In particular, an application MAY choose to use a widget
that hides input values as they are typed for write-only fields.
Omitting these keywords has the same behavior as values of false.
"""
writeOnly: Optional[bool] = None
"""
The value of "writeOnly" MUST be a boolean. When multiple
occurrences of this keyword are applicable to a single sub-instance,
the resulting behavior SHOULD be as for a true value if any
occurrence specifies a true value, and SHOULD be as for a false value
otherwise.
If "writeOnly" has a value of boolean true, it indicates that the
value is never present when the instance is retrieved from the owning
authority. It can be present when sent to the owning authority to
update or create the document (or the resource it represents), but it
will not be included in any updated or newly created version of the
instance.
An instance document that is marked as "writeOnly" for the entire
document MAY be returned as a blank document of some sort, or MAY
produce an error upon retrieval, or have the retrieval request
ignored, at the authority's discretion.
For example, "readOnly" would be used to mark a database-generated
serial number as read-only, while "writeOnly" would be used to mark a
password input field.
This keyword can be used to assist in user interface instance
generation. In particular, an application MAY choose to use a widget
that hides input values as they are typed for write-only fields.
Omitting these keywords has the same behavior as values of false.
"""
examples: Optional[list[Any]] = None
"""
The value of this keyword MUST be an array. There are no
restrictions placed on the values within the array. When multiple
occurrences of this keyword are applicable to a single sub-instance,
implementations MUST provide a flat array of all values rather than
an array of arrays.
This keyword can be used to provide sample JSON values associated
with a particular schema, for the purpose of illustrating usage. It
is RECOMMENDED that these values be valid against the associated
schema.
Implementations MAY use the value(s) of "default", if present, as an
additional example. If "examples" is absent, "default" MAY still be
used in this manner.
"""
"""
The OpenAPI Specification's base vocabulary is comprised of the following keywords:
"""
discriminator: Optional[Discriminator] = None
"""
Adds support for polymorphism.
The discriminator is an object name that is used to differentiate between other schemas
which may satisfy the payload description.
See [Composition and Inheritance](#schemaComposition) for more details.
"""
xml: Optional[XML] = None
"""
This MAY be used only on properties schemas.
It has no effect on root schemas.
Adds additional metadata to describe the XML representation of this property.
"""
externalDocs: Optional[ExternalDocumentation] = None
"""
Additional external documentation for this schema.
"""
example: Optional[Any] = None
"""
A free-form property to include an example of an instance for this schema.
To represent examples that cannot be naturally represented in JSON or YAML,
a string value can be used to contain the example with escaping where necessary.
Deprecated: The example property has been deprecated in favor of the JSON Schema examples keyword.
Use of example is discouraged, and later versions of this specification may remove it.
"""
allow_single_value_as_array: bool = Field(
default=False, alias="x-theorem-allow-single-value-as-array"
)
allow_number_as_int: bool = Field(default=False, alias="x-theorem-allow-number-as-int")
class Config:
extra = Extra.forbid
allow_population_by_field_name = True
schema_extra = {
"examples": [
{"type": "string", "format": "email"},
{
"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string"},
"address": {"$ref": "#/components/schemas/Address"},
"age": {"type": "integer", "format": "int32", "minimum": 0},
},
},
{"type": "object", "additionalProperties": {"type": "string"}},
{
"type": "object",
"additionalProperties": {"$ref": "#/components/schemas/ComplexModel"},
},
{
"type": "object",
"properties": {
"id": {"type": "integer", "format": "int64"},
"name": {"type": "string"},
},
"required": ["name"],
"example": {"name": "Puma", "id": 1},
},
{
"type": "object",
"required": ["message", "code"],
"properties": {
"message": {"type": "string"},
"code": {"type": "integer", "minimum": 100, "maximum": 600},
},
},
{
"allOf": [
{"$ref": "#/components/schemas/ErrorModel"},
{
"type": "object",
"required": ["rootCause"],
"properties": {"rootCause": {"type": "string"}},
},
]
},
{
"type": "object",
"discriminator": {"propertyName": "petType"},
"properties": {"name": {"type": "string"}, "petType": {"type": "string"}},
"required": ["name", "petType"],
},
{
"description": "A representation of a cat. "
"Note that `Cat` will be used as the discriminator value.",
"allOf": [
{"$ref": "#/components/schemas/Pet"},
{
"type": "object",
"properties": {
"huntingSkill": {
"type": "string",
"description": "The measured skill for hunting",
"default": "lazy",
"enum": ["clueless", "lazy", "adventurous", "aggressive"],
}
},
"required": ["huntingSkill"],
},
],
},
{
"description": "A representation of a dog. "
"Note that `Dog` will be used as the discriminator value.",
"allOf": [
{"$ref": "#/components/schemas/Pet"},
{
"type": "object",
"properties": {
"packSize": {
"type": "integer",
"format": "int32",
"description": "the size of the pack the dog is from",
"default": 0,
"minimum": 0,
}
},
"required": ["packSize"],
},
],
},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/schema.py | schema.py |
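
# A hedged parsing sketch (pydantic v1; "format" is the alias for
# schema_format, which allow_population_by_field_name also accepts directly):
#
#     s = Schema.parse_obj({"type": "integer", "format": "int32", "minimum": 0})
#     assert s.schema_format == "int32" and s.minimum == 0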
from typing import Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from pydantic import Field
from .operation import Operation
from .parameter import Parameter
from .reference import Reference
from .server import Server
class PathItem(BaseModel):
"""
Describes the operations available on a single path.
A Path Item MAY be empty, due to [ACL constraints](#securityFiltering).
The path itself is still exposed to the documentation viewer
but they will not know which operations and parameters are available.
"""
ref: Optional[str] = Field(default=None, alias="$ref")
"""
Allows for an external definition of this path item.
The referenced structure MUST be in the format of a [Path Item Object](#pathItemObject).
In case a Path Item Object field appears both in the defined object and the referenced object,
the behavior is undefined.
See the rules for resolving [Relative References](#relativeReferencesURI).
"""
summary: Optional[str] = None
"""
An optional, string summary, intended to apply to all operations in this path.
"""
description: Optional[str] = None
"""
An optional, string description, intended to apply to all operations in this path.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
get: Optional[Operation] = None
"""
A definition of a GET operation on this path.
"""
put: Optional[Operation] = None
"""
A definition of a PUT operation on this path.
"""
post: Optional[Operation] = None
"""
A definition of a POST operation on this path.
"""
delete: Optional[Operation] = None
"""
A definition of a DELETE operation on this path.
"""
options: Optional[Operation] = None
"""
A definition of a OPTIONS operation on this path.
"""
head: Optional[Operation] = None
"""
A definition of a HEAD operation on this path.
"""
patch: Optional[Operation] = None
"""
A definition of a PATCH operation on this path.
"""
trace: Optional[Operation] = None
"""
A definition of a TRACE operation on this path.
"""
servers: Optional[list[Server]] = None
"""
An alternative `server` array to service all operations in this path.
"""
parameters: Optional[list[Union[Parameter, Reference]]] = None
"""
A list of parameters that are applicable for all the operations described under this path.
These parameters can be overridden at the operation level, but cannot be removed there.
The list MUST NOT include duplicated parameters.
A unique parameter is defined by a combination of a [name](#parameterName) and [location](#parameterIn).
The list can use the [Reference Object](#referenceObject) to link to parameters that are defined at the
[OpenAPI Object's components/parameters](#componentsParameters).
"""
class Config:
extra = Extra.ignore
allow_population_by_field_name = True
schema_extra = {
"examples": [
{
"get": {
"description": "Returns pets based on ID",
"summary": "Find pets by ID",
"operationId": "getPetsById",
"responses": {
"200": {
"description": "pet response",
"content": {
"*/*": {
"schema": {
"type": "array",
"items": {"$ref": "#/components/schemas/Pet"},
}
}
},
},
"default": {
"description": "error payload",
"content": {
"text/html": {
"schema": {"$ref": "#/components/schemas/ErrorModel"}
}
},
},
},
},
"parameters": [
{
"name": "id",
"in": "path",
"description": "ID of pet to use",
"required": True,
"schema": {"type": "array", "items": {"type": "string"}},
"style": "simple",
}
],
}
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/path_item.py | path_item.py |
from typing import Union
from .reference import Reference
from .response import Response
Responses = dict[str, Union[Response, Reference]]
"""
A container for the expected responses of an operation.
The container maps a HTTP response code to the expected response.
The documentation is not necessarily expected to cover all possible HTTP response codes
because they may not be known in advance.
However, documentation is expected to cover a successful operation response and any known errors.
The `default` MAY be used as a default response object for all HTTP codes
that are not covered individually by the specification.
The `Responses Object` MUST contain at least one response code, and it
SHOULD be the response for a successful operation call.
"""
"""Fixed Fields"""
# default: Optional[Union[Response, Reference]]
"""
The documentation of responses other than the ones declared for specific HTTP response codes.
Use this field to cover undeclared responses.
A [Reference Object](#referenceObject) can link to a response
that the [OpenAPI Object's components/responses](#componentsResponses) section defines.
"""
"""Patterned Fields"""
# {httpStatusCode}: Optional[Union[Response, Reference]]
"""
Any [HTTP status code](#httpCodes) can be used as the property name,
but only one property per code, to describe the expected response for that HTTP status code.
A [Reference Object](#referenceObject) can link to a response
that is defined in the [OpenAPI Object's components/responses](#componentsResponses) section.
This field MUST be enclosed in quotation marks (for example, "200") for compatibility between JSON and YAML.
To define a range of response codes, this field MAY contain the uppercase wildcard character `X`.
For example, `2XX` represents all response codes between `[200-299]`.
Only the following range definitions are allowed: `1XX`, `2XX`, `3XX`, `4XX`, and `5XX`.
If a response is defined using an explicit code,
the explicit code definition takes precedence over the range definition for that code.
""" | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/responses.py | responses.py |
from typing import Any, Optional
from pydantic import BaseModel
from pydantic import Extra
from .server import Server
class Link(BaseModel):
"""
The `Link object` represents a possible design-time link for a response.
The presence of a link does not guarantee the caller's ability to successfully invoke it,
rather it provides a known relationship and traversal mechanism between responses and other operations.
Unlike _dynamic_ links (i.e. links provided **in** the response payload),
the OAS linking mechanism does not require link information in the runtime response.
For computing links, and providing instructions to execute them,
a [runtime expression](#runtimeExpression) is used for accessing values in an operation
and using them as parameters while invoking the linked operation.
"""
operationRef: Optional[str] = None
"""
A relative or absolute URI reference to an OAS operation.
This field is mutually exclusive of the `operationId` field,
and MUST point to an [Operation Object](#operationObject).
Relative `operationRef` values MAY be used to locate an existing [Operation Object](#operationObject)
in the OpenAPI definition. See the rules for resolving [Relative References](#relativeReferencesURI).
"""
operationId: Optional[str] = None
"""
The name of an _existing_, resolvable OAS operation, as defined with a unique `operationId`.
This field is mutually exclusive of the `operationRef` field.
"""
parameters: Optional[dict[str, Any]] = None
"""
A map representing parameters to pass to an operation
as specified with `operationId` or identified via `operationRef`.
The key is the parameter name to be used,
whereas the value can be a constant or an expression to be evaluated and passed to the linked operation.
The parameter name can be qualified using the [parameter location](#parameterIn) `[{in}.]{name}`
for operations that use the same parameter name in different locations (e.g. path.id).
"""
requestBody: Optional[Any] = None
"""
A literal value or [{expression}](#runtimeExpression) to use as a request body when calling the target operation.
"""
description: Optional[str] = None
"""
A description of the link.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
server: Optional[Server] = None
"""
A server object to be used by the target operation.
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"operationId": "getUserAddressByUUID",
"parameters": {"userUuid": "$response.body#/uuid"},
},
{
"operationRef": "#/paths/~12.0~1repositories~1{username}/get",
"parameters": {"username": "$response.body#/username"},
},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/link.py | link.py |
from typing import Optional, Union
from pydantic import AnyUrl
from pydantic import BaseModel
from pydantic import Extra
class OAuthFlow(BaseModel):
"""
Configuration details for a supported OAuth Flow
"""
authorizationUrl: Optional[Union[AnyUrl, str]] = None
"""
**REQUIRED** for `oauth2 ("implicit", "authorizationCode")`.
The authorization URL to be used for this flow.
This MUST be in the form of a URL.
The OAuth2 standard requires the use of TLS.
"""
tokenUrl: Optional[Union[AnyUrl, str]] = None
"""
**REQUIRED** for `oauth2 ("password", "clientCredentials", "authorizationCode")`.
The token URL to be used for this flow.
This MUST be in the form of a URL.
The OAuth2 standard requires the use of TLS.
"""
refreshUrl: Optional[Union[AnyUrl, str]] = None
"""
The URL to be used for obtaining refresh tokens.
This MUST be in the form of a URL.
The OAuth2 standard requires the use of TLS.
"""
scopes: Optional[dict[str, str]] = None
"""
**REQUIRED** for `oauth2`. The available scopes for the OAuth2 security scheme.
A map between the scope name and a short description for it.
The map MAY be empty.
"""
class Config:
extra = Extra.ignore
schema_extra = {
"examples": [
{
"authorizationUrl": "https://example.com/api/oauth/dialog",
"scopes": {
"write:pets": "modify pets in your account",
"read:pets": "read your pets",
},
},
{
"authorizationUrl": "https://example.com/api/oauth/dialog",
"tokenUrl": "https://example.com/api/oauth/token",
"scopes": {
"write:pets": "modify pets in your account",
"read:pets": "read your pets",
},
},
{
"authorizationUrl": "/api/oauth/dialog", # issue #5: allow relative path
"tokenUrl": "/api/oauth/token", # issue #5: allow relative path
"refreshUrl": "/api/oauth/token", # issue #5: allow relative path
"scopes": {
"write:pets": "modify pets in your account",
"read:pets": "read your pets",
},
},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/oauth_flow.py | oauth_flow.py |
from typing import Any, Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from pydantic import Field
from .example import Example
from .reference import Reference
from .schema import Schema
class MediaType(BaseModel):
"""Each Media Type Object provides schema and examples for the media type identified by its key."""
media_type_schema: Optional[Union[Reference, Schema]] = Field(default=None, alias="schema")
"""
The schema defining the content of the request, response, or parameter.
"""
example: Optional[Any] = None
"""
Example of the media type.
The example object SHOULD be in the correct format as specified by the media type.
The `example` field is mutually exclusive of the `examples` field.
Furthermore, if referencing a `schema` which contains an example,
the `example` value SHALL _override_ the example provided by the schema.
"""
examples: Optional[dict[str, Union[Example, Reference]]] = None
"""
Examples of the media type.
Each example object SHOULD match the media type and specified schema if present.
The `examples` field is mutually exclusive of the `example` field.
Furthermore, if referencing a `schema` which contains an example,
the `examples` value SHALL _override_ the example provided by the schema.
"""
class Config:
extra = Extra.ignore
allow_population_by_field_name = True
schema_extra = {
"examples": [
{
"schema": {"$ref": "#/components/schemas/Pet"},
"examples": {
"cat": {
"summary": "An example of a cat",
"value": {
"name": "Fluffy",
"petType": "Cat",
"color": "White",
"gender": "male",
"breed": "Persian",
},
},
"dog": {
"summary": "An example of a dog with a cat's name",
"value": {
"name": "Puma",
"petType": "Dog",
"color": "Black",
"gender": "Female",
"breed": "Mixed",
},
},
"frog": {"$ref": "#/components/examples/frog-example"},
},
}
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/media_type.py | media_type.py |
from typing import Any, Optional, Union
from pydantic import BaseModel
from pydantic import Extra
from pydantic import Field
from .example import Example
from .media_type import MediaType
from .reference import Reference
from .schema import Schema
class Parameter(BaseModel):
"""
Describes a single operation parameter.
A unique parameter is defined by a combination of a [name](#parameterName) and [location](#parameterIn).
"""
"""Fixed Fields"""
name: str
"""
**REQUIRED**. The name of the parameter.
Parameter names are *case sensitive*.
- If [`in`](#parameterIn) is `"path"`, the `name` field MUST correspond to a template expression occurring
within the [path](#pathsPath) field in the [Paths Object](#pathsObject).
See [Path Templating](#pathTemplating) for further information.
- If [`in`](#parameterIn) is `"header"` and the `name` field is `"Accept"`, `"Content-Type"` or `"Authorization"`,
the parameter definition SHALL be ignored.
- For all other cases, the `name` corresponds to the parameter name used by the [`in`](#parameterIn) property.
"""
param_in: str = Field(alias="in")
"""
**REQUIRED**. The location of the parameter. Possible values are `"query"`, `"header"`, `"path"` or `"cookie"`.
"""
description: Optional[str] = None
"""
A brief description of the parameter.
This could contain examples of use.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
required: bool = False
"""
Determines whether this parameter is mandatory.
If the [parameter location](#parameterIn) is `"path"`, this property is **REQUIRED** and its value MUST be `true`.
Otherwise, the property MAY be included and its default value is `false`.
"""
deprecated: bool = False
"""
Specifies that a parameter is deprecated and SHOULD be transitioned out of usage.
Default value is `false`.
"""
allowEmptyValue: bool = False
"""
Sets the ability to pass empty-valued parameters.
This is valid only for `query` parameters and allows sending a parameter with an empty value.
Default value is `false`.
If [`style`](#parameterStyle) is used, and if behavior is `n/a` (cannot be serialized),
the value of `allowEmptyValue` SHALL be ignored.
Use of this property is NOT RECOMMENDED, as it is likely to be removed in a later revision.
"""
"""
The rules for serialization of the parameter are specified in one of two ways.
For simpler scenarios, a [`schema`](#parameterSchema) and [`style`](#parameterStyle)
can describe the structure and syntax of the parameter.
"""
style: Optional[str] = None
"""
Describes how the parameter value will be serialized depending on the type of the parameter value.
Default values (based on value of `in`):
- for `query` - `form`;
- for `path` - `simple`;
- for `header` - `simple`;
- for `cookie` - `form`.
"""
explode: bool = False
"""
When this is true, parameter values of type `array` or `object` generate separate parameters
for each value of the array or key-value pair of the map.
For other types of parameters this property has no effect.
When [`style`](#parameterStyle) is `form`, the default value is `true`.
For all other styles, the default value is `false`.
"""
allowReserved: bool = False
"""
Determines whether the parameter value SHOULD allow reserved characters,
as defined by [RFC3986](https://tools.ietf.org/html/rfc3986#section-2.2)
`:/?#[]@!$&'()*+,;=` to be included without percent-encoding.
This property only applies to parameters with an `in` value of `query`.
The default value is `false`.
"""
param_schema: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema")
"""
The schema defining the type used for the parameter.
"""
example: Optional[Any] = None
"""
Example of the parameter's potential value.
The example SHOULD match the specified schema and encoding properties if present.
The `example` field is mutually exclusive of the `examples` field.
Furthermore, if referencing a `schema` that contains an example,
the `example` value SHALL _override_ the example provided by the schema.
To represent examples of media types that cannot naturally be represented in JSON or YAML,
a string value can contain the example with escaping where necessary.
"""
examples: Optional[dict[str, Union[Example, Reference]]] = None
"""
Examples of the parameter's potential value.
Each example SHOULD contain a value in the correct format as specified in the parameter encoding.
The `examples` field is mutually exclusive of the `example` field.
Furthermore, if referencing a `schema` that contains an example,
the `examples` value SHALL _override_ the example provided by the schema.
"""
"""
For more complex scenarios, the [`content`](#parameterContent) property
can define the media type and schema of the parameter.
A parameter MUST contain either a `schema` property, or a `content` property, but not both.
When `example` or `examples` are provided in conjunction with the `schema` object,
the example MUST follow the prescribed serialization strategy for the parameter.
"""
content: Optional[dict[str, MediaType]] = None
"""
A map containing the representations for the parameter.
The key is the media type and the value describes it.
The map MUST only contain one entry.
"""
class Config:
extra = Extra.ignore
allow_population_by_field_name = True
schema_extra = {
"examples": [
{
"name": "token",
"in": "header",
"description": "token to be passed as a header",
"required": True,
"schema": {"type": "array", "items": {"type": "integer", "format": "int64"}},
"style": "simple",
},
{
"name": "username",
"in": "path",
"description": "username to fetch",
"required": True,
"schema": {"type": "string"},
},
{
"name": "id",
"in": "query",
"description": "ID of the object to fetch",
"required": False,
"schema": {"type": "array", "items": {"type": "string"}},
"style": "form",
"explode": True,
},
{
"in": "query",
"name": "freeForm",
"schema": {"type": "object", "additionalProperties": {"type": "integer"}},
"style": "form",
},
{
"in": "query",
"name": "coordinates",
"content": {
"application/json": {
"schema": {
"type": "object",
"required": ["lat", "long"],
"properties": {
"lat": {"type": "number"},
"long": {"type": "number"},
},
}
}
},
},
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/parameter.py | parameter.py |
from typing import Optional, Union
from pydantic import AnyUrl
from pydantic import BaseModel
from pydantic import Extra
from pydantic import Field
from .oauth_flows import OAuthFlows
class SecurityScheme(BaseModel):
"""
Defines a security scheme that can be used by the operations.
Supported schemes are HTTP authentication,
an API key (either as a header, a cookie parameter or as a query parameter),
mutual TLS (use of a client certificate),
OAuth2's common flows (implicit, password, client credentials and authorization code)
as defined in [RFC6749](https://tools.ietf.org/html/rfc6749),
and [OpenID Connect Discovery](https://tools.ietf.org/html/draft-ietf-oauth-discovery-06).
Please note that as of 2020, the implicit flow is about to be deprecated by
[OAuth 2.0 Security Best Current Practice](https://tools.ietf.org/html/draft-ietf-oauth-security-topics).
    The recommended flow for most use cases is the Authorization Code Grant with PKCE.
"""
type: str
"""
**REQUIRED**. The type of the security scheme.
    Valid values are `"apiKey"`, `"http"`, `"mutualTLS"`, `"oauth2"`, `"openIdConnect"`.
"""
description: Optional[str] = None
"""
A description for security scheme.
[CommonMark syntax](https://spec.commonmark.org/) MAY be used for rich text representation.
"""
name: Optional[str] = None
"""
**REQUIRED** for `apiKey`. The name of the header, query or cookie parameter to be used.
"""
security_scheme_in: Optional[str] = Field(alias="in", default=None)
"""
**REQUIRED** for `apiKey`. The location of the API key. Valid values are `"query"`, `"header"` or `"cookie"`.
"""
scheme: Optional[str] = None
"""
**REQUIRED** for `http`. The name of the HTTP Authorization scheme to be used in the
[Authorization header as defined in RFC7235](https://tools.ietf.org/html/rfc7235#section-5.1).
The values used SHOULD be registered in the
[IANA Authentication Scheme registry](https://www.iana.org/assignments/http-authschemes/http-authschemes.xhtml).
"""
bearerFormat: Optional[str] = None
"""
A hint to the client to identify how the bearer token is formatted.
Bearer tokens are usually generated by an authorization server,
so this information is primarily for documentation purposes.
"""
flows: Optional[OAuthFlows] = None
"""
**REQUIRED** for `oauth2`. An object containing configuration information for the flow types supported.
"""
openIdConnectUrl: Optional[Union[AnyUrl, str]] = None
"""
**REQUIRED** for `openIdConnect`. OpenId Connect URL to discover OAuth2 configuration values.
This MUST be in the form of a URL. The OpenID Connect standard requires the use of TLS.
"""
class Config:
extra = Extra.ignore
allow_population_by_field_name = True
schema_extra = {
"examples": [
{"type": "http", "scheme": "basic"},
{"type": "apiKey", "name": "api_key", "in": "header"},
{"type": "http", "scheme": "bearer", "bearerFormat": "JWT"},
{
"type": "oauth2",
"flows": {
"implicit": {
"authorizationUrl": "https://example.com/api/oauth/dialog",
"scopes": {
"write:pets": "modify pets in your account",
"read:pets": "read your pets",
},
}
},
},
{"type": "openIdConnect", "openIdConnectUrl": "https://example.com/openIdConnect"},
{
"type": "openIdConnect",
"openIdConnectUrl": "openIdConnect",
}, # issue #5: allow relative path
]
} | zinc-api | /zinc-api-0.0.3.tar.gz/zinc-api-0.0.3/zinc/openapi_schema_pydantic/security_scheme.py | security_scheme.py |
from typing import Dict, Union
import kix
from .infrastructure_service_field import InfrastructureServiceField as Field
class InfrastructureServiceModel:
def __init__(self):
self._all_fields: Dict[str, Field] = {}
# General.
self.aws_account_id: Field = self._add_field("Z_AWS_ACCOUNT_ID")
self.aws_region: Field = self._add_field("Z_AWS_REGION")
self.project_name: Field = self._add_field("Z_PROJECT_NAME")
# Static Site Creation.
self.create_static_site: Field = self._add_field("Z_CREATE_STATIC_SITE", default=False)
self.domain_name: Field = self._add_field("Z_DOMAIN_NAME")
self.static_site_bucket_name: Field = self._add_field("Z_STATIC_SITE_BUCKET")
# CRUD API Creation.
self.create_crud_api: Field = self._add_field("Z_CREATE_CRUD_API", default=False)
# Contact Form API Creation.
self.create_contact_api: Field = self._add_field("Z_CREATE_CONTACT_API", default=False)
self.forwarding_email: Field = self._add_field("Z_FORWARDING_EMAIL")
def _add_field(self, key: str, default: Union[str, bool] = "") -> Field:
field = Field(key, default)
self._all_fields[key] = field
return field
def save_to_environ(self):
for field in self._all_fields.values():
field.write_to_environ()
def load_from_environ(self):
data = {}
for field in self._all_fields.values():
field.load_from_environ()
data[field.key] = field.value
kix.info("Loading Infrastructure Model from Environment", data)
def append(self, new_model: 'InfrastructureServiceModel'):
# Absorb the edited fields from the new model.
for k, field in self._all_fields.items():
new_field = new_model._all_fields[k]
if new_field.was_edited:
field.set(new_field.value)
def get_command_line_args(self) -> str:
# Turn all the non-default fields into environment arguments.
commands: Dict[str, str] = self.get_command_line_dict()
commands_list = [f"{k}={v}" for k, v in commands.items()]
return " ".join(commands_list)
def get_command_line_dict(self) -> Dict[str, str]:
# Turn all the non-default fields into environment arguments.
commands: Dict[str, str] = {}
for field in self._all_fields.values():
if field.was_edited:
commands[field.key] = str(field.value)
return commands | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/infrastructure/models/infrastructure_service_model.py | infrastructure_service_model.py |
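# Minimal usage sketch (illustrative, not invoked by the package). It assumes
# that Field.set() marks the field as edited, mirroring how append() copies
# edited fields between models.
if __name__ == "__main__":
    model = InfrastructureServiceModel()
    model.project_name.set("demo-project")
    model.create_static_site.set(True)
    # Only the edited fields should appear, e.g.:
    # Z_PROJECT_NAME=demo-project Z_CREATE_STATIC_SITE=True
    print(model.get_command_line_args())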
import os
from typing import Optional, Dict, List
import kix
from aws_cdk import core, aws_dynamodb, aws_route53, aws_certificatemanager, aws_apigateway, \
    aws_route53_targets, aws_cognito
from aws_cdk.aws_cognito import SignInAliases, UserVerificationConfig, VerificationEmailStyle
from aws_cdk.aws_route53 import HostedZone
class CDKMasterStack(core.Stack):
def __init__(self, scope: core.Construct, id: str, domain: str, **kwargs) -> None:
super().__init__(scope, id, **kwargs)
self.stack_module_path: str = os.path.dirname(__file__)
self.construct_id: str = id
self.domain: str = domain
# Create the hosted zone.
self.zone: HostedZone = HostedZone.from_lookup(
self, "HostedZone",
domain_name=domain,
private_zone=False)
kix.info(f"Zone from look-up: {self.zone.zone_name}")
# Create the data table.
public_crud_table_name: str = "public_crud" + domain.replace(".", "_")
self.public_table: aws_dynamodb.Table = self.create_table(public_crud_table_name, "PublicCrudTable")
# Create the user pool.
self.user_pool: aws_cognito.UserPool = self._create_user_pool()
# Create the API Gateway.
self._rest_api: Optional[aws_apigateway.RestApi] = None
self._api_map: Dict[str, aws_apigateway.Resource] = {}
def _create_user_pool(self) -> aws_cognito.UserPool:
user_pool = aws_cognito.UserPool(
scope=self,
id="UserPoolX",
# auto_verify=aws_cognito.AutoVerifiedAttrs(email=True),
self_sign_up_enabled=True,
sign_in_aliases=SignInAliases(email=True),
user_verification=UserVerificationConfig(email_style=VerificationEmailStyle.LINK)
)
aws_cognito.UserPoolClient(
scope=self,
user_pool=user_pool,
id="AuthClientWeb",
generate_secret=False
)
return user_pool
def get_rest_api(self):
if self._rest_api is None:
self._rest_api = self.create_root_api_gateway(f"apix.{self.domain}")
return self._rest_api
def _get_api_entity(self, api_path: str):
if api_path not in self._api_map:
self._api_map[api_path] = self.get_rest_api().root.add_resource(api_path)
api_entity: aws_apigateway.Resource = self._api_map[api_path]
return api_entity
def add_api_method(self, api_path: str, method: str, lambda_handler) -> aws_apigateway.Resource:
# Either look-up or create the API entity.
api_entity: aws_apigateway.Resource = self._get_api_entity(api_path)
# Create the Lambda integration out of the provided Lambda handler.
lambda_integration = aws_apigateway.LambdaIntegration(
lambda_handler, proxy=False, integration_responses=[self.get_integration_response()])
# Create the API Method (adding the integration).
api_entity.add_method(method, lambda_integration, method_responses=[self.get_method_response()])
self.add_cors_options(api_entity)
return api_entity
def _add_generic_api_method(
self, api_resource: aws_apigateway.Resource, integration: aws_apigateway.LambdaIntegration,
methods: List[str], authorizer: Optional[aws_apigateway.CfnAuthorizer]):
auth_type = None if authorizer is None else aws_apigateway.AuthorizationType.COGNITO
for method in methods:
api_method = api_resource.add_method(
method,
integration,
method_responses=[self.get_method_response()],
authorization_type=auth_type
)
if authorizer:
api_method.node.find_child("Resource").add_property_override('AuthorizerId', authorizer.ref)
def create_root_api_gateway(self, api_domain_name: str):
""" We need this to create the root API Gateway resource. """
certificate = self.create_api_certificate(api_domain_name, self.zone)
domain_options = aws_apigateway.DomainNameOptions(domain_name=api_domain_name, certificate=certificate)
stage_options = aws_apigateway.StageOptions(
throttling_rate_limit=10,
throttling_burst_limit=100
)
rest_api = aws_apigateway.RestApi(
self, 'PublicCrudApi',
rest_api_name='PublicCrudApi',
domain_name=domain_options,
deploy_options=stage_options
)
kix.info("Routing A-Record Alias for REST API")
a_record_target = aws_route53.RecordTarget.from_alias(aws_route53_targets.ApiGateway(rest_api))
aws_route53.ARecord(
self, "PublicCrudApiAliasRecord",
zone=self.zone,
target=a_record_target,
record_name=api_domain_name)
        # NOTE: per-method Cognito authorizers can be attached via _add_generic_api_method.
return rest_api
def create_api_certificate(self, domain: str, zone: aws_route53.HostedZone):
kix.info("Creating Certificate")
cert = aws_certificatemanager.DnsValidatedCertificate(
self, f"ApiCertificate",
domain_name=domain,
hosted_zone=zone)
core.CfnOutput(self, 'ApiCertificateArn', value=cert.certificate_arn)
return cert
def create_table(self, table_name: str, construct_id: str) -> aws_dynamodb.Table:
partition_key_attr = aws_dynamodb.Attribute(name="pk", type=aws_dynamodb.AttributeType.STRING)
sort_key_attr = aws_dynamodb.Attribute(name="sk", type=aws_dynamodb.AttributeType.STRING)
table = aws_dynamodb.Table(
scope=self,
id=construct_id,
table_name=table_name,
partition_key=partition_key_attr,
sort_key=sort_key_attr,
billing_mode=aws_dynamodb.BillingMode.PAY_PER_REQUEST,
removal_policy=core.RemovalPolicy.DESTROY)
return table
@staticmethod
def add_cors_options(api_gateway_resource: aws_apigateway.Resource):
api_gateway_resource.add_method(
'OPTIONS',
aws_apigateway.MockIntegration(
integration_responses=[CDKMasterStack.get_options_integration_response()],
passthrough_behavior=aws_apigateway.PassthroughBehavior.WHEN_NO_MATCH,
request_templates={"application/json": "{\"statusCode\":200}"}),
method_responses=[CDKMasterStack.get_options_method_response()])
@staticmethod
def get_integration_response():
integration_response = aws_apigateway.IntegrationResponse(
status_code="200",
response_parameters={'method.response.header.Access-Control-Allow-Origin': "'*'"})
return integration_response
@staticmethod
def get_method_response():
method_response = aws_apigateway.MethodResponse(
status_code="200",
response_parameters={'method.response.header.Access-Control-Allow-Origin': True})
return method_response
@staticmethod
def get_options_integration_response():
integration_response = aws_apigateway.IntegrationResponse(
status_code="200",
response_parameters={
'method.response.header.Access-Control-Allow-Headers': "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'",
'method.response.header.Access-Control-Allow-Origin': "'*'",
'method.response.header.Access-Control-Allow-Methods': "'GET,OPTIONS'"
}
)
return integration_response
@staticmethod
def get_options_method_response():
method_response = aws_apigateway.MethodResponse(
status_code="200",
response_parameters={
'method.response.header.Access-Control-Allow-Headers': True,
'method.response.header.Access-Control-Allow-Methods': True,
'method.response.header.Access-Control-Allow-Origin': True,
}
)
return method_response
@staticmethod
def get_request_template() -> str:
return "{\"auth_sub\": \"$context.authorizer.claims.sub\",\n" \
"\"method\": \"$context.httpMethod\",\n" \
"\"body\" : $input.json('$'),\n" \
"\"queryParams\": {\n" \
"#foreach($param in $input.params().querystring.keySet())\n" \
"\"$param\": \"$util.escapeJavaScript($input.params().querystring.get($param))\" #if($foreach.hasNext),#end\n" \
"#end\n" \
"},\n" \
"\"pathParams\": {\n" \
"#foreach($param in $input.params().path.keySet())\n" \
"\"$param\": \"$util.escapeJavaScript($input.params().path.get($param))\" #if($foreach.hasNext),#end\n" \
"#end\n" \
"}\n" \
"}\n" | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/infrastructure/services/master_stack.py | master_stack.py |
import kix
from aws_cdk import core, aws_route53, aws_s3, aws_certificatemanager, aws_cloudfront, aws_route53_targets
from services.master_stack import CDKMasterStack
def add_static_site(stack: CDKMasterStack, domain: str, bucket_name: str, prefix: str = ""):
# Construct code goes here
core.CfnOutput(stack, f"{prefix}Site", value=f"https://{domain}")
# Content bucket
kix.info("Bucket Name: " + bucket_name)
site_bucket = aws_s3.Bucket(
stack, f"{prefix}SiteBucket",
bucket_name=bucket_name,
website_index_document="index.html",
website_error_document="index.html",
public_read_access=True,
removal_policy=core.RemovalPolicy.DESTROY)
core.CfnOutput(stack, f"{prefix}BucketArn", value=site_bucket.bucket_arn)
# Certificate
kix.info("Creating Certificate")
cert = aws_certificatemanager.DnsValidatedCertificate(
stack, f"{prefix}ValidatedCert",
domain_name=domain,
hosted_zone=stack.zone)
core.CfnOutput(stack, f"{prefix}CertificateArn", value=cert.certificate_arn)
kix.info("Creating Distribution")
distribution = aws_cloudfront.CloudFrontWebDistribution(
stack, f"{prefix}SiteDistribution",
alias_configuration=aws_cloudfront.AliasConfiguration(
acm_cert_ref=cert.certificate_arn,
names=[domain],
ssl_method=aws_cloudfront.SSLMethod.SNI,
security_policy=aws_cloudfront.SecurityPolicyProtocol.TLS_V1_1_2016,
),
origin_configs=[
aws_cloudfront.SourceConfiguration(
s3_origin_source=aws_cloudfront.S3OriginConfig(s3_bucket_source=site_bucket),
behaviors=[aws_cloudfront.Behavior(is_default_behavior=True)]
)],
error_configurations=[
aws_cloudfront.CfnDistribution.CustomErrorResponseProperty(
error_code=403,
response_code=200,
response_page_path="/index.html"
),
aws_cloudfront.CfnDistribution.CustomErrorResponseProperty(
error_code=404,
response_code=200,
response_page_path="/index.html"
)
]
)
core.CfnOutput(stack, f"{prefix}DistributionId", value=distribution.distribution_id)
    a_record_target = aws_route53.RecordTarget.from_alias(aws_route53_targets.CloudFrontTarget(distribution))
# Route 53 alias record for the CloudFront distribution
kix.info("Routing A-Record Alias")
aws_route53.ARecord(
stack, f"{prefix}SiteAliasRecord",
zone=stack.zone,
target=a_record_target,
record_name=domain) | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/infrastructure/services/static_site/cdk_static_site_stack.py | cdk_static_site_stack.py |
import kix
from aws_cdk import (
core,
aws_lambda,
aws_apigateway,
aws_certificatemanager, aws_route53, aws_route53_targets, aws_iam)
import os
from aws_cdk.aws_iam import PolicyStatement
from aws_cdk.custom_resources import AwsCustomResource, AwsSdkCall, PhysicalResourceId, AwsCustomResourcePolicy
from services.master_stack import CDKMasterStack
def add_contact_api(stack: CDKMasterStack, project_name: str, domain: str, forwarding_email: str):
module_path = os.path.dirname(__file__)
lambda_path = os.path.join(module_path, "lambda")
api_path = "contact"
base_lambda = aws_lambda.Function(
stack, 'ContactFormLambda',
handler='lambda_handler.handler',
runtime=aws_lambda.Runtime.PYTHON_3_7,
environment={
"TARGET_EMAIL": forwarding_email,
"SENDER_EMAIL": f"contact@{domain}",
"SENDER_NAME": f"{project_name.capitalize()}",
"SENDER": f"{project_name.capitalize()} Contact Form <contact@{domain}>"
},
code=aws_lambda.Code.asset(lambda_path),
)
base_lambda.add_to_role_policy(aws_iam.PolicyStatement(
effect=aws_iam.Effect.ALLOW,
resources=["*"],
actions=["ses:SendEmail", "ses:SendRawEmail"]))
verify_domain_create_call = AwsSdkCall(service="SES",
action="verifyDomainIdentity",
parameters={"Domain": domain},
physical_resource_id=PhysicalResourceId.from_response("VerificationToken"))
policy_statement = PolicyStatement(actions=["ses:VerifyDomainIdentity"], resources=["*"])
verify_domain_identity = AwsCustomResource(
stack, "VerifyDomainIdentity",
on_create=verify_domain_create_call,
policy=AwsCustomResourcePolicy.from_statements(statements=[policy_statement])
)
aws_route53.TxtRecord(
stack, "SESVerificationRecord",
zone=stack.zone,
record_name=f"_amazonses.{domain}",
values=[verify_domain_identity.get_response_field("VerificationToken")]
)
stack.add_api_method(api_path, "POST", base_lambda) | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/infrastructure/services/contact_api/cdk_contact_api_stack.py | cdk_contact_api_stack.py |
import uuid
from typing import Dict, List, Tuple
import boto3
from botocore.exceptions import ClientError
import os
TARGET_EMAIL_KEY = "TARGET_EMAIL"
SENDER_EMAIL_KEY = "SENDER_EMAIL"
SENDER_NAME_KEY = "SENDER_NAME"
def handler(event, context):
response = {
'statusCode': 200,
"body": "No event body was found.",
"event": event
}
try:
contact_payload, fields_payload = create_payloads_from_event(event)
name = event["name"] if "name" in event else "Unknown"
email = event["email"] if "email" in event else "[email protected]"
email = "[email protected]" if (len(email) == 0 or "@" not in email) else email
# Replace [email protected] with your "From" address.
# This address must be verified with Amazon SES.
service_name: str = os.environ[SENDER_NAME_KEY] # "Zinc Admin <[email protected]>"
service_email: str = os.environ[SENDER_EMAIL_KEY] # "Zinc Admin <[email protected]>"
sender_id = uuid.uuid4().hex[:12]
service_email = f"{sender_id}-{service_email}"
sender = f"{name} via {service_name} <{service_email}>"
# Replace [email protected] with a "To" address. If your account
# is still in the sandbox, this address must be verified.
if TARGET_EMAIL_KEY not in os.environ:
return {"statusCode": "500", "body": "No target email detected."}
RECIPIENT = os.environ[TARGET_EMAIL_KEY]
# If necessary, replace us-west-2 with the AWS Region you're using for Amazon SES.
AWS_REGION = "us-east-1"
# The subject line for the email.
name = contact_payload["name"] if "name" in contact_payload else "Unknown"
SUBJECT = f"{name} sent you a message"
# The character encoding for the email.
CHARSET = "UTF-8"
html_body = create_html_from_payload(contact_payload, fields_payload)
text_body = create_text_from_payload(contact_payload, fields_payload)
# Create a new SES resource and specify a region.
client = boto3.client('ses', region_name=AWS_REGION)
# Provide the contents of the email.
ses_response = client.send_email(
Destination={'ToAddresses': [RECIPIENT]},
Message={
'Body': {
'Html': {
'Charset': CHARSET,
'Data': html_body,
},
'Text': {
'Charset': CHARSET,
'Data': text_body,
},
},
'Subject': {
'Charset': CHARSET,
'Data': SUBJECT,
},
},
ReplyToAddresses=[email],
Source=sender
)
# Display an error if something goes wrong.
except ClientError as e:
print(e.response['Error']['Message'])
response["ses_response"] = e.response['Error']['Message']
else:
print("Email sent! Message ID:"),
print(ses_response['MessageId'])
response["ses_response"] = "Successful mail sent!"
return response
def create_payloads_from_event(event) -> Tuple[dict, dict]:
known_keys = ["notes"]
contact_keys = ["name", "email", "phone"]
contact_payload: Dict[str, str] = {}
fields_payload: Dict[str, str] = {}
# Add all known keys.
for k in contact_keys:
if k in event:
contact_payload[k] = event[k]
# Add all known keys.
for k in known_keys:
if k in event:
fields_payload[k] = event[k]
# Add all fielded keys.
if "fields" in event:
data = event["fields"]
for k in data:
fields_payload[k] = data[k]
return contact_payload, fields_payload
def create_html_from_payload(contact_payload: Dict[str, str], fields_payload: Dict[str, str]):
html_head = """
<html><head><style>
.container { max-width: 680px; }
.box { padding: 0.7em; margin: 0.5em; border: 1px solid #ccc; border-radius: 3px; }
.cell-title { font-size: 0.9em; }
.cell-title-grey { color: #666; font-style: italic}
.info-box { background-color: #def9ff; color: #3289e6; border-color: #3289e6;}
.header-cell {display: flex; }
.header-label {color: #888; width: 5em;}
.header-content {}
.row-item:not(:last-child) {margin-bottom: 0.5em;}
</style></head><body>
<div class="container">
"""
html_tail = """
</div></body></html>
"""
# Create the content.
content = []
# Add the information section.
info_box = create_info_box("This message was generated by your contact form from your website.")
content.append(info_box)
# Create fields.
contact_elements = []
for k, v in contact_payload.items():
element = create_contact_element(k.capitalize(), v)
contact_elements.append(element)
contact_section = create_contact_section(contact_elements)
content.append(contact_section)
# Create fields.
for k, v in fields_payload.items():
item = create_form_box(k.capitalize(), v)
content.append(item)
# Put it all together.
html_body = html_head + "\n".join(content) + html_tail
return html_body
def create_text_from_payload(contact_payload: Dict[str, str], fields_payload: Dict[str, str]):
# Create the content.
content = []
for k, v in contact_payload.items():
item = f"{k}: {v}"
content.append(item)
for k, v in fields_payload.items():
item = f"{k}: {v}"
content.append(item)
return "\n".join(content) + "\n\n" + "This message was generated by your contact form from your website."
def create_info_box(text: str):
return f"""
<div class="box info-box">
<div>{text}</div>
</div>
"""
def create_form_box(label: str, body: str):
return f"""
<div class="box">
<div class="cell-title cell-title-grey">{label}</div>
<div>{body}</div>
</div>
"""
def create_contact_section(elements: List[str]) -> str:
return f"""
<div class="box">
{''.join(elements)}
</div>
"""
def create_contact_element(label: str, body: str) -> str:
return f"""
<div class="header-cell row-item">
<div class="header-label">{label}</div>
<div class="header-content">{body}</div>
</div>
""" | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/infrastructure/services/contact_api/lambda/lambda_handler.py | lambda_handler.py |
import argparse
import os
from typing import TextIO
TEMPLATE_TOKEN = "<INSERT_TOKEN>"
INDENT_CHAR = " "
def content_template():
x = """import ZincContentInterface from "./zincContentInterface";
class ZincContent extends ZincContentInterface {
constructor () {
super();
<INSERT_TOKEN>
}
}
export default ZincContent
"""
return x
def invoke():
print("Creating Project...")
parser = argparse.ArgumentParser()
parser.add_argument("-t", "--template_path", type=str, required=True, help="Path to the front-end template to transform into.")
args = parser.parse_args()
template_path = args.template_path
print("tp: " + template_path)
transform_project(template_path)
def transform_project(template_path: str = ""):
# Transforms a project into a TypeScript object so a front-end can import it.
path = os.path.join(template_path, "zinc")
if not os.path.exists(path):
os.makedirs(path, exist_ok=True)
path = os.path.join(path, "zincContent.ts")
template = content_template()
with open(path, "w") as file:
indent_level = 0
for template_line in template.split("\n"):
template_line = template_line.strip()
if TEMPLATE_TOKEN in template_line:
write_content(file, indent_level)
else:
indent_level = process_template_line(file, indent_level, template_line)
def process_template_line(file: TextIO, indent_level: int, template_line: str):
if "}" in template_line:
indent_level -= 1
file.write(get_indent(indent_level))
file.write(template_line + "\n")
if "{" in template_line:
indent_level += 1
return indent_level
def get_indent(level: int):
return level * INDENT_CHAR
def write_content(file: TextIO, indent_level: int):
for i in range(3):
file.write(get_indent(indent_level))
file.write("this.addBody('This is injected content!');")
file.write("\n") | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/commands/zinc_transform.py | zinc_transform.py |
import argparse
from typing import Optional
import kix
class ZincCreateRequest:
def __init__(self):
# Core project arguments.
self.project_name: str = "untitled"
self.domain: str = "blah.com"
# Service APIs.
self.with_contact_api: bool = True
self.forwarding_email: Optional[str] = None
# Meta-data flags.
self.dry_run: bool = True
self.pull_template: bool = True
self.wizard: bool = False
def gather_arguments(self):
self._capture_cli_arguments()
self._execute_wizard()
self._validation()
self._show_arguments()
return self
def _capture_cli_arguments(self):
parser = argparse.ArgumentParser()
# Basic.
parser.add_argument("-n", "--name", type=str, required=True, help="Name of the new project.")
parser.add_argument("--domain", type=str, required=True, help="Bootstrap a static site at the domain.")
# Contact API.
parser.add_argument("--with-contact-api", action="store_true",
help="Whether or not to attach the contact form API.")
parser.add_argument("--forwarding-email", type=str, help="Email to forward contact requests to.")
# Options.
parser.add_argument("--dry-run", action="store_true", help="Do not publish to actual AWS.")
parser.add_argument("--pull-template", action="store_true",
help="Whether or not to pull the local project template.")
parser.add_argument("--wizard", action="store_true", help="On-board with the wizard.")
args = parser.parse_args()
self.project_name: str = args.name
self.domain: str = args.domain
        self.with_contact_api: bool = args.with_contact_api
        self.forwarding_email: Optional[str] = args.forwarding_email
self.dry_run: bool = args.dry_run
self.wizard: bool = args.wizard
self.pull_template: bool = args.pull_template
def _execute_wizard(self):
# Step by step input for each of the arguments.
kix.info(f"Executing Wizard: {self.wizard}")
if not self.wizard:
return
self.domain = kix.prompt.show_text_input(f"Enter domain name")
if kix.prompt.show_yes_no("Would you like to enable contact form API?"):
self.with_contact_api = True
self.forwarding_email = kix.prompt.show_text_input("Contact form forwarding email")
self.pull_template = kix.prompt.show_yes_no("Pull local front-end template from git?")
self.dry_run = kix.prompt.show_yes_no("Is this a dry-run?")
def _validation(self):
if self.with_contact_api and not self.forwarding_email:
message = "Cannot have a contact form if forwarding_email is empty. Please use --forwarding-email."
kix.error(message)
raise Exception(message)
def _show_arguments(self):
data = {
"Project Name": self.project_name,
"Domain": self.domain,
"Contact API": {
"Enabled": self.with_contact_api,
"Forwarding Email": self.forwarding_email
},
"Pull Template": self.pull_template,
"Dry Run": self.dry_run
}
kix.info("Running Zinc Create with Arguments", data) | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/commands/create/zinc_create_request.py | zinc_create_request.py |
from zinc_cli.commands.aws_util.aws_utils import create_aws_service_model, bootstrap_cdk
from zinc_cli.commands.create.contact_api.create_contact_api_cmd import create_contact_api
from zinc_cli.commands.create.contact_api.create_contact_api_request import CreateContactApiRequest
from zinc_cli.commands.create.create_infrastructure import create_infrastructure
from zinc_cli.commands.create.project.create_project_cmd import create_project
from zinc_cli.commands.create.project.create_project_request import CreateProjectRequest
from zinc_cli.commands.create.static_site.create_static_site_cmd import create_static_site
from zinc_cli.commands.create.static_site.create_static_site_request import CreateStaticSiteRequest
from zinc_cli.commands.create.zinc_create_request import ZincCreateRequest
from zinc_cli.infrastructure.models.infrastructure_service_model import InfrastructureServiceModel
def invoke():
master_request: ZincCreateRequest = ZincCreateRequest().gather_arguments()
# Create local resources and template site.
bucket_name = f"static.{master_request.domain}"
project_request = CreateProjectRequest(
project_name=master_request.project_name,
domain_name=master_request.domain,
bucket_name=bucket_name,
dry_run=master_request.dry_run,
pull_template=master_request.pull_template
)
create_project(project_request)
# AWS Validation.
service_model: InfrastructureServiceModel = create_aws_service_model()
# Ensure that CDK is bootstrapped.
bootstrap_cdk(service_model.aws_account_id.value, service_model.aws_region.value)
# Add the static site request to the service model.
static_site_request = CreateStaticSiteRequest(master_request.project_name, master_request.domain, bucket_name)
service_model.append(create_static_site(static_site_request))
# Add Contact Form API to the service model.
if master_request.with_contact_api:
contact_api_request = CreateContactApiRequest(
project_name=master_request.project_name,
forwarding_email=master_request.forwarding_email
)
service_model.append(create_contact_api(contact_api_request))
# Create service infrastructure.
create_infrastructure(service_model, master_request.dry_run)
if __name__ == "__main__":
invoke() | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/commands/create/zinc_create.py | zinc_create.py |
import boto3
from botocore.client import BaseClient
# ======================================================================================================================
# Singleton Methods.
# ======================================================================================================================
class DomainManager:
_AWS_CLIENT = None
@staticmethod
def _client() -> BaseClient:
if DomainManager._AWS_CLIENT is None:
DomainManager._AWS_CLIENT = boto3.client("route53domains", region_name="us-east-1")
return DomainManager._AWS_CLIENT
@staticmethod
def validate(domain_name: str):
try:
domain_name = domain_name.lower()
domain_split = domain_name.split(".")
domain_sections = len(domain_split)
if domain_sections < 2 or domain_sections > 3:
return False
            for domain_str in domain_split:
                # Allow hyphens inside labels (e.g. "my-site.com"), which isalnum() alone would reject.
                if not domain_str.replace("-", "").isalnum():
                    return False
return True
except Exception as e:
print("validation failure", e)
return False
@staticmethod
def user_owns_domain(domain_name: str):
is_domain_owned = False
try:
response = DomainManager._client().get_domain_detail(DomainName=domain_name)
is_domain_owned = True
print(response)
except Exception as e:
print("Unable to find domain: ", str(e))
print(f"Domain Check Result: {domain_name}, owned: {is_domain_owned}")
return is_domain_owned
@staticmethod
def is_domain_available(domain_name: str):
is_available = False
try:
response = DomainManager._client().check_domain_availability(DomainName=domain_name)
availability_status = response["Availability"]
is_available = availability_status == "AVAILABLE"
print(availability_status)
except Exception as e:
print("Unable to find domain: ", str(e))
return is_available | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/commands/create/domain/domain_manager.py | domain_manager.py |
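# Illustrative sketch: validate() is a pure helper and can be exercised
# locally; the ownership and availability checks above require AWS credentials.
if __name__ == "__main__":
    for candidate in ("example.com", "sub.example.com", "not a domain"):
        print(candidate, "->", DomainManager.validate(candidate))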
import json
import shutil
import subprocess
import os
import kix
from .create_project_request import CreateProjectRequest
def create_project(request: CreateProjectRequest):
# Download the template into the directory.
clone_repo = "https://github.com/krinj/zinc-react-template.git"
content_folder_src = "zinc-react-template"
content_folder_dst = "site"
project_path = request.project_name
original_path = os.getcwd()
if request.pull_template:
_create_local_resources(request.project_name, project_path)
_clone_template_site(clone_repo, content_folder_dst, content_folder_src)
_inject_deployment_script(content_folder_dst, request)
_install_project_modules(content_folder_dst)
if request.dry_run:
kix.info("Dry run: Removing all local resources.")
os.chdir(original_path)
shutil.rmtree(project_path)
def _inject_deployment_script(destination: str, request: CreateProjectRequest):
# Replace the deployment script.
package_json_path = os.path.join(destination, "package.json")
with open(package_json_path, "r") as f:
current_package_json = json.load(f)
# Inject the deployment script.
deploy_script = f"aws s3 sync build/ s3://{request.bucket_name} --acl public-read"
current_package_json["scripts"]["deploy"] = deploy_script
with open(package_json_path, "w") as f:
json.dump(current_package_json, f, indent=2)
kix.info("package.json updated with deployment script")
    # Compute the API endpoint. NOTE: this value is currently only computed;
    # writing it into the template is not implemented here yet.
    api_endpoint = f"https://api.{request.domain_name}"
def _install_project_modules(destination: str):
current_dir = os.getcwd()
os.chdir(destination)
subprocess.call(["yarn", "install"])
os.chdir(current_dir)
def _clone_template_site(repo_address: str, destination: str, source_path: str):
kix.info(f"Creating project in {os.getcwd()}")
# Clone the repo.
kix.info(f"Cloning template site from: {repo_address}")
subprocess.call(["git", "clone", repo_address])
# Copy the template folder.
app_folder_path = os.path.join(source_path, "app")
shutil.copytree(app_folder_path, destination)
kix.info(f"Copied template source from {app_folder_path} to {destination}")
# Delete source.
shutil.rmtree(source_path)
kix.info(f"Source removed: {source_path}")
def _create_local_resources(project_name: str, path: str):
# Check if the directory already exists.
if os.path.exists(path):
message = f"Cannot create project {project_name}. The directory {path} already exists."
kix.error(message)
raise IsADirectoryError(message)
# It doesn't exist, so we can try to make the project here.
os.mkdir(path)
os.chdir(path)
kix.info(f"Project path changed to: {os.getcwd()}") | zinc-cli | /zinc_cli-0.3.1-py3-none-any.whl/zinc_cli/commands/create/project/create_project_cmd.py | create_project_cmd.py |
# zinc
[](https://drone.presslabs.net/PressLabs/zinc)
# Welcome to Zinc
Zinc is a Route 53 zone manager.
Zinc was developed by the awesome engineering team at [Presslabs](https://www.presslabs.com/),
a Managed WordPress Hosting provider.
For more open-source projects, check [Presslabs Code](https://www.presslabs.org/).
# Policy Records on the Cheap
Q: Why would one use Zinc over AWS's Policy Records?
A: Price. $50 per record adds up quickly.
# Overview
## IPs, Policies and Policy Records
At the end of the day your domain name `example.com` needs to resolve to one or more
IP addresses. Here's how we go about it.
### IPs
Should be self-explanatory. An IP can be enabled or disabled.
There is no explicit handling in Zinc of multiple IPs belonging to one server.
Enabling or disabling can be done from the admin or by implementing a Django app (see
lattice_sync for an example).
**N.B.** If implementing your own app it's your responsibility to call
`ip.mark_policy_records_dirty` if the IP changes, so that Zinc's reconcile loop will
actually pick up the changes.
### HealthChecks
Zinc will create a Route53 Health Check for each IP. If Route53 deems the IP unavailable,
it will stop routing traffic to it.
Currently the Health Checks are hardcoded to expect all servers to accept requests with the
same FQDN (defaults to node.presslabs.net, set `ZINC_HEALTH_CHECK_FQDN` to change).
### Policies
A policy groups several IPs together. There are 2 types of policies:
* Weighted
* Latency
Note that an IP can be a member of multiple Policies at the same time. A PolicyMember
has its own enabled flag, so you can disable an IP for one Policy only, or you can
disable it for all Policies by setting the enabled flag on the IP model.
#### Weighted
Traffic will be routed to all IPs based on their weights. A bigger weight means more traffic.
#### Latency
Each IP you add to a Policy will have a region specified as well. The region must be an AWS
region. IPs will still have weights, which will be used to balance the traffic within a
region. When a client does a DNS lookup, they'll get directed to the region with the lowest
latency, and then an IP will be picked based on weight.
The resulting setup will be similar to the example described here:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html
### Policy Records
Your desired DNS record. In Route53 it will be an alias to the Latency or Weighted records
that make up a Policy.
## Reconcile Loops and the Single Source of Truth
For simple records in a zone (anything except a PolicyRecord) AWS is the Single Source of
Truth. Zinc never stores those locally.
For Zones, HealthChecks and PolicyRecords Zinc's database is the single source of truth.
Zinc runs reconcile loops and attempts to update your AWS data to match the expected state
in the DB. To minimize throttling by AWS, in most cases, Zinc only attempts to reconcile
objects marked dirty. This means it is possible to have a mismatch between what you
have in AWS and Zinc's expected state if you make changes bypassing Zinc (using the AWS
console, or the API).
## API
You are encouraged to install django-rest-swagger, run zinc locally and explore the API at
http://localhost:8080/swagger
### Policies
Policies are read only trough the API. You can define them in the admin.
#### Policy listing.
`GET /policies`
#### Policy detail. Example:
`GET /policies/{id}`
```
GET /policies/344b7bee-da33-4234-b645-805cc26adab0
{
"id": "344b7bee-da33-4234-b645-805cc26adab0",
"name": "policy-one",
"members": [
{
"id": "6bcb4e77-04dc-45f7-bebb-a2fcfadd7669",
"region": "us-east-1",
"ip": "192.0.2.11",
"weight": 10,
"enabled": true
},
{
"id": "4f83d47f-af0c-4fa7-80c8-710cb32e4928",
"region": "us-west-1",
"ip": "192.0.2.11",
"weight": 10,
"enabled": true
}
],
"url": "https://zinc.stage.presslabs.net/policies/344b7bee-da33-4234-b645-805cc26adab0"
}
```
### Zones
#### Zone listing.
`GET /zones/`
#### Zone creation.
`POST /zones/`
Args:
| argument | required | default | description |
| --- | --- | --- | --- |
| root | required | - | The domain name of this zone. Trailing dot is optional. |
Returns the newly created zone object.
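For example (values illustrative; the response has the same shape as the zone detail below):
```
POST /zones/
{
    "root": "zinc.example.presslabs.net"
}
```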
#### Delete a zone.
`DELETE /zones/{zone_id}/`
#### Zone detail.
`GET /zones/{zone_id}`
Example:
```
GET /zones/102
{
"root": "zinc.example.presslabs.net.",
"url": "https://zinc.stage.presslabs.net/zones/102",
"records_url": "https://zinc.stage.presslabs.net/zones/102/records",
"records": [
{
"name": "@",
"fqdn": "zinc.example.presslabs.net.",
"type": "NS",
"values": [
"ns-389.awsdns-48.com.",
"ns-1596.awsdns-07.co.uk.",
"ns-1008.awsdns-62.net.",
"ns-1294.awsdns-33.org."
],
"ttl": 172800,
"dirty": false,
"id": "Z6k504rwKzbamNZ9ZmY5lvkoOJGDW0",
"url": "https://zinc.stage.presslabs.net/zones/102/records/Z6k504rwKzbamNZ9ZmY5lvkoOJGDW0",
"managed": true
},
{
"name": "@",
"fqdn": "zinc.example.presslabs.net.",
"type": "SOA",
"values": [
"ns-389.awsdns-48.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"
],
"ttl": 900,
"dirty": false,
"id": "Z6k504rwKzbamNZ6Z7doJ0yg98j9zA",
"url": "https://zinc.stage.presslabs.net/zones/102/records/Z6k504rwKzbamNZ6Z7doJ0yg98j9zA",
"managed": true
}
],
"route53_id": "Z8QRF09VVGAC6",
"dirty": false,
"ns_propagated": false
}
```
### Records
#### List records in a zone.
`GET /zones/{zone_id}/records`
Example:
```
GET /zones/102/records
[
{
"name": "@",
"fqdn": "zinc.example.presslabs.net.",
"type": "NS",
"values": [
"ns-389.awsdns-48.com.",
"ns-1596.awsdns-07.co.uk.",
"ns-1008.awsdns-62.net.",
"ns-1294.awsdns-33.org."
],
"ttl": 172800,
"dirty": false,
"id": "Z6k504rwKzbamNZ9ZmY5lvkoOJGDW0",
"url": "https://zinc.stage.presslabs.net/zones/102/records/Z6k504rwKzbamNZ9ZmY5lvkoOJGDW0",
"managed": true
},
{
"name": "@",
"fqdn": "zinc.example.presslabs.net.",
"type": "SOA",
"values": [
"ns-389.awsdns-48.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"
],
"ttl": 900,
"dirty": false,
"id": "Z6k504rwKzbamNZ6Z7doJ0yg98j9zA",
"url": "https://zinc.stage.presslabs.net/zones/102/records/Z6k504rwKzbamNZ6Z7doJ0yg98j9zA",
"managed": true
}
]
```
#### Create a record.
`POST /zones/{zone_id}/records`
Args:
| argument | required | default | description |
| --- | --- | --- | --- |
| name | required | - | The domain name (without the zone root). |
| type | required | - | The record type. Must be either POLICY\_ROUTED or a valid record type. |
| values | required | - | List of values. Should be one IP for A, MX records, a policy id for POLICY_ROUTED, one or more domain names for NS records. |
| ttl | optional | 300 | The TTL for DNS. |
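For example, to create a simple A record (the name and address are illustrative):
```
POST /zones/102/records
{
    "name": "www",
    "type": "A",
    "values": ["192.0.2.11"],
    "ttl": 300
}
```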
#### Delete a record.
`DELETE /zones/{zone_id}/records/{record_id}`
#### Record detail.
`GET /zones/{zone_id}/records/{record_id}`
Example:
```
GET /zones/102/records/Z6k504rwKzbamNZ1ZxLxRR4BKly04J
{
"name": "www",
"fqdn": "www.zinc.example.presslabs.net.",
"type": "POLICY_ROUTED",
"values": [
"344b7bee-da33-4234-b645-805cc26adab0"
],
"ttl": null,
"dirty": false,
"id": "Z6k504rwKzbamNZ1ZxLxRR4BKly04J",
"url": "https://zinc.stage.presslabs.net/zones/102/records/Z6k504rwKzbamNZ1ZxLxRR4BKly04J",
"managed": false
}
```
#### Update an existing record.
`PATCH /zones/{zone_id}/records/{record_id}`
The type and name can't be changed.
Attributes missing from the request are left unchanged.
| argument | required | default | description |
| --- | --- | --- | --- |
| values | optional | - | List of values. Should be one IP for A, MX records, a policy id for POLICY_ROUTED, one or more domain names for NS records. |
| ttl | optional | - | The TTL for DNS. |
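For example, to point the policy record from the detail example above at the policy
listed earlier (ids are illustrative):
```
PATCH /zones/102/records/Z6k504rwKzbamNZ1ZxLxRR4BKly04J
{
    "values": ["344b7bee-da33-4234-b645-805cc26adab0"]
}
```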
# Installing and Running
The recommended way to get up and running is using our Docker container.
```
cd contrib/
docker-compose up
```
## Config
If you run the django project with default settings, you can configure zinc by setting
environment variables. If you're using the provided docker-compose.yml you can set the
environment in ./zinc.env
The following are essential and required:
```
ZINC_AWS_KEY - AWS Key
ZINC_AWS_SECRET - AWS Secret
ZINC_SECRET_KEY - Django secret
```
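For example, a minimal `zinc.env` might look like this (the values are placeholders):
```
ZINC_AWS_KEY=AKIA...
ZINC_AWS_SECRET=...
ZINC_SECRET_KEY=some-long-random-string
```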
You can also set the following:
```
ZINC_ALLOWED_HOSTS - Django Allowed Hosts
ZINC_BROKER_URL - Celery Broker URL, defaults to ${REDIS_URL}/0
ZINC_CELERY_RESULT_BACKEND - Celery Result Backend, defaults to ${REDIS_URL}/1
ZINC_DATA_DIR - PROJECT_ROOT
ZINC_DB_ENGINE - The django db engine to use. Defaults to 'django.db.backends.sqlite3'
ZINC_DB_HOST -
ZINC_DB_NAME - zinc
ZINC_DB_PASSWORD - password
ZINC_DB_PORT -
ZINC_DB_USER - zinc
ZINC_DEBUG - Django debug. Defaults to False. Set to the string "True" to turn on debugging.
ZINC_DEFAULT_TTL - 300
ZINC_ENV_NAME - The environment for sentry reporting.
ZINC_GOOGLE_OAUTH2_KEY - For use with social-django. If you don't set this, social-django will be disabled.
ZINC_GOOGLE_OAUTH2_SECRET - For use with social-django.
ZINC_SOCIAL_AUTH_ADMIN_EMAILS - List of email addresses that will be automatically granted admin access.
ZINC_SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS - see http://python-social-auth.readthedocs.io/en/latest/configuration/settings.html?highlight=whitelisted#whitelists
ZINC_HEALTH_CHECK_FQDN - Hostname to use in Health Checks. Defaults to 'node.presslabs.net.'
ZINC_LOCK_SERVER_URL - Used with redis-lock. Defaults to ${REDIS_URL}/2.
ZINC_LOG_LEVEL - Defaults to INFO
ZINC_NS_CHECK_RESOLVERS - NameServers to use when checking zone propagation. Default: ['8.8.8.8']
ZINC_REDIS_URL - Defaults to 'redis://localhost:6379'
ZINC_SECRET_KEY - The secret key used by the django app.
ZINC_SENTRY_DSN - Set this to enable sentry error reporting.
ZINC_STATIC_URL - Defaults to '/static/'
ZINC_ZONE_OWNERSHIP_COMMENT - Comment set on hosted zones created by Zinc. Defaults to 'zinc'
```
# Development
**Warning! Don't use production AWS credentials when developing or testing Zinc!**
After you've cloned the code:
```
pip install -r requirements.dev.txt
python setup.py develop
cp local_settings.py.example local_settings.py
# open local_settings.py in your favorite editor, and set AWS credentials
```
To run the tests:
```
# all tests
py.test .
# to skip tests that need AWS
py.test -k 'not with_aws' .
```
| zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/README.md | README.md |
from rest_framework.generics import (ListAPIView, CreateAPIView, RetrieveUpdateDestroyAPIView)
from rest_framework import viewsets, status, mixins, views
from rest_framework.response import Response
from rest_framework.generics import get_object_or_404
from rest_framework.exceptions import NotFound
from zinc.serializers import (PolicySerializer, ZoneDetailSerializer,
ZoneListSerializer, RecordSerializer)
from zinc import models
from zinc.utils import memoized_property
class PolicyViewset(viewsets.ReadOnlyModelViewSet):
serializer_class = PolicySerializer
queryset = models.Policy.objects.all()
class ZoneViewset(mixins.CreateModelMixin,
mixins.RetrieveModelMixin,
mixins.DestroyModelMixin,
mixins.ListModelMixin,
viewsets.GenericViewSet):
queryset = models.Zone.objects.filter(deleted=False)
def get_serializer_class(self):
if self.action in ['list', 'create']:
return ZoneListSerializer
return ZoneDetailSerializer
def destroy(self, request, pk=None):
zone = get_object_or_404(models.Zone.objects, pk=pk)
zone.soft_delete()
return Response(status=status.HTTP_204_NO_CONTENT)
class RecordDetail(RetrieveUpdateDestroyAPIView):
queryset = models.Zone.objects.filter(deleted=False)
serializer_class = RecordSerializer
allowed_methods = ['GET', 'DELETE', 'PATCH']
def get_object(self):
zone = self.zone
record_id = self.kwargs['record_id']
for record in zone.records:
if record.id == record_id:
return record
raise NotFound(detail='Record not found.')
@memoized_property
def zone(self):
zone_id = self.kwargs.get('zone_id')
if zone_id is not None:
queryset = self.get_queryset()
return get_object_or_404(queryset, id=zone_id)
def get_serializer_context(self):
zone = self.zone
context = super(RecordDetail, self).get_serializer_context()
context['zone'] = zone
return context
def perform_destroy(self, instance):
serializer = self.get_serializer(instance, data={}, partial=True)
serializer.is_valid(raise_exception=True)
serializer.save()
class RecordCreate(ListAPIView, CreateAPIView):
serializer_class = RecordSerializer
paginator = None
def list(self, request, zone_id):
zone = get_object_or_404(models.Zone, id=zone_id)
zone_data = ZoneDetailSerializer(zone, context={'request': request}).data
return Response(zone_data['records'])
def get_queryset(self):
return None
def get_object(self):
zone_id = self.kwargs.get('zone_id')
if zone_id is not None:
return get_object_or_404(models.Zone, id=zone_id)
def get_serializer_context(self):
zone = self.get_object()
context = super(RecordCreate, self).get_serializer_context()
context['zone'] = zone
return context
class HealtchCheck(views.APIView):
permission_classes = ()
def get(self, request, format=None):
return Response({'status': 'ok'}) | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/views.py | views.py |
import redis
import redis_lock
from celery import shared_task
from celery.exceptions import MaxRetriesExceededError
from celery.utils.log import get_task_logger
from django.conf import settings
from zinc import models, route53
logger = get_task_logger(__name__)
@shared_task(bind=True, ignore_result=True, default_retry_delay=60)
def aws_delete_zone(self, pk):
zone = models.Zone.objects.get(pk=pk)
assert zone.deleted
aws_zone = zone.r53_zone
try:
aws_zone.delete()
except Exception as e:
logger.exception(e)
try:
self.retry()
except MaxRetriesExceededError:
logger.error('Failed to remove zone %s', zone.id)
@shared_task(bind=True, ignore_result=True)
def reconcile_zones(bind=True):
"""
Periodic task that reconciles everything zone-related (zone deletion, policy record updates)
"""
redis_client = redis.from_url(settings.LOCK_SERVER_URL)
    lock = redis_lock.Lock(redis_client, 'reconcile_zones', expire=60)
if not lock.acquire(blocking=False):
        logger.info('Cannot acquire task lock. Probably another task is running. Bailing out.')
return
try:
for zone in models.Zone.need_reconciliation():
try:
zone.reconcile()
lock.extend(5) # extend the lease each time we rebuild a tree
except Exception:
logger.exception(
"reconcile failed for Zone %s.%s", zone, zone.root
)
finally:
lock.release()
@shared_task(bind=True, ignore_result=True)
def check_clean_zones(bind=True):
for zone in models.Zone.get_clean_zones():
zone.r53_zone.check_policy_trees()
@shared_task(bind=True, ignore_result=True)
def reconcile_healthchecks(bind=True):
route53.HealthCheck.reconcile_for_ips(models.IP.objects.all())
@shared_task(bind=True, ignore_result=True)
def update_ns_propagated(bind=True):
redis_client = redis.from_url(settings.LOCK_SERVER_URL)
# make this lock timeout big enough to cover updating about 1000 zones
# ns_propagated flag and small enough to update the flag in an acceptable
    # time frame. 5 minutes sounds good at the moment.
lock = redis_lock.Lock(redis_client, 'update_ns_propagated', expire=300)
if not lock.acquire(blocking=False):
        logger.info('Cannot acquire task lock. Probably another task is running. Bailing out.')
return
try:
models.Zone.update_ns_propagated(delay=getattr(settings, 'ZINC_NS_UPDATE_DELAY', 0.3))
except Exception:
logger.exception("Could not update ns_propagated flag")
finally:
lock.release() | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/tasks.py | tasks.py |
from collections import OrderedDict
import collections.abc
import contextlib
import json
import uuid
from logging import getLogger
from django.core.exceptions import ValidationError
from django.db import models, transaction
from django.db.models import Q
from zinc import ns_check, route53, tasks
from zinc.route53 import HealthCheck, get_local_aws_region_choices
from zinc.route53.record import RECORD_PREFIX
from zinc.validators import validate_domain, validate_hostname
logger = getLogger(__name__)
ROUTING_CHOICES = OrderedDict([
("latency", "latency"),
("weighted", "weighted"),
])
class IP(models.Model):
ip = models.GenericIPAddressField(
primary_key=True,
protocol='IPv4',
verbose_name='IP Address'
)
hostname = models.CharField(max_length=64, validators=[validate_hostname])
friendly_name = models.TextField(blank=True)
enabled = models.BooleanField(default=True)
healthcheck_id = models.CharField(max_length=200, blank=True, null=True)
healthcheck_caller_reference = models.UUIDField(null=True, blank=True)
deleted = models.BooleanField(default=False)
class Meta:
verbose_name = 'IP'
def mark_policy_records_dirty(self):
# sadly this breaks sqlite
# policies = [
# member.policy for member in
# self.policy_members.order_by('policy_id').distinct('policy_id')]
policies = set([
member.policy for member in
self.policy_members.all()])
for policy in policies:
policy.mark_policy_records_dirty()
def soft_delete(self):
self.deleted = True
self.enabled = False
self.save(update_fields=['deleted', 'enabled'])
self.reconcile_healthcheck()
def reconcile_healthcheck(self):
HealthCheck(self).reconcile()
def __str__(self):
value = self.friendly_name or self.hostname.split(".", 1)[0]
return '{} ({})'.format(self.ip, value)
class Policy(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
name = models.CharField(max_length=255, unique=True, null=False)
routing = models.CharField(
max_length=255, choices=ROUTING_CHOICES.items(), default=ROUTING_CHOICES['latency'])
dirty_trigger_fields = set(['name'])
class Meta:
verbose_name_plural = 'policies'
ordering = ('name',)
def __str__(self):
return self.name
def change_trigger(self, field_names):
# if field_names is not a set-like object (eg. dict_keys) convert to set
if not isinstance(field_names, collections.abc.Set):
field_names = set(field_names)
if field_names & self.dirty_trigger_fields:
self.mark_policy_records_dirty()
# atomic isn't strictly required since it's a single statement that would run
# in a transaction in autocommit mode on innodb, but it's better to be explicit
@transaction.atomic
def mark_policy_records_dirty(self):
self.records.update(dirty=True)
class PolicyMember(models.Model):
AWS_REGIONS = get_local_aws_region_choices()
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
region = models.CharField(choices=AWS_REGIONS, max_length=20,
default='us-east-1')
ip = models.ForeignKey(IP, on_delete=models.CASCADE, related_name='policy_members')
policy = models.ForeignKey(Policy, on_delete=models.CASCADE, related_name='members')
weight = models.PositiveIntegerField(default=10)
enabled = models.BooleanField(default=True)
class Meta:
ordering = ('region', 'ip__hostname')
def save(self, *args, **kwargs):
self.policy.mark_policy_records_dirty()
return super(PolicyMember, self).save(*args, **kwargs)
def delete(self, *args, **kwargs):
self.policy.mark_policy_records_dirty()
return super(PolicyMember, self).delete(*args, **kwargs)
def __str__(self):
return '{} {} {}'.format(self.ip, self.region, self.weight)
def validate_json(value):
try:
json.loads(value)
except json.JSONDecodeError:
raise ValidationError("Not valid json")
class Zone(models.Model):
root = models.CharField(max_length=255, validators=[validate_domain])
route53_id = models.CharField(max_length=32, unique=True, editable=False,
null=True, default=None)
caller_reference = models.UUIDField(editable=False, null=True)
deleted = models.BooleanField(default=False)
ns_propagated = models.BooleanField(default=False)
cached_ns_records = models.TextField(validators=[validate_json], default=None, null=True)
class Meta:
ordering = ['root']
def __init__(self, *args, **kwargs):
self._route53_instance = None
super(Zone, self).__init__(*args, **kwargs)
@property
def dirty(self):
dirty = False
for policy_record in self.policy_records.all():
dirty |= policy_record.dirty
return dirty
def clean(self):
        # if the root is not an FQDN then add the dot at the end
# this will be called from admin
if not self.root.endswith('.'):
self.root += '.'
super().clean()
def save(self, *args, **kwargs):
if self.route53_id is not None:
if self.route53_id.startswith('/hostedzone/'):
self.route53_id = self.route53_id[len('/hostedzone/'):]
return super(Zone, self).save(*args, **kwargs)
def commit(self):
self.r53_zone.commit()
def delete_record_by_hash(self, record_hash):
records = self.r53_zone.records()
to_delete_record = records[record_hash]
to_delete_record.deleted = True
self.r53_zone.process_records([to_delete_record])
def delete_record(self, record):
self.delete_record_by_hash(record.id)
def get_policy_records(self):
# return a list with Policy records
records = []
for policy_record in self.policy_records.all():
records.append(policy_record.serialize())
return records
@property
def r53_zone(self):
if not self._route53_instance:
self._route53_instance = route53.Zone(self)
return self._route53_instance
def soft_delete(self):
self.deleted = True
self.save(update_fields=['deleted'])
tasks.aws_delete_zone.delay(self.pk)
@property
def records(self):
records = self.r53_zone.records()
filtered_records = []
policy_records = self.get_policy_records()
for record in records.values():
if record.is_hidden:
continue
if record.is_alias and any(((record.name == pr.name) for pr in policy_records)):
continue
filtered_records.append(record)
# Add policy records.
for record in policy_records:
filtered_records.append(record)
return filtered_records
def update_records(self, records):
self.r53_zone.process_records(records)
def __str__(self):
return '{} ({})'.format(self.root, self.route53_id)
@transaction.atomic
def reconcile(self):
self.r53_zone.reconcile()
@contextlib.contextmanager
@transaction.atomic
def lock_dirty_policy_records(self):
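        """Yield this zone's dirty policy records, selected with row locks."""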
policy_records = self.policy_records.select_for_update() \
.select_related('policy').filter(dirty=True)
yield policy_records
def _delete_orphaned_managed_records(self):
"""Delete any managed record not belonging to one of the zone's policies"""
policies = set([pr.policy for pr in self.policy_records.select_related('policy')])
pol_names = ['{}_{}'.format(RECORD_PREFIX, policy.name) for policy in policies]
for record in self.r53_zone.records().values():
name = record.name
if name.startswith(RECORD_PREFIX):
for pol_name in pol_names:
if name.startswith(pol_name):
break
else:
self.delete_record(record)
@classmethod
def update_ns_propagated(cls, delay=0):
resolver = ns_check.get_resolver()
# the order matters because we want unpropagated zones to be checked first
        # to minimize the delay in transitioning to propagated state
for zone in cls.objects.order_by('ns_propagated').all():
try:
zone.ns_propagated = ns_check.is_ns_propagated(
zone, resolver=resolver, delay=delay)
except ns_check.CouldNotResolve:
                logger.warning('Failed to resolve nameservers for %s', zone.root)
else:
if not zone.ns_propagated:
logger.info('ns_propagated %-5s %s', zone.ns_propagated, zone.root)
zone.save()
@classmethod
def _dirty_query(cls):
return Q(deleted=True) | Q(route53_id=None) | Q(policy_records__dirty=True)
@classmethod
def need_reconciliation(cls):
return cls.objects.filter(
cls._dirty_query()
)
@classmethod
def get_clean_zones(cls):
return cls.objects.filter(
~cls._dirty_query()
)
class PolicyRecord(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
name = models.CharField(max_length=255)
    policy = models.ForeignKey(Policy, on_delete=models.CASCADE, related_name='records')
dirty = models.BooleanField(default=True, editable=False)
    zone = models.ForeignKey(Zone, on_delete=models.CASCADE, related_name='policy_records')
deleted = models.BooleanField(default=False)
class Meta:
unique_together = ('name', 'zone')
def __init__(self, *a, **kwa):
super().__init__(*a, **kwa)
self._r53_policy_record = None
def __str__(self):
return '{}.{}'.format(self.name, self.zone.root)
def serialize(self):
assert self.zone is not None
record = route53.PolicyRecord(policy_record=self, zone=self.zone.r53_zone)
record.dirty = self.dirty
record.managed = False
record.deleted = self.deleted
return record
def soft_delete(self):
self.deleted = True
self.dirty = True
self.save(update_fields=['deleted', 'dirty'])
def mark_dirty(self):
self.dirty = True
self.save(update_fields=['dirty'])
def clean(self):
zone_records = self.zone.r53_zone.records()
# guard against PolicyRecords/CNAME name clashes
if not self.deleted:
            # only run the check if the PolicyRecord isn't being deleted
for record in zone_records.values():
if record.name == self.name and record.type == 'CNAME':
raise ValidationError(
{'name': "A CNAME record of the same name already exists."})
super().clean()
@property
def r53_policy_record(self):
if self._r53_policy_record is None:
self._r53_policy_record = route53.PolicyRecord(
policy_record=self, zone=self.zone.r53_zone)
return self._r53_policy_record
@transaction.atomic
def apply_record(self):
# build the tree for this policy record.
if self.deleted:
# if the zone is marked as deleted don't try to build the tree.
self.delete_record()
self.delete()
return
self.zone.r53_zone.process_records([self.r53_policy_record])
self.dirty = False # mark as clean
self.save()
@classmethod
def new_or_deleted(cls, name, zone):
# if the record hasn't been reconciled yet (still exists in the DB), we want to reuse it
# to avoid violating the unique together constraint on name and zone
# TODO: if we add deleted to that constraint and make it null-able, we can keep the DB
# sane and simplify the system. Reusing the record like this opens up the possibility
# of running into concurrency issues.
try:
model = cls.objects.get(deleted=True, name=name, zone=zone)
model.deleted = False
return model
except cls.DoesNotExist:
return cls(name=name, zone=zone) | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/models.py | models.py |
from __future__ import unicode_literals
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='IP',
fields=[
('ip', models.GenericIPAddressField(primary_key=True, protocol='IPv4', serialize=False, verbose_name='IP Address')),
('hostname', models.CharField(max_length=64, validators=[django.core.validators.RegexValidator(code='invalid_hostname', message='Invalid hostname', regex='^(?=[a-z0-9\\-\\.]{1,253}$)([a-z0-9](([a-z0-9\\-]){,61}[a-z0-9])?\\.)*([a-z0-9](([a-z0-9\\-]){,61}[a-z0-9])?)$')])),
('friendly_name', models.TextField(blank=True)),
('enabled', models.BooleanField(default=True)),
('healthcheck_id', models.CharField(blank=True, max_length=200, null=True)),
('healthcheck_caller_reference', models.UUIDField(blank=True, null=True)),
('deleted', models.BooleanField(default=False)),
],
options={
'verbose_name': 'IP',
},
),
migrations.CreateModel(
name='Policy',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(max_length=255, unique=True)),
],
options={
'verbose_name_plural': 'policies',
},
),
migrations.CreateModel(
name='PolicyMember',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('region', models.CharField(choices=[('us-east-1', 'us-east-1'), ('us-west-1', 'us-west-1'), ('us-west-2', 'us-west-2'), ('ap-northeast-1', 'ap-northeast-1'), ('ap-northeast-2', 'ap-northeast-2'), ('ap-south-1', 'ap-south-1'), ('ap-southeast-1', 'ap-southeast-1'), ('ap-southeast-2', 'ap-southeast-2'), ('sa-east-1', 'sa-east-1'), ('eu-west-1', 'eu-west-1'), ('eu-central-1', 'eu-central-1')], max_length=20)),
('weight', models.PositiveIntegerField(default=10)),
('ip', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='policy_members', to='zinc.IP')),
('policy', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='members', to='zinc.Policy')),
],
),
migrations.CreateModel(
name='PolicyRecord',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('name', models.CharField(max_length=255)),
('dirty', models.BooleanField(default=True, editable=False)),
('deleted', models.BooleanField(default=False)),
('policy', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='records', to='zinc.Policy')),
],
),
migrations.CreateModel(
name='Zone',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('root', models.CharField(max_length=255, validators=[django.core.validators.RegexValidator(code='invalid_root_domain', message='Invalid root domain', regex='[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?(?:\\.(?!-)[a-z0-9-]{1,63}(?<!-))*\\.(?!-)(?:[a-z-]{2,63}|xn--[a-z0-9]{1,59})(?<!-)\\.?$')])),
('route53_id', models.CharField(default=None, editable=False, max_length=32, null=True, unique=True)),
('caller_reference', models.UUIDField(editable=False, null=True)),
('deleted', models.BooleanField(default=False)),
],
),
migrations.AddField(
model_name='policyrecord',
name='zone',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='policy_records', to='zinc.Zone'),
),
migrations.AlterUniqueTogether(
name='policyrecord',
unique_together=set([('name', 'zone')]),
),
] | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/migrations/0001_initial.py | 0001_initial.py |
from django.contrib import admin
from django.db import transaction
from zinc.models import Policy, PolicyMember
class PolicyMemberInline(admin.TabularInline):
readonly_fields = ('ip_enabled',)
model = PolicyMember
extra = 1
verbose_name = 'member'
verbose_name_plural = 'members'
def ip_enabled(self, obj):
return obj.ip.enabled
ip_enabled.boolean = True
@admin.register(Policy)
class PolicyAdmin(admin.ModelAdmin):
fields = ('name', 'routing',)
readonly_fields = ()
list_display = ('__str__', 'routing', 'regions', 'status')
list_filter = ('routing', 'members__region')
inlines = (PolicyMemberInline,)
exclude = ('members',)
def get_queryset(self, request):
qs = super(PolicyAdmin, self).get_queryset(request)
qs = qs.prefetch_related('members')
return qs
def regions(self, obj):
# get_queryset prefetches related policy members so iterating over
# objects is ok because we are iterating over already fetched data
return ', '.join(sorted({m.region for m in obj.members.all()}))
@transaction.atomic
def save_model(self, request, obj, form, change):
rv = super().save_model(request, obj, form, change)
obj.change_trigger(form.changed_data)
return rv
def status(self, obj):
warnings = []
if obj.routing == 'latency':
members_by_region = {}
for member in obj.members.all():
members_by_region.setdefault(member.region, []).append(member)
if len(members_by_region) <= 1:
warnings.append('✖ Latency routed policy should span multiple regions!')
for region, members in members_by_region.items():
if len([m for m in members if m.weight > 0]) == 0:
warnings.append(
'✖ All members of region {} have weight zero!'.format(region))
elif obj.routing == 'weighted':
active_members = [m for m in obj.members.all() if m.weight > 0]
if len(active_members) == 0:
warnings.append('✖ All members have weight zero!')
if warnings:
            return '<span style="color: red">{}</span>'.format("<br>".join(warnings))
else:
return "✔ ok"
status.allow_tags = True
status.short_description = 'Status' | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/admin/policy.py | policy.py |
import json
from contextlib import contextmanager
from botocore.exceptions import ClientError
from rest_framework import fields
from rest_framework import serializers
from rest_framework.exceptions import ValidationError
from django.core.exceptions import ValidationError as DjangoValidationError
from django.conf import settings
from zinc import route53
from zinc.models import RECORD_PREFIX
from zinc.route53.record import ZINC_RECORD_TYPES, POLICY_ROUTED, ALLOWED_RECORD_TYPES
@contextmanager
def interpret_client_error():
try:
yield
except ClientError as error:
if 'ARRDATAIllegalIPv4Address' in error.response['Error']['Message']:
raise ValidationError({'values': ["Value is not a valid IPv4 address."]})
elif 'AAAARRDATAIllegalIPv6Address' in error.response['Error']['Message']:
raise ValidationError({'values': ["Value is not a valid IPv6 address."]})
error = error.response['Error']['Message']
try:
error = json.loads(error)
except TypeError:
pass
except json.JSONDecodeError:
# boto returns a badly formatted error
if error[0] == "[" and error[1] != "\"":
error = error[1:-1]
if not isinstance(error, list):
error = [error]
raise ValidationError({'non_field_error': error})
except DjangoValidationError as error:
raise ValidationError(error.message_dict)
class RecordListSerializer(serializers.ListSerializer):
    # This is used to list the records in the Zone serializer
    # by using many=True and passing the entire zone as the object
def to_representation(self, zone):
        # pass the zone to RecordSerializer via the context.
self.context['zone'] = zone
return super(RecordListSerializer, self).to_representation(zone.records)
def update(self, instance, validated_data):
raise NotImplementedError('Can not update records this way. Use records/ endpoint.')
class RecordSerializer(serializers.Serializer):
name = fields.CharField(max_length=255)
fqdn = fields.SerializerMethodField(required=False)
type = fields.ChoiceField(choices=ZINC_RECORD_TYPES)
values = fields.ListField(child=fields.CharField())
ttl = fields.IntegerField(allow_null=True, min_value=1, required=False)
dirty = fields.SerializerMethodField(required=False)
id = fields.SerializerMethodField(required=False)
url = fields.SerializerMethodField(required=False)
managed = fields.SerializerMethodField(required=False)
class Meta:
list_serializer_class = RecordListSerializer
def get_fqdn(self, obj):
zone = self.context['zone']
if obj.name == '@':
return zone.root
return '{}.{}'.format(obj.name, zone.root)
def get_id(self, obj):
return obj.id
def get_url(self, obj):
# compute the url for record
zone = self.context['zone']
request = self.context['request']
record_id = self.get_id(obj)
return request.build_absolute_uri('/zones/%s/records/%s' % (zone.id, record_id))
def get_managed(self, obj):
return obj.managed
def get_dirty(self, obj):
return obj.dirty
def to_representation(self, obj):
assert obj.values if obj.is_alias else True
rv = super().to_representation(obj)
return rv
def create(self, validated_data):
zone = self.context['zone']
obj = route53.record_factory(zone=zone, created=True, **validated_data)
with interpret_client_error():
obj.full_clean()
obj.save()
zone.r53_zone.commit()
return obj
def update(self, obj, validated_data):
zone = self.context['zone']
if obj.managed:
raise ValidationError("Can't change a managed record.")
for attr, value in validated_data.items():
setattr(obj, attr, value)
obj.full_clean()
obj.save()
with interpret_client_error():
zone.commit()
return obj
def validate_type(self, value):
if value not in ALLOWED_RECORD_TYPES:
raise ValidationError("Type '{}' is not allowed.".format(value))
return value
def validate_name(self, value):
# record name should not start with reserved prefix.
if value.startswith(RECORD_PREFIX):
raise ValidationError(
('Record {} can\'t start with {}. '
'It\'s a reserved prefix.').format(value, RECORD_PREFIX)
)
return value
def validate(self, data):
errors = {}
# TODO: this stinks! we need a cleaner approach here
# if is a delete then the data should be {'deleted': True}
if self.context['request'].method == 'DELETE':
return {'deleted': True}
# for PATCH type and name field can't be modified.
if self.context['request'].method == 'PATCH':
if 'type' in data or 'name' in data:
errors.update({'non_field_errors': ["Can't update 'name' and 'type' fields. "]})
else:
# POST method
# for POLICY_ROUTED the values should contain just one value
if data['type'] in ['CNAME', POLICY_ROUTED]:
if not len(data['values']) == 1:
errors.update({
'values': ('Only one value can be '
'specified for {} records.'.format(data['type']))
})
else:
data.setdefault('ttl', settings.ZINC_DEFAULT_TTL)
# for normal records values is required.
if not data.get('values', False):
errors.update({'values': 'This field is required.'})
if errors:
raise ValidationError(errors)
return data | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/serializers/record.py | record.py |
import uuid
import logging
from botocore.exceptions import ClientError
from django.conf import settings
from .client import get_client
logger = logging.getLogger('zinc.route53')
def generate_caller_ref():
return 'zinc {}'.format(uuid.uuid4())
class HealthCheck:
def __init__(self, ip):
self.ip = ip
self._aws_data = None
self._client = get_client()
@property
def exists(self):
self._load()
return self._aws_data is not None
@property
def id(self):
self._load()
return self._aws_data.get('Id')
def _load(self):
if self._aws_data is not None:
return
if self.ip.healthcheck_id is not None:
try:
health_check = self._client.get_health_check(HealthCheckId=self.ip.healthcheck_id)
self._aws_data = health_check.get('HealthCheck')
except self._client.exceptions.NoSuchHealthCheck:
pass
@property
def desired_config(self):
config = {
'IPAddress': self.ip.ip,
}
config.update(settings.HEALTH_CHECK_CONFIG)
return config
@property
def config(self):
self._load()
return self._aws_data.get('HealthCheckConfig')
def create(self):
if self.ip.healthcheck_caller_reference is None:
self.ip.healthcheck_caller_reference = uuid.uuid4()
logger.info("%-15s new caller_reference %s",
self.ip.ip, self.ip.healthcheck_caller_reference)
self.ip.save()
resp = self._client.create_health_check(
CallerReference=str(self.ip.healthcheck_caller_reference),
HealthCheckConfig=self.desired_config
)
self.ip.healthcheck_id = resp['HealthCheck']['Id']
logger.info("%-15s created hc: %s", self.ip.ip, self.ip.healthcheck_id)
self.ip.save()
def delete(self):
if self.exists:
logger.info("%-15s delete hc: %s", self.ip.ip, self.ip.healthcheck_id)
self._client.delete_health_check(HealthCheckId=self.id)
self.ip.healthcheck_caller_reference = None
self.ip.save(update_fields=['healthcheck_caller_reference'])
def reconcile(self):
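        """Create, update, or delete the AWS health check to match the IP's state."""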
if self.ip.deleted:
self.delete()
self.ip.delete()
elif self.exists:
# if the desired config is not a subset of the current config
if not self.desired_config.items() <= self.config.items():
self.delete()
self.create()
else:
logger.info("%-15s nothing to do", self.ip.ip)
else:
try:
self.create()
except self._client.exceptions.HealthCheckAlreadyExists:
self.ip.healthcheck_caller_reference = None
self.ip.save()
self.create()
@classmethod
def reconcile_for_ips(cls, ips):
checks = [cls(ip) for ip in ips]
for check in checks:
try:
check.reconcile()
except ClientError:
logger.exception("Error while handling %s", check.ip.friendly_name) | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/route53/health_check.py | health_check.py |
import json
import hashlib
from hashids import Hashids
from django.conf import settings
from django.core.exceptions import SuspiciousOperation, ValidationError
from zinc import models, route53
from zinc.utils import memoized_property
from zinc.utils.generators import chunks
HASHIDS_SALT = getattr(settings, 'SECRET_KEY', '')
HASHIDS_MIN_LENGTH = getattr(settings, 'HASHIDS_MIN_LENGTH', 7)
HASHIDS_ALPHABET = getattr(settings, 'HASHIDS_ALPHABET',
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXY1234567890')
hashids = Hashids(salt=HASHIDS_SALT,
alphabet=HASHIDS_ALPHABET)
RECORD_PREFIX = '_zn'
POLICY_ROUTED = 'POLICY_ROUTED'
RECORD_TYPES = [
'A', 'AAAA', 'CNAME', 'MX', 'TXT', 'SOA',
'SPF', 'SRV', 'NS', POLICY_ROUTED
]
ALLOWED_RECORD_TYPES = set(RECORD_TYPES)
ALLOWED_RECORD_TYPES.remove('SOA')
ZINC_RECORD_TYPES = [(rtype, rtype) for rtype in RECORD_TYPES]
ZINC_RECORD_TYPES_MAP = {i + 1: RECORD_TYPES[i] for i in range(0, len(RECORD_TYPES))}
ZINC_RECORD_TYPES_MAP[0] = POLICY_ROUTED
ZINC_RECORD_TYPES_MAP_REV = {rtype: i for i, rtype in ZINC_RECORD_TYPES_MAP.items()}
def get_record_type(rtype):
if type(rtype) is int:
return ZINC_RECORD_TYPES_MAP[rtype]
else:
return ZINC_RECORD_TYPES_MAP_REV[rtype]
def _encode(*args):
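    """Hash the arguments into a short, opaque id using salted hashids."""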
_set_id = ':'.join([str(arg) for arg in args])
_set_id = int(hashlib.sha256(_set_id.encode('utf-8')).hexdigest()[:16], base=16)
return hashids.encode(_set_id)
class BaseRecord:
_obj_to_r53 = dict([
('name', 'Name'),
('type', 'Type'),
('managed', 'Managed'),
('ttl', 'ttl'),
('alias_target', 'AliasTarget'),
('values', 'Values'),
('weight', 'Weight'),
('region', 'Region'),
('set_identifier', 'SetIdentifier'),
('health_check_id', 'HealthCheckId'),
('traffic_policy_instance_id', 'TrafficPolicyInstanceId'),
])
_r53_to_obj = {v: k for k, v in _obj_to_r53.items()}
def __init__(self, name=None, alias_target=None, created=False, deleted=False, dirty=False,
health_check_id=None, managed=False, region=None, set_identifier=None,
traffic_policy_instance_id=None, ttl=None, values=None, weight=None,
zone=None):
self.name = name
self.alias_target = alias_target
self.created = created
assert alias_target is None or ttl is None
self.ttl = ttl
self._values = values
self.weight = weight
self.region = region
self.set_identifier = set_identifier
self.health_check_id = health_check_id
self.traffic_policy_instance_id = traffic_policy_instance_id
self.zone = zone
self.zone_id = zone.id
self.zone_root = zone.root
assert self.zone_id is not None
assert self.zone_root is not None
self.deleted = deleted
self.dirty = dirty
self.managed = managed
def __repr__(self):
return "<{} id={} {}:{} {}>".format(
type(self).__name__, self.id, self.type, self.name, self.values)
@property
def values(self):
if self.is_alias:
if 'DNSName' in self.alias_target:
return ['ALIAS {}'.format(self.alias_target['DNSName'])]
else:
return self._values
@values.setter
def values(self, value):
assert not self.is_alias
self._values = value
@staticmethod
def _strip_root(name, root):
return '@' if name == root else name.replace('.' + root, '')
@staticmethod
def _add_root(name, root):
return root if name == '@' else '{}.{}'.format(name, root)
@classmethod
def unpack_txt_value(cls, value):
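        # Reassemble a possibly chunked TXT value into a single unescaped string.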
if value.startswith('"') and value.endswith('"'):
value = value[1:-1]
return ''.join(json.loads('"%s"' % chunk) for chunk in value.split('" "'))
@classmethod
def from_aws_record(cls, record, zone):
# Determine if a R53 DNS record is of type ALIAS
def is_alias_record(record):
return 'AliasTarget' in record.keys()
# Determine if a record is the NS or SOA record of the root domain
def root_ns_soa(record, root):
return record['Name'] == root and record['Type'] in ['NS', 'SOA']
kwargs = {}
for attr_name in ['weight', 'region', 'set_identifier', 'health_check_id',
'traffic_policy_instance_id']:
kwargs[attr_name] = record.get(cls._obj_to_r53[attr_name], None)
new = cls(zone=zone, **kwargs)
new.name = cls._strip_root(record['Name'], zone.root)
new.type = record['Type']
new.managed = ((record.get('SetIdentifier', False)) or
root_ns_soa(record, zone.root) or (is_alias_record(record)))
new.ttl = record.get('TTL')
if is_alias_record(record):
new.alias_target = {
'DNSName': record['AliasTarget']['DNSName'],
'EvaluateTargetHealth': record['AliasTarget']['EvaluateTargetHealth'],
'HostedZoneId': record['AliasTarget']['HostedZoneId']
}
elif record['Type'] == 'TXT':
# Decode json escaped strings
new.values = [cls.unpack_txt_value(value['Value'])
for value in record.get('ResourceRecords', [])]
else:
new.values = [value['Value'] for value in
record.get('ResourceRecords', [])]
return new
@property
def id(self):
zone_hash = _encode(self.zone_id)
record_hash = _encode(self.name, self.type, self.set_identifier)
return 'Z{zone}Z{type}Z{id}'.format(
zone=zone_hash, type=get_record_type(self.type), id=record_hash)
@classmethod
def pack_txt_value(cls, value):
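        # Route53 limits each TXT character-string to 255 bytes, so longer
        # values are split into space-separated, individually quoted chunks.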
max_length = 255
if len(value) < max_length:
value = json.dumps(value)
else:
value = ' '.join('{}'.format(json.dumps(element))
for element in chunks(value, max_length))
return {'Value': value}
def to_aws(self):
encoded_record = {
'Name': self._add_root(self.name, self.zone_root),
'Type': self.type,
}
if not self.is_alias:
if self.type == 'TXT':
# Encode json escape.
encoded_record['ResourceRecords'] = [self.pack_txt_value(value)
for value in self.values]
else:
encoded_record['ResourceRecords'] = [{'Value': value} for value in self.values]
else:
encoded_record['AliasTarget'] = {
'DNSName': self.alias_target['DNSName'],
'EvaluateTargetHealth': self.alias_target['EvaluateTargetHealth'],
'HostedZoneId': self.alias_target['HostedZoneId'],
}
if self.ttl is not None:
encoded_record['TTL'] = self.ttl
for attr_name in ['Weight', 'Region', 'SetIdentifier',
'HealthCheckId', 'TrafficPolicyInstanceId']:
value = getattr(self, self._r53_to_obj[attr_name])
if value is not None:
encoded_record[attr_name] = value
return encoded_record
@property
def is_alias(self):
return self.alias_target is not None
@property
def is_hidden(self):
return self.name.startswith(RECORD_PREFIX)
def is_member_of(self, policy):
return self.name.startswith('{}_{}'.format(RECORD_PREFIX, policy.name))
def save(self):
self.zone.process_records([self])
def is_subset(self, other):
return self.to_aws().items() <= other.to_aws().items()
def validate_unique(self):
"""You're not allowed to have a CNAME clash with any other type of record"""
if self.deleted:
# allow deleting any conflicting record
return
if self.type == 'CNAME':
clashing = tuple((self.name, r_type) for r_type in RECORD_TYPES)
else:
clashing = ((self.name, 'CNAME'), )
for record in self.zone.db_zone.records:
for other in clashing:
if (record.name, record.type) == other and record.id != self.id:
raise ValidationError(
{'name': "A {} record of the same name already exists.".format(other[1])})
def clean(self):
pass
def clean_fields(self):
pass
def full_clean(self):
self.clean_fields()
self.clean()
self.validate_unique()
class Record(BaseRecord):
def __init__(self, type=None, **kwa):
super().__init__(**kwa)
self.type = type
class PolicyRecord(BaseRecord):
def __init__(self, zone, policy_record=None, policy=None, dirty=None,
deleted=None, created=None):
if policy is None:
policy = policy_record.policy
if dirty is None:
dirty = policy_record.dirty
if deleted is None:
deleted = policy_record.deleted
self.db_policy_record = policy_record
self._policy = None
self.policy = policy
self.zone = zone
super().__init__(
name=self.db_policy_record.name,
zone=zone,
alias_target={
'HostedZoneId': zone.id,
'DNSName': '{}_{}.{}'.format(RECORD_PREFIX, self.policy.name, zone.root),
'EvaluateTargetHealth': False
},
deleted=deleted,
dirty=dirty,
created=created,
)
def save(self):
if self.deleted:
# The record will be deleted
self.db_policy_record.deleted = True
self.db_policy_record.dirty = True
else:
# Update policy for this record.
self.db_policy_record.policy_id = self.policy.id
self.db_policy_record.deleted = False # clear deleted flag
self.db_policy_record.dirty = True
self.db_policy_record.full_clean()
self.db_policy_record.save()
def reconcile(self):
# upsert or delete the top level alias
if self.deleted:
if self._top_level_record.id in self.zone.records():
self.zone.process_records([self])
self.db_policy_record.delete()
else:
existing_alias = self._existing_alias
if (existing_alias is None or not self._top_level_record.is_subset(existing_alias)):
self.zone.process_records([self])
self.db_policy_record.dirty = False # mark as clean
self.db_policy_record.save()
@memoized_property
def _top_level_record(self):
return Record(
name=self.name,
type='A',
alias_target={
'HostedZoneId': self.zone.id,
'DNSName': '{}_{}.{}'.format(RECORD_PREFIX, self.policy.name, self.zone.root),
'EvaluateTargetHealth': False
},
zone=self.zone,
)
@memoized_property
def _existing_alias(self):
return self.zone.records().get(self.id)
def to_aws(self):
return self._top_level_record.to_aws()
@property
def id(self):
return self._top_level_record.id
@property
def values(self):
return [str(self.policy.id)]
@values.setter
def values(self, values):
(pol_id, ) = values
policy = route53.Policy(policy=models.Policy.objects.get(id=pol_id), zone=self.zone)
self.policy = policy
@property
def type(self):
return POLICY_ROUTED
@property
def policy(self):
return self._policy
@policy.setter
def policy(self, value):
if value is None:
self.db_policy_record.policy = None
else:
self.db_policy_record.policy_id = value.id
self._policy = value
def record_factory(zone, created=None, **validated_data):
record_type = validated_data.pop('type')
if record_type == POLICY_ROUTED:
assert len(validated_data['values']) == 1
policy_id = validated_data['values'][0]
try:
policy = models.Policy.objects.get(id=policy_id)
except models.Policy.DoesNotExist:
raise SuspiciousOperation("Policy {} does not exists.".format(
policy_id))
record_model = models.PolicyRecord.new_or_deleted(name=validated_data['name'], zone=zone)
obj = PolicyRecord(
policy_record=record_model,
zone=zone.r53_zone,
policy=policy,
dirty=True,
created=created,
)
else:
obj = Record(zone=zone.r53_zone, type=record_type, created=created, **validated_data)
return obj | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/route53/record.py | record.py |
from collections import OrderedDict
import zinc.route53
from zinc.utils import memoized_property
from .record import Record, RECORD_PREFIX
class Policy:
def __init__(self, zone, policy):
assert isinstance(zone, zinc.route53.Zone)
self.zone = zone
self.db_policy = policy
@property
def name(self):
return self.db_policy.name
@property
def id(self):
return self.db_policy.id
@property
def routing(self):
return self.db_policy.routing
@memoized_property
def aws_records(self):
"""What we have in AWS"""
return dict([
(r_id, record) for (r_id, record) in self.zone.records().items()
if record.is_member_of(self)
])
@memoized_property
def desired_records(self):
"""The records we should have (the desired state of the world)"""
return OrderedDict([(record.id, record) for record in self._build_tree()])
def _build_weighted_tree(self, policy_members, region_suffixed=True):
# Build simple tree
records = []
for policy_member in policy_members:
health_check_kwa = {}
if policy_member.ip.healthcheck_id:
health_check_kwa['health_check_id'] = str(policy_member.ip.healthcheck_id)
record = Record(
ttl=30,
type='A',
values=[policy_member.ip.ip],
set_identifier='{}-{}'.format(str(policy_member.id), policy_member.region),
weight=policy_member.weight,
zone=self.zone,
**health_check_kwa,
)
# TODO: maybe we should have a specialized subclass for PolicyRecords
# and this logic should be moved there
if region_suffixed:
record.name = '{}_{}_{}'.format(RECORD_PREFIX, self.name, policy_member.region)
else:
record.name = '{}_{}'.format(RECORD_PREFIX, self.name)
records.append(record)
return records
def _build_lbr_tree(self, policy_members, regions):
# Build latency based routed tree
records = self._build_weighted_tree(policy_members)
for region in regions:
record = Record(
name='{}_{}'.format(RECORD_PREFIX, self.name),
type='A',
alias_target={
'HostedZoneId': self.zone.id,
'DNSName': '{}_{}_{}.{}'.format(
RECORD_PREFIX, self.name, region, self.zone.root),
'EvaluateTargetHealth': True # len(regions) > 1
},
region=region,
set_identifier=region,
zone=self.zone,
)
records.append(record)
return records
def _build_tree(self):
policy_members = self.db_policy.members.exclude(enabled=False).exclude(ip__enabled=False)
# ensure we always build region subtrees in alphabetical order; makes tests simpler
regions = sorted(set([pm.region for pm in policy_members]))
if len(regions) == 0:
raise Exception(
"Policy can't be applied for zone '{}'; "
"There is no member in the '{}' policy.".format(
self.zone, self
)
)
if self.routing == 'latency':
            # This is the case where there are multiple regions
records = self._build_lbr_tree(policy_members, regions=regions)
# elif len(regions) == 1:
elif self.routing == 'weighted':
# Case with a single region
records = self._build_weighted_tree(
policy_members, region_suffixed=False)
else:
raise AssertionError('invalid routing {} for policy {}'.format(
self.routing, self.db_policy))
return records
def reconcile(self):
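        """Delete obsolete member records and upsert missing or changed ones in AWS."""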
aws_record_ids = self.aws_records.keys()
desired_record_ids = self.desired_records.keys()
to_delete = []
for obsolete_rec_id in aws_record_ids - desired_record_ids:
record = self.aws_records[obsolete_rec_id]
record.deleted = True
to_delete.append(record)
self.zone.process_records(to_delete)
to_upsert = []
for rec_id, desired_record in self.desired_records.items():
existing_record = self.aws_records.get(rec_id)
if existing_record is None:
to_upsert.append(desired_record)
else:
                # upsert unless the desired record is already a subset of the existing one
if not desired_record.to_aws().items() <= existing_record.to_aws().items():
to_upsert.append(desired_record)
self.zone.process_records(to_upsert)
def remove(self):
records = list(self.aws_records.values())
for record in records:
record.deleted = True
self.zone.process_records(records) | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/route53/policy.py | policy.py |
from collections import OrderedDict
import uuid
import logging
from botocore.exceptions import ClientError
from django.db import transaction
from django.conf import settings
from .record import Record
from .policy import Policy
from .client import get_client
logger = logging.getLogger(__name__)
class Zone(object):
def __init__(self, db_zone):
self.db_zone = db_zone
self._aws_records = None
self._exists = None
self._change_batch = []
self._client = get_client()
def __repr__(self):
return "<route53.Zone {} at 0x{:x}>".format(self, id(self))
def __str__(self):
return "{}:{}".format(self.id, self.root)
@property
def id(self):
return self.db_zone.route53_id
@property
def root(self):
return self.db_zone.root
def process_records(self, records):
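        """Stage changes for the given records; commit() flushes them to Route53."""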
for record in records:
self._add_record_changes(record)
def _add_record_changes(self, record):
if record.deleted:
action = 'DELETE'
else:
if record.created is True:
action = 'CREATE'
else:
action = 'UPSERT'
self._change_batch.append({
'Action': action,
'ResourceRecordSet': record.to_aws()
})
def _reset_change_batch(self):
self._change_batch = []
def commit(self, preserve_cache=False):
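        """Flush the staged change batch to Route53 in a single API call."""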
if not preserve_cache:
self._clear_cache()
if not self._change_batch:
return
try:
self._client.change_resource_record_sets(
HostedZoneId=self.id,
ChangeBatch={'Changes': self._change_batch}
)
except self._client.exceptions.InvalidChangeBatch:
logger.warning("failed to process batch %r", self._change_batch)
raise
self._reset_change_batch()
def records(self):
self._cache_aws_records()
entries = OrderedDict()
for aws_record in self._aws_records or []:
record = Record.from_aws_record(aws_record, zone=self)
if record:
entries[record.id] = record
return entries
@property
def exists(self):
self._cache_aws_records()
return self._exists
@property
def ns(self):
if not self.exists:
return None
ns = [record for record in self.records().values()
if record.type == 'NS' and record.name == '@']
assert len(ns) == 1
return ns[0]
def _cache_aws_records(self):
if self._aws_records is not None:
return
if not self.id:
return
paginator = self._client.get_paginator('list_resource_record_sets')
records = []
try:
for page in paginator.paginate(HostedZoneId=self.id):
records.extend(page['ResourceRecordSets'])
except self._client.exceptions.NoSuchHostedZone:
self._clear_cache()
else:
self._aws_records = records
self._exists = True
def _clear_cache(self):
self._aws_records = None
self._exists = None
def delete_from_r53(self):
self._delete_records()
self._client.delete_hosted_zone(Id=self.id)
def delete(self):
if self.exists:
self.delete_from_r53()
self.db_zone.delete()
def _delete_records(self):
self._cache_aws_records()
zone_root = self.root
to_delete = []
for record in self._aws_records:
if record['Type'] in ['NS', 'SOA'] and record['Name'] == zone_root:
continue
to_delete.append({
'Action': 'DELETE',
'ResourceRecordSet': record
})
if to_delete:
self._client.change_resource_record_sets(
HostedZoneId=self.id,
ChangeBatch={
'Changes': to_delete
})
def create(self):
if self.db_zone.caller_reference is None:
self.db_zone.caller_reference = uuid.uuid4()
self.db_zone.save()
zone = self._client.create_hosted_zone(
Name=self.root,
CallerReference=str(self.db_zone.caller_reference),
HostedZoneConfig={
'Comment': getattr(settings, 'ZONE_OWNERSHIP_COMMENT', 'zinc')
}
)
self.db_zone.route53_id = zone['HostedZone']['Id']
self.db_zone.save()
def _reconcile_zone(self):
"""
Handles zone creation/deletion.
"""
if self.db_zone.deleted:
self.delete()
elif self.db_zone.route53_id is None:
self.create()
elif not self.exists:
try:
self.create()
except self._client.exceptions.HostedZoneAlreadyExists:
# This can happen if a zone was manually deleted from AWS.
# Create will fail because we re-use the caller_reference
self.db_zone.caller_reference = None
self.db_zone.save()
self.create()
def check_policy_trees(self):
clean_policy_records = self.db_zone.policy_records.filter(dirty=False)
clean_policies = set([policy_record.policy for policy_record in clean_policy_records])
assert self._change_batch == []
for policy in clean_policies:
r53_policy = Policy(policy=policy, zone=self)
r53_policy.reconcile()
if self._change_batch:
logger.error("Glitch in the matrix for %s %s", self.root, policy.name)
self._change_batch = []
def _reconcile_policy_records(self):
"""
Reconcile policy records for this zone.
"""
with self.db_zone.lock_dirty_policy_records() as dirty_policy_records:
dirty_policies = set()
for policy_record in dirty_policy_records:
if not policy_record.deleted:
dirty_policies.add(policy_record.policy)
for policy in dirty_policies:
r53_policy = Policy(policy=policy, zone=self)
r53_policy.reconcile()
self.commit(preserve_cache=True)
for policy_record in dirty_policy_records:
try:
with transaction.atomic():
policy_record.r53_policy_record.reconcile()
self.commit(preserve_cache=True)
except ClientError:
logger.exception("failed to reconcile record %r", policy_record)
self._reset_change_batch()
self._delete_orphaned_managed_records()
self.commit()
def _delete_orphaned_managed_records(self):
"""Delete any managed record not belonging to one of the zone's policies"""
active_policy_records = self.db_zone.policy_records.select_related('policy') \
.exclude(deleted=True)
policies = set([pr.policy for pr in active_policy_records])
for record in self.records().values():
if record.is_hidden:
for policy in policies:
if record.is_member_of(policy):
break
else:
record.deleted = True
self.process_records([record])
def reconcile(self):
self._reconcile_zone()
self._reconcile_policy_records()
@classmethod
def reconcile_multiple(cls, zones):
for db_zone in zones:
zone = cls(db_zone)
try:
zone.reconcile()
except ClientError:
logger.exception("Error while handling %s", db_zone.name) | zinc-dns | /zinc-dns-1.1.0.tar.gz/zinc-dns-1.1.0/zinc/route53/zone.py | zone.py |
[](https://circleci.com/gh/complexdb/zincbase)
[](https://zenodo.org/badge/latestdoi/183831265)
[](https://zincbase.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.python.org/pypi/zincbase/)
[](https://pypi.python.org/pypi/zincbase/)
[](https://pypi.python.org/pypi/zincbase/)
[](https://pypi.python.org/pypi/zincbase/)
<img src="https://user-images.githubusercontent.com/2245347/57199440-c45daf00-6f33-11e9-91df-1a6a9cae6fb7.png" width="140" alt="Zincbase logo">
ZincBase is a state of the art knowledge base and complex simulation suite. It does the following:
* Store and retrieve graph structured data efficiently.
* Provide ways to query the graph, including via bleeding-edge graph neural networks.
* Simulate complex effects playing out across the graph and see how predictions change.
Zincbase exists to answer questions like "what is the probability that Tom likes LARPing", or "who likes LARPing", or "classify people into LARPers vs normies", or simulations like "what happens if all the LARPers become normies".
<img src="https://user-images.githubusercontent.com/2245347/57595488-2dc45b80-74fa-11e9-80f4-dc5c7a5b22de.png" width="320" alt="Example graph for reasoning">
It combines the latest in neural networks with symbolic logic (think expert systems and prolog), graph search, and complexity theory.
View full documentation [here](https://zincbase.readthedocs.io).
## Quickstart
`pip3 install zincbase`
```
from zincbase import KB
kb = KB()
kb.store('eats(tom, rice)')
for ans in kb.query('eats(tom, Food)'):
print(ans['Food']) # prints 'rice'
...
# The included assets/countries_s1_train.csv contains triples like:
# (namibia, locatedin, africa)
# (lithuania, neighbor, poland)
kb = KB()
kb.from_csv('./assets/countries_s1_train.csv', delimiter='\t')
kb.build_kg_model(cuda=False, embedding_size=40)
kb.train_kg_model(steps=8000, batch_size=1, verbose=False)
kb.estimate_triple_prob('fiji', 'locatedin', 'melanesia')
0.9607
```
# Requirements
* Python 3
* Libraries from requirements.txt
* GPU preferable for large graphs but not required
# Installation
`pip install -r requirements.txt`
_Note:_ Requirements might differ for PyTorch depending on your system.
# Web UI
Zincbase can serve live-updating force-directed graphs in 3D to a web browser. The command
`python -m zincbase.web` will set up a static file server and a websocket
server for live updates. Visit `http://localhost:5000/` in your browser
and you'll see the graph UI. As you build a graph in Python, you can
visualize it (and changes to it) in realtime through this UI.
Here are a couple of examples (source code [here](https://github.com/complexdb/zincbase/tree/master/examples/visualization)):


# Complexity (Graph/Network) Examples
Two such examples are included right now (we intend to include more soon, such
as virus spread and neural nets that communicate). The examples are
basic ones: Conway's Game of Life and the Abelian Sandpile. Here are some
screencaps; source code is [here](https://github.com/complexdb/zincbase/tree/master/examples).
Performance can be lightning fast depending on how you tweak Zincbase recursion
and propagation settings.


### Required for the UI
* You should `pip install zincbase[web]` to get the optional web extra.
* You should have Redis running; by default, at `localhost:6379`. This
is easy to achieve: just run `docker run -p 6379:6379 -d redis`
# Testing
```
python test/test_main.py
python test/test_graph.py
... etc ... all the test files there
python -m doctest zincbase/zincbase.py
```
# Validation
"Countries" and "FB15k" datasets are included in this repo.
There is a script to evaluate that ZincBase gets at least as good
performance on the Countries dataset as the original (2019) RotatE paper. From the repo's
root directory:
```
python examples/eval_countries_s3.py
```
It tests the hardest Countries task and prints out the AUC ROC, which should be
~ 0.95 to match the paper. It takes about 30 minutes to run on a modern GPU.
There is also a script to evaluate performance on FB15k: `python examples/fb15k_mrr.py`.
## Running the web UI
There are a couple of extra requirements -- install with `pip3 install zincbase[web]`.
You also need an accessible Redis instance somewhere. This one-liner will get it running
locally: `docker run -p 6379:6379 -d redis` (requires Docker, of course.)
You then need a Zincbase server instance running: `python -m zincbase.web`.
## Building documentation
From docs/ dir: `make html`. If something changed a lot: `sphinx-apidoc -o . ..`
## Pushing to pypi
NOTE: This is now all automatic via CircleCI, but here are the manual steps for reference:
* Edit `setup.py` as appropriate (probably not necessary)
* Edit the version in `zincbase/__init__.py`
* From the top project directory `python setup.py sdist bdist_wheel --universal`
* `twine upload dist/*`
# TODO
* add ability to `kb = KB(backend='complexdb://my_api_key')`
* utilize postgres as backend triple store
* Reinforcement learning for graph traversal.
* Rete algorithm (maybe)
# References & Acknowledgements
[Theo Trouillon. Complex-Valued Embedding Models for Knowledge Graphs. Machine Learning [cs.LG]. Université Grenoble Alpes, 2017. English. NNT: 2017GREAM048](https://tel.archives-ouvertes.fr/tel-01692327/file/TROUILLON_2017_archivage.pdf)
[L334: Computational Syntax and Semantics -- Introduction to Prolog, Steve Harlow](http://www-users.york.ac.uk/~sjh1/courses/L334css/complete/complete2li1.html)
[Open Book Project: Prolog in Python, Chris Meyers](http://www.openbookproject.net/py4fun/prolog/intro.html)
[Prolog Interpreter in Javascript](https://curiosity-driven.org/prolog-interpreter)
[RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space, Zhiqing Sun and Zhi-Hong Deng and Jian-Yun Nie and Jian Tang, International Conference on Learning Representations, 2019](https://openreview.net/forum?id=HkgEQnRqYQ)
# Citing
If you use this software, please consider citing:
```
@software{zincbase,
author = {{Tom Grek}},
title = {ZincBase: A state of the art knowledge base},
url = {https://github.com/tomgrek/zincbase},
version = {0.1.1},
date = {2019-05-12}
}
```
# Contributing
See CONTRIBUTING. And please do! | zincbase | /zincbase-0.10.1.tar.gz/zincbase-0.10.1/README.md | README.md |
import json
import atexit
import mimetypes
from multiprocessing.pool import ThreadPool
import io
import os
import re
import typing
from urllib.parse import quote
from urllib3.fields import RequestField
from zincsearch_sdk import rest
from zincsearch_sdk.configuration import Configuration
from zincsearch_sdk.exceptions import ApiTypeError, ApiValueError, ApiException
from zincsearch_sdk.model_utils import (
ModelNormal,
ModelSimple,
ModelComposed,
check_allowed_values,
check_validations,
date,
datetime,
deserialize_file,
file_type,
model_to_dict,
none_type,
validate_and_convert_types
)
class ApiClient(object):
"""Generic API client for OpenAPI client library builds.
OpenAPI generic API client. This client handles the client-
server communication, and is invariant across implementations. Specifics of
the methods and models for each application are generated from the OpenAPI
templates.
NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
:param configuration: .Configuration object for this client
:param header_name: a header to pass when making calls to the API.
:param header_value: a header value to pass when making calls to
the API.
:param cookie: a cookie to include in the header when making calls
to the API
:param pool_threads: The number of threads to use for async requests
to the API. More threads means more concurrent API requests.
"""
_pool = None
def __init__(self, configuration=None, header_name=None, header_value=None,
cookie=None, pool_threads=1):
if configuration is None:
configuration = Configuration.get_default_copy()
self.configuration = configuration
self.pool_threads = pool_threads
self.rest_client = rest.RESTClientObject(configuration)
self.default_headers = {}
if header_name is not None:
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/0.3.3/python'
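# Typical usage sketch (the generated Document API and its `bulk` endpoint
# are referenced in the Endpoint docstring further below; the import path
# and host URL here are assumptions):
#
#     from zincsearch_sdk import ApiClient, Configuration
#     configuration = Configuration(host="http://localhost:4080")
#     with ApiClient(configuration) as api_client:
#         api_instance = Document(api_client)  # hypothetical generated API class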
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def close(self):
if self._pool:
self._pool.close()
self._pool.join()
self._pool = None
if hasattr(atexit, 'unregister'):
atexit.unregister(self.close)
@property
def pool(self):
"""Create thread pool on first request
avoids instantiating unused threadpool for blocking clients.
"""
if self._pool is None:
atexit.register(self.close)
self._pool = ThreadPool(self.pool_threads)
return self._pool
@property
def user_agent(self):
"""User agent for this API client"""
return self.default_headers['User-Agent']
@user_agent.setter
def user_agent(self, value):
self.default_headers['User-Agent'] = value
def set_default_header(self, header_name, header_value):
self.default_headers[header_name] = header_value
def __call_api(
self,
resource_path: str,
method: str,
path_params: typing.Optional[typing.Dict[str, typing.Any]] = None,
query_params: typing.Optional[typing.List[typing.Tuple[str, typing.Any]]] = None,
header_params: typing.Optional[typing.Dict[str, typing.Any]] = None,
body: typing.Optional[typing.Any] = None,
post_params: typing.Optional[typing.List[typing.Tuple[str, typing.Any]]] = None,
files: typing.Optional[typing.Dict[str, typing.List[io.IOBase]]] = None,
response_type: typing.Optional[typing.Tuple[typing.Any]] = None,
auth_settings: typing.Optional[typing.List[str]] = None,
_return_http_data_only: typing.Optional[bool] = None,
collection_formats: typing.Optional[typing.Dict[str, str]] = None,
_preload_content: bool = True,
_request_timeout: typing.Optional[typing.Union[int, float, typing.Tuple]] = None,
_host: typing.Optional[str] = None,
_check_type: typing.Optional[bool] = None,
_content_type: typing.Optional[str] = None,
_request_auths: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None
):
config = self.configuration
# header parameters
header_params = header_params or {}
header_params.update(self.default_headers)
if self.cookie:
header_params['Cookie'] = self.cookie
if header_params:
header_params = self.sanitize_for_serialization(header_params)
header_params = dict(self.parameters_to_tuples(header_params,
collection_formats))
# path parameters
if path_params:
path_params = self.sanitize_for_serialization(path_params)
path_params = self.parameters_to_tuples(path_params,
collection_formats)
for k, v in path_params:
# specified safe chars, encode everything
resource_path = resource_path.replace(
'{%s}' % k,
quote(str(v), safe=config.safe_chars_for_path_param)
)
# query parameters
if query_params:
query_params = self.sanitize_for_serialization(query_params)
query_params = self.parameters_to_tuples(query_params,
collection_formats)
# post parameters
if post_params or files:
post_params = post_params if post_params else []
post_params = self.sanitize_for_serialization(post_params)
post_params = self.parameters_to_tuples(post_params,
collection_formats)
post_params.extend(self.files_parameters(files))
if header_params['Content-Type'].startswith("multipart"):
post_params = self.parameters_to_multipart(post_params,
(dict))
# body
if body:
body = self.sanitize_for_serialization(body)
# auth setting
self.update_params_for_auth(header_params, query_params,
auth_settings, resource_path, method, body,
request_auths=_request_auths)
# request url
if _host is None:
url = self.configuration.host + resource_path
else:
# use server/host defined in path or operation instead
url = _host + resource_path
try:
# perform request and return response
response_data = self.request(
method, url, query_params=query_params, headers=header_params,
post_params=post_params, body=body,
_preload_content=_preload_content,
_request_timeout=_request_timeout)
except ApiException as e:
e.body = e.body.decode('utf-8')
raise e
self.last_response = response_data
return_data = response_data
if not _preload_content:
return (return_data)
# deserialize response data
if response_type:
if response_type != (file_type,):
encoding = "utf-8"
content_type = response_data.getheader('content-type')
if content_type is not None:
match = re.search(r"charset=([a-zA-Z\-\d]+)[\s\;]?", content_type)
if match:
encoding = match.group(1)
response_data.data = response_data.data.decode(encoding)
return_data = self.deserialize(
response_data,
response_type,
_check_type
)
else:
return_data = None
if _return_http_data_only:
return (return_data)
else:
return (return_data, response_data.status,
response_data.getheaders())
def parameters_to_multipart(self, params, collection_types):
"""Get parameters as list of tuples, formatting as json if value is collection_types
:param params: Parameters as list of two-tuples
:param dict collection_types: Parameter collection types
:return: Parameters as list of tuple or urllib3.fields.RequestField
"""
new_params = []
if collection_types is None:
collection_types = (dict)
for k, v in params.items() if isinstance(params, dict) else params: # noqa: E501
if isinstance(
v, collection_types): # v is instance of collection_type, formatting as application/json
v = json.dumps(v, ensure_ascii=False).encode("utf-8")
field = RequestField(k, v)
field.make_multipart(content_type="application/json; charset=utf-8")
new_params.append(field)
else:
new_params.append((k, v))
return new_params
@classmethod
def sanitize_for_serialization(cls, obj):
"""Prepares data for transmission before it is sent with the rest client
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is OpenAPI model, return the properties dict.
If obj is io.IOBase, return the bytes
:param obj: The data to serialize.
:return: The serialized form of data.
"""
if isinstance(obj, (ModelNormal, ModelComposed)):
return {
key: cls.sanitize_for_serialization(val) for key,
val in model_to_dict(
obj,
serialize=True).items()}
elif isinstance(obj, io.IOBase):
return cls.get_file_data_and_close_file(obj)
elif isinstance(obj, (str, int, float, none_type, bool)):
return obj
elif isinstance(obj, (datetime, date)):
return obj.isoformat()
elif isinstance(obj, ModelSimple):
return cls.sanitize_for_serialization(obj.value)
elif isinstance(obj, (list, tuple)):
return [cls.sanitize_for_serialization(item) for item in obj]
if isinstance(obj, dict):
return {key: cls.sanitize_for_serialization(val) for key, val in obj.items()}
raise ApiValueError(
'Unable to prepare type {} for serialization'.format(
obj.__class__.__name__))
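# e.g. sanitize_for_serialization({'when': datetime(2023, 1, 1)})
# -> {'when': '2023-01-01T00:00:00'}: datetimes become ISO-8601 strings,
# models become plain dicts, and open file objects are read into bytes.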
def deserialize(self, response, response_type, _check_type):
"""Deserializes response into an object.
:param response: RESTResponse object to be deserialized.
:param response_type: For the response, a tuple containing:
valid classes
a list containing valid classes (for list schemas)
a dict containing a tuple of valid classes as the value
Example values:
(str,)
(Pet,)
(float, none_type)
([int, none_type],)
({str: (bool, str, int, float, date, datetime, str, none_type)},)
:param _check_type: boolean, whether to check the types of the data
received from the server
:type _check_type: bool
:return: deserialized object.
"""
# handle file downloading
# save response body into a tmp file and return the instance
if response_type == (file_type,):
content_disposition = response.getheader("Content-Disposition")
return deserialize_file(response.data, self.configuration,
content_disposition=content_disposition)
# fetch data from response object
try:
received_data = json.loads(response.data)
except ValueError:
received_data = response.data
# store our data under the key of 'received_data' so users have some
# context if they are deserializing a string and the data type is wrong
deserialized_data = validate_and_convert_types(
received_data,
response_type,
['received_data'],
True,
_check_type,
configuration=self.configuration
)
return deserialized_data
def call_api(
self,
resource_path: str,
method: str,
path_params: typing.Optional[typing.Dict[str, typing.Any]] = None,
query_params: typing.Optional[typing.List[typing.Tuple[str, typing.Any]]] = None,
header_params: typing.Optional[typing.Dict[str, typing.Any]] = None,
body: typing.Optional[typing.Any] = None,
post_params: typing.Optional[typing.List[typing.Tuple[str, typing.Any]]] = None,
files: typing.Optional[typing.Dict[str, typing.List[io.IOBase]]] = None,
response_type: typing.Optional[typing.Tuple[typing.Any]] = None,
auth_settings: typing.Optional[typing.List[str]] = None,
async_req: typing.Optional[bool] = None,
_return_http_data_only: typing.Optional[bool] = None,
collection_formats: typing.Optional[typing.Dict[str, str]] = None,
_preload_content: bool = True,
_request_timeout: typing.Optional[typing.Union[int, float, typing.Tuple]] = None,
_host: typing.Optional[str] = None,
_check_type: typing.Optional[bool] = None,
_request_auths: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None
):
"""Makes the HTTP request (synchronous) and returns deserialized data.
To make an async_req request, set the async_req parameter.
:param resource_path: Path to method endpoint.
:param method: Method to call.
:param path_params: Path parameters in the url.
:param query_params: Query parameters in the url.
:param header_params: Header parameters to be
placed in the request header.
:param body: Request body.
:param post_params dict: Request post form parameters,
for `application/x-www-form-urlencoded`, `multipart/form-data`.
:param auth_settings list: Auth Settings names for the request.
:param response_type: For the response, a tuple containing:
valid classes
a list containing valid classes (for list schemas)
a dict containing a tuple of valid classes as the value
Example values:
(str,)
(Pet,)
(float, none_type)
([int, none_type],)
({str: (bool, str, int, float, date, datetime, str, none_type)},)
:param files: key -> field name, value -> a list of open file
objects for `multipart/form-data`.
:type files: dict
:param async_req bool: execute request asynchronously
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param collection_formats: dict of collection formats for path, query,
header, and post parameters.
:type collection_formats: dict, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _check_type: boolean describing if the data back from the server
should have its type checked.
:type _check_type: bool, optional
:param _request_auths: set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auths: list, optional
:return:
If async_req parameter is True,
the request will be called asynchronously.
The method will return the request thread.
If parameter async_req is False or missing,
then the method will return the response directly.
"""
if not async_req:
return self.__call_api(resource_path, method,
path_params, query_params, header_params,
body, post_params, files,
response_type, auth_settings,
_return_http_data_only, collection_formats,
_preload_content, _request_timeout, _host,
_check_type, _request_auths=_request_auths)
return self.pool.apply_async(self.__call_api, (resource_path,
method, path_params,
query_params,
header_params, body,
post_params, files,
response_type,
auth_settings,
_return_http_data_only,
collection_formats,
_preload_content,
_request_timeout,
_host, _check_type, None, _request_auths))
def request(self, method, url, query_params=None, headers=None,
post_params=None, body=None, _preload_content=True,
_request_timeout=None):
"""Makes the HTTP request using RESTClient."""
if method == "GET":
return self.rest_client.GET(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "HEAD":
return self.rest_client.HEAD(url,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
headers=headers)
elif method == "OPTIONS":
return self.rest_client.OPTIONS(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "POST":
return self.rest_client.POST(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PUT":
return self.rest_client.PUT(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "PATCH":
return self.rest_client.PATCH(url,
query_params=query_params,
headers=headers,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
elif method == "DELETE":
return self.rest_client.DELETE(url,
query_params=query_params,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
else:
raise ApiValueError(
"http method must be `GET`, `HEAD`, `OPTIONS`,"
" `POST`, `PATCH`, `PUT` or `DELETE`."
)
def parameters_to_tuples(self, params, collection_formats):
"""Get parameters as list of tuples, formatting collections.
:param params: Parameters as dict or list of two-tuples
:param dict collection_formats: Parameter collection formats
:return: Parameters as list of tuples, collections formatted
"""
new_params = []
if collection_formats is None:
collection_formats = {}
for k, v in params.items() if isinstance(params, dict) else params: # noqa: E501
if k in collection_formats:
collection_format = collection_formats[k]
if collection_format == 'multi':
new_params.extend((k, value) for value in v)
else:
if collection_format == 'ssv':
delimiter = ' '
elif collection_format == 'tsv':
delimiter = '\t'
elif collection_format == 'pipes':
delimiter = '|'
else: # csv is the default
delimiter = ','
new_params.append(
(k, delimiter.join(str(value) for value in v)))
else:
new_params.append((k, v))
return new_params
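# e.g. parameters_to_tuples({'ids': [1, 2, 3]}, {'ids': 'csv'})
# -> [('ids', '1,2,3')], while a 'multi' collection format yields
# [('ids', 1), ('ids', 2), ('ids', 3)] instead.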
@staticmethod
def get_file_data_and_close_file(file_instance: io.IOBase) -> bytes:
file_data = file_instance.read()
file_instance.close()
return file_data
def files_parameters(self,
files: typing.Optional[typing.Dict[str,
typing.List[io.IOBase]]] = None):
"""Builds form parameters.
:param files: None or a dict with key=param_name and
value is a list of open file objects
:return: List of tuples of form parameters with file data
"""
if files is None:
return []
params = []
for param_name, file_instances in files.items():
if file_instances is None:
# if the file field is nullable, skip None values
continue
for file_instance in file_instances:
if file_instance is None:
# if the file field is nullable, skip None values
continue
if file_instance.closed is True:
raise ApiValueError(
"Cannot read a closed file. The passed in file_type "
"for %s must be open." % param_name
)
filename = os.path.basename(file_instance.name)
filedata = self.get_file_data_and_close_file(file_instance)
mimetype = (mimetypes.guess_type(filename)[0] or
'application/octet-stream')
params.append(
tuple([param_name, tuple([filename, filedata, mimetype])]))
return params
def select_header_accept(self, accepts):
"""Returns `Accept` based on an array of accepts provided.
:param accepts: List of headers.
:return: Accept (e.g. application/json).
"""
if not accepts:
return
accepts = [x.lower() for x in accepts]
if 'application/json' in accepts:
return 'application/json'
else:
return ', '.join(accepts)
def select_header_content_type(self, content_types, method=None, body=None):
"""Returns `Content-Type` based on an array of content_types provided.
:param content_types: List of content-types.
:param method: http method (e.g. POST, PATCH).
:param body: http body to send.
:return: Content-Type (e.g. application/json).
"""
if not content_types:
return None
content_types = [x.lower() for x in content_types]
if (method == 'PATCH' and
'application/json-patch+json' in content_types and
isinstance(body, list)):
return 'application/json-patch+json'
if 'application/json' in content_types or '*/*' in content_types:
return 'application/json'
else:
return content_types[0]
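# e.g. select_header_content_type(['application/xml', 'application/json'])
# -> 'application/json' (JSON is preferred whenever the spec offers it).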
def update_params_for_auth(self, headers, queries, auth_settings,
resource_path, method, body, request_auths=None):
"""Updates header and query params based on authentication setting.
:param headers: Header parameters dict to be updated.
:param queries: Query parameters tuple list to be updated.
:param auth_settings: Authentication setting identifiers list.
:param resource_path: A string representation of the HTTP request resource path.
:param method: A string representation of the HTTP request method.
:param body: A object representing the body of the HTTP request.
The object type is the return value of _encoder.default().
:param request_auths: if set, the provided settings will
override the token in the configuration.
"""
if not auth_settings:
return
if request_auths:
for auth_setting in request_auths:
self._apply_auth_params(
headers, queries, resource_path, method, body, auth_setting)
return
for auth in auth_settings:
auth_setting = self.configuration.auth_settings().get(auth)
if auth_setting:
self._apply_auth_params(
headers, queries, resource_path, method, body, auth_setting)
def _apply_auth_params(self, headers, queries, resource_path, method, body, auth_setting):
if auth_setting['in'] == 'cookie':
headers['Cookie'] = auth_setting['key'] + "=" + auth_setting['value']
elif auth_setting['in'] == 'header':
if auth_setting['type'] != 'http-signature':
headers[auth_setting['key']] = auth_setting['value']
elif auth_setting['in'] == 'query':
queries.append((auth_setting['key'], auth_setting['value']))
else:
raise ApiValueError(
'Authentication token must be in `query` or `header`'
)
class Endpoint(object):
def __init__(self, settings=None, params_map=None, root_map=None,
headers_map=None, api_client=None, callable=None):
"""Creates an endpoint
Args:
settings (dict): see below key value pairs
'response_type' (tuple/None): response type
'auth' (list): a list of auth type keys
'endpoint_path' (str): the endpoint path
'operation_id' (str): endpoint string identifier
'http_method' (str): POST/PUT/PATCH/GET etc
'servers' (list): list of str servers that this endpoint is at
params_map (dict): see below key value pairs
'all' (list): list of str endpoint parameter names
'required' (list): list of required parameter names
'nullable' (list): list of nullable parameter names
'enum' (list): list of parameters with enum values
'validation' (list): list of parameters with validations
root_map
'validations' (dict): the dict mapping endpoint parameter tuple
paths to their validation dictionaries
'allowed_values' (dict): the dict mapping endpoint parameter
tuple paths to their allowed_values (enum) dictionaries
'openapi_types' (dict): param_name to openapi type
'attribute_map' (dict): param_name to camelCase name
'location_map' (dict): param_name to 'body', 'file', 'form',
'header', 'path', 'query'
collection_format_map (dict): param_name to `csv` etc.
headers_map (dict): see below key value pairs
'accept' (list): list of Accept header strings
'content_type' (list): list of Content-Type header strings
api_client (ApiClient) api client instance
callable (function): the function which is invoked when the
Endpoint is called
"""
self.settings = settings
self.params_map = params_map
self.params_map['all'].extend([
'async_req',
'_host_index',
'_preload_content',
'_request_timeout',
'_return_http_data_only',
'_check_input_type',
'_check_return_type',
'_content_type',
'_spec_property_naming',
'_request_auths'
])
self.params_map['nullable'].extend(['_request_timeout'])
self.validations = root_map['validations']
self.allowed_values = root_map['allowed_values']
self.openapi_types = root_map['openapi_types']
extra_types = {
'async_req': (bool,),
'_host_index': (none_type, int),
'_preload_content': (bool,),
'_request_timeout': (none_type, float, (float,), [float], int, (int,), [int]),
'_return_http_data_only': (bool,),
'_check_input_type': (bool,),
'_check_return_type': (bool,),
'_spec_property_naming': (bool,),
'_content_type': (none_type, str),
'_request_auths': (none_type, list)
}
self.openapi_types.update(extra_types)
self.attribute_map = root_map['attribute_map']
self.location_map = root_map['location_map']
self.collection_format_map = root_map['collection_format_map']
self.headers_map = headers_map
self.api_client = api_client
self.callable = callable
def __validate_inputs(self, kwargs):
for param in self.params_map['enum']:
if param in kwargs:
check_allowed_values(
self.allowed_values,
(param,),
kwargs[param]
)
for param in self.params_map['validation']:
if param in kwargs:
check_validations(
self.validations,
(param,),
kwargs[param],
configuration=self.api_client.configuration
)
if kwargs['_check_input_type'] is False:
return
for key, value in kwargs.items():
fixed_val = validate_and_convert_types(
value,
self.openapi_types[key],
[key],
kwargs['_spec_property_naming'],
kwargs['_check_input_type'],
configuration=self.api_client.configuration
)
kwargs[key] = fixed_val
def __gather_params(self, kwargs):
params = {
'body': None,
'collection_format': {},
'file': {},
'form': [],
'header': {},
'path': {},
'query': []
}
for param_name, param_value in kwargs.items():
param_location = self.location_map.get(param_name)
if param_location is None:
continue
if param_location:
if param_location == 'body':
params['body'] = param_value
continue
base_name = self.attribute_map[param_name]
if (param_location == 'form' and
self.openapi_types[param_name] == (file_type,)):
params['file'][base_name] = [param_value]
elif (param_location == 'form' and
self.openapi_types[param_name] == ([file_type],)):
# param_value is already a list
params['file'][base_name] = param_value
elif param_location in {'form', 'query'}:
param_value_full = (base_name, param_value)
params[param_location].append(param_value_full)
if param_location not in {'form', 'query'}:
params[param_location][base_name] = param_value
collection_format = self.collection_format_map.get(param_name)
if collection_format:
params['collection_format'][base_name] = collection_format
return params
def __call__(self, *args, **kwargs):
""" This method is invoked when endpoints are called
Example:
api_instance = Document()
api_instance.bulk # this is an instance of the class Endpoint
api_instance.bulk() # this invokes api_instance.bulk.__call__()
which then invokes the callable functions stored in that endpoint at
api_instance.bulk.callable or self.callable in this class
"""
return self.callable(self, *args, **kwargs)
def call_with_http_info(self, **kwargs):
try:
index = self.api_client.configuration.server_operation_index.get(
self.settings['operation_id'], self.api_client.configuration.server_index
) if kwargs['_host_index'] is None else kwargs['_host_index']
server_variables = self.api_client.configuration.server_operation_variables.get(
self.settings['operation_id'], self.api_client.configuration.server_variables
)
_host = self.api_client.configuration.get_host_from_settings(
index, variables=server_variables, servers=self.settings['servers']
)
except IndexError:
if self.settings['servers']:
raise ApiValueError(
"Invalid host index. Must be 0 <= index < %s" %
len(self.settings['servers'])
)
_host = None
for key, value in kwargs.items():
if key not in self.params_map['all']:
raise ApiTypeError(
"Got an unexpected parameter '%s'"
" to method `%s`" %
(key, self.settings['operation_id'])
)
# only raise this nullable-parameter ApiValueError if _check_input_type
# is False; when _check_input_type is True this case is caught
# in self.__validate_inputs
if (key not in self.params_map['nullable'] and value is None
and kwargs['_check_input_type'] is False):
raise ApiValueError(
"Value may not be None for non-nullable parameter `%s`"
" when calling `%s`" %
(key, self.settings['operation_id'])
)
for key in self.params_map['required']:
if key not in kwargs.keys():
raise ApiValueError(
"Missing the required parameter `%s` when calling "
"`%s`" % (key, self.settings['operation_id'])
)
self.__validate_inputs(kwargs)
params = self.__gather_params(kwargs)
accept_headers_list = self.headers_map['accept']
if accept_headers_list:
params['header']['Accept'] = self.api_client.select_header_accept(
accept_headers_list)
if kwargs.get('_content_type'):
params['header']['Content-Type'] = kwargs['_content_type']
else:
content_type_headers_list = self.headers_map['content_type']
if content_type_headers_list:
if params['body'] != "":
content_types_list = self.api_client.select_header_content_type(
content_type_headers_list, self.settings['http_method'],
params['body'])
if content_types_list:
params['header']['Content-Type'] = content_types_list
return self.api_client.call_api(
self.settings['endpoint_path'], self.settings['http_method'],
params['path'],
params['query'],
params['header'],
body=params['body'],
post_params=params['form'],
files=params['file'],
response_type=self.settings['response_type'],
auth_settings=self.settings['auth'],
async_req=kwargs['async_req'],
_check_type=kwargs['_check_return_type'],
_return_http_data_only=kwargs['_return_http_data_only'],
_preload_content=kwargs['_preload_content'],
_request_timeout=kwargs['_request_timeout'],
_host=_host,
_request_auths=kwargs['_request_auths'],
collection_formats=params['collection_format']) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/api_client.py | api_client.py |
import io
import json
import logging
import re
import ssl
from urllib.parse import urlencode
from urllib.parse import urlparse
from urllib.request import proxy_bypass_environment
import urllib3
import ipaddress
from zincsearch_sdk.exceptions import ApiException, UnauthorizedException, ForbiddenException, NotFoundException, ServiceException, ApiValueError
logger = logging.getLogger(__name__)
class RESTResponse(io.IOBase):
def __init__(self, resp):
self.urllib3_response = resp
self.status = resp.status
self.reason = resp.reason
self.data = resp.data
def getheaders(self):
"""Returns a dictionary of the response headers."""
return self.urllib3_response.getheaders()
def getheader(self, name, default=None):
"""Returns a given response header."""
return self.urllib3_response.getheader(name, default)
class RESTClientObject(object):
def __init__(self, configuration, pools_size=4, maxsize=None):
# urllib3.PoolManager will pass all kw parameters to connectionpool
# https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/poolmanager.py#L75 # noqa: E501
# https://github.com/shazow/urllib3/blob/f9409436f83aeb79fbaf090181cd81b784f1b8ce/urllib3/connectionpool.py#L680 # noqa: E501
# maxsize is the number of requests to host that are allowed in parallel # noqa: E501
# Custom SSL certificates and client certificates: http://urllib3.readthedocs.io/en/latest/advanced-usage.html # noqa: E501
# cert_reqs
if configuration.verify_ssl:
cert_reqs = ssl.CERT_REQUIRED
else:
cert_reqs = ssl.CERT_NONE
addition_pool_args = {}
if configuration.assert_hostname is not None:
addition_pool_args['assert_hostname'] = configuration.assert_hostname # noqa: E501
if configuration.retries is not None:
addition_pool_args['retries'] = configuration.retries
if configuration.socket_options is not None:
addition_pool_args['socket_options'] = configuration.socket_options
if maxsize is None:
if configuration.connection_pool_maxsize is not None:
maxsize = configuration.connection_pool_maxsize
else:
maxsize = 4
# https pool manager
if configuration.proxy and not should_bypass_proxies(
configuration.host, no_proxy=configuration.no_proxy or ''):
self.pool_manager = urllib3.ProxyManager(
num_pools=pools_size,
maxsize=maxsize,
cert_reqs=cert_reqs,
ca_certs=configuration.ssl_ca_cert,
cert_file=configuration.cert_file,
key_file=configuration.key_file,
proxy_url=configuration.proxy,
proxy_headers=configuration.proxy_headers,
**addition_pool_args
)
else:
self.pool_manager = urllib3.PoolManager(
num_pools=pools_size,
maxsize=maxsize,
cert_reqs=cert_reqs,
ca_certs=configuration.ssl_ca_cert,
cert_file=configuration.cert_file,
key_file=configuration.key_file,
**addition_pool_args
)
def request(self, method, url, query_params=None, headers=None,
body=None, post_params=None, _preload_content=True,
_request_timeout=None):
"""Perform requests.
:param method: http request method
:param url: http request url
:param query_params: query parameters in the url
:param headers: http request headers
:param body: request json body, for `application/json`
:param post_params: request post parameters,
`application/x-www-form-urlencoded`
and `multipart/form-data`
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
"""
method = method.upper()
assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
'PATCH', 'OPTIONS']
if post_params and body:
raise ApiValueError(
"body parameter cannot be used with post_params parameter."
)
post_params = post_params or {}
headers = headers or {}
timeout = None
if _request_timeout:
if isinstance(_request_timeout, (int, float)): # noqa: E501,F821
timeout = urllib3.Timeout(total=_request_timeout)
elif (isinstance(_request_timeout, tuple) and
len(_request_timeout) == 2):
timeout = urllib3.Timeout(
connect=_request_timeout[0], read=_request_timeout[1])
try:
# For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
# Only set a default Content-Type for POST, PUT, PATCH and OPTIONS requests
if (method != 'DELETE') and ('Content-Type' not in headers):
headers['Content-Type'] = 'application/json'
if query_params:
url += '?' + urlencode(query_params)
if ('Content-Type' not in headers) or (re.search('json',
headers['Content-Type'], re.IGNORECASE)):
request_body = None
if body is not None:
request_body = json.dumps(body)
r = self.pool_manager.request(
method, url,
body=request_body,
preload_content=_preload_content,
timeout=timeout,
headers=headers)
elif headers['Content-Type'] == 'application/x-www-form-urlencoded': # noqa: E501
r = self.pool_manager.request(
method, url,
fields=post_params,
encode_multipart=False,
preload_content=_preload_content,
timeout=timeout,
headers=headers)
elif headers['Content-Type'] == 'multipart/form-data':
# must del headers['Content-Type'], or the correct
# Content-Type which generated by urllib3 will be
# overwritten.
del headers['Content-Type']
r = self.pool_manager.request(
method, url,
fields=post_params,
encode_multipart=True,
preload_content=_preload_content,
timeout=timeout,
headers=headers)
# Pass a `string` parameter directly in the body to support
# other content types than Json when `body` argument is
# provided in serialized form
elif isinstance(body, str) or isinstance(body, bytes):
request_body = body
r = self.pool_manager.request(
method, url,
body=request_body,
preload_content=_preload_content,
timeout=timeout,
headers=headers)
else:
# Cannot generate the request from given parameters
msg = """Cannot prepare a request message for provided
arguments. Please check that your arguments match
declared content type."""
raise ApiException(status=0, reason=msg)
# For `GET`, `HEAD`
else:
r = self.pool_manager.request(method, url,
fields=query_params,
preload_content=_preload_content,
timeout=timeout,
headers=headers)
except urllib3.exceptions.SSLError as e:
msg = "{0}\n{1}".format(type(e).__name__, str(e))
raise ApiException(status=0, reason=msg)
if _preload_content:
r = RESTResponse(r)
# log response body
logger.debug("response body: %s", r.data)
if not 200 <= r.status <= 299:
if r.status == 401:
raise UnauthorizedException(http_resp=r)
if r.status == 403:
raise ForbiddenException(http_resp=r)
if r.status == 404:
raise NotFoundException(http_resp=r)
if 500 <= r.status <= 599:
raise ServiceException(http_resp=r)
raise ApiException(http_resp=r)
return r
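# Usage sketch (the host URL and path are assumptions):
#
#     resp = RESTClientObject(configuration).request(
#         "GET", "http://localhost:4080/version")
#
# When _preload_content is True, `resp` is a RESTResponse whose .status,
# .reason and .data (raw body bytes) mirror the underlying urllib3 response.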
def GET(self, url, headers=None, query_params=None, _preload_content=True,
_request_timeout=None):
return self.request("GET", url,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
query_params=query_params)
def HEAD(self, url, headers=None, query_params=None, _preload_content=True,
_request_timeout=None):
return self.request("HEAD", url,
headers=headers,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
query_params=query_params)
def OPTIONS(self, url, headers=None, query_params=None, post_params=None,
body=None, _preload_content=True, _request_timeout=None):
return self.request("OPTIONS", url,
headers=headers,
query_params=query_params,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
def DELETE(self, url, headers=None, query_params=None, body=None,
_preload_content=True, _request_timeout=None):
return self.request("DELETE", url,
headers=headers,
query_params=query_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
def POST(self, url, headers=None, query_params=None, post_params=None,
body=None, _preload_content=True, _request_timeout=None):
return self.request("POST", url,
headers=headers,
query_params=query_params,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
def PUT(self, url, headers=None, query_params=None, post_params=None,
body=None, _preload_content=True, _request_timeout=None):
return self.request("PUT", url,
headers=headers,
query_params=query_params,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
def PATCH(self, url, headers=None, query_params=None, post_params=None,
body=None, _preload_content=True, _request_timeout=None):
return self.request("PATCH", url,
headers=headers,
query_params=query_params,
post_params=post_params,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
body=body)
# end of class RESTClientObject
def is_ipv4(target):
""" Test if IPv4 address or not
"""
try:
ipaddress.IPv4Address(target)
return True
except ipaddress.AddressValueError:
return False
def in_ipv4net(target, net):
""" Test if target belongs to given IPv4 network
"""
try:
nw = ipaddress.IPv4Network(net)
ip = ipaddress.IPv4Address(target)
if ip in nw:
return True
return False
except ipaddress.AddressValueError:
return False
except ipaddress.NetmaskValueError:
return False
def should_bypass_proxies(url, no_proxy=None):
""" Yet another requests.should_bypass_proxies
Test if proxies should not be used for a particular url.
"""
parsed = urlparse(url)
# special cases
if parsed.hostname in [None, '']:
return True
# special cases
if no_proxy in [None, '']:
return False
if no_proxy == '*':
return True
no_proxy = no_proxy.lower().replace(' ', '')
entries = (
host for host in no_proxy.split(',') if host
)
if is_ipv4(parsed.hostname):
for item in entries:
if in_ipv4net(parsed.hostname, item):
return True
return proxy_bypass_environment(parsed.hostname, {'no': no_proxy}) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/rest.py | rest.py |
class OpenApiException(Exception):
"""The base exception class for all OpenAPIExceptions"""
class ApiTypeError(OpenApiException, TypeError):
def __init__(self, msg, path_to_item=None, valid_classes=None,
key_type=None):
""" Raises an exception for TypeErrors
Args:
msg (str): the exception message
Keyword Args:
path_to_item (list): a list of keys an indices to get to the
current_item
None if unset
valid_classes (tuple): the primitive classes that current item
should be an instance of
None if unset
key_type (bool): False if our value is a value in a dict
True if it is a key in a dict
False if our item is an item in a list
None if unset
"""
self.path_to_item = path_to_item
self.valid_classes = valid_classes
self.key_type = key_type
full_msg = msg
if path_to_item:
full_msg = "{0} at {1}".format(msg, render_path(path_to_item))
super(ApiTypeError, self).__init__(full_msg)
class ApiValueError(OpenApiException, ValueError):
def __init__(self, msg, path_to_item=None):
"""
Args:
msg (str): the exception message
Keyword Args:
path_to_item (list) the path to the exception in the
received_data dict. None if unset
"""
self.path_to_item = path_to_item
full_msg = msg
if path_to_item:
full_msg = "{0} at {1}".format(msg, render_path(path_to_item))
super(ApiValueError, self).__init__(full_msg)
class ApiAttributeError(OpenApiException, AttributeError):
def __init__(self, msg, path_to_item=None):
"""
Raised when an attribute reference or assignment fails.
Args:
msg (str): the exception message
Keyword Args:
path_to_item (None/list) the path to the exception in the
received_data dict
"""
self.path_to_item = path_to_item
full_msg = msg
if path_to_item:
full_msg = "{0} at {1}".format(msg, render_path(path_to_item))
super(ApiAttributeError, self).__init__(full_msg)
class ApiKeyError(OpenApiException, KeyError):
def __init__(self, msg, path_to_item=None):
"""
Args:
msg (str): the exception message
Keyword Args:
path_to_item (None/list) the path to the exception in the
received_data dict
"""
self.path_to_item = path_to_item
full_msg = msg
if path_to_item:
full_msg = "{0} at {1}".format(msg, render_path(path_to_item))
super(ApiKeyError, self).__init__(full_msg)
class ApiException(OpenApiException):
def __init__(self, status=None, reason=None, http_resp=None):
if http_resp:
self.status = http_resp.status
self.reason = http_resp.reason
self.body = http_resp.data
self.headers = http_resp.getheaders()
else:
self.status = status
self.reason = reason
self.body = None
self.headers = None
def __str__(self):
"""Custom error messages for exception"""
error_message = "Status Code: {0}\n"\
"Reason: {1}\n".format(self.status, self.reason)
if self.headers:
error_message += "HTTP response headers: {0}\n".format(
self.headers)
if self.body:
error_message += "HTTP response body: {0}\n".format(self.body)
return error_message
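# e.g. str(ApiException(status=404, reason="Not Found")) renders as:
#     Status Code: 404
#     Reason: Not Found
# with HTTP response headers/body appended when built from an http_resp.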
class NotFoundException(ApiException):
def __init__(self, status=None, reason=None, http_resp=None):
super(NotFoundException, self).__init__(status, reason, http_resp)
class UnauthorizedException(ApiException):
def __init__(self, status=None, reason=None, http_resp=None):
super(UnauthorizedException, self).__init__(status, reason, http_resp)
class ForbiddenException(ApiException):
def __init__(self, status=None, reason=None, http_resp=None):
super(ForbiddenException, self).__init__(status, reason, http_resp)
class ServiceException(ApiException):
def __init__(self, status=None, reason=None, http_resp=None):
super(ServiceException, self).__init__(status, reason, http_resp)
def render_path(path_to_item):
"""Returns a string representation of a path"""
result = ""
for pth in path_to_item:
if isinstance(pth, int):
result += "[{0}]".format(pth)
else:
result += "['{0}']".format(pth)
return result | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/exceptions.py | exceptions.py |
from datetime import date, datetime # noqa: F401
from copy import deepcopy
import inspect
import io
import os
import pprint
import re
import tempfile
import uuid
from dateutil.parser import parse
from zincsearch_sdk.exceptions import (
ApiKeyError,
ApiAttributeError,
ApiTypeError,
ApiValueError,
)
none_type = type(None)
file_type = io.IOBase
def convert_js_args_to_python_args(fn):
from functools import wraps
@wraps(fn)
def wrapped_init(_self, *args, **kwargs):
"""
An attribute named `self` received from the api will conflicts with the reserved `self`
parameter of a class method. During generation, `self` attributes are mapped
to `_self` in models. Here, we name `_self` instead of `self` to avoid conflicts.
"""
spec_property_naming = kwargs.get('_spec_property_naming', False)
if spec_property_naming:
kwargs = change_keys_js_to_python(
kwargs, _self if isinstance(
_self, type) else _self.__class__)
return fn(_self, *args, **kwargs)
return wrapped_init
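# Sketch of the effect (hypothetical model): with _spec_property_naming=True,
# a call like SomeModel(someCamelKey=1, _spec_property_naming=True) has its
# keyword renamed via change_keys_js_to_python to the python attribute name
# (e.g. some_camel_key) before the wrapped __init__ runs.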
class cached_property(object):
# this caches the result of the function call for fn with no inputs
# use this as a decorator on function methods that you want converted
# into cached properties
result_key = '_results'
def __init__(self, fn):
self._fn = fn
def __get__(self, instance, cls=None):
if self.result_key in vars(self):
return vars(self)[self.result_key]
else:
result = self._fn()
setattr(self, self.result_key, result)
return result
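# Usage sketch: generated model classes decorate zero-argument class-level
# functions (note: no `self`), e.g.
#
#     class SomeModel(ModelNormal):          # hypothetical model
#         @cached_property
#         def openapi_types():
#             return {'name': (str,)}
#
# The memoized result is stored on the descriptor itself (one per class
# attribute), so all instances of the class share it.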
PRIMITIVE_TYPES = (list, float, int, bool, datetime, date, str, file_type)
def allows_single_value_input(cls):
"""
This function returns True if the input composed schema model or any
descendant model allows a value-only input.
This is true for cases where oneOf contains items like:
oneOf:
- float
- NumberWithValidation
- StringEnum
- ArrayModel
- null
TODO: lru_cache this
"""
if (
issubclass(cls, ModelSimple) or
cls in PRIMITIVE_TYPES
):
return True
elif issubclass(cls, ModelComposed):
if not cls._composed_schemas['oneOf']:
return False
return any(allows_single_value_input(c) for c in cls._composed_schemas['oneOf'])
return False
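# Illustration (assumed schemas): for a composed model whose oneOf is
# (float, StringEnum), allows_single_value_input returns True, so the model
# can be built from a bare value like Model(3.14); a oneOf made up only of
# object (ModelNormal) schemas returns False.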
def composed_model_input_classes(cls):
"""
This function returns a list of the possible models that can be accepted as
inputs.
TODO: lru_cache this
"""
if issubclass(cls, ModelSimple) or cls in PRIMITIVE_TYPES:
return [cls]
elif issubclass(cls, ModelNormal):
if cls.discriminator is None:
return [cls]
else:
return get_discriminated_classes(cls)
elif issubclass(cls, ModelComposed):
if not cls._composed_schemas['oneOf']:
return []
if cls.discriminator is None:
input_classes = []
for c in cls._composed_schemas['oneOf']:
input_classes.extend(composed_model_input_classes(c))
return input_classes
else:
return get_discriminated_classes(cls)
return []
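# Illustration (assumed schemas): for a oneOf of (float, DogModel) with no
# discriminator this returns [float, DogModel]; with a discriminator it
# defers to get_discriminated_classes instead.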
class OpenApiModel(object):
"""The base class for all OpenAPIModels"""
def set_attribute(self, name, value):
# this is only used to set properties on self
path_to_item = []
if self._path_to_item:
path_to_item.extend(self._path_to_item)
path_to_item.append(name)
if name in self.openapi_types:
required_types_mixed = self.openapi_types[name]
elif self.additional_properties_type is None:
raise ApiAttributeError(
"{0} has no attribute '{1}'".format(
type(self).__name__, name),
path_to_item
)
elif self.additional_properties_type is not None:
required_types_mixed = self.additional_properties_type
if get_simple_class(name) != str:
error_msg = type_error_message(
var_name=name,
var_value=name,
valid_classes=(str,),
key_type=True
)
raise ApiTypeError(
error_msg,
path_to_item=path_to_item,
valid_classes=(str,),
key_type=True
)
if self._check_type:
value = validate_and_convert_types(
value, required_types_mixed, path_to_item, self._spec_property_naming,
self._check_type, configuration=self._configuration)
if (name,) in self.allowed_values:
check_allowed_values(
self.allowed_values,
(name,),
value
)
if (name,) in self.validations:
check_validations(
self.validations,
(name,),
value,
self._configuration
)
self.__dict__['_data_store'][name] = value
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
def __setattr__(self, attr, value):
"""set the value of an attribute using dot notation: `instance.attr = val`"""
self[attr] = value
def __getattr__(self, attr):
"""get the value of an attribute using dot notation: `instance.attr`"""
return self.__getitem__(attr)
def __copy__(self):
cls = self.__class__
if self.get("_spec_property_naming", False):
return cls._new_from_openapi_data(**self.__dict__)
else:
return cls.__new__(cls, **self.__dict__)
def __deepcopy__(self, memo):
cls = self.__class__
if self.get("_spec_property_naming", False):
new_inst = cls._new_from_openapi_data()
else:
new_inst = cls.__new__(cls, **self.__dict__)
for k, v in self.__dict__.items():
setattr(new_inst, k, deepcopy(v, memo))
return new_inst
def __new__(cls, *args, **kwargs):
# this function uses the discriminator to
# pick a new schema/class to instantiate because a discriminator
# propertyName value was passed in
if len(args) == 1:
arg = args[0]
if arg is None and is_type_nullable(cls):
# The input data is the 'null' value and the type is nullable.
return None
if issubclass(cls, ModelComposed) and allows_single_value_input(cls):
model_kwargs = {}
oneof_instance = get_oneof_instance(cls, model_kwargs, kwargs, model_arg=arg)
return oneof_instance
visited_composed_classes = kwargs.get('_visited_composed_classes', ())
if (
cls.discriminator is None or
cls in visited_composed_classes
):
# Use case 1: this openapi schema (cls) does not have a discriminator
# Use case 2: we have already visited this class before and are sure that we
# want to instantiate it this time. We have visited this class deserializing
# a payload with a discriminator. During that process we traveled through
# this class but did not make an instance of it. Now we are making an
# instance of a composed class which contains cls in it, so this time make an instance of cls.
#
# Here's an example of use case 2: If Animal has a discriminator
# petType and we pass in "Dog", and the class Dog's
# allOf includes Animal, we move through Animal
# once using the discriminator, and pick Dog.
# Then in the composed schema Dog, we will make an instance of the
# Animal class (because Dog has allOf: Animal), but this time we won't travel
# through Animal's discriminator because we passed in
# _visited_composed_classes = (Animal,)
return super(OpenApiModel, cls).__new__(cls)
# Get the name and value of the discriminator property.
# The discriminator name is obtained from the discriminator meta-data
# and the discriminator value is obtained from the input data.
discr_propertyname_py = list(cls.discriminator.keys())[0]
discr_propertyname_js = cls.attribute_map[discr_propertyname_py]
if discr_propertyname_js in kwargs:
discr_value = kwargs[discr_propertyname_js]
elif discr_propertyname_py in kwargs:
discr_value = kwargs[discr_propertyname_py]
else:
# The input data does not contain the discriminator property.
path_to_item = kwargs.get('_path_to_item', ())
raise ApiValueError(
"Cannot deserialize input data due to missing discriminator. "
"The discriminator property '%s' is missing at path: %s" %
(discr_propertyname_js, path_to_item)
)
# Implementation note: the last argument to get_discriminator_class
# is a list of visited classes. get_discriminator_class may recursively
# call itself and update the list of visited classes, and the initial
# value must be an empty list. Hence not using 'visited_composed_classes'
new_cls = get_discriminator_class(
cls, discr_propertyname_py, discr_value, [])
if new_cls is None:
path_to_item = kwargs.get('_path_to_item', ())
disc_prop_value = kwargs.get(
discr_propertyname_js, kwargs.get(discr_propertyname_py))
raise ApiValueError(
"Cannot deserialize input data due to invalid discriminator "
"value. The OpenAPI document has no mapping for discriminator "
"property '%s'='%s' at path: %s" %
(discr_propertyname_js, disc_prop_value, path_to_item)
)
if new_cls in visited_composed_classes:
# if we are making an instance of a composed schema Descendant
# which allOf includes Ancestor, then Ancestor contains
# a discriminator that includes Descendant.
# So if we make an instance of Descendant, we have to make an
# instance of Ancestor to hold the allOf properties.
# This code detects that use case and makes the instance of Ancestor
# For example:
# When making an instance of Dog, _visited_composed_classes = (Dog,)
# then we make an instance of Animal to include in dog._composed_instances
# so when we are here, cls is Animal
# cls.discriminator != None
# cls not in _visited_composed_classes
# new_cls = Dog
# but we know that we already have Dog
# because it is in visited_composed_classes
# so make Animal here
return super(OpenApiModel, cls).__new__(cls)
# Build a list containing all oneOf and anyOf descendants.
oneof_anyof_classes = None
if cls._composed_schemas is not None:
oneof_anyof_classes = (
cls._composed_schemas.get('oneOf', ()) +
cls._composed_schemas.get('anyOf', ()))
oneof_anyof_child = new_cls in oneof_anyof_classes
kwargs['_visited_composed_classes'] = visited_composed_classes + (cls,)
if cls._composed_schemas.get('allOf') and oneof_anyof_child:
# Validate that we can make self because when we make the
# new_cls it will not include the allOf validations in self
self_inst = super(OpenApiModel, cls).__new__(cls)
self_inst.__init__(*args, **kwargs)
if kwargs.get("_spec_property_naming", False):
# when true, implies new is from deserialization
new_inst = new_cls._new_from_openapi_data(*args, **kwargs)
else:
new_inst = new_cls.__new__(new_cls, *args, **kwargs)
new_inst.__init__(*args, **kwargs)
return new_inst
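# Note: __new__ above and _new_from_openapi_data below perform the same
# discriminator-based dispatch; _new_from_openapi_data is the path used when
# deserializing server payloads (keys may still be in spec/camelCase naming).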
@classmethod
@convert_js_args_to_python_args
def _new_from_openapi_data(cls, *args, **kwargs):
# this function uses the discriminator to
# pick a new schema/class to instantiate because a discriminator
# propertyName value was passed in
if len(args) == 1:
arg = args[0]
if arg is None and is_type_nullable(cls):
# The input data is the 'null' value and the type is nullable.
return None
if issubclass(cls, ModelComposed) and allows_single_value_input(cls):
model_kwargs = {}
oneof_instance = get_oneof_instance(cls, model_kwargs, kwargs, model_arg=arg)
return oneof_instance
visited_composed_classes = kwargs.get('_visited_composed_classes', ())
if (
cls.discriminator is None or
cls in visited_composed_classes
):
# Use case 1: this openapi schema (cls) does not have a discriminator
# Use case 2: we have already visited this class before and are sure that we
# want to instantiate it this time. We have visited this class deserializing
# a payload with a discriminator. During that process we traveled through
# this class but did not make an instance of it. Now we are making an
# instance of a composed class which contains cls in it, so this time make an instance of cls.
#
# Here's an example of use case 2: If Animal has a discriminator
# petType and we pass in "Dog", and the class Dog's
# allOf includes Animal, we move through Animal
# once using the discriminator, and pick Dog.
# Then in the composed schema Dog, we will make an instance of the
# Animal class (because Dog has allOf: Animal), but this time we won't travel
# through Animal's discriminator because we passed in
# _visited_composed_classes = (Animal,)
return cls._from_openapi_data(*args, **kwargs)
# Get the name and value of the discriminator property.
# The discriminator name is obtained from the discriminator meta-data
# and the discriminator value is obtained from the input data.
discr_propertyname_py = list(cls.discriminator.keys())[0]
discr_propertyname_js = cls.attribute_map[discr_propertyname_py]
if discr_propertyname_js in kwargs:
discr_value = kwargs[discr_propertyname_js]
elif discr_propertyname_py in kwargs:
discr_value = kwargs[discr_propertyname_py]
else:
# The input data does not contain the discriminator property.
path_to_item = kwargs.get('_path_to_item', ())
raise ApiValueError(
"Cannot deserialize input data due to missing discriminator. "
"The discriminator property '%s' is missing at path: %s" %
(discr_propertyname_js, path_to_item)
)
# Implementation note: the last argument to get_discriminator_class
# is a list of visited classes. get_discriminator_class may recursively
# call itself and update the list of visited classes, and the initial
# value must be an empty list. Hence not using 'visited_composed_classes'
new_cls = get_discriminator_class(
cls, discr_propertyname_py, discr_value, [])
if new_cls is None:
path_to_item = kwargs.get('_path_to_item', ())
disc_prop_value = kwargs.get(
discr_propertyname_js, kwargs.get(discr_propertyname_py))
raise ApiValueError(
"Cannot deserialize input data due to invalid discriminator "
"value. The OpenAPI document has no mapping for discriminator "
"property '%s'='%s' at path: %s" %
(discr_propertyname_js, disc_prop_value, path_to_item)
)
if new_cls in visited_composed_classes:
# if we are making an instance of a composed schema Descendent
# which allOf includes Ancestor, then Ancestor contains
# a discriminator that includes Descendent.
# So if we make an instance of Descendent, we have to make an
# instance of Ancestor to hold the allOf properties.
# This code detects that use case and makes the instance of Ancestor
# For example:
# When making an instance of Dog, _visited_composed_classes = (Dog,)
# then we make an instance of Animal to include in dog._composed_instances
# so when we are here, cls is Animal
# cls.discriminator != None
# cls not in _visited_composed_classes
# new_cls = Dog
# but we know that we already have Dog
# because it is in visited_composed_classes
# so make Animal here
return cls._from_openapi_data(*args, **kwargs)
# Build a list containing all oneOf and anyOf descendants.
oneof_anyof_classes = None
if cls._composed_schemas is not None:
oneof_anyof_classes = (
cls._composed_schemas.get('oneOf', ()) +
cls._composed_schemas.get('anyOf', ()))
oneof_anyof_child = new_cls in oneof_anyof_classes
kwargs['_visited_composed_classes'] = visited_composed_classes + (cls,)
if cls._composed_schemas.get('allOf') and oneof_anyof_child:
# Validate that we can make self because when we make the
# new_cls it will not include the allOf validations in self
self_inst = cls._from_openapi_data(*args, **kwargs)
new_inst = new_cls._new_from_openapi_data(*args, **kwargs)
return new_inst
class ModelSimple(OpenApiModel):
"""the parent class of models whose type != object in their
swagger/openapi"""
def __setitem__(self, name, value):
"""set the value of an attribute using square-bracket notation: `instance[attr] = val`"""
if name in self.required_properties:
self.__dict__[name] = value
return
self.set_attribute(name, value)
def get(self, name, default=None):
"""returns the value of an attribute or some default value if the attribute was not set"""
if name in self.required_properties:
return self.__dict__[name]
return self.__dict__['_data_store'].get(name, default)
def __getitem__(self, name):
"""get the value of an attribute using square-bracket notation: `instance[attr]`"""
if name in self:
return self.get(name)
raise ApiAttributeError(
"{0} has no attribute '{1}'".format(
type(self).__name__, name),
[e for e in [self._path_to_item, name] if e]
)
def __contains__(self, name):
"""used by `in` operator to check if an attribute value was set in an instance: `'attr' in instance`"""
if name in self.required_properties:
return name in self.__dict__
return name in self.__dict__['_data_store']
def to_str(self):
"""Returns the string representation of the model"""
return str(self.value)
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, self.__class__):
return False
this_val = self._data_store['value']
that_val = other._data_store['value']
types = set()
types.add(this_val.__class__)
types.add(that_val.__class__)
vals_equal = this_val == that_val
return vals_equal
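# Illustrative sketch of ModelSimple semantics (ColorEnum is a hypothetical
# generated ModelSimple subclass standing in for any non-object model). The
# payload lives under the 'value' key of _data_store, so to_str() and
# __eq__ operate on that single value:
#     color = ColorEnum("red")
#     color.to_str()              # -> "red"
#     color == ColorEnum("red")   # -> True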
class ModelNormal(OpenApiModel):
"""the parent class of models whose type == object in their
swagger/openapi"""
def __setitem__(self, name, value):
"""set the value of an attribute using square-bracket notation: `instance[attr] = val`"""
if name in self.required_properties:
self.__dict__[name] = value
return
self.set_attribute(name, value)
def get(self, name, default=None):
"""returns the value of an attribute or some default value if the attribute was not set"""
if name in self.required_properties:
return self.__dict__[name]
return self.__dict__['_data_store'].get(name, default)
def __getitem__(self, name):
"""get the value of an attribute using square-bracket notation: `instance[attr]`"""
if name in self:
return self.get(name)
raise ApiAttributeError(
"{0} has no attribute '{1}'".format(
type(self).__name__, name),
[e for e in [self._path_to_item, name] if e]
)
def __contains__(self, name):
"""used by `in` operator to check if an attribute value was set in an instance: `'attr' in instance`"""
if name in self.required_properties:
return name in self.__dict__
return name in self.__dict__['_data_store']
def to_dict(self):
"""Returns the model properties as a dict"""
return model_to_dict(self, serialize=False)
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, self.__class__):
return False
if not set(self._data_store.keys()) == set(other._data_store.keys()):
return False
for _var_name, this_val in self._data_store.items():
that_val = other._data_store[_var_name]
types = set()
types.add(this_val.__class__)
types.add(that_val.__class__)
vals_equal = this_val == that_val
if not vals_equal:
return False
return True
class ModelComposed(OpenApiModel):
"""the parent class of models whose type == object in their
swagger/openapi and have oneOf/allOf/anyOf
When one sets a property we use var_name_to_model_instances to store the value in
the correct class instances + run any type checking + validation code.
When one gets a property we use var_name_to_model_instances to get the value
from the correct class instances.
This allows multiple composed schemas to contain the same property with additive
constraints on the value.
_composed_schemas (dict) stores the anyOf/allOf/oneOf classes
key (str): allOf/oneOf/anyOf
value (list): the classes in the XOf definition.
Note: none_type can be included when the openapi document version >= 3.1.0
_composed_instances (list): stores a list of instances of the composed schemas
defined in _composed_schemas. When properties are accessed in the self instance,
they are returned from the self._data_store or the data stores in the instances
in self._composed_schemas
_var_name_to_model_instances (dict): maps between a variable name on self and
the composed instances (self included) which contain that data
key (str): property name
value (list): list of class instances, self or instances in _composed_instances
which contain the value that the key is referring to.
"""
def __setitem__(self, name, value):
"""set the value of an attribute using square-bracket notation: `instance[attr] = val`"""
if name in self.required_properties:
self.__dict__[name] = value
return
"""
Use cases:
1. additional_properties_type is None (additionalProperties == False in spec)
Check for property presence in self.openapi_types
if not present then throw an error
if present set in self, set attribute
always set on composed schemas
2. additional_properties_type exists
set attribute on self
always set on composed schemas
"""
if self.additional_properties_type is None:
"""
For an attribute to exist on a composed schema it must:
- fulfill schema_requirements in the self composed schema not considering oneOf/anyOf/allOf schemas AND
- fulfill schema_requirements in each oneOf/anyOf/allOf schema
schema_requirements:
For an attribute to exist on a schema it must:
- be present in properties at the schema OR
- have additionalProperties unset (defaults additionalProperties = any type) OR
- have additionalProperties set
"""
if name not in self.openapi_types:
raise ApiAttributeError(
"{0} has no attribute '{1}'".format(
type(self).__name__, name),
[e for e in [self._path_to_item, name] if e]
)
# attribute must be set on self and composed instances
self.set_attribute(name, value)
for model_instance in self._composed_instances:
setattr(model_instance, name, value)
if name not in self._var_name_to_model_instances:
# we assigned an additional property
self.__dict__['_var_name_to_model_instances'][name] = self._composed_instances + [self]
return None
__unset_attribute_value__ = object()
def get(self, name, default=None):
"""returns the value of an attribute or some default value if the attribute was not set"""
if name in self.required_properties:
return self.__dict__[name]
# get the attribute from the correct instance
model_instances = self._var_name_to_model_instances.get(name)
values = []
# A composed model stores self and child (oneof/anyOf/allOf) models under
# self._var_name_to_model_instances.
# Any property must exist in self and all model instances
# The value stored in all model instances must be the same
if model_instances:
for model_instance in model_instances:
if name in model_instance._data_store:
v = model_instance._data_store[name]
if v not in values:
values.append(v)
len_values = len(values)
if len_values == 0:
return default
elif len_values == 1:
return values[0]
elif len_values > 1:
raise ApiValueError(
"Values stored for property {0} in {1} differ when looking "
"at self and self's composed instances. All values must be "
"the same".format(name, type(self).__name__),
[e for e in [self._path_to_item, name] if e]
)
def __getitem__(self, name):
"""get the value of an attribute using square-bracket notation: `instance[attr]`"""
value = self.get(name, self.__unset_attribute_value__)
if value is self.__unset_attribute_value__:
raise ApiAttributeError(
"{0} has no attribute '{1}'".format(
type(self).__name__, name),
[e for e in [self._path_to_item, name] if e]
)
return value
def __contains__(self, name):
"""used by `in` operator to check if an attribute value was set in an instance: `'attr' in instance`"""
if name in self.required_properties:
return name in self.__dict__
model_instances = self._var_name_to_model_instances.get(
name, self._additional_properties_model_instances)
if model_instances:
for model_instance in model_instances:
if name in model_instance._data_store:
return True
return False
def to_dict(self):
"""Returns the model properties as a dict"""
return model_to_dict(self, serialize=False)
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, self.__class__):
return False
if not set(self._data_store.keys()) == set(other._data_store.keys()):
return False
for _var_name, this_val in self._data_store.items():
that_val = other._data_store[_var_name]
types = set()
types.add(this_val.__class__)
types.add(that_val.__class__)
vals_equal = this_val == that_val
if not vals_equal:
return False
return True
COERCION_INDEX_BY_TYPE = {
ModelComposed: 0,
ModelNormal: 1,
ModelSimple: 2,
none_type: 3, # The type of 'None'.
list: 4,
dict: 5,
float: 6,
int: 7,
bool: 8,
datetime: 9,
date: 10,
str: 11,
file_type: 12, # 'file_type' is an alias for the built-in 'file' or 'io.IOBase' type.
}
# these are used to limit what type conversions we try to do
# when we have a valid type already and we want to try converting
# to another type
UPCONVERSION_TYPE_PAIRS = (
(str, datetime),
(str, date),
# A float may be serialized as an integer, e.g. '3' is a valid serialized float.
(int, float),
(list, ModelComposed),
(dict, ModelComposed),
(str, ModelComposed),
(int, ModelComposed),
(float, ModelComposed),
(list, ModelNormal),
(dict, ModelNormal),
(str, ModelSimple),
(int, ModelSimple),
(float, ModelSimple),
(list, ModelSimple),
)
COERCIBLE_TYPE_PAIRS = {
False: ( # client instantiation of a model with client data
# (dict, ModelComposed),
# (list, ModelComposed),
# (dict, ModelNormal),
# (list, ModelNormal),
# (str, ModelSimple),
# (int, ModelSimple),
# (float, ModelSimple),
# (list, ModelSimple),
# (str, int),
# (str, float),
# (str, datetime),
# (str, date),
# (int, str),
# (float, str),
),
True: ( # server -> client data
(dict, ModelComposed),
(list, ModelComposed),
(dict, ModelNormal),
(list, ModelNormal),
(str, ModelSimple),
(int, ModelSimple),
(float, ModelSimple),
(list, ModelSimple),
# (str, int),
# (str, float),
(str, datetime),
(str, date),
# (int, str),
# (float, str),
(str, file_type)
),
}
def get_simple_class(input_value):
"""Returns an input_value's simple class that we will use for type checking
Python2:
float and int will return int, where int is the python3 int backport
str and unicode will return str, where str is the python3 str backport
Note: float and int ARE both instances of int backport
Note: str_py2 and unicode_py2 are NOT both instances of str backport
Args:
input_value (class/class_instance): the item for which we will return
the simple class
"""
if isinstance(input_value, type):
# input_value is a class
return input_value
elif isinstance(input_value, tuple):
return tuple
elif isinstance(input_value, list):
return list
elif isinstance(input_value, dict):
return dict
elif isinstance(input_value, none_type):
return none_type
elif isinstance(input_value, file_type):
return file_type
elif isinstance(input_value, bool):
# this must be higher than the int check because
# isinstance(True, int) == True
return bool
elif isinstance(input_value, int):
return int
elif isinstance(input_value, datetime):
# this must be higher than the date check because
# isinstance(datetime_instance, date) == True
return datetime
elif isinstance(input_value, date):
return date
elif isinstance(input_value, str):
return str
return type(input_value)
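# Illustrative examples of the ordering above (bool before int, datetime
# before date), which matters because of Python's subclass relationships:
#     get_simple_class(True)            # -> bool, not int
#     get_simple_class(3)               # -> int
#     get_simple_class(datetime.now())  # -> datetime, not date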
def check_allowed_values(allowed_values, input_variable_path, input_values):
"""Raises an exception if the input_values are not allowed
Args:
allowed_values (dict): the allowed_values dict
input_variable_path (tuple): the path to the input variable
input_values (list/str/int/float/date/datetime): the values that we
are checking to see if they are in allowed_values
"""
these_allowed_values = list(allowed_values[input_variable_path].values())
if (isinstance(input_values, list)
and not set(input_values).issubset(
set(these_allowed_values))):
invalid_values = ", ".join(
map(str, set(input_values) - set(these_allowed_values)))
raise ApiValueError(
"Invalid values for `%s` [%s], must be a subset of [%s]" %
(
input_variable_path[0],
invalid_values,
", ".join(map(str, these_allowed_values))
)
)
elif (isinstance(input_values, dict)
and not set(
input_values.keys()).issubset(set(these_allowed_values))):
invalid_values = ", ".join(
map(str, set(input_values.keys()) - set(these_allowed_values)))
raise ApiValueError(
"Invalid keys in `%s` [%s], must be a subset of [%s]" %
(
input_variable_path[0],
invalid_values,
", ".join(map(str, these_allowed_values))
)
)
elif (not isinstance(input_values, (list, dict))
and input_values not in these_allowed_values):
raise ApiValueError(
"Invalid value for `%s` (%s), must be one of %s" %
(
input_variable_path[0],
input_values,
these_allowed_values
)
)
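# Illustrative usage (the allowed-values dict below is hypothetical; real
# ones are generated into each model):
#     allowed = {('status',): {'AVAILABLE': 'available', 'SOLD': 'sold'}}
#     check_allowed_values(allowed, ('status',), 'available')  # passes
#     check_allowed_values(allowed, ('status',), 'pending')    # raises ApiValueError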
def is_json_validation_enabled(schema_keyword, configuration=None):
"""Returns true if JSON schema validation is enabled for the specified
validation keyword. This can be used to skip JSON schema structural validation
as requested in the configuration.
Args:
schema_keyword (string): the name of a JSON schema validation keyword.
configuration (Configuration): the configuration class.
"""
return (configuration is None or
not hasattr(configuration, '_disabled_client_side_validations') or
schema_keyword not in configuration._disabled_client_side_validations)
def check_validations(
validations, input_variable_path, input_values,
configuration=None):
"""Raises an exception if the input_values are invalid
Args:
validations (dict): the validation dictionary.
input_variable_path (tuple): the path to the input variable.
input_values (list/str/int/float/date/datetime): the values that we
are checking.
configuration (Configuration): the configuration class.
"""
if input_values is None:
return
current_validations = validations[input_variable_path]
if (is_json_validation_enabled('multipleOf', configuration) and
'multiple_of' in current_validations and
isinstance(input_values, (int, float)) and
not (float(input_values) / current_validations['multiple_of']).is_integer()):
# Note 'multipleOf' will be as good as the floating point arithmetic.
raise ApiValueError(
"Invalid value for `%s`, value must be a multiple of "
"`%s`" % (
input_variable_path[0],
current_validations['multiple_of']
)
)
if (is_json_validation_enabled('maxLength', configuration) and
'max_length' in current_validations and
len(input_values) > current_validations['max_length']):
raise ApiValueError(
"Invalid value for `%s`, length must be less than or equal to "
"`%s`" % (
input_variable_path[0],
current_validations['max_length']
)
)
if (is_json_validation_enabled('minLength', configuration) and
'min_length' in current_validations and
len(input_values) < current_validations['min_length']):
raise ApiValueError(
"Invalid value for `%s`, length must be greater than or equal to "
"`%s`" % (
input_variable_path[0],
current_validations['min_length']
)
)
if (is_json_validation_enabled('maxItems', configuration) and
'max_items' in current_validations and
len(input_values) > current_validations['max_items']):
raise ApiValueError(
"Invalid value for `%s`, number of items must be less than or "
"equal to `%s`" % (
input_variable_path[0],
current_validations['max_items']
)
)
if (is_json_validation_enabled('minItems', configuration) and
'min_items' in current_validations and
len(input_values) < current_validations['min_items']):
raise ApiValueError(
"Invalid value for `%s`, number of items must be greater than or "
"equal to `%s`" % (
input_variable_path[0],
current_validations['min_items']
)
)
items = ('exclusive_maximum', 'inclusive_maximum', 'exclusive_minimum',
'inclusive_minimum')
if (any(item in current_validations for item in items)):
if isinstance(input_values, list):
max_val = max(input_values)
min_val = min(input_values)
elif isinstance(input_values, dict):
max_val = max(input_values.values())
min_val = min(input_values.values())
else:
max_val = input_values
min_val = input_values
if (is_json_validation_enabled('exclusiveMaximum', configuration) and
'exclusive_maximum' in current_validations and
max_val >= current_validations['exclusive_maximum']):
raise ApiValueError(
"Invalid value for `%s`, must be a value less than `%s`" % (
input_variable_path[0],
current_validations['exclusive_maximum']
)
)
if (is_json_validation_enabled('maximum', configuration) and
'inclusive_maximum' in current_validations and
max_val > current_validations['inclusive_maximum']):
raise ApiValueError(
"Invalid value for `%s`, must be a value less than or equal to "
"`%s`" % (
input_variable_path[0],
current_validations['inclusive_maximum']
)
)
if (is_json_validation_enabled('exclusiveMinimum', configuration) and
'exclusive_minimum' in current_validations and
min_val <= current_validations['exclusive_minimum']):
raise ApiValueError(
"Invalid value for `%s`, must be a value greater than `%s`" %
(
input_variable_path[0],
current_validations['exclusive_minimum']
)
)
if (is_json_validation_enabled('minimum', configuration) and
'inclusive_minimum' in current_validations and
min_val < current_validations['inclusive_minimum']):
raise ApiValueError(
"Invalid value for `%s`, must be a value greater than or equal "
"to `%s`" % (
input_variable_path[0],
current_validations['inclusive_minimum']
)
)
flags = current_validations.get('regex', {}).get('flags', 0)
if (is_json_validation_enabled('pattern', configuration) and
'regex' in current_validations and
not re.search(current_validations['regex']['pattern'],
input_values, flags=flags)):
err_msg = r"Invalid value for `%s`, must match regular expression `%s`" % (
input_variable_path[0],
current_validations['regex']['pattern']
)
if flags != 0:
# Don't print the regex flags if the flags are not
# specified in the OAS document.
err_msg = r"%s with flags=`%s`" % (err_msg, flags)
raise ApiValueError(err_msg)
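# Illustrative usage (the validations dict below is hypothetical; real ones
# are generated into each model):
#     validations = {('name',): {'min_length': 1, 'max_length': 5}}
#     check_validations(validations, ('name',), 'Fido')     # passes
#     check_validations(validations, ('name',), 'Fidoooo')  # raises ApiValueError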
def order_response_types(required_types):
"""Returns the required types sorted in coercion order
Args:
required_types (list/tuple): collection of classes or instance of
list or dict with class information inside it.
Returns:
(list): coercion order sorted collection of classes or instance
of list or dict with class information inside it.
"""
def index_getter(class_or_instance):
if isinstance(class_or_instance, list):
return COERCION_INDEX_BY_TYPE[list]
elif isinstance(class_or_instance, dict):
return COERCION_INDEX_BY_TYPE[dict]
elif (inspect.isclass(class_or_instance)
and issubclass(class_or_instance, ModelComposed)):
return COERCION_INDEX_BY_TYPE[ModelComposed]
elif (inspect.isclass(class_or_instance)
and issubclass(class_or_instance, ModelNormal)):
return COERCION_INDEX_BY_TYPE[ModelNormal]
elif (inspect.isclass(class_or_instance)
and issubclass(class_or_instance, ModelSimple)):
return COERCION_INDEX_BY_TYPE[ModelSimple]
elif class_or_instance in COERCION_INDEX_BY_TYPE:
return COERCION_INDEX_BY_TYPE[class_or_instance]
raise ApiValueError("Unsupported type: %s" % class_or_instance)
sorted_types = sorted(
required_types,
key=lambda class_or_instance: index_getter(class_or_instance)
)
return sorted_types
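# Illustrative example: types are sorted by COERCION_INDEX_BY_TYPE so model
# classes are attempted before containers and primitives:
#     order_response_types([str, none_type, dict])  # -> [none_type, dict, str]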
def remove_uncoercible(required_types_classes, current_item, spec_property_naming,
must_convert=True):
"""Only keeps the type conversions that are possible
Args:
required_types_classes (tuple): tuple of classes that are required
these should be ordered by COERCION_INDEX_BY_TYPE
spec_property_naming (bool): True if the variable names in the input
data are serialized names as specified in the OpenAPI document.
False if the variables names in the input data are python
variable names in PEP-8 snake case.
current_item (any): the current item (input data) to be converted
Keyword Args:
must_convert (bool): if True the item to convert is of the wrong
type and we want a big list of coercibles
if False, we want a limited list of coercibles
Returns:
(list): the remaining coercible required types, classes only
"""
current_type_simple = get_simple_class(current_item)
results_classes = []
for required_type_class in required_types_classes:
# convert our models to OpenApiModel
required_type_class_simplified = required_type_class
if isinstance(required_type_class_simplified, type):
if issubclass(required_type_class_simplified, ModelComposed):
required_type_class_simplified = ModelComposed
elif issubclass(required_type_class_simplified, ModelNormal):
required_type_class_simplified = ModelNormal
elif issubclass(required_type_class_simplified, ModelSimple):
required_type_class_simplified = ModelSimple
if required_type_class_simplified == current_type_simple:
# don't consider converting to one's own class
continue
class_pair = (current_type_simple, required_type_class_simplified)
if must_convert and class_pair in COERCIBLE_TYPE_PAIRS[spec_property_naming]:
results_classes.append(required_type_class)
elif class_pair in UPCONVERSION_TYPE_PAIRS:
results_classes.append(required_type_class)
return results_classes
def get_discriminated_classes(cls):
"""
Returns all the classes that a discriminator converts to
TODO: lru_cache this
"""
possible_classes = []
key = list(cls.discriminator.keys())[0]
if is_type_nullable(cls):
possible_classes.append(cls)
for discr_cls in cls.discriminator[key].values():
if hasattr(discr_cls, 'discriminator') and discr_cls.discriminator is not None:
possible_classes.extend(get_discriminated_classes(discr_cls))
else:
possible_classes.append(discr_cls)
return possible_classes
def get_possible_classes(cls, from_server_context):
# TODO: lru_cache this
possible_classes = [cls]
if from_server_context:
return possible_classes
if hasattr(cls, 'discriminator') and cls.discriminator is not None:
possible_classes = []
possible_classes.extend(get_discriminated_classes(cls))
elif issubclass(cls, ModelComposed):
possible_classes.extend(composed_model_input_classes(cls))
return possible_classes
def get_required_type_classes(required_types_mixed, spec_property_naming):
"""Converts the tuple required_types into a tuple and a dict described
below
Args:
required_types_mixed (tuple/list): will contain either classes or
instance of list or dict
spec_property_naming (bool): if True these values came from the
server, and we use the data types in our endpoints.
If False, we are client side and we need to include
oneOf and discriminator classes inside the data types in our endpoints
Returns:
(valid_classes, dict_valid_class_to_child_types_mixed):
valid_classes (tuple): the valid classes that the current item
should be
dict_valid_class_to_child_types_mixed (dict):
valid_class (class): this is the key
child_types_mixed (list/dict/tuple): describes the valid child
types
"""
valid_classes = []
child_req_types_by_current_type = {}
for required_type in required_types_mixed:
if isinstance(required_type, list):
valid_classes.append(list)
child_req_types_by_current_type[list] = required_type
elif isinstance(required_type, tuple):
valid_classes.append(tuple)
child_req_types_by_current_type[tuple] = required_type
elif isinstance(required_type, dict):
valid_classes.append(dict)
child_req_types_by_current_type[dict] = required_type[str]
else:
valid_classes.extend(get_possible_classes(required_type, spec_property_naming))
return tuple(valid_classes), child_req_types_by_current_type
def change_keys_js_to_python(input_dict, model_class):
"""
Converts from javascript_key keys in the input_dict to python_keys in
the output dict using the mapping in model_class.
If the input_dict contains a key which is not declared in the model_class,
the key is added to the output dict as is. The assumption is the model_class
may have undeclared properties (additionalProperties attribute in the OAS
document).
"""
if getattr(model_class, 'attribute_map', None) is None:
return input_dict
output_dict = {}
reversed_attr_map = {value: key for key, value in
model_class.attribute_map.items()}
for javascript_key, value in input_dict.items():
python_key = reversed_attr_map.get(javascript_key)
if python_key is None:
# if the key is unknown, it is in error or it is an
# additionalProperties variable
python_key = javascript_key
output_dict[python_key] = value
return output_dict
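# Illustrative usage (Pet is a hypothetical generated model whose
# attribute_map is {'pet_name': 'petName'}):
#     change_keys_js_to_python({'petName': 'Fido', 'extra': 1}, Pet)
#     # -> {'pet_name': 'Fido', 'extra': 1}; unknown keys pass through
#     # unchanged as possible additionalProperties.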
def get_type_error(var_value, path_to_item, valid_classes, key_type=False):
error_msg = type_error_message(
var_name=path_to_item[-1],
var_value=var_value,
valid_classes=valid_classes,
key_type=key_type
)
return ApiTypeError(
error_msg,
path_to_item=path_to_item,
valid_classes=valid_classes,
key_type=key_type
)
def deserialize_primitive(data, klass, path_to_item):
"""Deserializes string to primitive type.
:param data: str/int/float
:param klass: str/class the class to convert to
:return: int, float, str, bool, date, datetime
"""
additional_message = ""
try:
if klass in {datetime, date}:
additional_message = (
"If you need your parameter to have a fallback "
"string value, please set its type as `type: {}` in your "
"spec. That allows the value to be any type. "
)
if klass == datetime:
if len(data) < 8:
raise ValueError("This is not a datetime")
# The string should be in iso8601 datetime format.
parsed_datetime = parse(data)
date_only = (
parsed_datetime.hour == 0 and
parsed_datetime.minute == 0 and
parsed_datetime.second == 0 and
parsed_datetime.tzinfo is None and
8 <= len(data) <= 10
)
if date_only:
raise ValueError("This is a date, not a datetime")
return parsed_datetime
elif klass == date:
if len(data) < 8:
raise ValueError("This is not a date")
return parse(data).date()
else:
converted_value = klass(data)
if isinstance(data, str) and klass == float:
if str(converted_value) != data:
# '7' -> 7.0 -> '7.0' != '7'
raise ValueError('This is not a float')
return converted_value
except (OverflowError, ValueError) as ex:
# parse can raise OverflowError
raise ApiValueError(
"{0}Failed to parse {1} as {2}".format(
additional_message, repr(data), klass.__name__
),
path_to_item=path_to_item
) from ex
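# Illustrative examples of the parsing rules above:
#     deserialize_primitive("2020-01-02", date, ['when'])  # -> date(2020, 1, 2)
#     deserialize_primitive("2020-01-02T03:04:05Z", datetime, ['when'])
#     # -> a timezone-aware datetime
#     deserialize_primitive("7", int, ['count'])           # -> 7
#     deserialize_primitive("7", float, ['x'])             # raises ApiValueError
#     # ('7' round-trips to '7.0', so it is rejected as a serialized float)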
def get_discriminator_class(model_class,
discr_name,
discr_value, cls_visited):
"""Returns the child class specified by the discriminator.
Args:
model_class (OpenApiModel): the model class.
discr_name (string): the name of the discriminator property.
discr_value (any): the discriminator value.
cls_visited (list): list of model classes that have been visited.
Used to determine the discriminator class without
visiting circular references indefinitely.
Returns:
used_model_class (class/None): the chosen child class that will be used
to deserialize the data, for example dog.Dog.
If a class is not found, None is returned.
"""
if model_class in cls_visited:
# The class has already been visited and no suitable class was found.
return None
cls_visited.append(model_class)
used_model_class = None
if discr_name in model_class.discriminator:
class_name_to_discr_class = model_class.discriminator[discr_name]
used_model_class = class_name_to_discr_class.get(discr_value)
if used_model_class is None:
# We didn't find a discriminated class in class_name_to_discr_class.
# So look in the ancestor or descendant discriminators
# The discriminator mapping may exist in a descendant (anyOf, oneOf)
# or ancestor (allOf).
# Ancestor example: in the GrandparentAnimal -> ParentPet -> ChildCat
# hierarchy, the discriminator mappings may be defined at any level
# in the hierarchy.
# Descendant example: mammal -> whale/zebra/Pig -> BasquePig/DanishPig
# if we try to make BasquePig from mammal, we need to travel through
# the oneOf descendant discriminators to find BasquePig
descendant_classes = model_class._composed_schemas.get('oneOf', ()) + \
model_class._composed_schemas.get('anyOf', ())
ancestor_classes = model_class._composed_schemas.get('allOf', ())
possible_classes = descendant_classes + ancestor_classes
for cls in possible_classes:
# Check if the schema has inherited discriminators.
if hasattr(cls, 'discriminator') and cls.discriminator is not None:
used_model_class = get_discriminator_class(
cls, discr_name, discr_value, cls_visited)
if used_model_class is not None:
return used_model_class
return used_model_class
def deserialize_model(model_data, model_class, path_to_item, check_type,
configuration, spec_property_naming):
"""Deserializes model_data to model instance.
Args:
model_data (int/str/float/bool/none_type/list/dict): data to instantiate the model
model_class (OpenApiModel): the model class
path_to_item (list): path to the model in the received data
check_type (bool): whether to check the data type for the values in
the model
configuration (Configuration): the instance to use to convert files
spec_property_naming (bool): True if the variable names in the input
data are serialized names as specified in the OpenAPI document.
False if the variables names in the input data are python
variable names in PEP-8 snake case.
Returns:
model instance
Raise:
ApiTypeError
ApiValueError
ApiKeyError
"""
kw_args = dict(_check_type=check_type,
_path_to_item=path_to_item,
_configuration=configuration,
_spec_property_naming=spec_property_naming)
if issubclass(model_class, ModelSimple):
return model_class._new_from_openapi_data(model_data, **kw_args)
elif isinstance(model_data, list):
return model_class._new_from_openapi_data(*model_data, **kw_args)
if isinstance(model_data, dict):
kw_args.update(model_data)
return model_class._new_from_openapi_data(**kw_args)
elif isinstance(model_data, PRIMITIVE_TYPES):
return model_class._new_from_openapi_data(model_data, **kw_args)
def deserialize_file(response_data, configuration, content_disposition=None):
"""Deserializes body to file
Saves response body into a file in a temporary folder,
using the filename from the `Content-Disposition` header if provided.
Args:
param response_data (str): the file data to write
configuration (Configuration): the instance to use to convert files
Keyword Args:
content_disposition (str): the value of the Content-Disposition
header
Returns:
(file_type): the deserialized file which is open
The user is responsible for closing and reading the file
"""
fd, path = tempfile.mkstemp(dir=configuration.temp_folder_path)
os.close(fd)
os.remove(path)
if content_disposition:
filename = re.search(r'filename=[\'"]?([^\'"\s]+)[\'"]?',
content_disposition,
flags=re.I)
if filename is not None:
filename = filename.group(1)
else:
filename = "default_" + str(uuid.uuid4())
path = os.path.join(os.path.dirname(path), filename)
with open(path, "wb") as f:
if isinstance(response_data, str):
# change str to bytes so we can write it
response_data = response_data.encode('utf-8')
f.write(response_data)
f = open(path, "rb")
return f
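# Illustrative usage (cfg is a Configuration instance; its temp_folder_path
# decides where the file is written):
#     f = deserialize_file(b"col1,col2\n", cfg,
#                          content_disposition='attachment; filename="report.csv"')
#     # f is an open "rb" handle on a file named report.csv; the caller is
#     # responsible for reading and closing it.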
def attempt_convert_item(input_value, valid_classes, path_to_item,
configuration, spec_property_naming, key_type=False,
must_convert=False, check_type=True):
"""
Args:
input_value (any): the data to convert
valid_classes (any): the classes that are valid
path_to_item (list): the path to the item to convert
configuration (Configuration): the instance to use to convert files
spec_property_naming (bool): True if the variable names in the input
data are serialized names as specified in the OpenAPI document.
False if the variables names in the input data are python
variable names in PEP-8 snake case.
key_type (bool): if True we need to convert a key type (not supported)
must_convert (bool): if True we must convert
check_type (bool): if True we check the type or the returned data in
ModelComposed/ModelNormal/ModelSimple instances
Returns:
instance (any) the fixed item
Raises:
ApiTypeError
ApiValueError
ApiKeyError
"""
valid_classes_ordered = order_response_types(valid_classes)
valid_classes_coercible = remove_uncoercible(
valid_classes_ordered, input_value, spec_property_naming)
if not valid_classes_coercible or key_type:
# we do not handle keytype errors, json will take care
# of this for us
if configuration is None or not configuration.discard_unknown_keys:
raise get_type_error(input_value, path_to_item, valid_classes,
key_type=key_type)
for valid_class in valid_classes_coercible:
try:
if issubclass(valid_class, OpenApiModel):
return deserialize_model(input_value, valid_class,
path_to_item, check_type,
configuration, spec_property_naming)
elif valid_class == file_type:
return deserialize_file(input_value, configuration)
return deserialize_primitive(input_value, valid_class,
path_to_item)
except (ApiTypeError, ApiValueError, ApiKeyError) as conversion_exc:
if must_convert:
raise conversion_exc
# if we have conversion errors when must_convert == False
# we ignore the exception and move on to the next class
continue
# we were unable to convert, must_convert == False
return input_value
def is_type_nullable(input_type):
"""
Returns true if None is an allowed value for the specified input_type.
A type is nullable if at least one of the following conditions is true:
1. The OAS 'nullable' attribute has been specified,
2. The type is the 'null' type,
3. The type is an anyOf/oneOf composed schema, and a child schema is
the 'null' type.
Args:
input_type (type): the class of the input_value that we are
checking
Returns:
bool
"""
if input_type is none_type:
return True
if issubclass(input_type, OpenApiModel) and input_type._nullable:
return True
if issubclass(input_type, ModelComposed):
# If oneOf/anyOf, check if the 'null' type is one of the allowed types.
for t in input_type._composed_schemas.get('oneOf', ()):
if is_type_nullable(t):
return True
for t in input_type._composed_schemas.get('anyOf', ()):
if is_type_nullable(t):
return True
return False
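# Illustrative examples:
#     is_type_nullable(none_type)  # -> True
#     is_type_nullable(str)        # -> False
#     # A generated model with 'nullable: true', or a oneOf/anyOf composed
#     # schema listing the 'null' type, also returns True.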
def is_valid_type(input_class_simple, valid_classes):
"""
Args:
input_class_simple (class): the class of the input_value that we are
checking
valid_classes (tuple): the valid classes that the current item
should be
Returns:
bool
"""
if issubclass(input_class_simple, OpenApiModel) and \
valid_classes == (bool, date, datetime, dict, float, int, list, str, none_type,):
return True
valid_type = input_class_simple in valid_classes
if not valid_type and (
issubclass(input_class_simple, OpenApiModel) or
input_class_simple is none_type):
for valid_class in valid_classes:
if input_class_simple is none_type and is_type_nullable(valid_class):
# Schema is oneOf/anyOf and the 'null' type is one of the allowed types.
return True
if not (issubclass(valid_class, OpenApiModel) and valid_class.discriminator):
continue
discr_propertyname_py = list(valid_class.discriminator.keys())[0]
discriminator_classes = (
valid_class.discriminator[discr_propertyname_py].values()
)
valid_type = is_valid_type(input_class_simple, discriminator_classes)
if valid_type:
return True
return valid_type
def validate_and_convert_types(input_value, required_types_mixed, path_to_item,
spec_property_naming, _check_type, configuration=None):
"""Raises a TypeError is there is a problem, otherwise returns value
Args:
input_value (any): the data to validate/convert
required_types_mixed (list/dict/tuple): A list of
valid classes, or a list of tuples of valid classes, or a dict where
the value is a tuple of value classes
path_to_item: (list) the path to the data being validated
this stores a list of keys or indices to get to the data being
validated
spec_property_naming (bool): True if the variable names in the input
data are serialized names as specified in the OpenAPI document.
False if the variables names in the input data are python
variable names in PEP-8 snake case.
_check_type: (boolean) if true, type will be checked and conversion
will be attempted.
configuration: (Configuration): the configuration class to use
when converting file_type items.
If passed, conversion will be attempted when possible
If not passed, no conversions will be attempted and
exceptions will be raised
Returns:
the correctly typed value
Raises:
ApiTypeError
"""
results = get_required_type_classes(required_types_mixed, spec_property_naming)
valid_classes, child_req_types_by_current_type = results
input_class_simple = get_simple_class(input_value)
valid_type = is_valid_type(input_class_simple, valid_classes)
if not valid_type:
if (configuration
or (input_class_simple == dict
and dict not in valid_classes)):
# if input_value is not valid_type try to convert it
converted_instance = attempt_convert_item(
input_value,
valid_classes,
path_to_item,
configuration,
spec_property_naming,
key_type=False,
must_convert=True,
check_type=_check_type
)
return converted_instance
else:
raise get_type_error(input_value, path_to_item, valid_classes,
key_type=False)
# input_value's type is in valid_classes
if len(valid_classes) > 1 and configuration:
# there are valid classes which are not the current class
valid_classes_coercible = remove_uncoercible(
valid_classes, input_value, spec_property_naming, must_convert=False)
if valid_classes_coercible:
converted_instance = attempt_convert_item(
input_value,
valid_classes_coercible,
path_to_item,
configuration,
spec_property_naming,
key_type=False,
must_convert=False,
check_type=_check_type
)
return converted_instance
if child_req_types_by_current_type == {}:
# all types are of the required types and there are no more inner
# variables left to look at
return input_value
inner_required_types = child_req_types_by_current_type.get(
type(input_value)
)
if inner_required_types is None:
# for this type, there are not more inner variables left to look at
return input_value
if isinstance(input_value, list):
if input_value == []:
# allow an empty list
return input_value
for index, inner_value in enumerate(input_value):
inner_path = list(path_to_item)
inner_path.append(index)
input_value[index] = validate_and_convert_types(
inner_value,
inner_required_types,
inner_path,
spec_property_naming,
_check_type,
configuration=configuration
)
elif isinstance(input_value, dict):
if input_value == {}:
# allow an empty dict
return input_value
for inner_key, inner_val in input_value.items():
inner_path = list(path_to_item)
inner_path.append(inner_key)
if get_simple_class(inner_key) != str:
raise get_type_error(inner_key, inner_path, valid_classes,
key_type=True)
input_value[inner_key] = validate_and_convert_types(
inner_val,
inner_required_types,
inner_path,
spec_property_naming,
_check_type,
configuration=configuration
)
return input_value
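# Illustrative call (cfg is a Configuration instance): a serialized date
# string is not a `date`, so it is coerced through attempt_convert_item /
# deserialize_primitive:
#     validate_and_convert_types("2020-01-02", (date,), ['when'],
#                                True, True, configuration=cfg)
#     # -> date(2020, 1, 2)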
def model_to_dict(model_instance, serialize=True):
"""Returns the model properties as a dict
Args:
model_instance (one of your model instances): the model instance that
will be converted to a dict.
Keyword Args:
serialize (bool): if True, the keys in the dict will be values from
attribute_map
"""
result = {}
def extract_item(item): return (
item[0], model_to_dict(
item[1], serialize=serialize)) if hasattr(
item[1], '_data_store') else item
model_instances = [model_instance]
if model_instance._composed_schemas:
model_instances.extend(model_instance._composed_instances)
seen_json_attribute_names = set()
used_fallback_python_attribute_names = set()
py_to_json_map = {}
for model_instance in model_instances:
for attr, value in model_instance._data_store.items():
if serialize:
# we use get here because additional property key names do not
# exist in attribute_map
try:
attr = model_instance.attribute_map[attr]
py_to_json_map.update(model_instance.attribute_map)
seen_json_attribute_names.add(attr)
except KeyError:
used_fallback_python_attribute_names.add(attr)
if isinstance(value, list):
if not value:
# empty list or None
result[attr] = value
else:
res = []
for v in value:
if isinstance(v, PRIMITIVE_TYPES) or v is None:
res.append(v)
elif isinstance(v, ModelSimple):
res.append(v.value)
elif isinstance(v, dict):
res.append(dict(map(
extract_item,
v.items()
)))
else:
res.append(model_to_dict(v, serialize=serialize))
result[attr] = res
elif isinstance(value, dict):
result[attr] = dict(map(
extract_item,
value.items()
))
elif isinstance(value, ModelSimple):
result[attr] = value.value
elif hasattr(value, '_data_store'):
result[attr] = model_to_dict(value, serialize=serialize)
else:
result[attr] = value
if serialize:
for python_key in used_fallback_python_attribute_names:
json_key = py_to_json_map.get(python_key)
if json_key is None:
continue
if python_key == json_key:
continue
json_key_assigned_no_need_for_python_key = json_key in seen_json_attribute_names
if json_key_assigned_no_need_for_python_key:
del result[python_key]
return result
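# Illustrative usage (pet is a hypothetical model instance whose
# attribute_map maps 'pet_name' -> 'petName'):
#     model_to_dict(pet, serialize=True)   # -> {'petName': 'Fido'}
#     model_to_dict(pet, serialize=False)  # -> {'pet_name': 'Fido'}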
def type_error_message(var_value=None, var_name=None, valid_classes=None,
key_type=None):
"""
Keyword Args:
var_value (any): the variable which has the type_error
var_name (str): the name of the variable which has the type error
valid_classes (tuple): the accepted classes for current_item's
value
key_type (bool): False if our value is a value in a dict
True if it is a key in a dict
False if our item is an item in a list
"""
key_or_value = 'value'
if key_type:
key_or_value = 'key'
valid_classes_phrase = get_valid_classes_phrase(valid_classes)
msg = (
"Invalid type for variable '{0}'. Required {1} type {2} and "
"passed type was {3}".format(
var_name,
key_or_value,
valid_classes_phrase,
type(var_value).__name__,
)
)
return msg
def get_valid_classes_phrase(input_classes):
"""Returns a string phrase describing what types are allowed
"""
all_classes = list(input_classes)
all_classes = sorted(all_classes, key=lambda cls: cls.__name__)
all_class_names = [cls.__name__ for cls in all_classes]
if len(all_class_names) == 1:
return 'is {0}'.format(all_class_names[0])
return "is one of [{0}]".format(", ".join(all_class_names))
def get_allof_instances(self, model_args, constant_args):
"""
Args:
self: the class we are handling
model_args (dict): var_name to var_value
used to make instances
constant_args (dict):
metadata arguments:
_check_type
_path_to_item
_spec_property_naming
_configuration
_visited_composed_classes
Returns
composed_instances (list)
"""
composed_instances = []
for allof_class in self._composed_schemas['allOf']:
try:
if constant_args.get('_spec_property_naming'):
allof_instance = allof_class._from_openapi_data(**model_args, **constant_args)
else:
allof_instance = allof_class(**model_args, **constant_args)
composed_instances.append(allof_instance)
except Exception as ex:
raise ApiValueError(
"Invalid inputs given to generate an instance of '%s'. The "
"input data was invalid for the allOf schema '%s' in the composed "
"schema '%s'. Error=%s" % (
allof_class.__name__,
allof_class.__name__,
self.__class__.__name__,
str(ex)
)
) from ex
return composed_instances
def get_oneof_instance(cls, model_kwargs, constant_kwargs, model_arg=None):
"""
Find the oneOf schema that matches the input data (e.g. payload).
If exactly one schema matches the input data, an instance of that schema
is returned.
If zero or more than one schema match the input data, an exception is raised.
In OAS 3.x, the payload MUST, by validation, match exactly one of the
schemas described by oneOf.
Args:
cls: the class we are handling
model_kwargs (dict): var_name to var_value
The input data, e.g. the payload that must match a oneOf schema
in the OpenAPI document.
constant_kwargs (dict): var_name to var_value
args that every model requires, including configuration, server
and path to item.
Kwargs:
model_arg: (int, float, bool, str, date, datetime, ModelSimple, None):
the value to assign to a primitive class or ModelSimple class
Notes:
- this is only passed in when oneOf includes types which are not object
- None is used to suppress handling of model_arg, nullable models are handled in __new__
Returns
oneof_instance (instance)
"""
if len(cls._composed_schemas['oneOf']) == 0:
return None
oneof_instances = []
# Iterate over each oneOf schema and determine if the input data
# matches the oneOf schemas.
for oneof_class in cls._composed_schemas['oneOf']:
# The composed oneOf schema allows the 'null' type and the input data
# is the null value. This is an OAS >= 3.1 feature.
if oneof_class is none_type:
# skip none_types because we are deserializing dict data.
# none_type deserialization is handled in the __new__ method
continue
single_value_input = allows_single_value_input(oneof_class)
try:
if not single_value_input:
if constant_kwargs.get('_spec_property_naming'):
oneof_instance = oneof_class._from_openapi_data(
**model_kwargs, **constant_kwargs)
else:
oneof_instance = oneof_class(**model_kwargs, **constant_kwargs)
else:
if issubclass(oneof_class, ModelSimple):
if constant_kwargs.get('_spec_property_naming'):
oneof_instance = oneof_class._from_openapi_data(
model_arg, **constant_kwargs)
else:
oneof_instance = oneof_class(model_arg, **constant_kwargs)
elif oneof_class in PRIMITIVE_TYPES:
oneof_instance = validate_and_convert_types(
model_arg,
(oneof_class,),
constant_kwargs['_path_to_item'],
constant_kwargs['_spec_property_naming'],
constant_kwargs['_check_type'],
configuration=constant_kwargs['_configuration']
)
oneof_instances.append(oneof_instance)
except Exception:
pass
if len(oneof_instances) == 0:
raise ApiValueError(
"Invalid inputs given to generate an instance of %s. None "
"of the oneOf schemas matched the input data." %
cls.__name__
)
elif len(oneof_instances) > 1:
raise ApiValueError(
"Invalid inputs given to generate an instance of %s. Multiple "
"oneOf schemas matched the inputs, but a max of one is allowed." %
cls.__name__
)
return oneof_instances[0]
def get_anyof_instances(self, model_args, constant_args):
"""
Args:
self: the class we are handling
model_args (dict): var_name to var_value
The input data, e.g. the payload that must match at least one
anyOf child schema in the OpenAPI document.
constant_args (dict): var_name to var_value
args that every model requires, including configuration, server
and path to item.
Returns
anyof_instances (list)
"""
anyof_instances = []
if len(self._composed_schemas['anyOf']) == 0:
return anyof_instances
for anyof_class in self._composed_schemas['anyOf']:
# The composed anyOf schema allows the 'null' type and the input data
# is the null value. This is an OAS >= 3.1 feature.
if anyof_class is none_type:
# skip none_types because we are deserializing dict data.
# none_type deserialization is handled in the __new__ method
continue
try:
if constant_args.get('_spec_property_naming'):
anyof_instance = anyof_class._from_openapi_data(**model_args, **constant_args)
else:
anyof_instance = anyof_class(**model_args, **constant_args)
anyof_instances.append(anyof_instance)
except Exception:
pass
if len(anyof_instances) == 0:
raise ApiValueError(
"Invalid inputs given to generate an instance of %s. None of the "
"anyOf schemas matched the inputs." %
self.__class__.__name__
)
return anyof_instances
def get_discarded_args(self, composed_instances, model_args):
"""
Gathers the args that were discarded by configuration.discard_unknown_keys
"""
model_arg_keys = model_args.keys()
discarded_args = set()
# arguments passed to self were already converted to python names
# before __init__ was called
for instance in composed_instances:
if instance.__class__ in self._composed_schemas['allOf']:
try:
keys = instance.to_dict().keys()
discarded_keys = model_arg_keys - keys
discarded_args.update(discarded_keys)
except Exception:
# allOf integer schema will throw exception
pass
else:
try:
all_keys = set(model_to_dict(instance, serialize=False).keys())
js_keys = model_to_dict(instance, serialize=True).keys()
all_keys.update(js_keys)
discarded_keys = model_arg_keys - all_keys
discarded_args.update(discarded_keys)
except Exception:
# allOf integer schema will throw exception
pass
return discarded_args
def validate_get_composed_info(constant_args, model_args, self):
"""
For composed schemas, generate schema instances for
all schemas in the oneOf/anyOf/allOf definition. If additional
properties are allowed, also assign those properties on
all matched schemas that contain additionalProperties.
Openapi schemas are python classes.
Exceptions are raised if:
- 0 or > 1 oneOf schema matches the model_args input data
- no anyOf schema matches the model_args input data
- any of the allOf schemas do not match the model_args input data
Args:
constant_args (dict): these are the args that every model requires
model_args (dict): these are the required and optional spec args that
were passed in to make this model
self (class): the class that we are instantiating
This class contains self._composed_schemas
Returns:
composed_info (list): length three
composed_instances (list): the composed instances which are not
self
var_name_to_model_instances (dict): a dict going from var_name
to the model_instance which holds that var_name
the model_instance may be self or an instance of one of the
classes in self.composed_instances()
additional_properties_model_instances (list): a list of the
model instances which have the property
additional_properties_type. This list can include self
"""
# create composed_instances
composed_instances = []
allof_instances = get_allof_instances(self, model_args, constant_args)
composed_instances.extend(allof_instances)
oneof_instance = get_oneof_instance(self.__class__, model_args, constant_args)
if oneof_instance is not None:
composed_instances.append(oneof_instance)
anyof_instances = get_anyof_instances(self, model_args, constant_args)
composed_instances.extend(anyof_instances)
"""
set additional_properties_model_instances
additional properties must be evaluated at the schema level
so self's additional properties are most important
If self is a composed schema with:
- no properties defined in self
- additionalProperties: False
Then for object payloads every property is an additional property
and they are not allowed, so only empty dict is allowed
Properties must be set on all matching schemas
so when a property is assigned to a composed instance, it must be set on all
composed instances regardless of additionalProperties presence
keeping it to prevent breaking changes in v5.0.1
TODO remove cls._additional_properties_model_instances in 6.0.0
"""
additional_properties_model_instances = []
if self.additional_properties_type is not None:
additional_properties_model_instances = [self]
"""
no need to set properties on self in here, they will be set in __init__
By here all composed schema oneOf/anyOf/allOf instances have their properties set using
model_args
"""
discarded_args = get_discarded_args(self, composed_instances, model_args)
# map variable names to composed_instances
var_name_to_model_instances = {}
for prop_name in model_args:
if prop_name not in discarded_args:
var_name_to_model_instances[prop_name] = [self] + list(
filter(
lambda x: prop_name in x.openapi_types, composed_instances))
return [
composed_instances,
var_name_to_model_instances,
additional_properties_model_instances,
discarded_args
]

# --- end of zincsearch_sdk/model_utils.py (package: zincsearch-sdk) ---
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.api_client import ApiClient, Endpoint as _Endpoint
from zincsearch_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from zincsearch_sdk.model.meta_healthz_response import MetaHealthzResponse
from zincsearch_sdk.model.meta_version_response import MetaVersionResponse
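# Illustrative usage sketch for this API class (the host and basicAuth
# credentials below are assumptions; substitute your own ZincSearch
# endpoint and user):
#     import zincsearch_sdk
#     from zincsearch_sdk.api import default
#     configuration = zincsearch_sdk.Configuration(host="http://localhost:4080")
#     configuration.username = "admin"
#     configuration.password = "Complexpass#123"
#     with zincsearch_sdk.ApiClient(configuration) as api_client:
#         api = default.Default(api_client)
#         print(api.version())  # -> MetaVersionResponse
#         print(api.healthz())  # -> MetaHealthzResponse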
class Default(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.healthz_endpoint = _Endpoint(
settings={
'response_type': (MetaHealthzResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/healthz',
'operation_id': 'healthz',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.version_endpoint = _Endpoint(
settings={
'response_type': (MetaVersionResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/version',
'operation_id': 'version',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
def healthz(
self,
**kwargs
):
"""Get healthz # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.healthz(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHealthzResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.healthz_endpoint.call_with_http_info(**kwargs)
def version(
self,
**kwargs
):
"""Get version # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.version(async_req=True)
>>> result = thread.get()
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaVersionResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.version_endpoint.call_with_http_info(**kwargs) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/api/default.py | default.py |
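# ---------------------------------------------------------------------------
# Minimal usage sketch for the Default API above. This is a hedged example,
# not part of the generated client: it assumes the class in default.py is
# named `Default` (matching Document, Search, and User in the sibling
# modules), and the host and credentials below are illustrative placeholders
# for a reachable ZincSearch server.
import zincsearch_sdk
from zincsearch_sdk.api.default import Default

configuration = zincsearch_sdk.Configuration(
    host="http://localhost:4080",   # assumed local ZincSearch instance
    username="admin",               # assumed basicAuth credentials
    password="Complexpass#123",
)
with zincsearch_sdk.ApiClient(configuration) as api_client:
    api = Default(api_client)
    # Synchronous call (the default): returns a MetaHealthzResponse directly.
    health = api.healthz()
    # Asynchronous call: async_req=True returns a thread-like handle whose
    # .get() blocks until the MetaVersionResponse is available.
    thread = api.version(async_req=True)
    version = thread.get()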
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.api_client import ApiClient, Endpoint as _Endpoint
from zincsearch_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from zincsearch_sdk.model.meta_http_response_document import MetaHTTPResponseDocument
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_http_response_id import MetaHTTPResponseID
from zincsearch_sdk.model.meta_http_response_record_count import MetaHTTPResponseRecordCount
from zincsearch_sdk.model.meta_json_ingest import MetaJSONIngest
class Document(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.bulk_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseRecordCount,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/_bulk',
'operation_id': 'bulk',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'query',
],
'required': [
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'query':
(str,),
},
'attribute_map': {
},
'location_map': {
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'text/plain'
]
},
api_client=api_client
)
self.bulkv2_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseRecordCount,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/_bulkv2',
'operation_id': 'bulkv2',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'query',
],
'required': [
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'query':
(MetaJSONIngest,),
},
'attribute_map': {
},
'location_map': {
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.delete_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseDocument,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_doc/{id}',
'operation_id': 'delete',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'index',
'id',
],
'required': [
'index',
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'id':
(str,),
},
'attribute_map': {
'index': 'index',
'id': 'id',
},
'location_map': {
'index': 'path',
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.es_bulk_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_bulk',
'operation_id': 'es_bulk',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'query',
],
'required': [
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'query':
(str,),
},
'attribute_map': {
},
'location_map': {
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'text/plain'
]
},
api_client=api_client
)
self.index_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_doc',
'operation_id': 'index',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'document',
],
'required': [
'index',
'document',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'document':
(bool, date, datetime, dict, float, int, list, str, none_type,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'document': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.index_with_id_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_doc/{id}',
'operation_id': 'index_with_id',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'index',
'id',
'document',
],
'required': [
'index',
'id',
'document',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'id':
(str,),
'document':
(bool, date, datetime, dict, float, int, list, str, none_type,),
},
'attribute_map': {
'index': 'index',
'id': 'id',
},
'location_map': {
'index': 'path',
'id': 'path',
'document': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.multi_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseRecordCount,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_multi',
'operation_id': 'multi',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'query',
],
'required': [
'index',
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'query':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'text/plain'
]
},
api_client=api_client
)
self.update_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_update/{id}',
'operation_id': 'update',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'id',
'document',
],
'required': [
'index',
'id',
'document',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'id':
(str,),
'document':
(bool, date, datetime, dict, float, int, list, str, none_type,),
},
'attribute_map': {
'index': 'index',
'id': 'id',
},
'location_map': {
'index': 'path',
'id': 'path',
'document': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def bulk(
self,
query,
**kwargs
):
"""Bulk documents # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.bulk(query, async_req=True)
>>> result = thread.get()
Args:
query (str): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseRecordCount
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['query'] = \
query
return self.bulk_endpoint.call_with_http_info(**kwargs)
def bulkv2(
self,
query,
**kwargs
):
"""Bulkv2 documents # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.bulkv2(query, async_req=True)
>>> result = thread.get()
Args:
query (MetaJSONIngest): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseRecordCount
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['query'] = \
query
return self.bulkv2_endpoint.call_with_http_info(**kwargs)
def delete(
self,
index,
id,
**kwargs
):
"""Delete document # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete(index, id, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
id (str): ID
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseDocument
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['id'] = \
id
return self.delete_endpoint.call_with_http_info(**kwargs)
def es_bulk(
self,
query,
**kwargs
):
"""ES bulk documents # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.es_bulk(query, async_req=True)
>>> result = thread.get()
Args:
query (str): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['query'] = \
query
return self.es_bulk_endpoint.call_with_http_info(**kwargs)
def index(
self,
index,
document,
**kwargs
):
"""Create or update document # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.index(index, document, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
document (bool, date, datetime, dict, float, int, list, str, none_type): Document
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['document'] = \
document
return self.index_endpoint.call_with_http_info(**kwargs)
def index_with_id(
self,
index,
id,
document,
**kwargs
):
"""Create or update document with id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.index_with_id(index, id, document, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
id (str): ID
document (bool, date, datetime, dict, float, int, list, str, none_type): Document
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['id'] = \
id
kwargs['document'] = \
document
return self.index_with_id_endpoint.call_with_http_info(**kwargs)
def multi(
self,
index,
query,
**kwargs
):
"""Multi documents # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.multi(index, query, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
query (str): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseRecordCount
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['query'] = \
query
return self.multi_endpoint.call_with_http_info(**kwargs)
def update(
self,
index,
id,
document,
**kwargs
):
"""Update document with id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update(index, id, document, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
id (str): ID
document (bool, date, datetime, dict, float, int, list, str, none_type): Document
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['id'] = \
id
kwargs['document'] = \
document
return self.update_endpoint.call_with_http_info(**kwargs) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/api/document.py | document.py |
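# ---------------------------------------------------------------------------
# Minimal usage sketch for the Document API above (hedged example, not part
# of the generated client). Host, credentials, the "books" index, and the
# document bodies are illustrative placeholders; the bulk payload follows the
# ES-style NDJSON convention, which is an assumption about the server side.
import zincsearch_sdk
from zincsearch_sdk.api.document import Document

configuration = zincsearch_sdk.Configuration(
    host="http://localhost:4080", username="admin", password="Complexpass#123")
with zincsearch_sdk.ApiClient(configuration) as api_client:
    doc_api = Document(api_client)
    # index(): create a document and let the server assign its id.
    created = doc_api.index("books", {"title": "Foundation", "year": 1951})
    # index_with_id(): create or replace the document stored under id "1".
    doc_api.index_with_id("books", "1", {"title": "Dune", "year": 1965})
    # update(): update the document stored under id "1".
    doc_api.update("books", "1", {"year": 1966})
    # bulk(): the body is a plain newline-delimited string, matching the
    # text/plain content type declared by bulk_endpoint above.
    ndjson = ('{"index": {"_index": "books"}}\n'
              '{"title": "Hyperion", "year": 1989}\n')
    doc_api.bulk(ndjson)
    # delete(): remove the document stored under id "1".
    doc_api.delete("books", "1")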
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.api_client import ApiClient, Endpoint as _Endpoint
from zincsearch_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from zincsearch_sdk.model.meta_http_response_delete_by_query import MetaHTTPResponseDeleteByQuery
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_search_response import MetaSearchResponse
from zincsearch_sdk.model.meta_zinc_query import MetaZincQuery
from zincsearch_sdk.model.v1_zinc_query import V1ZincQuery
class Search(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.delete_by_query_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseDeleteByQuery,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{index}/_delete_by_query',
'operation_id': 'delete_by_query',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'query',
],
'required': [
'index',
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'query':
(MetaZincQuery,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.msearch_endpoint = _Endpoint(
settings={
'response_type': (MetaSearchResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_msearch',
'operation_id': 'msearch',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'query',
],
'required': [
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'query':
(str,),
},
'attribute_map': {
},
'location_map': {
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'text/plain'
]
},
api_client=api_client
)
self.search_endpoint = _Endpoint(
settings={
'response_type': (MetaSearchResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{index}/_search',
'operation_id': 'search',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'query',
],
'required': [
'index',
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'query':
(MetaZincQuery,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.search_v1_endpoint = _Endpoint(
settings={
'response_type': (MetaSearchResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_search',
'operation_id': 'search_v1',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'query',
],
'required': [
'index',
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'query':
(V1ZincQuery,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def delete_by_query(
self,
index,
query,
**kwargs
):
"""Searches the index and deletes all matched documents # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_by_query(index, query, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
query (MetaZincQuery): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseDeleteByQuery
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['query'] = \
query
return self.delete_by_query_endpoint.call_with_http_info(**kwargs)
def msearch(
self,
query,
**kwargs
):
"""Search V2 MultipleSearch for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.msearch(query, async_req=True)
>>> result = thread.get()
Args:
query (str): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaSearchResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['query'] = \
query
return self.msearch_endpoint.call_with_http_info(**kwargs)
def search(
self,
index,
query,
**kwargs
):
"""Search V2 DSL for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.search(index, query, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
query (MetaZincQuery): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaSearchResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['query'] = \
query
return self.search_endpoint.call_with_http_info(**kwargs)
def search_v1(
self,
index,
query,
**kwargs
):
"""Search V1 # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.search_v1(index, query, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
query (V1ZincQuery): Query
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaSearchResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['query'] = \
query
return self.search_v1_endpoint.call_with_http_info(**kwargs) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/api/search.py | search.py |
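# ---------------------------------------------------------------------------
# Minimal usage sketch for the Search API above (hedged example, not part of
# the generated client). search() is typed to take a MetaZincQuery model; to
# avoid assuming that model's exact field names, this sketch passes a plain
# ES-style dict and disables input type checking with the documented
# _check_input_type kwarg. Host, credentials, index name, and the query
# bodies are illustrative placeholders.
import zincsearch_sdk
from zincsearch_sdk.api.search import Search

configuration = zincsearch_sdk.Configuration(
    host="http://localhost:4080", username="admin", password="Complexpass#123")
with zincsearch_sdk.ApiClient(configuration) as api_client:
    search_api = Search(api_client)
    # ES-compatible DSL search against a single index.
    result = search_api.search(
        "books",
        {"query": {"match_all": {}}, "size": 10},
        _check_input_type=False,  # pass the raw dict instead of MetaZincQuery
    )
    # msearch(): newline-delimited header/body pairs as one plain string,
    # matching the text/plain content type declared by msearch_endpoint.
    lines = ('{"index": "books"}\n'
             '{"query": {"match_all": {}}, "size": 5}\n')
    multi = search_api.msearch(lines)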
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.api_client import ApiClient, Endpoint as _Endpoint
from zincsearch_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from zincsearch_sdk.model.auth_login_request import AuthLoginRequest
from zincsearch_sdk.model.auth_login_response import AuthLoginResponse
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_http_response_id import MetaHTTPResponseID
from zincsearch_sdk.model.meta_user import MetaUser
class User(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.create_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/user',
'operation_id': 'create',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'user',
],
'required': [
'user',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'user':
(MetaUser,),
},
'attribute_map': {
},
'location_map': {
'user': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.delete_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/user/{id}',
'operation_id': 'delete',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_endpoint = _Endpoint(
settings={
'response_type': ([MetaUser],),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/user',
'operation_id': 'list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.login_endpoint = _Endpoint(
settings={
'response_type': (AuthLoginResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/login',
'operation_id': 'login',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'login',
],
'required': [
'login',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'login':
(AuthLoginRequest,),
},
'attribute_map': {
},
'location_map': {
'login': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseID,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/user',
'operation_id': 'update',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'user',
],
'required': [
'user',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'user':
(MetaUser,),
},
'attribute_map': {
},
'location_map': {
'user': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def create(
self,
user,
**kwargs
):
"""Create user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create(user, async_req=True)
>>> result = thread.get()
Args:
user (MetaUser): User data
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['user'] = \
user
return self.create_endpoint.call_with_http_info(**kwargs)
def delete(
self,
id,
**kwargs
):
"""Delete user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete(id, async_req=True)
>>> result = thread.get()
Args:
id (str): User id
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['id'] = \
id
return self.delete_endpoint.call_with_http_info(**kwargs)
def list(
self,
**kwargs
):
"""List user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list(async_req=True)
>>> result = thread.get()
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
            _request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
[MetaUser]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.list_endpoint.call_with_http_info(**kwargs)
def login(
self,
login,
**kwargs
):
"""Login # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.login(login, async_req=True)
>>> result = thread.get()
Args:
login (AuthLoginRequest): Login credentials
Keyword Args:
            _return_http_data_only (bool): return the response data only,
                without the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
AuthLoginResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['login'] = \
login
return self.login_endpoint.call_with_http_info(**kwargs)
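# Usage sketch, hedged: the AuthLoginRequest constructor arguments below
# are assumptions about that model's fields (they are defined in its own
# module, not in this file); the credentials are placeholders.
#
#   from zincsearch_sdk.model.auth_login_request import AuthLoginRequest
#   resp = User(api_client).login(AuthLoginRequest(id="admin", password="***"))
#   print(resp)  # AuthLoginResponse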
def update(
self,
user,
**kwargs
):
"""Update user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update(user, async_req=True)
>>> result = thread.get()
Args:
user (MetaUser): User data
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseID
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['user'] = \
user
return self.update_endpoint.call_with_http_info(**kwargs)


# --- end of zincsearch_sdk/api/user.py (zincsearch-sdk 0.3.3); the Index API module follows ---
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.api_client import ApiClient, Endpoint as _Endpoint
from zincsearch_sdk.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from zincsearch_sdk.model.index_analyze_response import IndexAnalyzeResponse
from zincsearch_sdk.model.index_index_list_response import IndexIndexListResponse
from zincsearch_sdk.model.meta_http_response import MetaHTTPResponse
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_http_response_index import MetaHTTPResponseIndex
from zincsearch_sdk.model.meta_http_response_template import MetaHTTPResponseTemplate
from zincsearch_sdk.model.meta_index_settings import MetaIndexSettings
from zincsearch_sdk.model.meta_index_simple import MetaIndexSimple
from zincsearch_sdk.model.meta_index_template import MetaIndexTemplate
from zincsearch_sdk.model.meta_mappings import MetaMappings
from zincsearch_sdk.model.meta_template import MetaTemplate
class Index(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self.add_or_remove_es_alias_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_aliases',
'operation_id': 'add_or_remove_es_alias',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.analyze_endpoint = _Endpoint(
settings={
'response_type': (IndexAnalyzeResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/_analyze',
'operation_id': 'analyze',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'query',
],
'required': [
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'query':
(bool, date, datetime, dict, float, int, list, str, none_type,),
},
'attribute_map': {
},
'location_map': {
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.analyze_index_endpoint = _Endpoint(
settings={
'response_type': (IndexAnalyzeResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_analyze',
'operation_id': 'analyze_index',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
'query',
],
'required': [
'index',
'query',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'query':
(bool, date, datetime, dict, float, int, list, str, none_type,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'query': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.create_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseIndex,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index',
'operation_id': 'create',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'data',
],
'required': [
'data',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'data':
(MetaIndexSimple,),
},
'attribute_map': {
},
'location_map': {
'data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.create_template_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseTemplate,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_index_template',
'operation_id': 'create_template',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'template',
],
'required': [
'template',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'template':
(MetaIndexTemplate,),
},
'attribute_map': {
},
'location_map': {
'template': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.delete_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseIndex,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index/{index}',
'operation_id': 'delete',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.delete_template_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_index_template/{name}',
'operation_id': 'delete_template',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'name',
],
'required': [
'name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'name':
(str,),
},
'attribute_map': {
'name': 'name',
},
'location_map': {
'name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.e_s_create_index_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{index}',
'operation_id': 'e_s_create_index',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'index',
'data',
],
'required': [
'index',
'data',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'data':
(MetaIndexSimple,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'data': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.e_s_get_mapping_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{index}/_mapping',
'operation_id': 'e_s_get_mapping',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.es_exists_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{index}',
'operation_id': 'es_exists',
'http_method': 'HEAD',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.exists_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index/{index}',
'operation_id': 'exists',
'http_method': 'HEAD',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_es_aliases_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/{target}/_alias/{target_alias}',
'operation_id': 'get_es_aliases',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'target',
'target_alias',
],
'required': [
'target',
'target_alias',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'target':
(str,),
'target_alias':
(str,),
},
'attribute_map': {
'target': 'target',
'target_alias': 'target_alias',
},
'location_map': {
'target': 'path',
'target_alias': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_index_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index/{index}',
'operation_id': 'get_index',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_mapping_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_mapping',
'operation_id': 'get_mapping',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_settings_endpoint = _Endpoint(
settings={
'response_type': (bool, date, datetime, dict, float, int, list, str, none_type,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_settings',
'operation_id': 'get_settings',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.get_template_endpoint = _Endpoint(
settings={
'response_type': (MetaIndexTemplate,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_index_template/{name}',
'operation_id': 'get_template',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'name',
],
'required': [
'name',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'name':
(str,),
},
'attribute_map': {
'name': 'name',
},
'location_map': {
'name': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.index_name_list_endpoint = _Endpoint(
settings={
'response_type': ([str],),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index_name',
'operation_id': 'index_name_list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'name',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'name':
(str,),
},
'attribute_map': {
'name': 'name',
},
'location_map': {
'name': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_endpoint = _Endpoint(
settings={
'response_type': (IndexIndexListResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index',
'operation_id': 'list',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'page_num',
'page_size',
'sort_by',
'desc',
'name',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'page_num':
(int,),
'page_size':
(int,),
'sort_by':
(str,),
'desc':
(bool,),
'name':
(str,),
},
'attribute_map': {
'page_num': 'page_num',
'page_size': 'page_size',
'sort_by': 'sort_by',
'desc': 'desc',
'name': 'name',
},
'location_map': {
'page_num': 'query',
'page_size': 'query',
'sort_by': 'query',
'desc': 'query',
'name': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.list_templates_endpoint = _Endpoint(
settings={
'response_type': ([MetaTemplate],),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_index_template',
'operation_id': 'list_templates',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.refresh_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/index/{index}/refresh',
'operation_id': 'refresh',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'index',
],
'required': [
'index',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client
)
self.set_mapping_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_mapping',
'operation_id': 'set_mapping',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'index',
'mapping',
],
'required': [
'index',
'mapping',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'mapping':
(MetaMappings,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'mapping': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.set_settings_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponse,),
'auth': [
'basicAuth'
],
'endpoint_path': '/api/{index}/_settings',
'operation_id': 'set_settings',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'index',
'settings',
],
'required': [
'index',
'settings',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'index':
(str,),
'settings':
(MetaIndexSettings,),
},
'attribute_map': {
'index': 'index',
},
'location_map': {
'index': 'path',
'settings': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
self.update_template_endpoint = _Endpoint(
settings={
'response_type': (MetaHTTPResponseTemplate,),
'auth': [
'basicAuth'
],
'endpoint_path': '/es/_index_template/{name}',
'operation_id': 'update_template',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'name',
'template',
],
'required': [
'name',
'template',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'name':
(str,),
'template':
(MetaIndexTemplate,),
},
'attribute_map': {
'name': 'name',
},
'location_map': {
'name': 'path',
'template': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client
)
def add_or_remove_es_alias(
self,
**kwargs
):
"""Add or remove index alias for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_or_remove_es_alias(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.add_or_remove_es_alias_endpoint.call_with_http_info(**kwargs)
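# Usage sketch. Note that this generated endpoint declares no parameters
# (its params_map['all'] is empty), so only the generic keyword options
# can be passed; an ES-style "actions" body cannot be supplied through
# this method as generated.
#
#   resp = Index(api_client).add_or_remove_es_alias()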
def analyze(
self,
query,
**kwargs
):
"""Analyze # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.analyze(query, async_req=True)
>>> result = thread.get()
Args:
query (bool, date, datetime, dict, float, int, list, str, none_type): Query
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
IndexAnalyzeResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['query'] = \
query
return self.analyze_endpoint.call_with_http_info(**kwargs)
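# Usage sketch. The query body is free-form for this endpoint; the shape
# below mirrors the ES-style _analyze request and is an assumption, not
# something this file specifies.
#
#   resp = Index(api_client).analyze({"analyzer": "standard",
#                                     "text": "quick brown fox"})
#   print(resp)  # IndexAnalyzeResponse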
def analyze_index(
self,
index,
query,
**kwargs
):
"""Analyze # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.analyze_index(index, query, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
query (bool, date, datetime, dict, float, int, list, str, none_type): Query
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
IndexAnalyzeResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['query'] = \
query
return self.analyze_index_endpoint.call_with_http_info(**kwargs)
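# Usage sketch: same free-form request shape as analyze(), but scoped to
# a single index so that index's own analyzers apply; the index name is
# illustrative.
#
#   resp = Index(api_client).analyze_index("books", {"text": "quick brown fox"})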
def create(
self,
data,
**kwargs
):
"""Create index # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create(data, async_req=True)
>>> result = thread.get()
Args:
data (MetaIndexSimple): Index data
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseIndex
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['data'] = \
data
return self.create_endpoint.call_with_http_info(**kwargs)
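# Usage sketch, hedged: the MetaIndexSimple field names below ("name",
# "storage_type") are assumptions about that model, which is defined in
# its own module rather than in this file.
#
#   from zincsearch_sdk.model.meta_index_simple import MetaIndexSimple
#   resp = Index(api_client).create(
#       MetaIndexSimple(name="books", storage_type="disk"))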
def create_template(
self,
template,
**kwargs
):
"""Create update index template # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_template(template, async_req=True)
>>> result = thread.get()
Args:
template (MetaIndexTemplate): Template data
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseTemplate
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['template'] = \
template
return self.create_template_endpoint.call_with_http_info(**kwargs)
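# Usage sketch, shown schematically: MetaIndexTemplate's exact fields
# live in its model module, so the empty constructor below is a
# placeholder rather than a working template definition.
#
#   from zincsearch_sdk.model.meta_index_template import MetaIndexTemplate
#   resp = Index(api_client).create_template(MetaIndexTemplate())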
def delete(
self,
index,
**kwargs
):
"""Delete index # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseIndex
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.delete_endpoint.call_with_http_info(**kwargs)
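# Usage sketch (the index name "books" is illustrative):
#
#   resp = Index(api_client).delete("books")  # MetaHTTPResponseIndex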
def delete_template(
self,
name,
**kwargs
):
"""Delete template # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_template(name, async_req=True)
>>> result = thread.get()
Args:
name (str): Template
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['name'] = \
name
return self.delete_template_endpoint.call_with_http_info(**kwargs)
def e_s_create_index(
self,
index,
data,
**kwargs
):
"""Create index for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.e_s_create_index(index, data, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
data (MetaIndexSimple): Index data
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['data'] = \
data
return self.e_s_create_index_endpoint.call_with_http_info(**kwargs)
def e_s_get_mapping(
self,
index,
**kwargs
):
"""Get index mappings for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.e_s_get_mapping(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.e_s_get_mapping_endpoint.call_with_http_info(**kwargs)
def es_exists(
self,
index,
**kwargs
):
"""Checks if the index exists for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.es_exists(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.es_exists_endpoint.call_with_http_info(**kwargs)
def exists(
self,
index,
**kwargs
):
"""Checks if the index exists # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.exists(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.exists_endpoint.call_with_http_info(**kwargs)
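# Usage sketch. Generated OpenAPI clients typically raise ApiException on
# a non-2xx status, so a missing index is best detected with try/except;
# that runtime behaviour is an assumption, not stated in this file.
#
#   from zincsearch_sdk.exceptions import ApiException
#   try:
#       Index(api_client).exists("books")
#       index_present = True
#   except ApiException:
#       index_present = False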
def get_es_aliases(
self,
target,
target_alias,
**kwargs
):
"""Get index alias for compatible ES # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_es_aliases(target, target_alias, async_req=True)
>>> result = thread.get()
Args:
target (str): Target Index
target_alias (str): Target Alias
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['target'] = \
target
kwargs['target_alias'] = \
target_alias
return self.get_es_aliases_endpoint.call_with_http_info(**kwargs)
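# Usage sketch (both target names are illustrative):
#
#   aliases = Index(api_client).get_es_aliases("books", "books-alias")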
def get_index(
self,
index,
**kwargs
):
"""Get index metadata # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_index(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.get_index_endpoint.call_with_http_info(**kwargs)
def get_mapping(
self,
index,
**kwargs
):
"""Get index mappings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_mapping(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): return the response data only, without
the HTTP status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.get_mapping_endpoint.call_with_http_info(**kwargs)
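# Usage sketch (same assumed client setup as in the sketch above;
# "my-index" is a hypothetical index name). The async form mirrors the
# docstring example:
#
#   thread = api_instance.get_mapping("my-index", async_req=True)
#   mappings = thread.get()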
def get_settings(
self,
index,
**kwargs
):
"""Get index settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_settings(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.get_settings_endpoint.call_with_http_info(**kwargs)
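# Usage sketch (same assumed client setup; hypothetical index name):
#
#   settings = api_instance.get_settings("my-index")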
def get_template(
self,
name,
**kwargs
):
"""Get index template # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_template(name, async_req=True)
>>> result = thread.get()
Args:
name (str): Template
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaIndexTemplate
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['name'] = \
name
return self.get_template_endpoint.call_with_http_info(**kwargs)
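# Usage sketch (same assumed client setup; hypothetical template name). Per
# the docstring above, this returns a MetaIndexTemplate:
#
#   template = api_instance.get_template("my-template")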
def index_name_list(
self,
**kwargs
):
"""List index Name # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.index_name_list(async_req=True)
>>> result = thread.get()
Keyword Args:
name (str): IndexName. [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
[str]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.index_name_list_endpoint.call_with_http_info(**kwargs)
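# Usage sketch (same assumed client setup). The optional ``name`` keyword
# documented above filters the returned list of index names:
#
#   names = api_instance.index_name_list(name="my")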
def list(
self,
**kwargs
):
"""List indexes # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list(async_req=True)
>>> result = thread.get()
Keyword Args:
page_num (int): page num. [optional]
page_size (int): page size. [optional]
sort_by (str): sort by. [optional]
desc (bool): desc. [optional]
name (str): name. [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
IndexIndexListResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.list_endpoint.call_with_http_info(**kwargs)
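# Usage sketch (same assumed client setup). All keywords below are the
# optional paging/sorting parameters documented above; values are
# illustrative:
#
#   page = api_instance.list(page_num=1, page_size=20, sort_by="name", desc=False)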
def list_templates(
self,
**kwargs
):
"""List index teplates # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_templates(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
[MetaTemplate]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
return self.list_templates_endpoint.call_with_http_info(**kwargs)
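# Usage sketch (same assumed client setup); per the docstring above this
# returns a list of MetaTemplate:
#
#   templates = api_instance.list_templates()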
def refresh(
self,
index,
**kwargs
):
"""Resfresh index # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.refresh(index, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
return self.refresh_endpoint.call_with_http_info(**kwargs)
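# Usage sketch (same assumed client setup; hypothetical index name):
#
#   resp = api_instance.refresh("my-index")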
def set_mapping(
self,
index,
mapping,
**kwargs
):
"""Set index mappings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_mapping(index, mapping, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
mapping (MetaMappings): Mapping
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['mapping'] = \
mapping
return self.set_mapping_endpoint.call_with_http_info(**kwargs)
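# Usage sketch (same assumed client setup). The mapping body is a
# MetaMappings model defined elsewhere in this package; its fields are
# deliberately elided here:
#
#   from zincsearch_sdk.model.meta_mappings import MetaMappings
#   resp = api_instance.set_mapping("my-index", MetaMappings(...))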
def set_settings(
self,
index,
settings,
**kwargs
):
"""Set index Settings # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_settings(index, settings, async_req=True)
>>> result = thread.get()
Args:
index (str): Index
settings (MetaIndexSettings): Settings
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['index'] = \
index
kwargs['settings'] = \
settings
return self.set_settings_endpoint.call_with_http_info(**kwargs)
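# Usage sketch (same assumed client setup). The settings body is a
# MetaIndexSettings model, elided here:
#
#   from zincsearch_sdk.model.meta_index_settings import MetaIndexSettings
#   resp = api_instance.set_settings("my-index", MetaIndexSettings(...))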
def update_template(
self,
name,
template,
**kwargs
):
"""Create update index template # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_template(name, template, async_req=True)
>>> result = thread.get()
Args:
name (str): Template
template (MetaIndexTemplate): Template data
Keyword Args:
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
_request_auths (list): set to override the auth_settings for a single
request; this effectively ignores the authentication
in the spec for a single request.
Default is None
async_req (bool): execute request asynchronously
Returns:
MetaHTTPResponseTemplate
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_spec_property_naming'] = kwargs.get(
'_spec_property_naming', False
)
kwargs['_content_type'] = kwargs.get(
'_content_type')
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['_request_auths'] = kwargs.get('_request_auths', None)
kwargs['name'] = \
name
kwargs['template'] = \
template
return self.update_template_endpoint.call_with_http_info(**kwargs) | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/api/index.py | index.py |
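# Usage sketch (same assumed client setup; hypothetical template name). The
# body is a MetaIndexTemplate model, elided here:
#
#   from zincsearch_sdk.model.meta_index_template import MetaIndexTemplate
#   resp = api_instance.update_template("my-template", MetaIndexTemplate(...))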
# import all models into this package
# if you have many models here with many references from one model to another this may
# raise a RecursionError
# to avoid this, import only the models that you directly need like:
# from zincsearch_sdk.model.pet import Pet
# or import this package, but before doing it, use:
# import sys
# sys.setrecursionlimit(n)
from zincsearch_sdk.model.aggregation_histogram_bound import AggregationHistogramBound
from zincsearch_sdk.model.auth_login_request import AuthLoginRequest
from zincsearch_sdk.model.auth_login_response import AuthLoginResponse
from zincsearch_sdk.model.auth_login_user import AuthLoginUser
from zincsearch_sdk.model.index_analyze_response import IndexAnalyzeResponse
from zincsearch_sdk.model.index_analyze_response_token import IndexAnalyzeResponseToken
from zincsearch_sdk.model.index_index_list_response import IndexIndexListResponse
from zincsearch_sdk.model.meta_aggregation_auto_date_histogram import MetaAggregationAutoDateHistogram
from zincsearch_sdk.model.meta_aggregation_date_histogram import MetaAggregationDateHistogram
from zincsearch_sdk.model.meta_aggregation_date_range import MetaAggregationDateRange
from zincsearch_sdk.model.meta_aggregation_histogram import MetaAggregationHistogram
from zincsearch_sdk.model.meta_aggregation_ip_range import MetaAggregationIPRange
from zincsearch_sdk.model.meta_aggregation_metric import MetaAggregationMetric
from zincsearch_sdk.model.meta_aggregation_range import MetaAggregationRange
from zincsearch_sdk.model.meta_aggregation_response import MetaAggregationResponse
from zincsearch_sdk.model.meta_aggregations import MetaAggregations
from zincsearch_sdk.model.meta_aggregations_terms import MetaAggregationsTerms
from zincsearch_sdk.model.meta_analyzer import MetaAnalyzer
from zincsearch_sdk.model.meta_bool_query import MetaBoolQuery
from zincsearch_sdk.model.meta_date_range import MetaDateRange
from zincsearch_sdk.model.meta_exists_query import MetaExistsQuery
from zincsearch_sdk.model.meta_fuzzy_query import MetaFuzzyQuery
from zincsearch_sdk.model.meta_http_response import MetaHTTPResponse
from zincsearch_sdk.model.meta_http_response_delete_by_query import MetaHTTPResponseDeleteByQuery
from zincsearch_sdk.model.meta_http_response_document import MetaHTTPResponseDocument
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_http_response_id import MetaHTTPResponseID
from zincsearch_sdk.model.meta_http_response_index import MetaHTTPResponseIndex
from zincsearch_sdk.model.meta_http_response_record_count import MetaHTTPResponseRecordCount
from zincsearch_sdk.model.meta_http_response_template import MetaHTTPResponseTemplate
from zincsearch_sdk.model.meta_healthz_response import MetaHealthzResponse
from zincsearch_sdk.model.meta_highlight import MetaHighlight
from zincsearch_sdk.model.meta_hit import MetaHit
from zincsearch_sdk.model.meta_hits import MetaHits
from zincsearch_sdk.model.meta_http_retries_response import MetaHttpRetriesResponse
from zincsearch_sdk.model.meta_ip_range import MetaIPRange
from zincsearch_sdk.model.meta_ids_query import MetaIdsQuery
from zincsearch_sdk.model.meta_index_analysis import MetaIndexAnalysis
from zincsearch_sdk.model.meta_index_settings import MetaIndexSettings
from zincsearch_sdk.model.meta_index_simple import MetaIndexSimple
from zincsearch_sdk.model.meta_index_template import MetaIndexTemplate
from zincsearch_sdk.model.meta_json_ingest import MetaJSONIngest
from zincsearch_sdk.model.meta_mappings import MetaMappings
from zincsearch_sdk.model.meta_match_bool_prefix_query import MetaMatchBoolPrefixQuery
from zincsearch_sdk.model.meta_match_phrase_prefix_query import MetaMatchPhrasePrefixQuery
from zincsearch_sdk.model.meta_match_phrase_query import MetaMatchPhraseQuery
from zincsearch_sdk.model.meta_match_query import MetaMatchQuery
from zincsearch_sdk.model.meta_multi_match_query import MetaMultiMatchQuery
from zincsearch_sdk.model.meta_page import MetaPage
from zincsearch_sdk.model.meta_prefix_query import MetaPrefixQuery
from zincsearch_sdk.model.meta_property import MetaProperty
from zincsearch_sdk.model.meta_query import MetaQuery
from zincsearch_sdk.model.meta_query_string_query import MetaQueryStringQuery
from zincsearch_sdk.model.meta_range import MetaRange
from zincsearch_sdk.model.meta_range_query import MetaRangeQuery
from zincsearch_sdk.model.meta_regexp_query import MetaRegexpQuery
from zincsearch_sdk.model.meta_search_response import MetaSearchResponse
from zincsearch_sdk.model.meta_shards import MetaShards
from zincsearch_sdk.model.meta_simple_query_string_query import MetaSimpleQueryStringQuery
from zincsearch_sdk.model.meta_template import MetaTemplate
from zincsearch_sdk.model.meta_template_template import MetaTemplateTemplate
from zincsearch_sdk.model.meta_term_query import MetaTermQuery
from zincsearch_sdk.model.meta_total import MetaTotal
from zincsearch_sdk.model.meta_user import MetaUser
from zincsearch_sdk.model.meta_version_response import MetaVersionResponse
from zincsearch_sdk.model.meta_wildcard_query import MetaWildcardQuery
from zincsearch_sdk.model.meta_zinc_query import MetaZincQuery
from zincsearch_sdk.model.v1_aggregation_date_range import V1AggregationDateRange
from zincsearch_sdk.model.v1_aggregation_number_range import V1AggregationNumberRange
from zincsearch_sdk.model.v1_aggregation_params import V1AggregationParams
from zincsearch_sdk.model.v1_query_params import V1QueryParams
from zincsearch_sdk.model.v1_zinc_query import V1ZincQuery | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/models/__init__.py | __init__.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaIdsQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'values': ([str],), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'values': 'values', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaIdsQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
values ([str]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaIdsQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
values ([str]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_ids_query.py | meta_ids_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaAnalyzer(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'char_filter': ([str],), # noqa: E501
'filter': ([str],), # noqa: E501
'lowercase': (bool,), # noqa: E501
'pattern': (str,), # noqa: E501
'stopwords': ([str],), # noqa: E501
'token_filter': ([str],), # noqa: E501
'tokenizer': (str,), # noqa: E501
'type': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'char_filter': 'char_filter', # noqa: E501
'filter': 'filter', # noqa: E501
'lowercase': 'lowercase', # noqa: E501
'pattern': 'pattern', # noqa: E501
'stopwords': 'stopwords', # noqa: E501
'token_filter': 'token_filter', # noqa: E501
'tokenizer': 'tokenizer', # noqa: E501
'type': 'type', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaAnalyzer - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
char_filter ([str]): [optional] # noqa: E501
filter ([str]): compatibility with es, alias for TokenFilter. [optional] # noqa: E501
lowercase (bool): for type=pattern. [optional] # noqa: E501
pattern (str): for type=pattern. [optional] # noqa: E501
stopwords ([str]): for type=pattern,standard,stop. [optional] # noqa: E501
token_filter ([str]): [optional] # noqa: E501
tokenizer (str): [optional] # noqa: E501
type (str): options for compatible. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaAnalyzer - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
char_filter ([str]): [optional] # noqa: E501
filter ([str]): compatibility with es, alias for TokenFilter. [optional] # noqa: E501
lowercase (bool): for type=pattern. [optional] # noqa: E501
pattern (str): for type=pattern. [optional] # noqa: E501
stopwords ([str]): for type=pattern,standard,stop. [optional] # noqa: E501
token_filter ([str]): [optional] # noqa: E501
tokenizer (str): [optional] # noqa: E501
type (str): options for compatible. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_analyzer.py | meta_analyzer.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class V1AggregationNumberRange(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'_from': (float,), # noqa: E501
'to': (float,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'_from': 'from', # noqa: E501
'to': 'to', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""V1AggregationNumberRange - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
_from (float): [optional] # noqa: E501
to (float): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""V1AggregationNumberRange - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted.
If omitted, no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
_from (float): [optional] # noqa: E501
to (float): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/v1_aggregation_number_range.py | v1_aggregation_number_range.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_aggregations import MetaAggregations
from zincsearch_sdk.model.meta_highlight import MetaHighlight
from zincsearch_sdk.model.meta_query import MetaQuery
globals()['MetaAggregations'] = MetaAggregations
globals()['MetaHighlight'] = MetaHighlight
globals()['MetaQuery'] = MetaQuery
class MetaZincQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'source': ([str],), # noqa: E501
'aggs': ({str: (MetaAggregations,)},), # noqa: E501
'explain': (bool,), # noqa: E501
'fields': ([str],), # noqa: E501
'_from': (int,), # noqa: E501
'highlight': (MetaHighlight,), # noqa: E501
'query': (MetaQuery,), # noqa: E501
'size': (int,), # noqa: E501
'sort': ([str],), # noqa: E501
'timeout': (int,), # noqa: E501
'track_total_hits': (bool,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'source': '_source', # noqa: E501
'aggs': 'aggs', # noqa: E501
'explain': 'explain', # noqa: E501
'fields': 'fields', # noqa: E501
'_from': 'from', # noqa: E501
'highlight': 'highlight', # noqa: E501
'query': 'query', # noqa: E501
'size': 'size', # noqa: E501
'sort': 'sort', # noqa: E501
'timeout': 'timeout', # noqa: E501
'track_total_hits': 'track_total_hits', # noqa: E501
}
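    # Illustrative usage sketch, not part of the generated module. attribute_map
    # above is the python-name -> JSON-key mapping, so `source` serializes as
    # "_source" and `_from` as "from" (`from` is a Python reserved word). The
    # exact MetaQuery contents here are an assumption for illustration:
    #
    #   from zincsearch_sdk.model.meta_zinc_query import MetaZincQuery
    #   from zincsearch_sdk.model.meta_query import MetaQuery
    #   q = MetaZincQuery(
    #       query=MetaQuery(),         # build the actual match/term query separately
    #       _from=0,                   # serialized as "from": offset of the first hit
    #       size=20,                   # number of hits to return
    #       sort=["-@timestamp"],      # "-" prefix sorts descending (field name assumed)
    #   )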
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaZincQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
source ([str]): true, false, [\"field1\", \"field2.*\"]. [optional] # noqa: E501
aggs ({str: (MetaAggregations,)}): [optional] # noqa: E501
explain (bool): [optional] # noqa: E501
fields ([str]): [\"field1\", \"field2.*\", {\"field\": \"fieldName\", \"format\": \"epoch_millis\"}]. [optional] # noqa: E501
_from (int): [optional] # noqa: E501
highlight (MetaHighlight): [optional] # noqa: E501
query (MetaQuery): [optional] # noqa: E501
size (int): [optional] # noqa: E501
            sort ([str]): \"_score\", [\"+Year\", \"-Year\", {\"Year\": \"desc\"}, {\"Date\": {\"order\": \"asc\", \"format\": \"yyyy-MM-dd\"}}]. [optional]  # noqa: E501
timeout (int): [optional] # noqa: E501
track_total_hits (bool): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaZincQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
source ([str]): true, false, [\"field1\", \"field2.*\"]. [optional] # noqa: E501
aggs ({str: (MetaAggregations,)}): [optional] # noqa: E501
explain (bool): [optional] # noqa: E501
fields ([str]): [\"field1\", \"field2.*\", {\"field\": \"fieldName\", \"format\": \"epoch_millis\"}]. [optional] # noqa: E501
_from (int): [optional] # noqa: E501
highlight (MetaHighlight): [optional] # noqa: E501
query (MetaQuery): [optional] # noqa: E501
size (int): [optional] # noqa: E501
            sort ([str]): \"_score\", [\"+Year\", \"-Year\", {\"Year\": \"desc\"}, {\"Date\": {\"order\": \"asc\", \"format\": \"yyyy-MM-dd\"}}]. [optional]  # noqa: E501
timeout (int): [optional] # noqa: E501
track_total_hits (bool): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_zinc_query.py | meta_zinc_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaHighlight(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'fields': ({str: (MetaHighlight,)},), # noqa: E501
'fragment_size': (int,), # noqa: E501
'number_of_fragments': (int,), # noqa: E501
'post_tags': ([str],), # noqa: E501
'pre_tags': ([str],), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'fields': 'fields', # noqa: E501
'fragment_size': 'fragment_size', # noqa: E501
'number_of_fragments': 'number_of_fragments', # noqa: E501
'post_tags': 'post_tags', # noqa: E501
'pre_tags': 'pre_tags', # noqa: E501
}
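    # Illustrative usage sketch, not part of the generated module: a highlight
    # request wrapping matched terms in <mark> tags. The field name "title" is
    # an assumption; per openapi_types above, per-field settings are themselves
    # MetaHighlight instances:
    #
    #   from zincsearch_sdk.model.meta_highlight import MetaHighlight
    #   hl = MetaHighlight(
    #       pre_tags=["<mark>"],
    #       post_tags=["</mark>"],
    #       fields={"title": MetaHighlight()},
    #   )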
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaHighlight - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
fields ({str: (MetaHighlight,)}): [optional] # noqa: E501
fragment_size (int): [optional] # noqa: E501
number_of_fragments (int): [optional] # noqa: E501
post_tags ([str]): [optional] # noqa: E501
pre_tags ([str]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaHighlight - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
fields ({str: (MetaHighlight,)}): [optional] # noqa: E501
fragment_size (int): [optional] # noqa: E501
number_of_fragments (int): [optional] # noqa: E501
post_tags ([str]): [optional] # noqa: E501
pre_tags ([str]): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_highlight.py | meta_highlight.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaFuzzyQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'boost': (float,), # noqa: E501
'fuzziness': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'prefix_length': (float,), # noqa: E501
'value': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'boost': 'boost', # noqa: E501
'fuzziness': 'fuzziness', # noqa: E501
'prefix_length': 'prefix_length', # noqa: E501
'value': 'value', # noqa: E501
}
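    # Illustrative usage sketch, not part of the generated module. Per the
    # docstrings below, `fuzziness` accepts "auto" or an integer edit distance:
    #
    #   from zincsearch_sdk.model.meta_fuzzy_query import MetaFuzzyQuery
    #   fq = MetaFuzzyQuery(value="serch", fuzziness="auto", prefix_length=1.0)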
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaFuzzyQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (float): [optional] # noqa: E501
fuzziness (bool, date, datetime, dict, float, int, list, str, none_type): auto, 1,2,3,n. [optional] # noqa: E501
prefix_length (float): [optional] # noqa: E501
value (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaFuzzyQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (float): [optional] # noqa: E501
fuzziness (bool, date, datetime, dict, float, int, list, str, none_type): auto, 1,2,3,n. [optional] # noqa: E501
prefix_length (float): [optional] # noqa: E501
value (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_fuzzy_query.py | meta_fuzzy_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaHTTPResponseDocument(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'id': (str,), # noqa: E501
'index': (str,), # noqa: E501
'message': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'id': 'id', # noqa: E501
'index': 'index', # noqa: E501
'message': 'message', # noqa: E501
}
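    # Illustrative sketch, not part of the generated module: this model is the
    # per-document acknowledgement returned by document write calls, so client
    # code typically just reads its fields:
    #
    #   # given a MetaHTTPResponseDocument `resp` (assumed to come from an
    #   # index/update/delete call):
    #   print(resp.id, resp.index, resp.message)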
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaHTTPResponseDocument - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): [optional] # noqa: E501
index (str): [optional] # noqa: E501
message (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaHTTPResponseDocument - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): [optional] # noqa: E501
index (str): [optional] # noqa: E501
message (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_http_response_document.py | meta_http_response_document.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class AuthLoginUser(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'id': (str,), # noqa: E501
'name': (str,), # noqa: E501
'role': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'id': '_id', # noqa: E501
'name': 'name', # noqa: E501
'role': 'role', # noqa: E501
}
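    # Note: per attribute_map above, the pythonic attribute `id` serializes to
    # the JSON key "_id". Illustrative sketch, not part of the generated module:
    #
    #   from zincsearch_sdk.model.auth_login_user import AuthLoginUser
    #   user = AuthLoginUser(id="admin", name="Administrator", role="admin")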
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""AuthLoginUser - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): [optional] # noqa: E501
name (str): [optional] # noqa: E501
role (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""AuthLoginUser - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
id (str): [optional] # noqa: E501
name (str): [optional] # noqa: E501
role (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/auth_login_user.py | auth_login_user.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_hit import MetaHit
from zincsearch_sdk.model.meta_total import MetaTotal
globals()['MetaHit'] = MetaHit
globals()['MetaTotal'] = MetaTotal
class MetaHits(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'hits': ([MetaHit],), # noqa: E501
'max_score': (float,), # noqa: E501
'total': (MetaTotal,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'hits': 'hits', # noqa: E501
'max_score': 'max_score', # noqa: E501
'total': 'total', # noqa: E501
}
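    # Illustrative sketch, not part of the generated module: per openapi_types
    # above, `hits` is a list of MetaHit and `total` a MetaTotal, so a result
    # loop typically looks like:
    #
    #   # given a MetaHits instance `hits_obj` taken from a search response (assumed):
    #   for hit in hits_obj.hits:
    #       print(hit)
    #   print(hits_obj.total)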
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaHits - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
hits ([MetaHit]): [optional] # noqa: E501
max_score (float): [optional] # noqa: E501
total (MetaTotal): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaHits - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
hits ([MetaHit]): [optional] # noqa: E501
max_score (float): [optional] # noqa: E501
total (MetaTotal): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_hits.py | meta_hits.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaProperty(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
          and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'aggregatable': (bool,), # noqa: E501
'analyzer': (str,), # noqa: E501
'fields': ({str: (MetaProperty,)},), # noqa: E501
'format': (str,), # noqa: E501
'highlightable': (bool,), # noqa: E501
'index': (bool,), # noqa: E501
'search_analyzer': (str,), # noqa: E501
'sortable': (bool,), # noqa: E501
'store': (bool,), # noqa: E501
'time_zone': (str,), # noqa: E501
'type': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'aggregatable': 'aggregatable', # noqa: E501
'analyzer': 'analyzer', # noqa: E501
'fields': 'fields', # noqa: E501
'format': 'format', # noqa: E501
'highlightable': 'highlightable', # noqa: E501
'index': 'index', # noqa: E501
'search_analyzer': 'search_analyzer', # noqa: E501
'sortable': 'sortable', # noqa: E501
'store': 'store', # noqa: E501
'time_zone': 'time_zone', # noqa: E501
'type': 'type', # noqa: E501
}
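    # Illustrative mapping sketch, not part of the generated module: a "text"
    # property with a "keyword" sub-field for sorting and aggregations, per the
    # `fields` docstring below. The property names are assumptions:
    #
    #   from zincsearch_sdk.model.meta_property import MetaProperty
    #   title = MetaProperty(
    #       type="text",
    #       index=True,
    #       fields={"keyword": MetaProperty(type="keyword", sortable=True, aggregatable=True)},
    #   )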
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaProperty - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                composed schema that is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
aggregatable (bool): [optional] # noqa: E501
analyzer (str): [optional] # noqa: E501
fields ({str: (MetaProperty,)}): Fields allow the same string value to be indexed in multiple ways for different purposes, such as one field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers. If the Fields property is defined within a sub-field, it will be ignored. Currently, only \"text\" fields support the Fields parameter.. [optional] # noqa: E501
format (str): date format yyyy-MM-dd HH:mm:ss || yyyy-MM-dd || epoch_millis. [optional] # noqa: E501
highlightable (bool): [optional] # noqa: E501
index (bool): [optional] # noqa: E501
search_analyzer (str): [optional] # noqa: E501
sortable (bool): [optional] # noqa: E501
store (bool): [optional] # noqa: E501
time_zone (str): date format time_zone. [optional] # noqa: E501
type (str): text, keyword, date, numeric, boolean, geo_point. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaProperty - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
aggregatable (bool): [optional] # noqa: E501
analyzer (str): [optional] # noqa: E501
fields ({str: (MetaProperty,)}): Fields allow the same string value to be indexed in multiple ways for different purposes, such as one field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers. If the Fields property is defined within a sub-field, it will be ignored. Currently, only \"text\" fields support the Fields parameter. [optional] # noqa: E501
format (str): date format yyyy-MM-dd HH:mm:ss || yyyy-MM-dd || epoch_millis. [optional] # noqa: E501
highlightable (bool): [optional] # noqa: E501
index (bool): [optional] # noqa: E501
search_analyzer (str): [optional] # noqa: E501
sortable (bool): [optional] # noqa: E501
store (bool): [optional] # noqa: E501
time_zone (str): date format time_zone. [optional] # noqa: E501
type (str): text, keyword, date, numeric, boolean, geo_point. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_property.py | meta_property.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_index_settings import MetaIndexSettings
globals()['MetaIndexSettings'] = MetaIndexSettings
class MetaIndexSimple(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'mappings': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'name': (str,), # noqa: E501
'settings': (MetaIndexSettings,), # noqa: E501
'shard_num': (int,), # noqa: E501
'storage_type': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'mappings': 'mappings', # noqa: E501
'name': 'name', # noqa: E501
'settings': 'settings', # noqa: E501
'shard_num': 'shard_num', # noqa: E501
'storage_type': 'storage_type', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaIndexSimple - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
mappings (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
name (str): [optional] # noqa: E501
settings (MetaIndexSettings): [optional] # noqa: E501
shard_num (int): [optional] # noqa: E501
storage_type (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaIndexSimple - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
mappings (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
name (str): [optional] # noqa: E501
settings (MetaIndexSettings): [optional] # noqa: E501
shard_num (int): [optional] # noqa: E501
storage_type (str): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_index_simple.py | meta_index_simple.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaRangeQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'boost': (float,), # noqa: E501
'format': (str,), # noqa: E501
'gt': (str,), # noqa: E501
'gte': (str,), # noqa: E501
'lt': (str,), # noqa: E501
'lte': (str,), # noqa: E501
'time_zone': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'boost': 'boost', # noqa: E501
'format': 'format', # noqa: E501
'gt': 'gt', # noqa: E501
'gte': 'gte', # noqa: E501
'lt': 'lt', # noqa: E501
'lte': 'lte', # noqa: E501
'time_zone': 'time_zone', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaRangeQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (float): [optional] # noqa: E501
format (str): Date format used to convert date values in the query. [optional] # noqa: E501
gt (str): string or float64. [optional] # noqa: E501
gte (str): string or float64. [optional] # noqa: E501
lt (str): string or float64. [optional] # noqa: E501
lte (str): string or float64. [optional] # noqa: E501
time_zone (str): used to convert date values in the query to UTC. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaRangeQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (float): [optional] # noqa: E501
format (str): Date format used to convert date values in the query. [optional] # noqa: E501
gt (str): string or float64. [optional] # noqa: E501
gte (str): string or float64. [optional] # noqa: E501
lt (str): string or float64. [optional] # noqa: E501
lte (str): string or float64. [optional] # noqa: E501
time_zone (str): used to convert date values in the query to UTC. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_range_query.py | meta_range_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_query import MetaQuery
globals()['MetaQuery'] = MetaQuery
class MetaBoolQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'filter': ([MetaQuery],), # noqa: E501
'minimum_should_match': (float,), # noqa: E501
'must': ([MetaQuery],), # noqa: E501
'must_not': ([MetaQuery],), # noqa: E501
'should': ([MetaQuery],), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'filter': 'filter', # noqa: E501
'minimum_should_match': 'minimum_should_match', # noqa: E501
'must': 'must', # noqa: E501
'must_not': 'must_not', # noqa: E501
'should': 'should', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaBoolQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
filter ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
minimum_should_match (float): applies only to should clauses. [optional] # noqa: E501
must ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
must_not ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
should ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaBoolQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
filter ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
minimum_should_match (float): applies only to should clauses. [optional] # noqa: E501
must ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
must_not ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
should ([MetaQuery]): query, [query1, query2]. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_bool_query.py | meta_bool_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class V1QueryParams(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'boost': (int,), # noqa: E501
'end_time': (str,), # noqa: E501
'field': (str,), # noqa: E501
'start_time': (str,), # noqa: E501
'term': (str,), # noqa: E501
'terms': ([[str]],), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'boost': 'boost', # noqa: E501
'end_time': 'end_time', # noqa: E501
'field': 'field', # noqa: E501
'start_time': 'start_time', # noqa: E501
'term': 'term', # noqa: E501
'terms': 'terms', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""V1QueryParams - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (int): [optional] # noqa: E501
end_time (str): [optional] # noqa: E501
field (str): [optional] # noqa: E501
start_time (str): [optional] # noqa: E501
term (str): [optional] # noqa: E501
terms ([[str]]): For multi phrase query. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""V1QueryParams - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
boost (int): [optional] # noqa: E501
end_time (str): [optional] # noqa: E501
field (str): [optional] # noqa: E501
start_time (str): [optional] # noqa: E501
term (str): [optional] # noqa: E501
terms ([[str]]): For multi phrase query. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/v1_query_params.py | v1_query_params.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_http_retries_response import MetaHttpRetriesResponse
globals()['MetaHttpRetriesResponse'] = MetaHttpRetriesResponse
class MetaHTTPResponseDeleteByQuery(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'batches': (int,), # noqa: E501
'deleted': (int,), # noqa: E501
'failures': ([str],), # noqa: E501
'noops': (int,), # noqa: E501
'requests_per_second': (int,), # noqa: E501
'retries': (MetaHttpRetriesResponse,), # noqa: E501
'throttled_millis': (int,), # noqa: E501
'throttled_until_millis': (int,), # noqa: E501
'time_out': (bool,), # noqa: E501
'took': (int,), # noqa: E501
'total': (int,), # noqa: E501
'version_conflicts': (int,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'batches': 'batches', # noqa: E501
'deleted': 'deleted', # noqa: E501
'failures': 'failures', # noqa: E501
'noops': 'noops', # noqa: E501
'requests_per_second': 'requests_per_second', # noqa: E501
'retries': 'retries', # noqa: E501
'throttled_millis': 'throttled_millis', # noqa: E501
'throttled_until_millis': 'throttled_until_millis', # noqa: E501
'time_out': 'time_out', # noqa: E501
'took': 'took', # noqa: E501
'total': 'total', # noqa: E501
'version_conflicts': 'version_conflicts', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaHTTPResponseDeleteByQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
batches (int): [optional] # noqa: E501
deleted (int): [optional] # noqa: E501
failures ([str]): [optional] # noqa: E501
noops (int): [optional] # noqa: E501
requests_per_second (int): [optional] # noqa: E501
retries (MetaHttpRetriesResponse): [optional] # noqa: E501
throttled_millis (int): [optional] # noqa: E501
throttled_until_millis (int): [optional] # noqa: E501
time_out (bool): [optional] # noqa: E501
took (int): [optional] # noqa: E501
total (int): [optional] # noqa: E501
version_conflicts (int): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaHTTPResponseDeleteByQuery - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
batches (int): [optional] # noqa: E501
deleted (int): [optional] # noqa: E501
failures ([str]): [optional] # noqa: E501
noops (int): [optional] # noqa: E501
requests_per_second (int): [optional] # noqa: E501
retries (MetaHttpRetriesResponse): [optional] # noqa: E501
throttled_millis (int): [optional] # noqa: E501
throttled_until_millis (int): [optional] # noqa: E501
time_out (bool): [optional] # noqa: E501
took (int): [optional] # noqa: E501
total (int): [optional] # noqa: E501
version_conflicts (int): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_http_response_delete_by_query.py | meta_http_response_delete_by_query.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
class MetaAggregationResponse(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self; it must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'buckets': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'interval': (str,), # noqa: E501
'value': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'buckets': 'buckets', # noqa: E501
'interval': 'interval', # noqa: E501
'value': 'value', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaAggregationResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
buckets (bool, date, datetime, dict, float, int, list, str, none_type): slice or map. [optional] # noqa: E501
interval (str): support for auto_date_histogram_aggregation. [optional] # noqa: E501
value (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaAggregationResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
            buckets (bool, date, datetime, dict, float, int, list, str, none_type): a list or a dict (a slice or map in the upstream Go model). [optional]  # noqa: E501
            interval (str): interval, present to support the auto_date_histogram aggregation. [optional]  # noqa: E501
value (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_aggregation_response.py | meta_aggregation_response.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_aggregation_response import MetaAggregationResponse
from zincsearch_sdk.model.meta_hits import MetaHits
from zincsearch_sdk.model.meta_shards import MetaShards
globals()['MetaAggregationResponse'] = MetaAggregationResponse
globals()['MetaHits'] = MetaHits
globals()['MetaShards'] = MetaShards
class MetaSearchResponse(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'shards': (MetaShards,), # noqa: E501
'aggregations': ({str: (MetaAggregationResponse,)},), # noqa: E501
'error': (str,), # noqa: E501
'hits': (MetaHits,), # noqa: E501
'timed_out': (bool,), # noqa: E501
'took': (int,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'shards': '_shards', # noqa: E501
'aggregations': 'aggregations', # noqa: E501
'error': 'error', # noqa: E501
'hits': 'hits', # noqa: E501
'timed_out': 'timed_out', # noqa: E501
'took': 'took', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaSearchResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
shards (MetaShards): [optional] # noqa: E501
aggregations ({str: (MetaAggregationResponse,)}): [optional] # noqa: E501
error (str): [optional] # noqa: E501
hits (MetaHits): [optional] # noqa: E501
timed_out (bool): [optional] # noqa: E501
took (int): Time it took to generate the response. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaSearchResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
shards (MetaShards): [optional] # noqa: E501
aggregations ({str: (MetaAggregationResponse,)}): [optional] # noqa: E501
error (str): [optional] # noqa: E501
hits (MetaHits): [optional] # noqa: E501
timed_out (bool): [optional] # noqa: E501
took (int): Time it took to generate the response. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_search_response.py | meta_search_response.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.meta_analyzer import MetaAnalyzer
globals()['MetaAnalyzer'] = MetaAnalyzer
class MetaIndexAnalysis(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'analyzer': ({str: (MetaAnalyzer,)},), # noqa: E501
'char_filter': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'filter': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'token_filter': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
'tokenizer': (bool, date, datetime, dict, float, int, list, str, none_type,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'analyzer': 'analyzer', # noqa: E501
'char_filter': 'char_filter', # noqa: E501
'filter': 'filter', # noqa: E501
'token_filter': 'token_filter', # noqa: E501
'tokenizer': 'tokenizer', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaIndexAnalysis - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
analyzer ({str: (MetaAnalyzer,)}): [optional] # noqa: E501
char_filter (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
            filter (bool, date, datetime, dict, float, int, list, str, none_type): for compatibility with Elasticsearch; alias for token_filter. [optional]  # noqa: E501
token_filter (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
tokenizer (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaIndexAnalysis - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
analyzer ({str: (MetaAnalyzer,)}): [optional] # noqa: E501
char_filter (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
            filter (bool, date, datetime, dict, float, int, list, str, none_type): for compatibility with Elasticsearch; alias for token_filter. [optional]  # noqa: E501
token_filter (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
tokenizer (bool, date, datetime, dict, float, int, list, str, none_type): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_index_analysis.py | meta_index_analysis.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.auth_login_user import AuthLoginUser
globals()['AuthLoginUser'] = AuthLoginUser
class AuthLoginResponse(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'user': (AuthLoginUser,), # noqa: E501
'validated': (bool,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'user': 'user', # noqa: E501
'validated': 'validated', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""AuthLoginResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
user (AuthLoginUser): [optional] # noqa: E501
validated (bool): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""AuthLoginResponse - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
user (AuthLoginUser): [optional] # noqa: E501
validated (bool): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/auth_login_response.py | auth_login_response.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.aggregation_histogram_bound import AggregationHistogramBound
globals()['AggregationHistogramBound'] = AggregationHistogramBound
class MetaAggregationDateHistogram(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'calendar_interval': (str,), # noqa: E501
'extended_bounds': (AggregationHistogramBound,), # noqa: E501
'field': (str,), # noqa: E501
'fixed_interval': (str,), # noqa: E501
'format': (str,), # noqa: E501
'hard_bounds': (AggregationHistogramBound,), # noqa: E501
'interval': (str,), # noqa: E501
'keyed': (bool,), # noqa: E501
'min_doc_count': (int,), # noqa: E501
'size': (int,), # noqa: E501
'time_zone': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'calendar_interval': 'calendar_interval', # noqa: E501
'extended_bounds': 'extended_bounds', # noqa: E501
'field': 'field', # noqa: E501
'fixed_interval': 'fixed_interval', # noqa: E501
'format': 'format', # noqa: E501
'hard_bounds': 'hard_bounds', # noqa: E501
'interval': 'interval', # noqa: E501
'keyed': 'keyed', # noqa: E501
'min_doc_count': 'min_doc_count', # noqa: E501
'size': 'size', # noqa: E501
'time_zone': 'time_zone', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""MetaAggregationDateHistogram - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
            calendar_interval (str): calendar interval, one of minute, hour, day, week, month, quarter, year. [optional]  # noqa: E501
            extended_bounds (AggregationHistogramBound): [optional]  # noqa: E501
            field (str): [optional]  # noqa: E501
            fixed_interval (str): fixed interval with unit: ms, s, m, h, d. [optional]  # noqa: E501
            format (str): format applied to key_as_string. [optional]  # noqa: E501
            hard_bounds (AggregationHistogramBound): [optional]  # noqa: E501
            interval (str): interval with unit: ms, s, m, h, d. [optional]  # noqa: E501
keyed (bool): [optional] # noqa: E501
min_doc_count (int): [optional] # noqa: E501
size (int): [optional] # noqa: E501
time_zone (str): time_zone. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""MetaAggregationDateHistogram - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
            calendar_interval (str): calendar interval, one of minute, hour, day, week, month, quarter, year. [optional]  # noqa: E501
            extended_bounds (AggregationHistogramBound): [optional]  # noqa: E501
            field (str): [optional]  # noqa: E501
            fixed_interval (str): fixed interval with unit: ms, s, m, h, d. [optional]  # noqa: E501
            format (str): format applied to key_as_string. [optional]  # noqa: E501
            hard_bounds (AggregationHistogramBound): [optional]  # noqa: E501
            interval (str): interval with unit: ms, s, m, h, d. [optional]  # noqa: E501
keyed (bool): [optional] # noqa: E501
min_doc_count (int): [optional] # noqa: E501
size (int): [optional] # noqa: E501
time_zone (str): time_zone. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/meta_aggregation_date_histogram.py | meta_aggregation_date_histogram.py |
import re # noqa: F401
import sys # noqa: F401
from zincsearch_sdk.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from zincsearch_sdk.exceptions import ApiAttributeError
def lazy_import():
from zincsearch_sdk.model.v1_aggregation_date_range import V1AggregationDateRange
from zincsearch_sdk.model.v1_aggregation_number_range import V1AggregationNumberRange
globals()['V1AggregationDateRange'] = V1AggregationDateRange
globals()['V1AggregationNumberRange'] = V1AggregationNumberRange
class V1AggregationParams(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
            and for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
        of type self; it must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'agg_type': (str,), # noqa: E501
'aggs': ({str: (V1AggregationParams,)},), # noqa: E501
'date_ranges': ([V1AggregationDateRange],), # noqa: E501
'field': (str,), # noqa: E501
'ranges': ([V1AggregationNumberRange],), # noqa: E501
'size': (int,), # noqa: E501
'weight_field': (str,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'agg_type': 'agg_type', # noqa: E501
'aggs': 'aggs', # noqa: E501
'date_ranges': 'date_ranges', # noqa: E501
'field': 'field', # noqa: E501
'ranges': 'ranges', # noqa: E501
'size': 'size', # noqa: E501
'weight_field': 'weight_field', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""V1AggregationParams - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
agg_type (str): [optional] # noqa: E501
aggs ({str: (V1AggregationParams,)}): [optional] # noqa: E501
date_ranges ([V1AggregationDateRange]): [optional] # noqa: E501
field (str): [optional] # noqa: E501
ranges ([V1AggregationNumberRange]): [optional] # noqa: E501
size (int): [optional] # noqa: E501
            weight_field (str): Field whose values weight the primary field in a weighted-average aggregation. [optional]  # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""V1AggregationParams - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
                                  composed schema that is traveled through
                                  is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
agg_type (str): [optional] # noqa: E501
aggs ({str: (V1AggregationParams,)}): [optional] # noqa: E501
date_ranges ([V1AggregationDateRange]): [optional] # noqa: E501
field (str): [optional] # noqa: E501
ranges ([V1AggregationNumberRange]): [optional] # noqa: E501
size (int): [optional] # noqa: E501
            weight_field (str): Field whose values weight the primary field in a weighted-average aggregation. [optional]  # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.") | zincsearch-sdk | /zincsearch-sdk-0.3.3.tar.gz/zincsearch-sdk-0.3.3/zincsearch_sdk/model/v1_aggregation_params.py | v1_aggregation_params.py |