Public API reference
====================
Checks
~~~~~~
.. autofunction:: schemathesis.check
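A minimal sketch of registering a custom check with this decorator; the check name, the one-second threshold, and the use of ``response.elapsed`` (available with the ``requests`` transport) are illustrative assumptions:

.. code-block:: python

    import schemathesis


    @schemathesis.check
    def not_too_slow(response, case) -> None:
        # Fail if the API takes longer than one second to respond (illustrative threshold)
        assert response.elapsed.total_seconds() < 1, "Response is too slow"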
Fixups
~~~~~~
**Available fixups**:
- fast_api
- utf8_bom
.. autofunction:: schemathesis.fixups.install
.. autofunction:: schemathesis.fixups.uninstall
Authentication
~~~~~~~~~~~~~~
.. automodule:: schemathesis.auths
.. autofunction:: schemathesis.auth
.. autoclass:: schemathesis.auths.AuthProvider
:members:
.. autoclass:: schemathesis.auths.AuthContext
:members:
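Below is a minimal sketch of a custom provider registered via the ``schemathesis.auth`` decorator. The hard-coded token value and the ``Authorization`` header scheme are assumptions for illustration; see the class references above for the exact method signatures:

.. code-block:: python

    import schemathesis


    @schemathesis.auth()
    class TokenAuth:
        def get(self, context):
            # Obtain authentication data, e.g. by requesting a token from your auth endpoint
            return "sample-token"

        def set(self, case, data, context):
            # Attach the data returned by `get` to the generated test case
            case.headers = case.headers or {}
            case.headers["Authorization"] = f"Bearer {data}"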
Hooks
~~~~~
.. autoclass:: schemathesis.hooks.HookContext
:members:
These functions affect Schemathesis behavior globally:
.. autofunction:: schemathesis.hook
.. autofunction:: schemathesis.hooks.unregister
.. autofunction:: schemathesis.hooks.unregister_all
.. class:: schemathesis.schemas.BaseSchema
:noindex:
All functions above can be accessed via ``schema.hooks.<function-name>`` on a schema instance. Such calls will affect
only tests generated from that schema instance. Additionally, you can use the following:
.. method:: schema.hooks.apply

   Register a hook to run only for one test function.

   :param hook: A hook function.
   :param Optional[str] name: A hook name.
.. code-block:: python

    def before_generate_query(context, strategy):
        ...


    @schema.hooks.apply(before_generate_query)
    @schema.parametrize()
    def test_api(case):
        ...
Serializers
~~~~~~~~~~~
.. autoclass:: schemathesis.serializers.SerializerContext
:members:
.. autofunction:: schemathesis.serializer
.. autofunction:: schemathesis.serializers.unregister
Targeted testing
~~~~~~~~~~~~~~~~
.. autoclass:: schemathesis.targets.TargetContext
:members:
.. autofunction:: schemathesis.target
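A minimal sketch of a custom target, assuming the decorator is applied to a function that receives a ``TargetContext`` and returns a float metric to maximize:

.. code-block:: python

    import schemathesis


    @schemathesis.target
    def slow_responses(context) -> float:
        # Steer data generation towards inputs that produce slower responses
        return context.response_time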
Custom strategies for Open API "format" keyword
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: schemathesis.openapi.format
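A minimal sketch of registering a Hypothesis strategy for a custom string format (the ``digits`` format name and the strategy itself are illustrative assumptions):

.. code-block:: python

    from hypothesis import strategies as st

    import schemathesis

    # Values for strings declared with `format: digits` will be drawn from this strategy
    schemathesis.openapi.format("digits", st.text(alphabet="0123456789", min_size=1))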
Custom scalars for GraphQL
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: schemathesis.graphql.scalar
Loaders
~~~~~~~
.. autofunction:: schemathesis.from_aiohttp
.. autofunction:: schemathesis.from_asgi
.. autofunction:: schemathesis.from_dict
.. autofunction:: schemathesis.from_file
.. autofunction:: schemathesis.from_path
.. autofunction:: schemathesis.from_pytest_fixture
.. autofunction:: schemathesis.from_uri
.. autofunction:: schemathesis.from_wsgi
.. autofunction:: schemathesis.graphql.from_dict
.. autofunction:: schemathesis.graphql.from_url
.. autofunction:: schemathesis.graphql.from_wsgi
Schema
~~~~~~
.. autoclass:: schemathesis.schemas.BaseSchema()
.. automethod:: parametrize
.. automethod:: given
.. automethod:: as_state_machine
.. autoclass:: schemathesis.models.APIOperation()
:members:
.. automethod:: validate_response
.. automethod:: is_response_valid
.. automethod:: make_case
.. automethod:: as_strategy
Open API-specific API
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: schemathesis.specs.openapi.schemas.BaseOpenAPISchema()
:noindex:
.. automethod:: add_link
:noindex:

[Source: schemathesis-3.19.2/docs/api.rst]

Schemathesis as a Service
=========================
`Schemathesis.io <https://app.schemathesis.io/auth/sign-up/?utm_source=oss_docs&utm_content=saas_docs_top>`_ is a platform that runs property-based API tests and visualises their outcomes for you. It may also store
your CLI test results and run additional analysis on them.
On top of the usual Schemathesis benefits, the platform gives you:
- Handy visual navigation through test results
- Additional static analysis of your API schema & app responses
- Improved data generation that finds more bugs
- Many more additional checks for Open API & GraphQL issues
- Visual API schema coverage (**COMING SOON**)
- Tailored tips on API schema improvement (**COMING SOON**)
- Support for gRPC, AsyncAPI, and SOAP (**COMING SOON**)
Tutorial
--------
This step-by-step tutorial walks you through the flow of setting up your Schemathesis.io account to test your Open API schema.
As part of this tutorial, you will:
- Add your Open API schema to Schemathesis.io
- Execute property-based tests against your application
- See what request parameters cause issues
.. note::
We provide a sample Flask application with a pre-defined set of problems to demonstrate some of the possible issues
Schemathesis.io can find automatically. You can find the `source code <https://github.com/schemathesis/schemathesis/tree/master/test/apps/openapi/_flask>`_ in the Schemathesis repository.
Alternatively, you can follow this guide as a reference and run tests against your own Open API or GraphQL-based application.
Prerequisites
~~~~~~~~~~~~~
- A Schemathesis.io `account <https://app.schemathesis.io/auth/sign-up/?utm_source=oss_docs&utm_content=saas_docs_prerequisites>`_
Step 1: Add the API schema
~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Open `Schemathesis.io dashboard <https://app.schemathesis.io/apis/>`_
2. Click on the **Add API** button to get to the API schema submission form
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_no_apis_yet.png
3. Enter your API name, so you can easily identify it later (for example, "Example API")
4. Fill **https://example.schemathesis.io/openapi.json** into the "API Schema" field
5. **Optional**. If your API requires authentication, choose the appropriate authentication type (HTTP Basic & Header are available at the moment) and fill in its details
6. **Optional**. If your API is available on a different domain than your API schema, fill the proper base URL into the "Base URL" field
7. Save the API schema entry by clicking "Add"
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_api_form.png
.. warning::
Don't ever run tests against your production deployments!
Step 2: Run API tests
~~~~~~~~~~~~~~~~~~~~~
At this point, you can start testing your API! The simplest option is to use our test runners on the "Cloud" tab.
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_api_created.png
**Optional**. If you'd like to run tests on your side and upload the results to Schemathesis.io, feel free to use one of the provided code samples:
Generate an access token and authenticate with Schemathesis.io first:
.. code:: text
# Replace `LOmOZoBh3V12aP3rRkvqYYKGGGV6Ag` with your token
st auth login LOmOZoBh3V12aP3rRkvqYYKGGGV6Ag
And then run the tests:
.. code::
st run demo-1 --checks all --report
.. note::
Replace ``demo-1`` with the appropriate API ID shown in the SaaS code sample
Once all events are uploaded to Schemathesis.io, you'll see a message at the end of the CLI output:
.. code:: text
Upload: COMPLETED
Your test report is successfully uploaded! Please, follow this link for details:
https://app.schemathesis.io/r/mF9ke/
To observe the test run results, follow the link from the output.
Step 3: Observe the results
~~~~~~~~~~~~~~~~~~~~~~~~~~~
As the tests are running, you will see failures appear in the UI:
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_run_results.png
Each entry in the **Failures** list is clickable, so you can check its details. The failure below shows that the application
response does not conform to its API schema and indicates which part of the schema was violated.
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_non_conforming_response.png
In this case, the schema requires the "success" property to be present but it is absent in the response.
Each failure is accompanied by a cURL snippet you can use to reproduce the issue.
.. image:: https://raw.githubusercontent.com/schemathesis/schemathesis/master/img/service_server_error.png
Alternatively, you can use the **Replay** button on the failure page.
What data is sent?
------------------
The CLI sends data to Schemathesis.io in the following cases:
- Authentication. Metadata about your host machine that helps us understand our users better. We collect your Python interpreter version, implementation, system/OS name, and release. For more information, look at ``service/metadata.py``
- Test runs. Most of the Schemathesis runner's events, including all generated data and explicitly passed headers. For more information, look at ``service/serialization.py``
- Some environment variables specific to CI providers. We use them to comment on pull requests.
- Command-line options without free-form values. This helps us understand how you use the CLI.

[Source: schemathesis-3.19.2/docs/service.rst]

Introduction
============
Schemathesis is a tool for testing your web applications built with Open API or GraphQL specifications.
Installation
------------
We recommend using the latest version of Python. Schemathesis supports Python 3.7 and newer.
Install the most recent Schemathesis version using ``pip``:
.. code-block:: text
$ pip install schemathesis
Supported API specs
-------------------
We support the following API specifications:
- Swagger 2.0. Python tests + CLI
- Open API 3.0.x. Python tests + CLI
- GraphQL June 2018. Python tests + CLI

[Source: schemathesis-3.19.2/docs/introduction.rst]

Data generation
===============
This section describes how Schemathesis generates test examples and their serialization process.
Schemathesis converts Open API schemas to compatible JSON Schemas and passes them to ``hypothesis-jsonschema``, which generates data for those schemas.
.. important::
If the API schema is complex or deeply nested, data generation may be slow or produce data without much variance.
This is a known behavior caused by the way Hypothesis works internally.
There are many tradeoffs in this process, and Hypothesis tries to provide reasonable defaults for typical cases
without being too slow for pathological ones.
Negative testing
----------------
By default, Schemathesis generates data that matches the input schema. Alternatively, it can generate the opposite: examples that do not match the input schema.
CLI:
.. code:: text
$ st run -D negative https://example.schemathesis.io/openapi.json
Python:
.. code:: python
import schemathesis
from schemathesis import DataGenerationMethod
schema = schemathesis.from_uri(
"https://example.schemathesis.io/openapi.json",
data_generation_methods=[DataGenerationMethod.negative],
)
@schema.parametrize()
def test_api(case):
case.call_and_validate()
.. note:: At this moment, negative testing is significantly slower than positive testing.
Payload serialization
---------------------
When your API accepts a payload, requests should specify its media type in the ``Content-Type`` header.
In Open API 3.0, you may write something like this:
.. code-block::
    :emphasize-lines: 7

    openapi: 3.0.0
    paths:
      /pet:
        post:
          requestBody:
            content:
              application/json:
                schema:
                  type: object
            required: true
In this example, the ``POST /pet`` operation expects an ``application/json`` payload. For each defined media type, Schemathesis
generates data according to the relevant schema (``{"type": "object"}`` in the example).
.. note:: This data is stored in the ``case`` fixture you use in tests when you use our ``pytest`` integration.
Before sending, this data should be serialized to the format expected by the tested operation. Schemathesis supports
most common media types like ``application/json`` and ``text/plain`` out of the box and allows you to add support for other
media types via the ``serializers`` mechanism.
Schemathesis uses ``requests`` to send API requests over the network and ``werkzeug.Client`` for direct WSGI integration.
Serializers define the process of transforming generated Python objects into structures that can be sent by these tools.
If Schemathesis is unable to serialize data for a media type, the generated samples will be rejected.
If an API operation does not define media types that Schemathesis can serialize, you will see an ``Unsatisfiable`` error.
If the operation under test considers the payload optional, such cases are still generated by Schemathesis but
not passed to serializers.
CSV data example
~~~~~~~~~~~~~~~~
In this example, we will define an operation that expects CSV data and set up a serializer for it.
Even though Open API does not define a standard way to describe the structure of a CSV payload, we can use the ``array``
type to describe it:
.. code-block::
    :emphasize-lines: 8-21

    paths:
      /csv:
        post:
          requestBody:
            content:
              text/csv:
                schema:
                  items:
                    additionalProperties: false
                    properties:
                      first_name:
                        pattern: \A[A-Za-z]*\Z
                        type: string
                      last_name:
                        pattern: \A[A-Za-z]*\Z
                        type: string
                    required:
                    - first_name
                    - last_name
                    type: object
                  type: array
            required: true
          responses:
            '200':
              description: OK
This schema describes a CSV structure with two string fields - ``first_name`` and ``last_name``. Schemathesis will
generate lists of Python dictionaries that can be serialized by ``csv.DictWriter``.
You are free to write a schema of any complexity, but be aware that Schemathesis may generate uncommon data
that your serializer will need to handle. In this example, we restrict strings to ASCII letters
to keep the serializer simple and avoid dealing with Unicode symbols.
First, let's define a function that will transform lists of dictionaries to CSV strings:
.. code-block:: python

    import csv
    from io import StringIO


    def to_csv(data):
        if not data:
            # Empty CSV file
            return ""
        output = StringIO()
        # Assume all items have the same fields
        field_names = sorted(data[0].keys())
        writer = csv.DictWriter(output, field_names)
        writer.writeheader()
        writer.writerows(data)
        return output.getvalue()
.. note::
You can take a look at the official `csv module documentation <https://docs.python.org/3/library/csv.html>`_ for more examples of CSV serialization.
Second, register a serializer class via the ``schemathesis.serializer`` decorator:
.. code-block:: python
    :emphasize-lines: 4

    import schemathesis


    @schemathesis.serializer("text/csv")
    class CSVSerializer:
        ...
This decorator requires the name of the media type you need to handle and optionally accepts additional media types via its ``aliases`` keyword argument.
Third, the serializer should have two methods - ``as_requests`` and ``as_werkzeug``.
.. code-block:: python

    ...


    class CSVSerializer:
        def as_requests(self, context, value):
            if isinstance(value, bytes):
                return {"data": value}
            return {"data": to_csv(value)}

        def as_werkzeug(self, context, value):
            if isinstance(value, bytes):
                return {"data": value}
            return {"data": to_csv(value)}
They should return dictionaries of keyword arguments that will be passed to ``requests.request`` and ``werkzeug.Client.open``, respectively.
In the CSV example, we create the payload with the ``to_csv`` function defined earlier and return it as ``data``, which works for both cases.
Note that both methods explicitly handle binary data; for non-binary media types, this can happen if the API schema provides examples via the ``externalValue`` keyword.
In such cases, the loaded example is passed directly as binary data.
Additionally, you have ``context`` where you can access the current test case via ``context.case``.
.. important::
Please note that ``value`` will match your schema only in positive testing scenarios, and it is your responsibility
to handle errors during data serialization.

[Source: schemathesis-3.19.2/docs/how.rst]

Additional features
===================
Schemathesis ships with a set of optional features that can help you tune your tests.
Unique data generation
~~~~~~~~~~~~~~~~~~~~~~
By default, Schemathesis may generate duplicate test cases, as all data is randomized. If this behavior does not match your expectations, or
your test budget, you can force Schemathesis to generate unique test cases.
In CLI:
.. code:: text
$ st run --contrib-unique-data https://example.schemathesis.io/openapi.json
In Python tests:
.. code:: python
from schemathesis import contrib
# This is a global hook that will affect all the tests
contrib.unique_data.install()
Uniqueness is determined by the following parts of the generated data:
- ``media_type``
- ``path_parameters``
- ``headers``
- ``cookies``
- ``query``
- ``body``
UUID data for ``format: uuid`` in Open API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Neither Open API 2.0 nor 3.0 declares the ``uuid`` format as built-in, hence it is available as an extension:
.. code:: python
from schemathesis.contrib.openapi import formats
formats.uuid.install()
You could also enable it via the ``--contrib-openapi-formats-uuid`` CLI option.

[Source: schemathesis-3.19.2/docs/contrib.rst]

import os
import platform
from functools import lru_cache
from typing import Any, Callable, Dict, Type
import click
import pytest
import requests
import urllib3
import yaml
import schemathesis
from schemathesis import Case
from schemathesis.schemas import BaseSchema
from schemathesis.utils import StringDatesYAMLLoader, merge
HERE = os.path.dirname(os.path.abspath(__file__))
def get_schema_path(schema_name: str) -> str:
return os.path.join(HERE, "data", schema_name)
SIMPLE_PATH = get_schema_path("simple_swagger.yaml")
def get_schema(schema_name: str = "simple_swagger.yaml", **kwargs: Any) -> BaseSchema:
schema = make_schema(schema_name, **kwargs)
return schemathesis.from_dict(schema)
def make_schema(schema_name: str = "simple_swagger.yaml", **kwargs: Any) -> Dict[str, Any]:
schema = load_schema(schema_name)
return merge(kwargs, schema)
@lru_cache()
def load_schema(schema_name: str) -> Dict[str, Any]:
path = get_schema_path(schema_name)
with open(path) as fd:
return yaml.load(fd, StringDatesYAMLLoader)
def integer(**kwargs: Any) -> Dict[str, Any]:
return {"type": "integer", "in": "query", **kwargs}
def as_param(*parameters: Any) -> Dict[str, Any]:
return {"paths": {"/users": {"get": {"parameters": list(parameters), "responses": {"200": {"description": "OK"}}}}}}
def noop(value: Any) -> bool:
return True
def _assert_value(value: Any, type: Type, predicate: Callable = noop) -> None:
assert isinstance(value, type)
assert predicate(value)
def assert_int(value: Any, predicate: Callable = noop) -> None:
_assert_value(value, int, predicate)
def assert_str(value: Any, predicate: Callable = noop) -> None:
_assert_value(value, str, predicate)
def assert_list(value: Any, predicate: Callable = noop) -> None:
_assert_value(value, list, predicate)
def assert_requests_call(case: Case):
"""Verify that all generated input parameters are usable by requests."""
with pytest.raises((requests.exceptions.ConnectionError, urllib3.exceptions.NewConnectionError)):
case.call(base_url="http://127.0.0.1:1")
def strip_style_win32(styled_output: str) -> str:
"""Strip text style on Windows.
`click.style` produces ANSI sequences, however they were not supported
by PowerShell until recently and colored output is created differently.
"""
if platform.system() == "Windows":
return click.unstyle(styled_output)
return styled_output

[Source: schemathesis-3.19.2/test/utils.py]

import logging
from typing import List
import click
from aiohttp import web
from schemathesis.cli import CsvEnumChoice
try:
from . import _graphql, openapi
except ImportError as exc:
# try/except for cases when there is a different ImportError in the block before, that
# doesn't imply another running environment (test_server.sh vs usual pytest run)
# Ref: https://github.com/schemathesis/schemathesis/issues/658
try:
import _graphql
import openapi
except ImportError:
raise exc
INVALID_OPERATIONS = ("invalid", "invalid_response", "invalid_path_parameter", "missing_path_parameter")
AvailableOperations = CsvEnumChoice(openapi.schema.Operation)
@click.command()
@click.argument("port", type=int)
@click.option("--operations", type=AvailableOperations)
@click.option("--spec", type=click.Choice(["openapi2", "openapi3", "graphql"]), default="openapi2")
@click.option("--framework", type=click.Choice(["aiohttp", "flask"]), default="aiohttp")
def run_app(port: int, operations: List[openapi.schema.Operation], spec: str, framework: str) -> None:
if spec == "graphql":
app = _graphql._flask.create_app()
app.run(port=port)
else:
if operations is not None:
prepared_operations = tuple(operation.name for operation in operations)
if "all" in prepared_operations:
prepared_operations = tuple(
operation.name for operation in openapi.schema.Operation if operation.name != "all"
)
else:
prepared_operations = tuple(
operation.name
for operation in openapi.schema.Operation
if operation.name not in INVALID_OPERATIONS and operation.name != "all"
)
version = {"openapi2": openapi.schema.OpenAPIVersion("2.0"), "openapi3": openapi.schema.OpenAPIVersion("3.0")}[
spec
]
click.secho(
f"Schemathesis test server is running!\n\n"
f"API Schema is available at: http://0.0.0.0:{port}/schema.yaml\n",
bold=True,
)
if framework == "aiohttp":
app = openapi._aiohttp.create_app(prepared_operations, version)
web.run_app(app, port=port)
elif framework == "flask":
app = openapi._flask.create_app(prepared_operations, version)
app.run(port=port)
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG)
run_app()

[Source: schemathesis-3.19.2/test/apps/__init__.py]

from enum import Enum
from typing import Any, Dict, Tuple
import jsonschema
class Operation(Enum):
success = ("GET", "/api/success")
failure = ("GET", "/api/failure")
payload = ("POST", "/api/payload")
# Not compliant, but used by some tools like Elasticsearch
get_payload = ("GET", "/api/get_payload")
basic = ("GET", "/api/basic")
empty = ("GET", "/api/empty")
empty_string = ("GET", "/api/empty_string")
multiple_failures = ("GET", "/api/multiple_failures")
slow = ("GET", "/api/slow")
path_variable = ("GET", "/api/path_variable/{key}")
unsatisfiable = ("POST", "/api/unsatisfiable")
performance = ("POST", "/api/performance")
invalid = ("POST", "/api/invalid")
flaky = ("GET", "/api/flaky")
recursive = ("GET", "/api/recursive")
multipart = ("POST", "/api/multipart")
upload_file = ("POST", "/api/upload_file")
form = ("POST", "/api/form")
teapot = ("POST", "/api/teapot")
text = ("GET", "/api/text")
cp866 = ("GET", "/api/cp866")
conformance = ("GET", "/api/conformance")
plain_text_body = ("POST", "/api/text")
csv_payload = ("POST", "/api/csv")
malformed_json = ("GET", "/api/malformed_json")
invalid_response = ("GET", "/api/invalid_response")
custom_format = ("GET", "/api/custom_format")
invalid_path_parameter = ("GET", "/api/invalid_path_parameter/{id}")
missing_path_parameter = ("GET", "/api/missing_path_parameter/{id}")
headers = ("GET", "/api/headers")
reserved = ("GET", "/api/foo:bar")
read_only = ("GET", "/api/read_only")
write_only = ("POST", "/api/write_only")
create_user = ("POST", "/api/users/")
get_user = ("GET", "/api/users/{user_id}")
update_user = ("PATCH", "/api/users/{user_id}")
all = object()
class OpenAPIVersion(Enum):
_2 = "2.0"
_3 = "3.0"
def __str__(self):
return f"Open API {self.value}"
@property
def is_openapi_2(self):
return self.value == "2.0"
@property
def is_openapi_3(self):
return self.value == "3.0"
def make_openapi_schema(operations: Tuple[str, ...], version: OpenAPIVersion = OpenAPIVersion("2.0")) -> Dict:
"""Generate an OAS 2/3 schemas with the given API operations.
Example:
-------
If `operations` is ("success", "failure")
then the app will contain GET /success and GET /failure
"""
return {OpenAPIVersion("2.0"): _make_openapi_2_schema, OpenAPIVersion("3.0"): _make_openapi_3_schema}[version](
operations
)
def make_node_definition(reference):
return {
"description": "Recursive!",
"type": "object",
"additionalProperties": False,
"properties": {
"children": {"type": "array", "items": reference},
"value": {"type": "integer", "maximum": 4, "exclusiveMaximum": True},
},
}
PAYLOAD = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer", "minimum": 0, "exclusiveMinimum": True},
"boolean": {"type": "boolean"},
"nested": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0,
"exclusiveMinimum": True,
"maximum": 10,
"exclusiveMaximum": True,
},
},
},
"required": ["name"],
"example": {"name": "John"},
"additionalProperties": False,
}
PAYLOAD_VALIDATOR = jsonschema.validators.Draft4Validator({"anyOf": [{"type": "null"}, PAYLOAD]})
def _make_openapi_2_schema(operations: Tuple[str, ...]) -> Dict:
template: Dict[str, Any] = {
"swagger": "2.0",
"info": {"title": "Example API", "description": "An API to test Schemathesis", "version": "1.0.0"},
"host": "127.0.0.1:8888",
"basePath": "/api",
"schemes": ["http"],
"produces": ["application/json"],
"paths": {},
"securityDefinitions": {
"api_key": {"type": "apiKey", "name": "X-Token", "in": "header"},
"basicAuth": {"type": "basic"},
},
}
def add_link(name, definition):
components = template.setdefault("x-components", {})
links = components.setdefault("x-links", {})
links.setdefault(name, definition)
def add_read_write_only():
template.setdefault("definitions", {})
template["definitions"]["ReadWrite"] = {
"type": "object",
"properties": {
"read": {
"type": "string",
"readOnly": True,
},
"write": {"type": "integer", "x-writeOnly": True},
},
# Open API 2.0 forbids `readOnly` properties in `required`, but we follow the Open API 3 semantics here
"required": ["read", "write"],
"additionalProperties": False,
}
for name in operations:
method, path = Operation[name].value
path = path.replace(template["basePath"], "")
reference = {"$ref": "#/definitions/Node"}
if name == "recursive":
schema = {"responses": {"200": {"description": "OK", "schema": reference}}}
definitions = template.setdefault("definitions", {})
definitions["Node"] = make_node_definition(reference)
elif name in ("payload", "get_payload"):
schema = {
"parameters": [{"name": "body", "in": "body", "required": True, "schema": PAYLOAD}],
"responses": {"200": {"description": "OK", "schema": PAYLOAD}},
}
elif name == "unsatisfiable":
schema = {
"parameters": [
{
"name": "id",
"in": "body",
"required": True,
# Impossible to satisfy
"schema": {"allOf": [{"type": "integer"}, {"type": "string"}]},
}
],
"responses": {"200": {"description": "OK"}},
}
elif name == "performance":
schema = {
"parameters": [{"name": "data", "in": "body", "required": True, "schema": {"type": "integer"}}],
"responses": {"200": {"description": "OK"}},
}
elif name in ("flaky", "multiple_failures"):
schema = {
"parameters": [{"name": "id", "in": "query", "required": True, "type": "integer"}],
"responses": {"200": {"description": "OK"}},
}
elif name == "path_variable":
schema = {
"parameters": [{"name": "key", "in": "path", "required": True, "type": "string", "minLength": 1}],
"responses": {"200": {"description": "OK"}},
}
elif name == "invalid":
schema = {
"parameters": [{"name": "id", "in": "query", "required": True, "type": "int"}],
"responses": {"200": {"description": "OK"}},
}
elif name == "upload_file":
schema = {
"parameters": [
{"name": "note", "in": "formData", "required": True, "type": "string"},
{"name": "data", "in": "formData", "required": True, "type": "file"},
],
"responses": {"200": {"description": "OK"}},
}
elif name == "form":
schema = {
"parameters": [
{"name": "first_name", "in": "formData", "required": True, "type": "string"},
{"name": "last_name", "in": "formData", "required": True, "type": "string"},
],
"consumes": ["application/x-www-form-urlencoded"],
"responses": {"200": {"description": "OK"}},
}
elif name == "custom_format":
schema = {
"parameters": [{"name": "id", "in": "query", "required": True, "type": "string", "format": "digits"}],
"responses": {"200": {"description": "OK"}},
}
elif name == "multipart":
schema = {
"parameters": [
{"in": "formData", "name": "key", "required": True, "type": "string"},
{"in": "formData", "name": "value", "required": True, "type": "integer"},
{"in": "formData", "name": "maybe", "type": "boolean"},
],
"consumes": ["multipart/form-data"],
"responses": {"200": {"description": "OK"}},
}
elif name == "teapot":
schema = {"produces": ["application/json"], "responses": {"200": {"description": "OK"}}}
elif name == "plain_text_body":
schema = {
"parameters": [
{"in": "body", "name": "value", "required": True, "schema": {"type": "string"}},
],
"consumes": ["text/plain"],
"produces": ["text/plain"],
"responses": {"200": {"description": "OK"}},
}
elif name == "cp866":
schema = {
"responses": {"200": {"description": "OK", "schema": {"type": "string"}}},
}
elif name == "invalid_path_parameter":
schema = {
"parameters": [{"name": "id", "in": "path", "required": False, "type": "integer"}],
"responses": {"200": {"description": "OK"}},
}
elif name == "headers":
schema = {
"security": [{"api_key": []}],
"responses": {
"200": {
"description": "OK",
"schema": {"type": "object"},
"headers": {
"X-Custom-Header": {"description": "Custom header", "type": "integer", "x-required": True}
},
},
"default": {"description": "Default response"},
},
}
elif name == "basic":
schema = {
"security": [{"basicAuth": []}],
"responses": {
"200": {
"description": "OK",
"schema": {"type": "object", "properties": {"secret": {"type": "integer"}}},
},
# 401 is not described on purpose to cause a testing error
},
}
elif name == "conformance":
schema = {
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object",
"properties": {"value": {"enum": ["foo"]}},
"required": ["value"],
"additionalProperties": False,
},
},
},
}
elif name == "create_user":
schema = {
"parameters": [
{
"name": "data",
"in": "body",
"required": True,
"schema": {
"type": "object",
"properties": {
"first_name": {"type": "string", "minLength": 3},
"last_name": {"type": "string", "minLength": 3},
},
"required": ["first_name", "last_name"],
"additionalProperties": False,
},
}
],
"responses": {"201": {"$ref": "#/x-components/responses/ResponseWithLinks"}},
}
add_link(
"UpdateUserById",
{
"operationId": "updateUser",
"parameters": {"user_id": "$response.body#/id"},
},
)
template["x-components"]["responses"] = {
"ResponseWithLinks": {
"description": "OK",
"x-links": {
"GetUserByUserId": {
"operationId": "getUser",
"parameters": {
"path.user_id": "$response.body#/id",
"query.user_id": "$response.body#/id",
},
},
"UpdateUserById": {"$ref": "#/x-components/x-links/UpdateUserById"},
},
}
}
elif name == "get_user":
parent = template["paths"].setdefault(path, {})
parent["parameters"] = [{"in": "path", "name": "user_id", "required": True, "type": "string"}]
schema = {
"operationId": "getUser",
"parameters": [
{"in": "query", "name": "code", "required": True, "type": "integer"},
{"in": "query", "name": "user_id", "required": True, "type": "string"},
],
"responses": {
"200": {
"description": "OK",
"x-links": {
"UpdateUserById": {
"operationRef": "#/paths/~1users~1{user_id}/patch",
"parameters": {"user_id": "$response.body#/id"},
"requestBody": {"first_name": "foo", "last_name": "bar"},
}
},
},
"404": {"description": "Not found"},
},
}
elif name == "update_user":
parent = template["paths"].setdefault(path, {})
parent["parameters"] = [
{"in": "path", "name": "user_id", "required": True, "type": "string"},
{"in": "query", "name": "common", "required": True, "type": "integer"},
]
schema = {
"operationId": "updateUser",
"parameters": [
{
"in": "body",
"name": "username",
"required": True,
"schema": {
"type": "object",
"properties": {
"first_name": {"type": "string", "minLength": 3},
# Note, the `last_name` field should not be nullable, it is a placed bug
"last_name": {"type": "string", "minLength": 3, "x-nullable": True},
},
"required": ["first_name", "last_name"],
"additionalProperties": False,
},
},
],
"responses": {"200": {"description": "OK"}, "404": {"description": "Not found"}},
}
elif name == "csv_payload":
schema = {
"parameters": [
{
"in": "body",
"name": "payload",
"required": True,
"schema": {
"type": "array",
"items": {
"additionalProperties": False,
"type": "object",
"properties": {
"first_name": {"type": "string", "pattern": r"\A[A-Za-z]*\Z"},
"last_name": {"type": "string", "pattern": r"\A[A-Za-z]*\Z"},
},
"required": ["first_name", "last_name"],
},
},
},
],
"consumes": ["text/csv"],
"responses": {"200": {"description": "OK"}},
}
elif name == "read_only":
schema = {
"responses": {
"200": {
"description": "OK",
"schema": {"$ref": "#/definitions/ReadWrite"},
}
},
}
add_read_write_only()
elif name == "write_only":
schema = {
"parameters": [
{
"in": "body",
"name": "payload",
"required": True,
"schema": {"$ref": "#/definitions/ReadWrite"},
}
],
"responses": {
"200": {
"description": "OK",
"schema": {"$ref": "#/definitions/ReadWrite"},
}
},
}
add_read_write_only()
else:
schema = {
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object",
"properties": {"success": {"type": "boolean"}},
"required": ["success"],
},
},
"default": {"description": "Default response"},
}
}
template["paths"].setdefault(path, {})
template["paths"][path][method.lower()] = schema
return template
def _make_openapi_3_schema(operations: Tuple[str, ...]) -> Dict:
_base_path = "api"
template: Dict[str, Any] = {
"openapi": "3.0.2",
"info": {"title": "Example API", "description": "An API to test Schemathesis", "version": "1.0.0"},
"paths": {},
"servers": [{"url": "https://127.0.0.1:8888/{basePath}", "variables": {"basePath": {"default": _base_path}}}],
"components": {
"securitySchemes": {
"api_key": {"type": "apiKey", "name": "X-Token", "in": "header"},
"basicAuth": {"type": "http", "scheme": "basic"},
}
},
}
base_path = f"/{_base_path}"
def add_read_write_only():
template["components"]["schemas"] = {
"ReadWrite": {
"type": "object",
"properties": {
"read": {
"type": "string",
"readOnly": True,
},
"write": {
"type": "integer",
"writeOnly": True,
},
},
# If a readOnly or writeOnly property is included in the required list,
# required affects just the relevant scope – responses only or requests only
"required": ["read", "write"],
"additionalProperties": False,
}
}
def add_link(name, definition):
links = template["components"].setdefault("links", {})
links.setdefault(name, definition)
for name in operations:
method, path = Operation[name].value
path = path.replace(base_path, "")
reference = {"$ref": "#/x-definitions/Node"}
if name == "recursive":
schema = {
"responses": {"200": {"description": "OK", "content": {"application/json": {"schema": reference}}}}
}
definitions = template.setdefault("x-definitions", {})
definitions["Node"] = make_node_definition(reference)
elif name in ("payload", "get_payload"):
schema = {
"requestBody": {"content": {"application/json": {"schema": PAYLOAD}}},
"responses": {"200": {"description": "OK", "content": {"application/json": {"schema": PAYLOAD}}}},
}
elif name == "unsatisfiable":
schema = {
"requestBody": {
"content": {"application/json": {"schema": {"allOf": [{"type": "integer"}, {"type": "string"}]}}},
"required": True,
},
"responses": {"200": {"description": "OK"}},
}
elif name == "performance":
schema = {
"requestBody": {"content": {"application/json": {"schema": {"type": "integer"}}}, "required": True},
"responses": {"200": {"description": "OK"}},
}
elif name == "plain_text_body":
schema = {
"requestBody": {"content": {"text/plain": {"schema": {"type": "string"}}}, "required": True},
"responses": {"200": {"description": "OK"}},
}
elif name == "cp866":
schema = {
"responses": {
"200": {"description": "OK", "content": {"application/json": {"schema": {"type": "string"}}}}
},
}
elif name in ("flaky", "multiple_failures"):
schema = {
"parameters": [{"name": "id", "in": "query", "required": True, "schema": {"type": "integer"}}],
"responses": {"200": {"description": "OK"}},
}
elif name == "path_variable":
schema = {
"parameters": [
{"name": "key", "in": "path", "required": True, "schema": {"type": "string", "minLength": 1}}
],
"responses": {"200": {"description": "OK"}},
}
elif name == "invalid":
schema = {
"parameters": [{"name": "id", "in": "query", "required": True, "schema": {"type": "int"}}],
"responses": {"200": {"description": "OK"}},
}
elif name == "upload_file":
schema = {
"requestBody": {
"required": True,
"content": {
"multipart/form-data": {
"schema": {
"type": "object",
"additionalProperties": False,
"properties": {
"data": {"type": "string", "format": "binary"},
"note": {"type": "string"},
},
"required": ["data", "note"],
}
}
},
},
"responses": {"200": {"description": "OK"}},
}
elif name == "form":
schema = {
"requestBody": {
"required": True,
"content": {
"application/x-www-form-urlencoded": {
"schema": {
"additionalProperties": False,
"type": "object",
"properties": {
"first_name": {"type": "string"},
"last_name": {"type": "string"},
},
"required": ["first_name", "last_name"],
}
}
},
},
"responses": {"200": {"description": "OK"}},
}
elif name == "custom_format":
schema = {
"parameters": [
{"name": "id", "in": "query", "required": True, "schema": {"type": "string", "format": "digits"}}
],
"responses": {"200": {"description": "OK"}},
}
elif name == "multipart":
schema = {
"requestBody": {
"required": True,
"content": {
"multipart/form-data": {
"schema": {
"type": "object",
"properties": {
"key": {"type": "string"},
"value": {"type": "integer"},
"maybe": {"type": "boolean"},
},
"required": ["key", "value"],
"additionalProperties": False,
}
}
},
},
"responses": {"200": {"description": "OK"}},
}
elif name == "teapot":
schema = {
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"success": {"type": "boolean"}},
"required": ["success"],
}
}
},
}
}
}
elif name == "invalid_path_parameter":
schema = {
"parameters": [{"name": "id", "in": "path", "required": False, "schema": {"type": "integer"}}],
"responses": {"200": {"description": "OK"}},
}
elif name == "headers":
schema = {
"security": [{"api_key": []}],
"responses": {
"200": {
"description": "OK",
"content": {"application/json": {"schema": {"type": "object"}}},
"headers": {
"X-Custom-Header": {
"description": "Custom header",
"schema": {"type": "integer"},
"required": True,
}
},
},
"default": {"description": "Default response"},
},
}
elif name == "basic":
schema = {
"security": [{"basicAuth": []}],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {"type": "object", "properties": {"secret": {"type": "integer"}}}
}
},
},
# 401 is not described on purpose to cause a testing error
},
}
elif name == "conformance":
schema = {
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"value": {"enum": ["foo"]}},
"required": ["value"],
"additionalProperties": False,
}
}
},
},
},
}
elif name == "create_user":
schema = {
"requestBody": {
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"first_name": {"type": "string", "minLength": 3},
"last_name": {"type": "string", "minLength": 3},
},
"required": ["first_name", "last_name"],
"additionalProperties": False,
}
}
},
"required": True,
},
"responses": {"201": {"$ref": "#/components/responses/ResponseWithLinks"}},
}
add_link(
"UpdateUserById",
{
"operationId": "updateUser",
"parameters": {"user_id": "$response.body#/id"},
},
)
template["components"]["responses"] = {
"ResponseWithLinks": {
"description": "OK",
"links": {
"GetUserByUserId": {
"operationId": "getUser",
"parameters": {
"path.user_id": "$response.body#/id",
"query.user_id": "$response.body#/id",
},
},
"UpdateUserById": {"$ref": "#/components/links/UpdateUserById"},
},
}
}
elif name == "get_user":
parent = template["paths"].setdefault(path, {})
parent["parameters"] = [{"in": "path", "name": "user_id", "required": True, "schema": {"type": "string"}}]
schema = {
"operationId": "getUser",
"parameters": [
{"in": "query", "name": "code", "required": True, "schema": {"type": "integer"}},
{"in": "query", "name": "user_id", "required": True, "schema": {"type": "string"}},
],
"responses": {
"200": {
"description": "OK",
"links": {
"UpdateUserById": {
"operationRef": "#/paths/~1users~1{user_id}/patch",
"parameters": {"user_id": "$response.body#/id"},
"requestBody": {"first_name": "foo", "last_name": "bar"},
}
},
},
"404": {"description": "Not found"},
},
}
elif name == "update_user":
parent = template["paths"].setdefault(path, {})
parent["parameters"] = [
{"in": "path", "name": "user_id", "required": True, "schema": {"type": "string"}},
{"in": "query", "name": "common", "required": True, "schema": {"type": "integer"}},
]
schema = {
"operationId": "updateUser",
"requestBody": {
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"first_name": {"type": "string", "minLength": 3},
# Note: the `last_name` field should not be nullable; it is an intentionally planted bug
"last_name": {"type": "string", "minLength": 3, "nullable": True},
},
"required": ["first_name", "last_name"],
"additionalProperties": False,
}
}
},
"required": True,
},
"responses": {"200": {"description": "OK"}, "404": {"description": "Not found"}},
}
elif name == "csv_payload":
schema = {
"requestBody": {
"required": True,
"content": {
"text/csv": {
"schema": {
"type": "array",
"items": {
"additionalProperties": False,
"type": "object",
"properties": {
"first_name": {"type": "string", "pattern": r"\A[A-Za-z]*\Z"},
"last_name": {"type": "string", "pattern": r"\A[A-Za-z]*\Z"},
},
"required": ["first_name", "last_name"],
},
}
}
},
},
"responses": {"200": {"description": "OK"}},
}
elif name == "read_only":
schema = {
"responses": {
"200": {
"description": "OK",
"content": {"application/json": {"schema": {"$ref": "#/components/schemas/ReadWrite"}}},
}
},
}
add_read_write_only()
elif name == "write_only":
schema = {
"requestBody": {
"required": True,
"content": {"application/json": {"schema": {"$ref": "#/components/schemas/ReadWrite"}}},
},
"responses": {
"200": {
"description": "OK",
"content": {"application/json": {"schema": {"$ref": "#/components/schemas/ReadWrite"}}},
}
},
}
add_read_write_only()
else:
schema = {
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {"success": {"type": "boolean"}},
"required": ["success"],
}
}
},
},
"default": {"description": "Default response", "content": {"application/json": {"schema": {}}}},
}
}
template["paths"].setdefault(path, {})
template["paths"][path][method.lower()] = schema
template["paths"].setdefault(path, {})
template["paths"][path][method.lower()] = schema
return template
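# Example (added for illustration; not part of the original module): a minimal sketch of feeding
# the factory output to the `schemathesis.from_dict` loader. Only `make_openapi_schema` and
# `OpenAPIVersion` come from this module; the chosen operation names are arbitrary assumptions.
if __name__ == "__main__":
    import schemathesis
    raw_schema = make_openapi_schema(("success", "failure"), OpenAPIVersion("3.0"))
    # With Open API 3.0 the base path is stripped, so the operations live at "/success" and "/failure".
    assert {"/success", "/failure"} <= set(raw_schema["paths"])
    schema = schemathesis.from_dict(raw_schema)
    print(schema)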
| schemathesis | /schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/openapi/schema.py | schema.py |
| 0.725649 | 0.178687 |
from functools import wraps
from typing import Callable, Tuple
import yaml
from aiohttp import web
from ..schema import OpenAPIVersion, Operation, make_openapi_schema
from . import handlers
def create_app(
operations: Tuple[str, ...] = ("success", "failure"), version: OpenAPIVersion = OpenAPIVersion("2.0")
) -> web.Application:
"""Factory for aioHTTP app.
Each handler except the one for schema saves requests in the list shared in the app instance and could be
used to verify generated requests.
>>> def test_something(app, server):
>>> # make some request to the app here
>>> assert app["incoming_requests"][0].method == "GET"
"""
incoming_requests = []
schema_requests = []
async def schema(request: web.Request) -> web.Response:
schema_data = request.app["config"]["schema_data"]
content = yaml.dump(schema_data)
schema_requests.append(request)
return web.Response(body=content)
async def set_cookies(request: web.Request) -> web.Response:
response = web.Response()
response.set_cookie("foo", "bar")
response.set_cookie("baz", "spam")
return response
def wrapper(handler_name: str) -> Callable:
handler = getattr(handlers, handler_name)
@wraps(handler)
async def inner(request: web.Request) -> web.Response:
await request.read() # to introspect the payload in tests
incoming_requests.append(request)
return await handler(request)
return inner
app = web.Application()
app.add_routes(
[web.get("/schema.yaml", schema), web.get("/api/cookies", set_cookies)]
+ [web.route(item.value[0], item.value[1], wrapper(item.name)) for item in Operation if item.name != "all"]
)
async def answer(request: web.Request) -> web.Response:
return web.json_response(42)
app.add_routes([web.get("/answer.json", answer)])
app["users"] = {}
app["incoming_requests"] = incoming_requests
app["schema_requests"] = schema_requests
app["config"] = {
"should_fail": True,
"schema_data": make_openapi_schema(operations, version),
"prefix_with_bom": False,
}
return app
def reset_app(
app: web.Application,
operations: Tuple[str, ...] = ("success", "failure"),
version: OpenAPIVersion = OpenAPIVersion("2.0"),
) -> None:
"""Clean up all internal containers of the application and resets its config."""
app["users"].clear()
app["incoming_requests"][:] = []
app["schema_requests"][:] = []
app["config"].update(
{"should_fail": True, "schema_data": make_openapi_schema(operations, version), "prefix_with_bom": False}
)
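# Example (added for illustration; not part of the original module): the factory can also be served
# as a standalone application for manual experiments. The port is an assumption that mirrors the
# host declared in the generated schema; any free port works.
if __name__ == "__main__":
    application = create_app(("success", "failure"), version=OpenAPIVersion("2.0"))
    # The schema is exposed at /schema.yaml and the API operations live under /api/.
    web.run_app(application, port=8888)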
| schemathesis | /schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/openapi/_aiohttp/__init__.py | __init__.py |
| 0.779322 | 0.163345 |
import asyncio
import cgi
import csv
import io
from typing import Dict
from uuid import uuid4
import jsonschema
from aiohttp import web
from schemathesis.constants import BOM_MARK
try:
from ..schema import PAYLOAD_VALIDATOR
except (ImportError, ValueError):
from utils import PAYLOAD_VALIDATOR
async def expect_content_type(request: web.Request, value: str):
content_type = request.headers.get("Content-Type", "")
content_type, _ = cgi.parse_header(content_type)
if content_type != value:
raise web.HTTPInternalServerError(text=f"Expected {value} payload")
return await request.read()
async def success(request: web.Request) -> web.Response:
if request.app["config"]["prefix_with_bom"]:
return web.Response(body=(BOM_MARK + '{"success": true}').encode(), content_type="application/json")
return web.json_response({"success": True})
async def conformance(request: web.Request) -> web.Response:
# The schema expects `value` to be "foo", but it is different every time
return web.json_response({"value": uuid4().hex})
async def basic(request: web.Request) -> web.Response:
if "Authorization" in request.headers and request.headers["Authorization"] == "Basic dGVzdDp0ZXN0":
return web.json_response({"secret": 42})
raise web.HTTPUnauthorized(text='{"detail": "Unauthorized"}', content_type="application/json")
async def empty(request: web.Request) -> web.Response:
return web.Response(body=None, status=204)
async def empty_string(request: web.Request) -> web.Response:
return web.Response(body="")
async def payload(request: web.Request) -> web.Response:
body = await request.read()
if body:
data = await request.json()
try:
PAYLOAD_VALIDATOR.validate(data)
except jsonschema.ValidationError as exc:
raise web.HTTPBadRequest(text=str(exc)) # noqa: B904
return web.json_response(body=body)
return web.json_response({"name": "Nothing!"})
async def invalid_response(request: web.Request) -> web.Response:
return web.json_response({"random": "key"})
async def custom_format(request: web.Request) -> web.Response:
if "id" not in request.query:
raise web.HTTPBadRequest(text='{"detail": "Missing `id`"}')
if not request.query["id"].isdigit():
raise web.HTTPBadRequest(text='{"detail": "Invalid `id`"}')
value = request.query["id"]
return web.json_response({"value": value})
async def teapot(request: web.Request) -> web.Response:
return web.json_response({"success": True}, status=418)
async def recursive(request: web.Request) -> web.Response:
return web.json_response({"children": [{"children": [{"children": []}]}]})
async def text(request: web.Request) -> web.Response:
return web.Response(body="Text response", content_type="text/plain")
async def cp866(request: web.Request) -> web.Response:
return web.Response(body="Тест".encode("cp866"), content_type="text/plain", charset="cp866")
async def plain_text_body(request: web.Request) -> web.Response:
body = await expect_content_type(request, "text/plain")
return web.Response(body=body, content_type="text/plain")
async def csv_payload(request: web.Request) -> web.Response:
body = await expect_content_type(request, "text/csv")
if body:
reader = csv.DictReader(body.decode().splitlines())
data = list(reader)
else:
data = []
return web.json_response(data)
async def headers(request: web.Request) -> web.Response:
values = dict(request.headers)
return web.json_response(values, headers=values)
async def malformed_json(request: web.Request) -> web.Response:
return web.Response(body="{malformed}" + str(uuid4()), content_type="application/json")
async def failure(request: web.Request) -> web.Response:
raise web.HTTPInternalServerError
async def slow(request: web.Request) -> web.Response:
await asyncio.sleep(0.1)
return web.json_response({"success": True})
async def performance(request: web.Request) -> web.Response:
# Emulate bad performance for certain input values.
# This API operation exists for Schemathesis' targeted testing; the failure should be discovered by it.
decoded = await request.json()
number = str(decoded).count("0")
if number > 0:
await asyncio.sleep(0.01 * number)
if number > 10:
raise web.HTTPInternalServerError
return web.json_response({"slow": True})
async def unsatisfiable(request: web.Request) -> web.Response:
return web.json_response({"result": "IMPOSSIBLE!"})
async def flaky(request: web.Request) -> web.Response:
config = request.app["config"]
if config["should_fail"]:
config["should_fail"] = False
raise web.HTTPInternalServerError
return web.json_response({"result": "flaky!"})
async def multiple_failures(request: web.Request) -> web.Response:
try:
id_value = int(request.query["id"])
except KeyError:
raise web.HTTPBadRequest(text='{"detail": "Missing `id`"}') # noqa: B904
except ValueError:
raise web.HTTPBadRequest(text='{"detail": "Invalid `id`"}') # noqa: B904
if id_value == 0:
raise web.HTTPInternalServerError
if id_value > 0:
raise web.HTTPGatewayTimeout
return web.json_response({"result": "OK"})
def _decode_multipart(content: bytes, content_type: str) -> Dict[str, str]:
# A simplified multipart parser that is sufficient for testing purposes
_, options = cgi.parse_header(content_type)
options["boundary"] = options["boundary"].encode()
options["CONTENT-LENGTH"] = len(content)
return {
key: value[0].decode() if isinstance(value[0], bytes) else value[0]
for key, value in cgi.parse_multipart(io.BytesIO(content), options).items()
}
async def multipart(request: web.Request) -> web.Response:
if not request.headers.get("Content-Type", "").startswith("multipart/"):
raise web.HTTPBadRequest(text="Not a multipart request!")
# The payload must stay stored on the request, so we can't use `request.multipart`, which consumes the reader
content = await request.read()
data = _decode_multipart(content, request.headers["Content-Type"])
return web.json_response(data)
SUCCESS_RESPONSE = {"read": "success!"}
async def read_only(request: web.Request) -> web.Response:
return web.json_response(SUCCESS_RESPONSE)
async def write_only(request: web.Request) -> web.Response:
data = await request.json()
if len(data) == 1 and isinstance(data["write"], int):
return web.json_response(SUCCESS_RESPONSE)
raise web.HTTPInternalServerError
async def upload_file(request: web.Request) -> web.Response:
if not request.headers.get("Content-Type", "").startswith("multipart/"):
raise web.HTTPBadRequest(text="Not a multipart request!")
content = await request.read()
expected_lines = [
b'Content-Disposition: form-data; name="data"; filename="data"\r\n',
# "note" field is not file and should be encoded without filename
b'Content-Disposition: form-data; name="note"\r\n',
]
if any(line not in content for line in expected_lines):
raise web.HTTPBadRequest(text="Request does not contain expected lines!")
return web.json_response({"size": request.content_length})
def is_properly_encoded(data: bytes, charset: str) -> bool:
try:
data.decode(charset)
return True
except UnicodeDecodeError:
return False
async def form(request: web.Request) -> web.Response:
if not request.headers.get("Content-Type", "").startswith("application/x-www-form-urlencoded"):
raise web.HTTPInternalServerError(text="Not an urlencoded request!")
raw = await request.read()
if not is_properly_encoded(raw, request.charset or "utf8"):
raise web.HTTPBadRequest(text='{"detail": "Invalid payload"}')
data = await request.post()
for field in ("first_name", "last_name"):
if field not in data:
raise web.HTTPBadRequest(text=f'{{"detail": "Missing `{field}`"}}')
if not isinstance(data[field], str):
raise web.HTTPBadRequest(text=f'{{"detail": "Invalid `{field}`"}}')
return web.json_response({"size": request.content_length})
async def create_user(request: web.Request) -> web.Response:
data = await request.json()
if not isinstance(data, dict):
raise web.HTTPBadRequest(text='{"detail": "Invalid payload"}')
for field in ("first_name", "last_name"):
if field not in data:
raise web.HTTPBadRequest(text=f'{{"detail": "Missing `{field}`"}}')
if not isinstance(data[field], str):
raise web.HTTPBadRequest(text=f'{{"detail": "Invalid `{field}`"}}')
user_id = str(uuid4())
request.app["users"][user_id] = {**data, "id": user_id}
return web.json_response({"id": user_id}, status=201)
def get_user_id(request: web.Request) -> str:
try:
return request.match_info["user_id"]
except KeyError:
raise web.HTTPBadRequest(text='{"detail": "Missing `user_id`"}') # noqa: B904
async def get_user(request: web.Request) -> web.Response:
user_id = get_user_id(request)
try:
user = request.app["users"][user_id]
# The full name is built via string concatenation specifically to trigger a bug when the last name is `None`
full_name = user["first_name"] + " " + user["last_name"]
return web.json_response({"id": user["id"], "full_name": full_name})
except KeyError:
return web.json_response({"message": "Not found"}, status=404)
async def update_user(request: web.Request) -> web.Response:
user_id = get_user_id(request)
try:
user = request.app["users"][user_id]
data = await request.json()
for field in ("first_name", "last_name"):
if field not in data:
raise web.HTTPBadRequest(text=f'{{"detail": "Missing `{field}`"}}')
# Here we don't check the input value type to emulate a bug in another operation
user[field] = data[field]
return web.json_response(user)
except KeyError:
return web.json_response({"message": "Not found"}, status=404)
get_payload = payload
path_variable = success
reserved = success
invalid = success
invalid_path_parameter = success
missing_path_parameter = success
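# Example (added for illustration; not part of the original module): a sketch of how the stateful
# `flaky` handler could be exercised with pytest-aiohttp's `aiohttp_client` fixture, which is an
# assumed test dependency here. The first request flips `should_fail`, so the retry succeeds.
async def test_flaky_recovers(aiohttp_client):
    from . import create_app  # the app factory defined in this package's __init__.py
    client = await aiohttp_client(create_app(("flaky",)))
    assert (await client.get("/api/flaky")).status == 500
    assert (await client.get("/api/flaky")).status == 200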
| schemathesis | /schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/openapi/_aiohttp/handlers.py | handlers.py |
| 0.627152 | 0.127598 |
from typing import Optional
from uuid import uuid4
from fastapi import FastAPI, HTTPException, Query
from pydantic import BaseModel, Field
from ..schema import OpenAPIVersion
class User(BaseModel):
first_name: str = Field(min_length=3)
last_name: str = Field(min_length=3)
class Config:
extra = "forbid"
class BuggyUser(BaseModel):
first_name: str = Field(min_length=3)
last_name: Optional[str] = Field(min_length=3, nullable=True)
class Config:
extra = "forbid"
class Message(BaseModel):
detail: str
class Config:
extra = "forbid"
def create_app(operations=("root",), version=OpenAPIVersion("3.0")):
if version != OpenAPIVersion("3.0"):
raise ValueError("FastAPI supports only Open API 3.0")
app = FastAPI()
users = {}
if "root" in operations:
@app.get("/users")
async def root():
return {"success": True}
if "create_user" in operations:
@app.post("/users/", status_code=201)
def create_user(user: User):
user_id = str(uuid4())
users[user_id] = {**user.dict(), "id": user_id}
return {"id": user_id}
if "get_user" in operations:
@app.get("/users/{user_id}", responses={404: {"model": Message}})
def get_user(user_id: str, uid: str = Query(...), code: int = Query(...)):
try:
user = users[user_id]
# The full name is built via string concatenation specifically to trigger a bug when the last name is `None`
try:
full_name = user["first_name"] + " " + user["last_name"]
except TypeError:
# We test this app via our ASGI integration, where the `TypeError` would otherwise be propagated.
# To keep the same behavior across all test server implementations, we re-raise it as a server error.
raise HTTPException(status_code=500, detail="We got a problem!") # noqa: B904
return {"id": user["id"], "full_name": full_name}
except KeyError as exc:
raise HTTPException(status_code=404, detail="Not found") from exc
if "update_user" in operations:
@app.patch("/users/{user_id}", responses={404: {"model": Message}})
def update_user(user_id: str, update: BuggyUser, common: int = Query(...)):
try:
user = users[user_id]
for field in ("first_name", "last_name"):
user[field] = getattr(update, field)
return user
except KeyError:
raise HTTPException(status_code=404, detail="Not found") # noqa: B904
return app
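# Example (added for illustration; not part of the original module): a minimal sketch of loading the
# FastAPI app with Schemathesis' ASGI loader. FastAPI serves its schema at /openapi.json by default;
# the selected operations are arbitrary, and the `fast_api` fixup may be needed for data generation.
if __name__ == "__main__":
    import schemathesis
    asgi_app = create_app(operations=("root", "create_user"))
    schema = schemathesis.from_asgi("/openapi.json", asgi_app)
    # `schema.parametrize()` could then decorate a pytest test function, as in the public API docs.
    print(schema)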
| schemathesis | /schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/openapi/_fastapi/__init__.py | __init__.py |
| 0.753467 | 0.099865 |
import cgi
import csv
import json
from time import sleep
from typing import Tuple
from uuid import uuid4
import jsonschema
import yaml
from flask import Flask, Response, jsonify, request
from werkzeug.exceptions import BadRequest, GatewayTimeout, InternalServerError
from schemathesis.constants import BOM_MARK
from ..schema import PAYLOAD_VALIDATOR, OpenAPIVersion, make_openapi_schema
SUCCESS_RESPONSE = {"read": "success!"}
def expect_content_type(value: str):
content_type = request.headers["Content-Type"]
content_type, _ = cgi.parse_header(content_type)
if content_type != value:
raise InternalServerError(f"Expected {value} payload")
def create_app(
operations: Tuple[str, ...] = ("success", "failure"), version: OpenAPIVersion = OpenAPIVersion("2.0")
) -> Flask:
app = Flask("test_app")
app.config["should_fail"] = True
app.config["schema_data"] = make_openapi_schema(operations, version)
app.config["incoming_requests"] = []
app.config["schema_requests"] = []
app.config["internal_exception"] = False
app.config["random_delay"] = False
app.config["prefix_with_bom"] = False
app.config["users"] = {}
@app.before_request
def store_request():
current_request = request._get_current_object()
if request.path == "/schema.yaml":
app.config["schema_requests"].append(current_request)
else:
app.config["incoming_requests"].append(current_request)
@app.route("/schema.yaml")
def schema():
schema_data = app.config["schema_data"]
content = yaml.dump(schema_data)
return Response(content, content_type="text/plain")
@app.route("/api/success", methods=["GET"])
def success():
if app.config["internal_exception"]:
1 / 0
if app.config["prefix_with_bom"]:
return Response((BOM_MARK + '{"success": true}').encode(), content_type="application/json")
return jsonify({"success": True})
@app.route("/api/foo:bar", methods=["GET"])
def reserved():
return jsonify({"success": True})
@app.route("/api/recursive", methods=["GET"])
def recursive():
return jsonify({"children": [{"children": [{"children": []}]}]})
@app.route("/api/payload", methods=["POST"])
def payload():
try:
data = request.json
try:
PAYLOAD_VALIDATOR.validate(data)
except jsonschema.ValidationError:
return jsonify({"detail": "Validation error"}), 400
except BadRequest:
data = {"name": "Nothing!"}
return jsonify(data)
@app.route("/api/get_payload", methods=["GET"])
def get_payload():
return jsonify(request.json)
@app.route("/api/basic", methods=["GET"])
def basic():
if "Authorization" in request.headers and request.headers["Authorization"] == "Basic dGVzdDp0ZXN0":
return jsonify({"secret": 42})
return {"detail": "Unauthorized"}, 401
@app.route("/api/empty", methods=["GET"])
def empty():
return Response(status=204)
@app.route("/api/empty_string", methods=["GET"])
def empty_string():
return Response(response="")
@app.route("/api/headers", methods=["GET"])
def headers():
values = dict(request.headers)
return Response(json.dumps(values), content_type="application/json", headers=values)
@app.route("/api/conformance", methods=["GET"])
def conformance():
# The schema expects `value` to be "foo", but it is different every time
return jsonify({"value": uuid4().hex})
@app.route("/api/failure", methods=["GET"])
def failure():
raise InternalServerError
@app.route("/api/multiple_failures", methods=["GET"])
def multiple_failures():
try:
id_value = int(request.args["id"])
except KeyError:
return jsonify({"detail": "Missing `id`"}), 400
except ValueError:
return jsonify({"detail": "Invalid `id`"}), 400
if id_value == 0:
raise InternalServerError
if id_value > 0:
raise GatewayTimeout
return jsonify({"result": "OK"})
@app.route("/api/slow", methods=["GET"])
def slow():
sleep(0.1)
return jsonify({"success": True})
@app.route("/api/path_variable/<key>", methods=["GET"])
def path_variable(key):
if app.config["random_delay"]:
sleep(app.config["random_delay"])
app.config["random_delay"] = False
return jsonify({"success": True})
@app.route("/api/unsatisfiable", methods=["POST"])
def unsatisfiable():
return jsonify({"result": "IMPOSSIBLE!"})
@app.route("/api/invalid", methods=["POST"])
def invalid():
return jsonify({"success": True})
@app.route("/api/performance", methods=["POST"])
def performance():
data = request.json
number = str(data).count("0")
if number > 0:
sleep(0.01 * number)
if number > 10:
raise InternalServerError
return jsonify({"success": True})
@app.route("/api/flaky", methods=["GET"])
def flaky():
if app.config["should_fail"]:
app.config["should_fail"] = False
raise InternalServerError
return jsonify({"result": "flaky!"})
@app.route("/api/multipart", methods=["POST"])
def multipart():
files = {name: value.stream.read().decode() for name, value in request.files.items()}
return jsonify(**files, **request.form.to_dict())
@app.route("/api/upload_file", methods=["POST"])
def upload_file():
return jsonify({"size": request.content_length})
@app.route("/api/form", methods=["POST"])
def form():
expect_content_type("application/x-www-form-urlencoded")
data = request.form
for field in ("first_name", "last_name"):
if field not in data:
return jsonify({"detail": f"Missing `{field}`"}), 400
if not isinstance(data[field], str):
return jsonify({"detail": f"Invalid `{field}`"}), 400
return jsonify({"size": request.content_length})
@app.route("/api/csv", methods=["POST"])
def csv_payload():
expect_content_type("text/csv")
data = request.get_data(as_text=True)
if data:
reader = csv.DictReader(data.splitlines())
data = list(reader)
else:
data = []
return jsonify(data)
@app.route("/api/teapot", methods=["POST"])
def teapot():
return jsonify({"success": True}), 418
@app.route("/api/read_only", methods=["GET"])
def read_only():
return jsonify(SUCCESS_RESPONSE)
@app.route("/api/write_only", methods=["POST"])
def write_only():
data = request.get_json()
if len(data) == 1 and isinstance(data["write"], int):
return jsonify(SUCCESS_RESPONSE)
raise InternalServerError
@app.route("/api/text", methods=["GET"])
def text():
return Response("Text response", content_type="text/plain")
@app.route("/api/cp866", methods=["GET"])
def cp866():
# NOTE. Setting `Response.charset` doesn't have an effect in the test client, as it re-wraps this response with
# the default one where `charset` is `utf-8`
return Response("Тест".encode("cp866"), content_type="text/plain;charset=cp866")
@app.route("/api/text", methods=["POST"])
def plain_text_body():
expect_content_type("text/plain")
return Response(request.data, content_type="text/plain")
@app.route("/api/malformed_json", methods=["GET"])
def malformed_json():
return Response("{malformed}" + str(uuid4()), content_type="application/json")
@app.route("/api/invalid_response", methods=["GET"])
def invalid_response():
return jsonify({"random": "key"})
@app.route("/api/custom_format", methods=["GET"])
def custom_format():
if "id" not in request.args:
return jsonify({"detail": "Missing `id`"}), 400
if not request.args["id"].isdigit():
return jsonify({"detail": "Invalid `id`"}), 400
return jsonify({"value": request.args["id"]})
@app.route("/api/invalid_path_parameter/<id>", methods=["GET"])
def invalid_path_parameter(id):
return jsonify({"success": True})
@app.route("/api/users/", methods=["POST"])
def create_user():
data = request.json
if not isinstance(data, dict):
return jsonify({"detail": "Invalid payload"}), 400
for field in ("first_name", "last_name"):
if field not in data:
return jsonify({"detail": f"Missing `{field}`"}), 400
if not isinstance(data[field], str):
return jsonify({"detail": f"Invalid `{field}`"}), 400
user_id = str(uuid4())
app.config["users"][user_id] = {**data, "id": user_id}
return jsonify({"id": user_id}), 201
@app.route("/api/users/<user_id>", methods=["GET"])
def get_user(user_id):
try:
user = app.config["users"][user_id]
# The full name is done specifically via concatenation to trigger a bug when the last name is `None`
full_name = user["first_name"] + " " + user["last_name"]
return jsonify({"id": user["id"], "full_name": full_name})
except KeyError:
return jsonify({"message": "Not found"}), 404
@app.route("/api/users/<user_id>", methods=["PATCH"])
def update_user(user_id):
try:
user = app.config["users"][user_id]
data = request.json
for field in ("first_name", "last_name"):
if field not in data:
return jsonify({"detail": f"Missing `{field}`"}), 400
# Here we don't check the input value type to emulate a bug in another operation
user[field] = data[field]
return jsonify(user)
except KeyError:
return jsonify({"message": "Not found"}), 404
return app
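# A minimal usage sketch (not part of the original module): exercising the app above via
# Flask's built-in test client. The selected operations are an arbitrary choice for illustration.
if __name__ == "__main__":
    demo_app = create_app(operations=("success",))
    with demo_app.test_client() as client:
        response = client.get("/api/success")
        assert response.get_json() == {"success": True}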
|
schemathesis
|
/schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/openapi/_flask/__init__.py
|
__init__.py
|
| 0.477798 | 0.073264 |
from typing import List, Tuple
import strawberry
@strawberry.type
class Book:
title: str
author: "Author"
@strawberry.type
class Author:
name: str
books: List[Book]
TOLKIEN = Author(name="J.R.R Tolkien", books=[])
JANSSON = Author(name="Tove Marika Jansson", books=[])
BOOKS = {
1: Book(title="The Fellowship of the Ring", author=TOLKIEN),
2: Book(title="The Two Towers", author=TOLKIEN),
3: Book(title="The Return of the King", author=TOLKIEN),
4: Book(title="Kometen kommer", author=JANSSON),
5: Book(title="Trollvinter", author=JANSSON),
6: Book(title="Farlig midsommar", author=JANSSON),
}
TOLKIEN.books = [BOOKS[1], BOOKS[2], BOOKS[3]]
JANSSON.books = [BOOKS[4], BOOKS[5], BOOKS[6]]
AUTHORS = {
1: TOLKIEN,
2: JANSSON,
}
@strawberry.type
class Query:
@strawberry.field
def getBooks(self) -> List[Book]:
return list(BOOKS.values())
@strawberry.field
def getAuthors(self) -> List[Author]:
return list(AUTHORS.values())
def get_or_create_author(name: str) -> Tuple[int, Author]:
for author_id, author in AUTHORS.items(): # noqa: B007
if author.name == name:
break
else:
author = Author(name=name, books=[])
author_id = len(AUTHORS) + 1
AUTHORS[author_id] = author
return author_id, author
@strawberry.type
class Mutation:
@strawberry.mutation
def addBook(self, title: str, author: str) -> Book:
for book in BOOKS.values():
if book.title == title:
break
else:
# New book and potentially new author
author_id, author = get_or_create_author(author)
book = Book(title=title, author=author)
book_id = len(BOOKS) + 1
BOOKS[book_id] = book
author.books.append(book)
return book
@strawberry.mutation
def addAuthor(self, name: str) -> Author:
return get_or_create_author(name)[1]
schema = strawberry.Schema(Query, Mutation)
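# A minimal usage sketch (not part of the original module): running a query against the
# schema defined above with Strawberry's synchronous executor.
if __name__ == "__main__":
    result = schema.execute_sync("{ getBooks { title author { name } } }")
    assert result.errors is None
    print(result.data["getBooks"][0]["title"])  # "The Fellowship of the Ring"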
|
schemathesis
|
/schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/test/apps/_graphql/schema.py
|
schema.py
|
| 0.726037 | 0.493897 |
Example project
===============
A simple web app, built with `connexion <https://github.com/zalando/connexion>`_,
`aiohttp <https://github.com/aio-libs/aiohttp>`_ and `asyncpg <https://github.com/MagicStack/asyncpg>`_.
It contains many intentional errors, which should be found by running Schemathesis.
There is also `a tutorial <https://habr.com/ru/company/oleg-bunin/blog/576496/>`_ in Russian that follows this example project.
Setup
-----
To run the examples below, you need a recent version of `docker-compose <https://docs.docker.com/compose/install/>`_ and Schemathesis installed locally.
Start the application via `docker-compose`:
.. code::
docker-compose up
It will spin up a web server available at ``http://127.0.0.1:5000``. You can take a look at API documentation at ``http://127.0.0.1:5000/api/ui/``.
Note that the app will run in the current terminal.
Install ``schemathesis`` via ``pip`` into a virtual environment:
.. code::
pip install schemathesis
It will install additional dependencies, including ``pytest``.
Python tests
------------
Run the test suite via ``pytest`` in a separate terminal:
.. code::
pytest -v test
These tests include:
- A unit test & an integration test;
- Custom hypothesis settings;
- Using ``pytest`` fixtures;
- Providing a custom authorization header;
- Custom strategy for Open API string format;
- A hook for data generation;
- Custom response check;
See the details in the ``/test`` directory.
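For reference, a minimal Schemathesis test module could look like the sketch below (it assumes the app from this example is running locally; adjust the schema URL otherwise):

.. code:: python

    import schemathesis

    schema = schemathesis.from_uri("http://127.0.0.1:5000/api/openapi.json")


    @schema.parametrize()
    def test_api(case):
        case.call_and_validate()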
Command-line
------------
Here are examples of how you can run Schemathesis CLI:
.. code:: bash
export SCHEMA_URL="http://127.0.0.1:5000/api/openapi.json"
export PYTHONPATH=$(pwd)
# Default config. Runs unit tests for all API operations with `not_a_server_error` check
st run $SCHEMA_URL
# Select what to test. Only `POST` operations that have `booking` in their path
st run -E booking -M POST $SCHEMA_URL
# What checks to run
st run -c status_code_conformance $SCHEMA_URL
# Include your own checks. They should be registered in the `test/hooks.py` module
SCHEMATHESIS_HOOKS=test.hooks st run $SCHEMA_URL
# Provide custom headers
st run -H "Authorization: Bearer <token>" $SCHEMA_URL
# Configure hypothesis parameters. Run up to 1000 examples per tested operation
st run --hypothesis-max-examples 1000 $SCHEMA_URL
# Run in multiple threads
st run -w 8 $SCHEMA_URL
# Store network log to a file
st run --cassette-path=cassette.yaml $SCHEMA_URL
# Replay requests from the log
st replay cassette.yaml
# Integration tests
st run $SCHEMA_URL
|
schemathesis
|
/schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/example/README.rst
|
README.rst
|
| 0.8549 | 0.563378 |
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: "Status: Review Needed, Type: Bug"
assignees: Stranger6667
---
**Checklist**
- [ ] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [ ] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Run this command '...'
2. See error
If possible, please post a minimal version of your API schema that causes this behavior:
```yaml
{
"openapi": "3.0.2"
... Add more here
}
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- OS: [e.g. Linux or Windows]
- Python version: [e.g. 3.7.2]
- Schemathesis version: [e.g. 2.4.1]
- Spec version: [e.g. Open API 3.0.2]
**Additional context**
Add any other context about the problem here.
|
schemathesis
|
/schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/.github/ISSUE_TEMPLATE/bug_report.md
|
bug_report.md
|
{
"openapi": "3.0.2"
... Add more here
}
| 0.392453 | 0.69539 |
---
name: Feature request
about: Suggest an idea for this project
title: "[FEATURE]"
labels: "Status: Review Needed, Type: Feature"
assignees: Stranger6667
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
schemathesis
|
/schemathesis-3.19.2.tar.gz/schemathesis-3.19.2/.github/ISSUE_TEMPLATE/feature_request.md
|
feature_request.md
|
| 0.695752 | 0.437763 |
from typing import Any
from tenacity import retry, stop_after_attempt, wait_fixed, retry_if_exception_type
import synapseclient # type: ignore
import pandas # type: ignore
class SynapseTableNameError(Exception):
"""SynapseTableNameError"""
def __init__(self, message: str, table_name: str) -> None:
"""
Args:
message (str): A message describing the error
table_name (str): The name of the table
"""
self.message = message
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}:{self.table_name}"
class SynapseDeleteRowsError(Exception):
"""SynapseDeleteRowsError"""
def __init__(self, message: str, table_id: str, columns: list[str]) -> None:
"""
Args:
message (str): A message describing the error
table_id (str): The synapse id of the table
columns (list[str]): A list of columns in the synapse table
"""
self.message = message
self.table_id = table_id
self.columns = columns
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}; table_id:{self.table_id}; columns: {', '.join(self.columns)}"
class Synapse: # pylint: disable=too-many-public-methods
"""
The Synapse class handles interactions with a project in Synapse.
"""
def __init__(self, auth_token: str, project_id: str) -> None:
"""Init
Args:
auth_token (str): A Synapse auth_token
project_id (str): A Synapse id for a project
"""
self.project_id = project_id
syn = synapseclient.Synapse()
syn.login(authToken=auth_token)
self.syn = syn
self.project_id = project_id
def download_csv_as_dataframe(self, synapse_id: str) -> pandas.DataFrame:
"""Downloads a csv file from Synapse and reads it
Args:
synapse_id (str): The Synapse id of the file
Returns:
pandas.DataFrame: The file in dataframe form
"""
entity = self.syn.get(synapse_id)
return pandas.read_csv(entity.path)
def get_table_names(self) -> list[str]:
"""Gets the names of the tables in the schema
Returns:
list[str]: A list of table names
"""
tables = self._get_tables()
return [table["name"] for table in tables]
def _get_tables(self) -> list[synapseclient.Table]:
"""Gets the list of Synapse table entities for the project
Returns:
list[synapseclient.Table]: A list of all Synapse table entities
"""
project = self.syn.get(self.project_id)
return list(self.syn.getChildren(project, includeTypes=["table"]))
def get_table_column_names(self, table_name: str) -> list[str]:
"""Gets the column names from a synapse table
Args:
table_name (str): The name of the table
Returns:
list[str]: A list of column names
"""
synapse_id = self.get_synapse_id_from_table_name(table_name)
table = self.syn.get(synapse_id)
columns = list(self.syn.getTableColumns(table))
return [column.name for column in columns]
def get_synapse_id_from_table_name(self, table_name: str) -> str:
"""Gets the synapse id from the table name
Args:
table_name (str): The name of the table
Raises:
SynapseTableNameError: When no tables match the name
SynapseTableNameError: When multiple tables match the name
Returns:
str: A synapse id
"""
tables = self._get_tables()
matching_tables = [table for table in tables if table["name"] == table_name]
if len(matching_tables) == 0:
raise SynapseTableNameError("No matching tables with name:", table_name)
if len(matching_tables) > 1:
raise SynapseTableNameError(
"Multiple matching tables with name:", table_name
)
return matching_tables[0]["id"]
def get_table_name_from_synapse_id(self, synapse_id: str) -> str:
"""Gets the table name from the synapse id
Args:
synapse_id (str): A synapse id
Returns:
str: The name of the table with the synapse id
"""
tables = self._get_tables()
return [table["name"] for table in tables if table["id"] == synapse_id][0]
def query_table(
self, synapse_id: str, include_row_data: bool = False
) -> pandas.DataFrame:
"""Queries a whole table
Args:
synapse_id (str): The Synapse id of the table to delete
include_row_data (bool): Include row_id and row_etag. Defaults to False.
Returns:
pandas.DataFrame: The queried table
"""
query = f"SELECT * FROM {synapse_id}"
return self.execute_sql_query(query, include_row_data)
def execute_sql_query(
self, query: str, include_row_data: bool = False
) -> pandas.DataFrame:
"""Execute a Sql query
Args:
query (str): A SQL statement that can be run by Synapse
include_row_data (bool): Include row_id and row_etag. Defaults to False.
Returns:
pandas.DataFrame: The queried table
"""
result = self.execute_sql_statement(query, include_row_data)
table = pandas.read_csv(result.filepath)
return table
def execute_sql_statement(
self, statement: str, include_row_data: bool = False
) -> Any:
"""Execute a SQL statement
Args:
statement (str): A SQL statement that can be run by Synapse
include_row_data (bool): Include row_id and row_etag. Defaults to False.
Returns:
Any: The result object returned by Synapse's ``tableQuery`` call
"""
return self.syn.tableQuery(
statement, includeRowIdAndRowVersion=include_row_data
)
def build_table(self, table_name: str, table: pandas.DataFrame) -> None:
"""Adds a table to the project based on the input table
Args:
table_name (str): The name of the table
table (pandas.DataFrame): A dataframe of the table
"""
table_copy = table.copy(deep=False)
project = self.syn.get(self.project_id)
table_copy = synapseclient.table.build_table(table_name, project, table_copy)
self.syn.store(table_copy)
def add_table(self, table_name: str, columns: list[synapseclient.Column]) -> None:
"""Adds a synapse table
Args:
table_name (str): The name of the table to be added
columns (list[synapseclient.Column]): The columns to be added
"""
# create a dictionary with a key for every column, and value of an empty list
values: dict[str, list] = {column.name: [] for column in columns}
schema = synapseclient.Schema(
name=table_name, columns=columns, parent=self.project_id
)
table = synapseclient.Table(schema, values)
self.syn.store(table)
def delete_table(self, synapse_id: str) -> None:
"""Deletes a Synapse table
Args:
synapse_id (str): The Synapse id of the table to delete
"""
self.syn.delete(synapse_id)
def replace_table(self, table_name: str, table: pandas.DataFrame) -> None:
"""
Replaces the Synapse table with the input table.
The synapse id is preserved.
Args:
table_name (str): The name of the table to be replaced
table (pandas.DataFrame): A dataframe of the table to replace the old table with
"""
if table_name not in self.get_table_names():
self.build_table(table_name, table)
else:
synapse_id = self.get_synapse_id_from_table_name(table_name)
self.delete_all_table_rows(synapse_id)
self.delete_all_table_columns(synapse_id)
self.add_table_columns(synapse_id, synapseclient.as_table_columns(table))
self.insert_table_rows(synapse_id, table)
def insert_table_rows(self, synapse_id: str, data: pandas.DataFrame) -> None:
"""Insert table rows into Synapse table
Args:
synapse_id (str): The Synapse id of the table to add rows into
data (pandas.DataFrame): The rows to be added.
"""
table = self.syn.get(synapse_id)
self.syn.store(synapseclient.Table(table, data))
def upsert_table_rows(self, synapse_id: str, data: pandas.DataFrame) -> None:
"""Upserts rows from the given table
Args:
synapse_id (str): The Synapse ID of the table to be upserted into
data (pandas.DataFrame): The table the rows will come from
"""
self.syn.store(synapseclient.Table(synapse_id, data))
def delete_table_rows(self, synapse_id: str, data: pandas.DataFrame) -> None:
"""Deletes rows from the given table
Args:
synapse_id (str): The Synapse id of the table the rows will be deleted from
data (pandas.DataFrame): A pandas.DataFrame. Columns must include "ROW_ID",
and "ROW_VERSION"
Raises:
SynapseDeleteRowsError: If "ROW_ID" not in the columns of the data
SynapseDeleteRowsError: If "ROW_VERSION" not in the columns of the data
"""
columns = list(data.columns)
if "ROW_ID" not in columns:
raise SynapseDeleteRowsError(
"ROW_ID missing from input data", synapse_id, columns
)
if "ROW_VERSION" not in columns:
raise SynapseDeleteRowsError(
"ROW_VERSION missing from input data", synapse_id, columns
)
self.syn.delete(synapseclient.Table(synapse_id, data))
@retry(
stop=stop_after_attempt(5),
wait=wait_fixed(1),
retry=retry_if_exception_type(synapseclient.core.exceptions.SynapseHTTPError),
)
def delete_all_table_rows(self, synapse_id: str) -> None:
"""Deletes all rows in the Synapse table
Args:
synapse_id (str): The Synapse id of the table
"""
table = self.syn.get(synapse_id)
columns = self.syn.getTableColumns(table)
if len(list(columns)) > 0:
results = self.syn.tableQuery(f"select * from {synapse_id}")
self.syn.delete(results)
@retry(
stop=stop_after_attempt(5),
wait=wait_fixed(1),
retry=retry_if_exception_type(synapseclient.core.exceptions.SynapseHTTPError),
)
def delete_all_table_columns(self, synapse_id: str) -> None:
"""Deletes all columns in the Synapse table
Args:
synapse_id (str): The Synapse id of the table
"""
table = self.syn.get(synapse_id)
columns = self.syn.getTableColumns(table)
for col in columns:
table.removeColumn(col)
self.syn.store(table)
@retry(
stop=stop_after_attempt(5),
wait=wait_fixed(1),
retry=retry_if_exception_type(synapseclient.core.exceptions.SynapseHTTPError),
)
def add_table_columns(
self, synapse_id: str, columns: list[synapseclient.Column]
) -> None:
"""Add columns to synapse table
Args:
synapse_id (str): The Synapse id of the table to add the columns to
columns (list[synapseclient.Column]): The columns to be added
"""
table = self.syn.get(synapse_id)
for col in columns:
table.addColumn(col)
self.syn.store(table)
def get_entity_annotations(self, synapse_id: str) -> synapseclient.Annotations:
"""Gets the annotations for the Synapse entity
Args:
synapse_id (str): The Synapse id of the entity
Returns:
synapseclient.Annotations: The annotations of the Synapse entity in dict form.
"""
return self.syn.get_annotations(synapse_id)
def set_entity_annotations(
self, synapse_id: str, annotations: dict[str, Any]
) -> None:
"""Sets the entities annotations to the input annotations
Args:
synapse_id (str): The Synapse ID of the entity
annotations (dict[str, Any]): A dictionary of annotations
"""
entity_annotations = self.syn.get_annotations(synapse_id)
entity_annotations.clear()
for key, value in annotations.items():
entity_annotations[key] = value
self.syn.set_annotations(entity_annotations)
def clear_entity_annotations(self, synapse_id: str) -> None:
"""Removes all annotations from the entity
Args:
synapse_id (str): The Synapse ID of the entity
"""
annotations = self.syn.get_annotations(synapse_id)
annotations.clear()
self.syn.set_annotations(annotations)
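# A minimal usage sketch (not part of the original module). The auth token, project id, and
# table name below are placeholders; running this requires real Synapse credentials.
if __name__ == "__main__":
    synapse = Synapse(auth_token="<auth-token>", project_id="syn00000000")
    print(synapse.get_table_names())
    table_id = synapse.get_synapse_id_from_table_name("my_table")
    print(synapse.query_table(table_id).head())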
|
schematic-db
|
/schematic_db-0.0.31-py3-none-any.whl/schematic_db/synapse/synapse.py
|
synapse.py
|
| 0.878686 | 0.188137 |
from enum import Enum
from typing import Any, Optional, TypeVar
from pydantic.dataclasses import dataclass
from pydantic import validator
class ColumnDatatype(Enum):
"""A generic datatype that should be supported by all database types."""
TEXT = "text"
DATE = "date"
INT = "int"
FLOAT = "float"
BOOLEAN = "boolean"
# mypy types so that a class can refer to its own type
X = TypeVar("X", bound="ColumnSchema")
Y = TypeVar("Y", bound="TableSchema")
T = TypeVar("T", bound="DatabaseSchema")
@dataclass()
class ColumnSchema:
"""A schema for a table column (attribute)."""
name: str
datatype: ColumnDatatype
required: bool = False
index: bool = False
@validator("name")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if the string is not empty (has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if len(value) == 0:
raise ValueError(f"{value} is an empty string")
return value
@dataclass()
class ForeignKeySchema:
"""A foreign key in a database schema."""
name: str
foreign_table_name: str
foreign_column_name: str
@validator("name", "foreign_table_name", "foreign_column_name")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if the string is not empty (has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if len(value) == 0:
raise ValueError(f"{value} is an empty string")
return value
def get_column_dict(self) -> dict[str, str]:
"""Returns the foreign key in dict form
Returns:
dict[str, str]: A dictionary of the foreign key columns
"""
return {
"name": self.name,
"foreign_table_name": self.foreign_table_name,
"foreign_column_name": self.foreign_column_name,
}
class TableColumnError(Exception):
"""A generic error involving table columns"""
def __init__(self, message: str, table_name: str) -> None:
"""
Args:
message (str): A message describing the error
table_name (str): The name of the table involved in the error
"""
self.message = message
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
return f"{self.message}: {self.table_name}"
class TableKeyError(Exception):
"""TableKeyError"""
def __init__(
self, message: str, table_name: str, key: Optional[str] = None
) -> None:
"""
Args:
message (str): A message describing the error
table_name (str): The name of the table involved in the error
key (Optional[str], optional): The name of the key involved in the error.
Defaults to None.
"""
self.message = message
self.table_name = table_name
self.key = key
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
return f"{self.message}: {self.table_name}; {self.key}"
@dataclass
class TableSchema:
"""A schema for a database table."""
name: str
columns: list[ColumnSchema]
primary_key: str
foreign_keys: list[ForeignKeySchema]
@validator("name", "primary_key")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if the string is not empty (has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if len(value) == 0:
raise ValueError(f"{value} is an empty string")
return value
def __post_init__(self) -> None:
"""Happens after initialization"""
self.columns.sort(key=lambda x: x.name)
self.foreign_keys.sort(key=lambda x: x.name)
self._check_columns()
self._check_primary_key()
self._check_foreign_keys()
def __eq__(self, other: Any) -> bool:
"""Overrides the default implementation"""
return self.get_sorted_columns() == other.get_sorted_columns()
def get_sorted_columns(self) -> list[ColumnSchema]:
"""Gets the tables columns sorted by name
Returns:
list[ColumnSchema]: Sorted list of columns
"""
return sorted(self.columns, key=lambda x: x.name)
def get_column_names(self) -> list[str]:
"""Returns a list of names of the columns
Returns:
list[str]: A list of names of the columns
"""
return [column.name for column in self.columns]
def get_foreign_key_dependencies(self) -> list[str]:
"""Returns a list of table names the current table depends on
Returns:
list[str]: A list of table names
"""
return [key.foreign_table_name for key in self.foreign_keys]
def get_foreign_key_names(self) -> list[str]:
"""Returns a list of names of the foreign keys
Returns:
List[str]: A list of names of the foreign keys
"""
return [key.name for key in self.foreign_keys]
def get_foreign_key_by_name(self, name: str) -> ForeignKeySchema:
"""Returns foreign key
Args:
name (str): name of the foreign key
Returns:
ForeignKeySchema: The foreign key asked for
"""
return [key for key in self.foreign_keys if key.name == name][0]
def get_column_by_name(self, name: str) -> ColumnSchema:
"""Returns the column
Args:
name (str): name of the column
Returns:
ColumnSchema: The ColumnSchema asked for
"""
return [column for column in self.columns if column.name == name][0]
def _check_columns(self) -> None:
"""Checks that there is at least one column and that there are no duplicate columns
Raises:
TableColumnError: Raised when there are no columns
TableColumnError: Raised when there are duplicate columns
"""
if len(self.columns) == 0:
raise TableColumnError("There are no columns", self.name)
if len(self.get_column_names()) != len(set(self.get_column_names())):
raise TableColumnError("There are duplicate columns", self.name)
def _check_primary_key(self) -> None:
"""Checks that the primary key is in the columns
Raises:
TableKeyError: Raised when the primary key is missing from the columns
"""
if self.primary_key not in self.get_column_names():
raise TableKeyError(
"Primary key is missing from columns", self.name, self.primary_key
)
def _check_foreign_keys(self) -> None:
"""Checks each foreign key"""
for key in self.foreign_keys:
self._check_foreign_key(key)
def _check_foreign_key(self, key: ForeignKeySchema) -> None:
"""Checks that the foreign key exists in the columns and isn't referencing its own table
Args:
key (ForeignKeySchema): A schema for a foreign key
Raises:
TableKeyError: Raised when the foreign key is missing from the columns
TableKeyError: Raised when the foreign key references its own table
"""
if key.name not in self.get_column_names():
raise TableKeyError(
"Foreign key is missing from columns", self.name, key.name
)
if key.foreign_table_name == self.name:
raise TableKeyError(
"Foreign key references its own table", self.name, key.name
)
class SchemaMissingTableError(Exception):
"""When a foreign key references a table that doesn't exist"""
def __init__(
self, foreign_key: str, table_name: str, foreign_table_name: str
) -> None:
"""
Args:
foreign_key (str): The name of the foreign key
table_name (str): The name of the table that the key is in
foreign_table_name (str): The name of the table the key refers to that is missing
"""
self.message = "Foreign key references table which does not exist in schema."
self.foreign_key = foreign_key
self.table_name = table_name
self.foreign_table_name = foreign_table_name
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
msg = (
f"Foreign key '{self.foreign_key}' in table '{self.table_name}' references table "
f"'{self.foreign_table_name}' which does not exist in schema."
)
return msg
class SchemaMissingColumnError(Exception):
"""When a foreign key references a table column that the table doesn't have"""
def __init__(
self,
foreign_key: str,
table_name: str,
foreign_table_name: str,
foreign_table_column: str,
) -> None:
"""
Args:
foreign_key (str): The name of the foreign key
table_name (str): The name of the table that the key is in
foreign_table_name (str): The name of the table the key refers to
foreign_table_column (str): The column in the foreign table that is missing
"""
self.message = "Foreign key references column which does not exist."
self.foreign_key = foreign_key
self.table_name = table_name
self.foreign_table_name = foreign_table_name
self.foreign_table_column = foreign_table_column
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
msg = (
f"Foreign key '{self.foreign_key}' in table '{self.table_name}' references "
f"column '{self.foreign_table_column}' which does not exist in table "
f"'{self.foreign_table_name}'"
)
return msg
@dataclass
class DatabaseSchema:
"""A database agnostic schema"""
table_schemas: list[TableSchema]
def __post_init__(self) -> None:
for schema in self.table_schemas:
self._check_foreign_keys(schema)
def __eq__(self, other: Any) -> bool:
"""Overrides the default implementation"""
return self.get_sorted_table_schemas() == other.get_sorted_table_schemas()
def get_sorted_table_schemas(self) -> list[TableSchema]:
"""Gets the table schemas sorted by name
Returns:
list[TableSchema]: The list of sorted table schemas
"""
return sorted(self.table_schemas, key=lambda x: x.name)
def get_dependencies(self, table_name: str) -> list[str]:
"""Gets the tables dependencies
Args:
table_name (str): The name of the table
Returns:
list[str]: A list of tables names the table depends on
"""
return self.get_schema_by_name(table_name).get_foreign_key_dependencies()
def get_reverse_dependencies(self, table_name: str) -> list[str]:
"""Gets the names of the tables that depend on the input table
Args:
table_name (str): The name of the table
Returns:
list[str]: A list of table names that depend on the input table
"""
return [
schema.name
for schema in self.table_schemas
if table_name in schema.get_foreign_key_dependencies()
]
def get_schema_names(self) -> list[str]:
"""Returns a list of names of the schemas
Returns:
List[str]: A list of names of the schemas
"""
return [schema.name for schema in self.table_schemas]
def get_schema_by_name(self, name: str) -> TableSchema:
"""Returns the schema
Args:
name (str): name of the schema
Returns:
TableSchema: The TableSchema asked for
"""
return [schema for schema in self.table_schemas if schema.name == name][0]
def _check_foreign_keys(self, schema: TableSchema) -> None:
"""Checks all foreign keys
Args:
schema (TableSchema): The schema of the table being checked
"""
for key in schema.foreign_keys:
self._check_foreign_key_table(schema, key)
self._check_foreign_key_column(schema, key)
def _check_foreign_key_table(
self, schema: TableSchema, key: ForeignKeySchema
) -> None:
"""Checks that the table the foreign key refers to exists
Args:
schema (TableSchema): The schema for the table being checked
key (ForeignKeySchema): The foreign key being checked
Raises:
SchemaMissingTableError: Raised when the table a foreign key references is missing
"""
if key.foreign_table_name not in self.get_schema_names():
raise SchemaMissingTableError(
foreign_key=key.name,
table_name=schema.name,
foreign_table_name=key.foreign_table_name,
)
def _check_foreign_key_column(
self, schema: TableSchema, key: ForeignKeySchema
) -> None:
"""Checks that the column the foreign key refers to exists
Args:
schema (TableSchema): The schema for the table being checked
key (ForeignKeySchema): The foreign key being checked
Raises:
SchemaMissingColumnError: Raised when the column a foreign key references is missing
"""
foreign_schema = self.get_schema_by_name(key.foreign_table_name)
if key.foreign_column_name not in foreign_schema.get_column_names():
raise SchemaMissingColumnError(
foreign_key=key.name,
table_name=schema.name,
foreign_table_name=key.foreign_table_name,
foreign_table_column=key.foreign_column_name,
)
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/db_schema/db_schema.py | db_schema.py
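The classes in db_schema.py above compose into a full database description. The following is a minimal sketch; the table and column names are invented for illustration, and the import path simply mirrors the package layout shown in the file header.

from schematic_db.db_schema.db_schema import (
    ColumnDatatype,
    ColumnSchema,
    DatabaseSchema,
    ForeignKeySchema,
    TableSchema,
)

patient = TableSchema(
    name="Patient",
    columns=[ColumnSchema(name="patientId", datatype=ColumnDatatype.TEXT, required=True)],
    primary_key="patientId",
    foreign_keys=[],
)
biospecimen = TableSchema(
    name="Biospecimen",
    columns=[
        ColumnSchema(name="biospecimenId", datatype=ColumnDatatype.TEXT, required=True),
        ColumnSchema(name="patientId", datatype=ColumnDatatype.TEXT),
    ],
    primary_key="biospecimenId",
    foreign_keys=[
        ForeignKeySchema(
            name="patientId",
            foreign_table_name="Patient",
            foreign_column_name="patientId",
        )
    ],
)

# DatabaseSchema.__post_init__ validates every foreign key against the other
# table schemas, raising SchemaMissingTableError or SchemaMissingColumnError
# if a reference cannot be resolved.
database_schema = DatabaseSchema(table_schemas=[patient, biospecimen])
assert database_schema.get_dependencies("Biospecimen") == ["Patient"]
assert database_schema.get_reverse_dependencies("Patient") == ["Biospecimen"]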
# pylint: disable=logging-fstring-interpolation
import warnings
import logging
import pandas as pd
from schematic_db.rdb.rdb import (
RelationalDatabase,
UpsertDatabaseError,
InsertDatabaseError,
)
from schematic_db.manifest_store.manifest_store import ManifestStore
from schematic_db.db_schema.db_schema import TableSchema
from schematic_db.api_utils.api_utils import ManifestMetadataList
logging.getLogger(__name__)
class NoManifestWarning(Warning):
"""Raised when trying to update a database table there are no manifests"""
def __init__(
self, table_name: str, manifest_metadata_list: ManifestMetadataList
) -> None:
"""_summary_
Args:
table_name (str): The name of the table there were no manifests for
manifest_metadata_list (ManifestMetadataList): A list of metadata
for all found manifests
"""
self.message = "There were no manifests found for table"
self.table_name = table_name
self.manifest_metadata_list = manifest_metadata_list
super().__init__(self.message)
def __str__(self) -> str:
return (
f"{self.message}; "
f"Table Name: {self.table_name}; "
f"Manifests: {self.manifest_metadata_list}"
)
class UpdateError(Exception):
"""Raised when there is an error doing a table update"""
def __init__(self, table_name: str, dataset_id: str) -> None:
"""
Args:
table_name (str): The name of the table the upsert occurred in
dataset_id (str): The dataset id of the manifest that was being used to update
"""
self.message = "Error updating table"
self.table_name = table_name
self.dataset_id = dataset_id
super().__init__(self.message)
def __str__(self) -> str:
return (
f"{self.message}; "
f"Table Name: {self.table_name}; "
f"Dataset ID: {self.dataset_id}"
)
class ManifestPrimaryKeyError(Exception):
"""Raised when a manifest is missing its primary key"""
def __init__(
self, table_name: str, dataset_id: str, primary_key: str, columns: list[str]
) -> None:
"""
Args:
table_name (str): The name of the table for which the manifest was downloaded
dataset_id (str): The dataset id of the manifest
primary_key (str): The primary key of the table
columns (list[str]): The columns in the manifest
"""
self.message = "Manifest is missing its primary key"
self.table_name = table_name
self.dataset_id = dataset_id
self.primary_key = primary_key
self.columns = columns
super().__init__(self.message)
def __str__(self) -> str:
return (
f"{self.message}; "
f"Table Name: {self.table_name}; "
f"Dataset ID: {self.dataset_id}; "
f"Primary Key: {self.primary_key}; "
f"Columns: [{','.join(self.columns)}]"
)
class RDBUpdater:
"""An for updating a database."""
def __init__(self, rdb: RelationalDatabase, manifest_store: ManifestStore) -> None:
"""
Args:
rdb (RelationalDatabase): A relational database object to be updated
manifest_store (ManifestStore): A manifest store object to get manifests from
"""
self.rdb = rdb
self.manifest_store = manifest_store
def update_database(self, method: str = "upsert") -> None:
"""Updates all tables in database
Args:
method (str): The method used to update each table, either "upsert" or "insert"
Defaults to "upsert".
"""
logging.info("Updating database")
table_names = self.manifest_store.create_sorted_table_name_list()
for name in table_names:
self.update_table(name, method)
logging.info("Database updated")
def update_table(self, table_name: str, method: str = "upsert") -> None:
"""
Updates a table in the database based on one or more manifests.
If no manifests exist for the table, a warning is raised instead.
Args:
table_name (str): The name of the table to be updated
method (str): The method used to update each table, either "upsert" or "insert"
Defaults to "upsert".
"""
manifest_ids = self.manifest_store.get_manifest_ids(table_name)
# If there are no manifests, a warning is raised and the function returns.
if len(manifest_ids) == 0:
warnings.warn(
NoManifestWarning(
table_name, self.manifest_store.get_manifest_metadata()
)
)
return
for manifest_id in manifest_ids:
self._update_table_with_manifest_id(table_name, manifest_id, method)
def _update_table_with_manifest_id(
self, table_name: str, manifest_id: str, method: str = "upsert"
) -> None:
"""Updates a table in the database with a manifest
Args:
table_name (str): The name of the table
manifest_id (str): The id of the manifest
method (str): The method used to update each table, either "upsert" or "insert"
Defaults to "upsert".
Raises:
ManifestPrimaryKeyError: Raised when the manifest table is missing its primary key
UpdateError: Raised when an UpsertDatabaseError or InsertDatabaseError is caught
"""
table_schema = self.rdb.get_table_schema(table_name)
manifest_table = self._download_manifest(table_name, manifest_id)
if table_schema.primary_key not in list(manifest_table.columns):
raise ManifestPrimaryKeyError(
table_name,
manifest_id,
table_schema.primary_key,
list(manifest_table.columns),
)
normalized_table = self._normalize_table(manifest_table, table_schema)
self._update_table_with_manifest(
normalized_table, table_name, manifest_id, method
)
def _download_manifest(self, table_name: str, manifest_id: str) -> pd.DataFrame:
"""Downloads a manifest, and performs logging
Args:
table_name (str): The name of the table the manifest will be upserted into
manifest_id (str): The id of the manifest
Returns:
(pd.DataFrame): The manifest in pandas.Dataframe form
"""
logging.info(
f"Downloading manifest; table name: {table_name}; manifest id: {manifest_id}"
)
manifest_table: pd.DataFrame = self.manifest_store.download_manifest(
manifest_id
)
logging.info("Finished downloading manifest")
return manifest_table
def _normalize_table(
self,
table: pd.DataFrame,
table_schema: TableSchema,
) -> pd.DataFrame:
"""
Gets the table ready for upsert by selecting only needed columns and removing
duplicate entries
Args:
table (pd.DataFrame): The table to normalize
table_schema (TableSchema): The schema of the table
Returns:
pd.DataFrame: A normalized table
"""
table_columns = set(table_schema.get_column_names())
manifest_columns = set(table.columns)
columns = list(table_columns.intersection(manifest_columns))
table = table[columns]
table = table.drop_duplicates(subset=table_schema.primary_key)
table.reset_index(inplace=True, drop=True)
return table
def _update_table_with_manifest(
self, table: pd.DataFrame, table_name: str, manifest_id: str, method: str
) -> None:
"""Updates the database table with the input table and performs logging
Args:
table (pd.DataFrame): The table to be upserted
table_name (str): The name of the table to be upserted into
manifest_id (str): The id of the manifest
method (str): The method used to update each table, either "upsert" or "insert"
Defaults to "upsert".
Raises:
UpdateError: Raised when there is an UpsertDatabaseError or InsertDatabaseError caught
ValueError: Raised when method is not one of ['insert', 'upsert']
"""
logging.info(
f"Updating manifest; table name: {table_name}; manifest id: {manifest_id}"
)
try:
if method == "upsert":
self.rdb.upsert_table_rows(table_name, table)
elif method == "insert":
self.rdb.insert_table_rows(table_name, table)
else:
raise ValueError(
f"Parameter method must be one of ['insert', 'upsert'] not {method}"
)
except (UpsertDatabaseError, InsertDatabaseError) as exc:
raise UpdateError(table_name, manifest_id) from exc
logging.info("Finished updating manifest")
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb_updater/rdb_updater.py | rdb_updater.py
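The RDBUpdater above only orchestrates; it needs a concrete RelationalDatabase and a configured ManifestStore. The sketch below assumes both already exist (their constructors are not reproduced here) and uses an invented table name.

from schematic_db.rdb_updater.rdb_updater import RDBUpdater

updater = RDBUpdater(rdb=rdb, manifest_store=manifest_store)  # assumed pre-built objects

# Update every table listed by the manifest store, upserting by default.
updater.update_database()

# Or update a single table, insert-only; a table with no manifests only emits
# a NoManifestWarning instead of failing.
updater.update_table("Patient", method="insert")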
# pylint: disable=duplicate-code
from typing import Any
from os import getenv
from datetime import datetime
import pytz
import requests
import pandas
from schematic_db.manifest_store.manifest_metadata_list import ManifestMetadataList
class SchematicAPIError(Exception):
"""When schematic API response status code is anything other than 200"""
def __init__( # pylint:disable=too-many-arguments
self,
endpoint_url: str,
status_code: int,
reason: str,
time: datetime,
params: dict[str, Any],
) -> None:
"""
Args:
endpoint_url (str): The url of the endpoint
status_code (int): The status code given in the response
reason (str): The reason given in the response
time (datetime): The time the API was called
params (dict[str, Any]): The parameters sent with the API call
"""
self.message = "Error accessing Schematic endpoint"
self.endpoint_url = endpoint_url
self.status_code = status_code
self.reason = reason
self.time = time
self.params = params
super().__init__(self.message)
def __str__(self) -> str:
"""
Returns:
str: The description of the error
"""
return (
f"{self.message}; "
f"URL: {self.endpoint_url}; "
f"Code: {self.status_code}; "
f"Reason: {self.reason}; "
f"Time (PST): {self.time}; "
f"Parameters: {self.params}"
)
class SchematicAPITimeoutError(Exception):
"""When schematic API timed out"""
def __init__(
self,
endpoint_url: str,
time: datetime,
params: dict[str, Any],
) -> None:
"""
Args:
endpoint_url (str): The url of the endpoint
time (datetime): The time the API was called
params (dict[str, Any]): The parameters sent with the API call
"""
self.message = "Schematic endpoint timed out"
self.endpoint_url = endpoint_url
self.time = time
self.params = params
super().__init__(self.message)
def __str__(self) -> str:
"""
Returns:
str: The description of the error
"""
return (
f"{self.message}; "
f"URL: {self.endpoint_url}; "
f"Time (PST): {self.time}; "
f"Parameters: {self.params}"
)
def create_schematic_api_response(
endpoint_path: str,
params: dict[str, Any],
timeout: int = 30,
) -> requests.Response:
"""Performs a GET request on the schematic API
Args:
endpoint_path (str): The path for the endpoint in the schematic API
params (dict): The parameters in dict form for the requested endpoint
timeout (int): The amount of seconds the API call has to run
Raises:
SchematicAPIError: When response code is anything other than 200
SchematicAPITimeoutError: When API call times out
Returns:
requests.Response: The response from the API
"""
api_url = getenv("API_URL", "https://schematic.api.sagebionetworks.org/v1/")
endpoint_url = f"{api_url}/{endpoint_path}"
start_time = datetime.now(pytz.timezone("US/Pacific"))
try:
response = requests.get(endpoint_url, params=params, timeout=timeout)
except requests.exceptions.Timeout as exc:
raise SchematicAPITimeoutError(
endpoint_url, start_time, filter_params(params)
) from exc
if response.status_code != 200:
raise SchematicAPIError(
endpoint_url,
response.status_code,
response.reason,
start_time,
filter_params(params),
)
return response
def filter_params(params: dict[str, Any]) -> dict[str, Any]:
"""Removes any parameters from the input dictionary that should not be seen.
Args:
params (dict[str, Any]): A dictionary of parameters
Returns:
dict[str, Any]: A dictionary of parameters with any secrets removed
"""
secret_params = ["access_token"]
for param in secret_params:
params.pop(param, None)
return params
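# Illustrative usage (values invented): secrets are stripped in place before the
# parameters are attached to SchematicAPIError / SchematicAPITimeoutError messages:
#   filter_params({"access_token": "secret", "project_id": "syn123"})
#   -> {"project_id": "syn123"}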
def find_class_specific_properties(schema_url: str, schema_class: str) -> list[str]:
"""Find properties specifically associated with a given class
Args:
schema_url (str): Data Model URL
schema_class (str): The class/name of the component
Returns:
list[str]: A list of properties of a given class/component.
"""
params = {"schema_url": schema_url, "schema_class": schema_class}
response = create_schematic_api_response(
"explorer/find_class_specific_properties", params
)
return response.json()
def get_property_label_from_display_name(
schema_url: str, display_name: str, strict_camel_case: bool = True
) -> str:
"""Converts a given display name string into a proper property label string
Args:
schema_url (str): Data Model URL
display_name (str): The display name to be converted
strict_camel_case (bool, optional): If true the more strict way of converting
to camel case is used. Defaults to True.
Returns:
str: the property label name
"""
params = {
"schema_url": schema_url,
"display_name": display_name,
"strict_camel_case": strict_camel_case,
}
response = create_schematic_api_response(
"explorer/get_property_label_from_display_name", params
)
return response.json()
def get_graph_by_edge_type(schema_url: str, relationship: str) -> list[tuple[str, str]]:
"""Get a subgraph containing all edges of a given type (aka relationship)
Args:
schema_url (str): Data Model URL
relationship (str): Relationship (i.e. parentOf, requiresDependency,
rangeValue, domainValue)
Returns:
list[tuple[str, str]]: A subgraph in the form of a list of tuples.
"""
params = {"schema_url": schema_url, "relationship": relationship}
response = create_schematic_api_response("schemas/get/graph_by_edge_type", params)
return response.json()
def get_project_manifests(
access_token: str, project_id: str, asset_view: str
) -> ManifestMetadataList:
"""Gets all metadata manifest files across all datasets in a specified project.
Args:
access_token (str): access token
project_id (str): Project ID
asset_view (str): ID of view listing all project data assets. For example,
for Synapse this would be the Synapse ID of the fileview listing all
data assets for a given project (i.e. master_fileview in config.yml)
Returns:
ManifestMetadataList: A list of manifests in Synapse
"""
params = {
"access_token": access_token,
"project_id": project_id,
"asset_view": asset_view,
}
response = create_schematic_api_response(
"storage/project/manifests", params, timeout=1000
)
metadata_list = []
for item in response.json():
metadata_list.append(
{
"dataset_id": item[0][0],
"dataset_name": item[0][1],
"manifest_id": item[1][0],
"manifest_name": item[1][1],
"component_name": item[2][0],
}
)
return ManifestMetadataList(metadata_list)
def download_manifest(access_token: str, manifest_id: str) -> pandas.DataFrame:
"""Downloads a manifest as a pd.dataframe
Args:
access_token (str): Access token
manifest_id (str): The synapse id of the manifest
Returns:
pd.DataFrame: The manifest in dataframe form
"""
params = {
"access_token": access_token,
"manifest_id": manifest_id,
"as_json": True,
}
response = create_schematic_api_response("manifest/download", params, timeout=1000)
manifest = pandas.DataFrame(response.json())
return manifest
def is_node_required(schema_url: str, node_label: str) -> bool:
"""Checks if node is required
Args:
schema_url (str): Data Model URL
node_label (str): Label/display name for the node to check
Returns:
bool: Whether or not the node is required
"""
params = {"schema_url": schema_url, "node_display_name": node_label}
response = create_schematic_api_response("schemas/is_node_required", params)
return response.json()
def get_node_validation_rules(schema_url: str, node_display_name: str) -> list[str]:
"""Gets the validation rules for the node
Args:
schema_url (str): Data Model URL
node_display_name (str): Label/display name for the node to check
Returns:
list[str]: A list of validation rules
"""
params = {
"schema_url": schema_url,
"node_display_name": node_display_name,
}
response = create_schematic_api_response(
"schemas/get_node_validation_rules", params
)
return response.json()
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/api_utils/api_utils.py | api_utils.py
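A short, hedged sketch of calling the api_utils wrappers above. The schema URL and Synapse identifiers are placeholders, and every call performs a live GET against the schematic API selected by the API_URL environment variable (falling back to the Sage-hosted endpoint).

from schematic_db.api_utils.api_utils import get_project_manifests, is_node_required

SCHEMA_URL = "https://example.org/data.model.jsonld"  # placeholder data model URL

# Boolean flag reported by the schematic schema explorer.
patient_required = is_node_required(SCHEMA_URL, "Patient")

# Manifest metadata for a project; all three identifiers are placeholders.
manifests = get_project_manifests(
    access_token="<synapse-access-token>",
    project_id="syn00000001",
    asset_view="syn00000002",
)
print(patient_required, manifests)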
import pandas as pd
from schematic_db.rdb.rdb import RelationalDatabase
from schematic_db.query_store.synapse_query_store import QueryStore
class DuplicateColumnError(Exception):
"""Occurs when a query results in a table with duplicate columns"""
def __init__(self, table_name: str) -> None:
"""
Args:
table_name (str): The name of the table
"""
self.message = "Query result has duplicate columns"
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}: {self.table_name}"
class RDBQueryer:
"""Queries a database and uploads the results to a query store."""
def __init__(
self,
rdb: RelationalDatabase,
query_store: QueryStore,
):
"""
Args:
rdb (RelationalDatabase): A relational database object to query
query_store (QueryStore): A query store object that will store the results of the query
"""
self.rdb = rdb
self.query_store = query_store
def store_query_results(self, csv_path: str) -> None:
"""Stores the results of queries
Takes a csv file with two columns named "query" and "table_name", and runs each query,
storing the result in the query_result_store as a table.
Args:
csv_path (str): A path to a csv file.
"""
csv = pd.read_csv(csv_path)
for _, row in csv.iterrows():
self.store_query_result(row["query"], row["table_name"])
def store_query_result(self, query: str, table_name: str) -> None:
"""Stores the result of a query
Args:
query (str): A query in SQL form
table_name (str): The name of the table the result will be stored as
Raises:
DuplicateColumnError: Raised when the query result has duplicate columns
"""
query_result = self.rdb.execute_sql_query(query)
column_names = list(query_result.columns)
if len(column_names) != len(set(column_names)):
raise DuplicateColumnError(table_name)
self.query_store.store_query_result(table_name, query_result)
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb_queryer/rdb_queryer.py | rdb_queryer.py
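A sketch of the query-and-store workflow in rdb_queryer.py; `rdb` and `query_store` are assumed to be pre-configured RelationalDatabase and QueryStore objects, and the CSV content is invented.

import pandas as pd

from schematic_db.rdb_queryer.rdb_queryer import RDBQueryer

# The CSV needs exactly the two columns read by store_query_results.
pd.DataFrame(
    {
        "query": ['SELECT * FROM "Patient"'],
        "table_name": ["patient_query_result"],
    }
).to_csv("queries.csv", index=False)

queryer = RDBQueryer(rdb=rdb, query_store=query_store)  # assumed pre-built objects
queryer.store_query_results("queries.csv")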
# pylint: disable=logging-fstring-interpolation
import logging
from schematic_db.rdb.rdb import RelationalDatabase
from schematic_db.schema.schema import Schema, DatabaseSchema
logging.getLogger(__name__)
class RDBBuilder: # pylint: disable=too-few-public-methods
"""Builds a database schema"""
def __init__(self, rdb: RelationalDatabase, schema: Schema) -> None:
"""
Args:
rdb (RelationalDatabase): A relational database object
schema (Schema): A Schema object
"""
self.rdb = rdb
self.schema = schema
def build_database(self) -> None:
"""Builds the database based on the schema."""
self._drop_all_tables()
database_schema = self._get_database_schema()
self._build_database_from_schema(database_schema)
def _drop_all_tables(self) -> None:
"""Drops all tables from database and performs logging"""
logging.info("Dropping all tables")
self.rdb.drop_all_tables()
logging.info("Dropped all tables")
def _get_database_schema(self) -> DatabaseSchema:
"""Gets the database schema from the schema object, and performs logging
Returns:
DatabaseSchema: A generic schema for the database
"""
logging.info("Getting database schema")
database_schema = self.schema.get_database_schema()
logging.info("Got database schema")
return database_schema
def _build_database_from_schema(self, database_schema: DatabaseSchema) -> None:
"""Builds the database frm a generic schema, and performs logging
Args:
database_schema (DatabaseSchema): A generic schema for the database
"""
logging.info("Building database")
for table_schema in database_schema.table_schemas:
logging.info(f"Adding table to database schema: {table_schema.name}")
self.rdb.add_table(table_schema.name, table_schema)
logging.info("Database built")
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb_builder/rdb_builder.py | rdb_builder.py
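RDBBuilder is a thin orchestration layer, so usage is short. A sketch, assuming `rdb` and `schema` are already-constructed RelationalDatabase and Schema objects:

from schematic_db.rdb_builder.rdb_builder import RDBBuilder

builder = RDBBuilder(rdb=rdb, schema=schema)  # assumed pre-built objects

# Drops every existing table, fetches the generic DatabaseSchema, then adds
# the tables one by one in the order the schema lists them.
builder.build_database()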
from typing import Any
import numpy
import pandas
import sqlalchemy
import sqlalchemy.dialects.postgresql
from sqlalchemy.inspection import inspect
from sqlalchemy import exc
from schematic_db.db_schema.db_schema import ColumnDatatype
from .sql_alchemy_database import SQLAlchemyDatabase, SQLConfig
from .rdb import UpsertDatabaseError
class PostgresDatabase(SQLAlchemyDatabase):
"""PostgresDatabase
- Represents a Postgres database.
- Implements the RelationalDatabase interface.
- Handles Postgres specific functionality.
"""
def __init__(
self,
config: SQLConfig,
verbose: bool = False,
):
"""Init
Args:
config (SQLConfig): A Postgres config
verbose (bool): Sends much more to logging.info
"""
super().__init__(config, verbose, "postgresql")
column_datatypes = self.column_datatypes.copy()
column_datatypes.update(
{
sqlalchemy.dialects.postgresql.base.TEXT: ColumnDatatype.TEXT,
sqlalchemy.dialects.postgresql.base.VARCHAR: ColumnDatatype.TEXT,
sqlalchemy.dialects.postgresql.base.INTEGER: ColumnDatatype.INT,
sqlalchemy.dialects.postgresql.base.DOUBLE_PRECISION: ColumnDatatype.FLOAT,
sqlalchemy.dialects.postgresql.base.FLOAT: ColumnDatatype.FLOAT,
sqlalchemy.dialects.postgresql.base.DATE: ColumnDatatype.DATE,
}
)
self.column_datatypes = column_datatypes
def upsert_table_rows(self, table_name: str, data: pandas.DataFrame) -> None:
"""Inserts and/or updates the rows of the table
Args:
table_name (str): The name of the table to be upserted
data (pandas.DataFrame): The rows to be upserted
Raises:
UpsertDatabaseError: Raised when a SQLAlchemy error caught
"""
table = self._get_table_object(table_name)
data = data.replace({numpy.nan: None})
rows = data.to_dict("records")
table_schema = self._get_current_metadata().tables[table_name]
primary_key = inspect(table_schema).primary_key.columns.values()[0].name
try:
self._upsert_table_rows(rows, table, table_name, primary_key)
except exc.SQLAlchemyError as exception:
raise UpsertDatabaseError(table_name) from exception
def _upsert_table_rows(
self,
rows: list[dict[str, Any]],
table: sqlalchemy.Table,
table_name: str,
primary_key: str,
) -> None:
"""Upserts a pandas dataframe into a Postgres table
Args:
rows (list[dict[str, Any]]): A list of rows of a dataframe to be upserted
table (sqlalchemy.Table): A sqlalchemy table entity to be upserted into
table_name (str): The name of the table to be upserted into
primary_key (str): The name of the primary key of the table being upserted into
"""
statement = sqlalchemy.dialects.postgresql.insert(table).values(rows)
update_columns = {
col.name: col for col in statement.excluded if col.name != primary_key
}
statement = statement.on_conflict_do_update(
constraint=f"{table_name}_pkey", set_=update_columns
)
with self.engine.begin() as conn:
conn.execute(statement)
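# The statement assembled above corresponds roughly to the Postgres upsert idiom,
# assuming the default "<table_name>_pkey" primary-key constraint name:
#   INSERT INTO "<table_name>" (...) VALUES (...)
#   ON CONFLICT ON CONSTRAINT "<table_name>_pkey"
#   DO UPDATE SET <col> = EXCLUDED.<col>, ...   (every column except the primary key)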
def query_table(self, table_name: str) -> pandas.DataFrame:
"""Queries a whole table
Args:
table_name (str): The name of the table to query
Returns:
pandas.DataFrame: The table in pandas.dataframe form
"""
query = f'SELECT * FROM "{table_name}"'
return self.execute_sql_query(query)
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb/postgres.py | postgres.py
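A sketch of using PostgresDatabase directly. SQLConfig is defined in sql_alchemy_database.py and its fields are not shown here, so its construction is left as a placeholder; the table and column names are invented.

import pandas as pd

from schematic_db.rdb.postgres import PostgresDatabase
from schematic_db.rdb.sql_alchemy_database import SQLConfig

config = SQLConfig(...)  # connection settings; see sql_alchemy_database.py for the fields
database = PostgresDatabase(config, verbose=True)

# Upsert two rows keyed on the table's primary key.
rows = pd.DataFrame({"patientId": ["p1", "p2"], "sex": ["F", "M"]})
database.upsert_table_rows("Patient", rows)
print(database.query_table("Patient"))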
from abc import ABC, abstractmethod
import pandas as pd
from schematic_db.db_schema.db_schema import TableSchema
class UpsertDatabaseError(Exception):
"""Raised when a database class catches an error doing an upsert"""
def __init__(self, table_name: str) -> None:
"""
Args:
table_name (str): The name of the table being upserted into
"""
self.message = "Error upserting table"
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}; " f"Table Name: {self.table_name}"
class InsertDatabaseError(Exception):
"""Raised when a database class catches an error doing an insert"""
def __init__(self, table_name: str) -> None:
"""
Args:
table_name (str): The name of the table being inserted into
"""
self.message = "Error inserting table"
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}; " f"Table Name: {self.table_name}"
class RelationalDatabase(ABC):
"""An interface for relational database types"""
@abstractmethod
def get_table_names(self) -> list[str]:
"""Gets the names of the tables in the database
Returns:
list[str]: A list of table names
"""
@abstractmethod
def get_table_schema(self, table_name: str) -> TableSchema:
"""Returns a TableSchema created from the current database table
Args:
table_name (str): The name of the table
Returns:
TableSchema: The schema for the given table
"""
@abstractmethod
def execute_sql_query(self, query: str) -> pd.DataFrame:
"""Executes a valid SQL statement
Should be used when a result is expected.
Args:
query (str): A SQL statement
Returns:
pd.DataFrame: The table
"""
@abstractmethod
def query_table(self, table_name: str) -> pd.DataFrame:
"""Queries a whole table
Args:
table_name (str): The name of the table
Returns:
pd.DataFrame: The table
"""
@abstractmethod
def add_table(self, table_name: str, table_schema: TableSchema) -> None:
"""Adds a table to the schema
Args:
table_name (str): The name of the table
table_schema (TableSchema): The schema for the table being added
"""
@abstractmethod
def drop_table(self, table_name: str) -> None:
"""Drops a table from the schema
Args:
table_name (str): The id(name) of the table to be dropped
"""
@abstractmethod
def drop_all_tables(self) -> None:
"""Drops all tables from the database"""
@abstractmethod
def insert_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
"""Inserts rows into the given table
Args:
table_name (str): The name of the table the rows will be inserted into
data (pd.DataFrame): A pandas.DataFrame. It must contain the primary keys of the table
"""
@abstractmethod
def upsert_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
"""Upserts rows into the given table
Args:
table_name (str): The name of the table the rows will be upserted into
data (pd.DataFrame): A pandas.DataFrame. It must contain the primary keys of the table
"""
@abstractmethod
def delete_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
"""Deletes rows from the given table
Args:
table_name (str): The name of the table the rows will be deleted from
data (pd.DataFrame): A pandas.DataFrame. It must contain the primary keys of the table
"""
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb/rdb.py | rdb.py
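To make the RelationalDatabase contract above concrete, here is a minimal in-memory sketch that satisfies every abstract method. It is illustrative only: tables are plain pandas DataFrames, the upsert and delete methods are stubs, and execute_sql_query raises because no real SQL engine is involved.

# Illustrative in-memory implementation of the RelationalDatabase interface.
# Not part of schematic-db; it only shows which methods a concrete database
# class has to provide.
import pandas as pd

from schematic_db.db_schema.db_schema import TableSchema
from schematic_db.rdb.rdb import RelationalDatabase


class InMemoryDatabase(RelationalDatabase):
    """Stores each table as a pandas DataFrame keyed by table name."""

    def __init__(self) -> None:
        self.tables: dict[str, pd.DataFrame] = {}
        self.schemas: dict[str, TableSchema] = {}

    def get_table_names(self) -> list[str]:
        return list(self.tables)

    def get_table_schema(self, table_name: str) -> TableSchema:
        return self.schemas[table_name]

    def execute_sql_query(self, query: str) -> pd.DataFrame:
        raise NotImplementedError("This sketch does not run real SQL")

    def query_table(self, table_name: str) -> pd.DataFrame:
        return self.tables[table_name]

    def add_table(self, table_name: str, table_schema: TableSchema) -> None:
        self.schemas[table_name] = table_schema
        self.tables[table_name] = pd.DataFrame()

    def drop_table(self, table_name: str) -> None:
        self.tables.pop(table_name, None)
        self.schemas.pop(table_name, None)

    def drop_all_tables(self) -> None:
        self.tables.clear()
        self.schemas.clear()

    def insert_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
        self.tables[table_name] = pd.concat([self.tables[table_name], data])

    def upsert_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
        # A real implementation would match on the primary key; appending is
        # enough to illustrate the method signature.
        self.insert_table_rows(table_name, data)

    def delete_table_rows(self, table_name: str, data: pd.DataFrame) -> None:
        # A real implementation would drop the rows whose primary keys appear
        # in `data`; this sketch leaves the table unchanged.
        pass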
from typing import Any
import pandas
import numpy
import sqlalchemy
import sqlalchemy.dialects.mysql
from sqlalchemy import exc
from schematic_db.db_schema.db_schema import (
ColumnDatatype,
ColumnSchema,
)
from .sql_alchemy_database import SQLAlchemyDatabase, SQLConfig
from .rdb import UpsertDatabaseError
class MySQLDatabase(SQLAlchemyDatabase):
"""MySQLDatabase
- Represents a mysql database.
- Implements the RelationalDatabase interface.
- Handles MYSQL specific functionality.
"""
def __init__(
self,
config: SQLConfig,
verbose: bool = False,
):
"""Init
Args:
config (SQLConfig): A MySQL config
verbose (bool): Sends much more to logging.info
"""
super().__init__(config, verbose, "mysql")
column_datatypes = self.column_datatypes.copy()
column_datatypes.update(
{
sqlalchemy.dialects.mysql.VARCHAR: ColumnDatatype.TEXT,
sqlalchemy.dialects.mysql.TEXT: ColumnDatatype.TEXT,
sqlalchemy.dialects.mysql.INTEGER: ColumnDatatype.INT,
sqlalchemy.dialects.mysql.DOUBLE: ColumnDatatype.FLOAT,
sqlalchemy.dialects.mysql.FLOAT: ColumnDatatype.FLOAT,
sqlalchemy.dialects.mysql.DATE: ColumnDatatype.DATE,
}
)
self.column_datatypes = column_datatypes
def upsert_table_rows(self, table_name: str, data: pandas.DataFrame) -> None:
"""Inserts and/or updates the rows of the table
Args:
table_name (str): The name of the table to be upserted
data (pandas.DataFrame): The rows to be upserted
Raises:
UpsertDatabaseError: Raised when a SQLAlchemy error is caught
"""
table = self._get_table_object(table_name)
data = data.replace({numpy.nan: None})
rows = data.to_dict("records")
for row in rows:
try:
self._upsert_table_row(row, table, table_name)
except exc.SQLAlchemyError as exception:
raise UpsertDatabaseError(table_name) from exception
def _upsert_table_row(
self,
row: dict[str, Any],
table: sqlalchemy.Table,
table_name: str, # pylint: disable=unused-argument
) -> None:
"""Upserts a row into a MySQL table
Args:
row (dict[str, Any]): A row of a dataframe to be upserted
table (sqlalchemy.Table): A sqlalchemy Table to be upserted into
table_name (str): The name of the table to be upserted into (unused)
"""
statement = sqlalchemy.dialects.mysql.insert(table).values(row)
statement = statement.on_duplicate_key_update(**row)
with self.engine.begin() as conn:
conn.execute(statement)
def _get_datatype(
self, column_schema: ColumnSchema, primary_key: str, foreign_keys: list[str]
) -> Any:
"""
Gets the datatype of the column based on its schema
Args:
column_schema (ColumnSchema): The schema of the column
primary_key (str): The primary key of the table (unused)
foreign_keys (list[str]): A list of foreign keys for the table
Returns:
Any: The SQLAlchemy datatype
"""
datatypes = {
ColumnDatatype.TEXT: sqlalchemy.VARCHAR(5000),
ColumnDatatype.DATE: sqlalchemy.Date,
ColumnDatatype.INT: sqlalchemy.Integer,
ColumnDatatype.FLOAT: sqlalchemy.Float,
ColumnDatatype.BOOLEAN: sqlalchemy.Boolean,
}
# Keys need to be max 100 chars
if column_schema.datatype == ColumnDatatype.TEXT and (
column_schema.name == primary_key or column_schema.name in foreign_keys
):
return sqlalchemy.VARCHAR(100)
# Strings that need to be indexed need to be max 1000 chars
if column_schema.index and column_schema.datatype == ColumnDatatype.TEXT:
return sqlalchemy.VARCHAR(1000)
# Otherwise use datatypes dict
return datatypes[column_schema.datatype]
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/rdb/mysql.py | mysql.py
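A usage sketch mirroring the Postgres one earlier; the point of contrast is that MySQLDatabase.upsert_table_rows issues one INSERT ... ON DUPLICATE KEY UPDATE statement per row instead of a single bulk statement. The SQLConfig field names, table name, and data are again assumptions.

# Hypothetical usage of MySQLDatabase.upsert_table_rows; SQLConfig field
# names are assumed, see sql_alchemy_database.py for the real signature.
import pandas

from schematic_db.rdb.mysql import MySQLDatabase
from schematic_db.rdb.sql_alchemy_database import SQLConfig

config = SQLConfig(
    username="root",      # assumed field name
    password="example",   # assumed field name
    host="localhost",     # assumed field name
    name="example_db",    # assumed field name
)
database = MySQLDatabase(config)

# Unlike PostgresDatabase, each row below becomes its own
# INSERT ... ON DUPLICATE KEY UPDATE statement, so large dataframes pay a
# per-row round trip.
rows = pandas.DataFrame({"id": ["p1", "p2"], "age": [42, 35]})
database.upsert_table_rows("Patient", rows)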
# pylint: disable=duplicate-code
from typing import Any
import json
import re
from pydantic.dataclasses import dataclass
from pydantic import validator
@dataclass()
class ManifestMetadata:
"""Metadata for a manifest in Synapse."""
dataset_id: str
dataset_name: str
manifest_id: str
manifest_name: str
component_name: str
@validator("dataset_id", "manifest_id")
@classmethod
def validate_synapse_id(cls, value: str) -> str:
"""Check if string is a valid synapse id
Args:
value (str): A string
Raises:
ValueError: If the value isn't a valid Synapse id
Returns:
(str): The input value
"""
if not re.search("^syn[0-9]+", value):
raise ValueError(f"{value} is not a valid Synapse id")
return value
@validator("dataset_name", "manifest_name", "component_name")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if len(value) == 0:
raise ValueError(f"{value} is an empty string")
return value
def to_dict(self) -> dict[str, str]:
"""Returns object attributes as dict
Returns:
dict[str, str]: dict of object attributes
"""
attribute_dict = vars(self)
attribute_names = [
"dataset_id",
"dataset_name",
"manifest_id",
"manifest_name",
"component_name",
]
return {key: attribute_dict[key] for key in attribute_names}
def __repr__(self) -> str:
"""Prints object as dict"""
return json.dumps(self.to_dict(), indent=4)
class ManifestMetadataList:
"""A list of Manifest Metadata"""
def __init__(self, metadata_input: list[dict[str, Any]]) -> None:
"""
Args:
metadata_input (list[dict[str, Any]]): A list of dicts where each dict has key value
pairs that correspond to the arguments of ManifestMetadata.
"""
metadata_list: list[ManifestMetadata] = []
for item in metadata_input.copy():
try:
metadata = ManifestMetadata(**item)
except ValueError:
pass
else:
metadata_list.append(metadata)
self.metadata_list = metadata_list
def __repr__(self) -> str:
"""Prints each metadata object as dict"""
return json.dumps(
[metadata.to_dict() for metadata in self.metadata_list], indent=4
)
def get_dataset_ids_for_component(self, component_name: str) -> list[str]:
"""Gets the dataset ids from the manifest metadata matching the component name
Args:
component_name (str): The name of the component to get the manifest datasets ids for
Returns:
list[str]: A list of synapse ids for the manifest datasets
"""
return [
metadata.dataset_id
for metadata in self.metadata_list
if metadata.component_name == component_name
]
def get_manifest_ids_for_component(self, component_name: str) -> list[str]:
"""Gets the manifest ids from the manifest metadata matching the component name
Args:
component_name (str): The name of the component to get the manifest ids for
Returns:
list[str]: A list of synapse ids for the manifests
"""
return [
metadata.manifest_id
for metadata in self.metadata_list
if metadata.component_name == component_name
]
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/manifest_store/manifest_metadata_list.py | manifest_metadata_list.py
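A small sketch of how ManifestMetadataList handles mixed input: because __init__ catches ValueError from the pydantic validators, entries with an invalid Synapse id or an empty string are silently dropped rather than raising. The ids and names below are made up.

from schematic_db.manifest_store.manifest_metadata_list import ManifestMetadataList

metadata_list = ManifestMetadataList(
    [
        {
            "dataset_id": "syn111",
            "dataset_name": "patients",
            "manifest_id": "syn222",
            "manifest_name": "patient_manifest.csv",
            "component_name": "Patient",
        },
        {
            # Invalid: dataset_id is not a Synapse id, so this entry is skipped.
            "dataset_id": "not-a-synapse-id",
            "dataset_name": "biospecimens",
            "manifest_id": "syn333",
            "manifest_name": "biospecimen_manifest.csv",
            "component_name": "Biospecimen",
        },
    ]
)

# Only the valid entry survives validation.
print(metadata_list.get_dataset_ids_for_component("Patient"))       # ["syn111"]
print(metadata_list.get_manifest_ids_for_component("Biospecimen"))  # []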
from typing import Optional
import pandas
from deprecation import deprecated
from schematic_db.schema_graph.schema_graph import SchemaGraph
from schematic_db.api_utils.api_utils import ManifestMetadataList
from schematic_db.synapse.synapse import Synapse
from .manifest_store import ManifestStore, ManifestStoreConfig
@deprecated(
deprecated_in="0.0.29",
details="This is both an experimental and temporary class that will be removed in the future.",
)
class SynapseManifestStore(ManifestStore):
"""An interface for interacting with manifests"""
def __init__(self, config: ManifestStoreConfig) -> None:
"""
Args:
config (ManifestStoreConfig): A config with setup values
"""
self.synapse_asset_view_id = config.synapse_asset_view_id
self.synapse = Synapse(config.synapse_auth_token, config.synapse_project_id)
self.schema_graph = SchemaGraph(config.schema_url)
self.manifest_metadata: Optional[ManifestMetadataList] = None
def create_sorted_table_name_list(self) -> list[str]:
"""
Creates a table name list such that tables always come after ones they
depend on.
This order is how tables in a database should be built and/or updated.
Returns:
list[str]: A list of table names
"""
return self.schema_graph.create_sorted_table_name_list()
def get_manifest_metadata(self) -> ManifestMetadataList:
"""Gets the current object's manifest metadata."""
query = (
"SELECT id, name, parentId, Component FROM "
f"{self.synapse_asset_view_id} "
"WHERE type = 'file' AND Component IS NOT NULL AND name LIKE '%csv'"
)
dataframe = self.synapse.execute_sql_query(query)
manifest_list = []
for _, row in dataframe.iterrows():
manifest_list.append(
{
"dataset_id": row["parentId"],
"dataset_name": "none",
"manifest_id": row["id"],
"manifest_name": row["name"],
"component_name": row["Component"],
}
)
return ManifestMetadataList(manifest_list)
def get_manifest_ids(self, name: str) -> list[str]:
"""Gets the manifest ids for a table(component)
Args:
name (str): The name of the table
Returns:
list[str]: The manifest ids for the table
"""
return self.get_manifest_metadata().get_manifest_ids_for_component(name)
def download_manifest(self, manifest_id: str) -> pandas.DataFrame:
"""Downloads the manifest
Args:
manifest_id (str): The synapse id of the manifest
Returns:
pandas.DataFrame: The manifest in dataframe form
"""
return self.synapse.download_csv_as_dataframe(manifest_id)
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/manifest_store/synapse_manifest_store.py | synapse_manifest_store.py
# pylint: disable=duplicate-code
from typing import Optional
import pandas
from schematic_db.api_utils.api_utils import (
get_project_manifests,
download_manifest,
ManifestMetadataList,
)
from schematic_db.schema_graph.schema_graph import SchemaGraph
from .manifest_store import ManifestStore, ManifestStoreConfig
class ManifestMissingPrimaryKeyError(Exception):
"""Raised when a manifest is missing its primary key"""
def __init__(
self,
table_name: str,
dataset_id: str,
primary_key: str,
manifest_columns: list[str],
):
"""
Args:
table_name (str): The name of the table
dataset_id (str): The dataset id for the component
primary_key (str): The name of the primary key
manifest_columns (list[str]): The columns in the manifest
"""
self.message = "Manifest is missing its primary key"
self.table_name = table_name
self.dataset_id = dataset_id
self.primary_key = primary_key
self.manifest_columns = manifest_columns
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
return (
f"{self.message}; table name:{self.table_name}; "
f"dataset_id:{self.dataset_id}; primary keys:{self.primary_key}; "
f"manifest columns:{self.manifest_columns}"
)
class APIManifestStore(ManifestStore):
"""
The APIManifestStore class interacts with the Schematic API to download manifests.
"""
def __init__(self, config: ManifestStoreConfig) -> None:
"""
The APIManifestStore class handles interactions with the schematic API.
The main responsibility is retrieving manifests.
Args:
config (ManifestStoreConfig): A config describing the basic inputs for the manifest store
"""
self.synapse_project_id = config.synapse_project_id
self.synapse_asset_view_id = config.synapse_asset_view_id
self.synapse_auth_token = config.synapse_auth_token
self.schema_graph = SchemaGraph(config.schema_url)
self.manifest_metadata: Optional[ManifestMetadataList] = None
def create_sorted_table_name_list(self) -> list[str]:
"""
Uses the schema graph to create a table name list such that tables always come after ones they
depend on.
This order is how tables in a database should be built and/or updated.
Returns:
list[str]: A list of table names
"""
return self.schema_graph.create_sorted_table_name_list()
def get_manifest_metadata(self) -> ManifestMetadataList:
"""Gets the manifest metadata
Returns:
ManifestMetadataList: the manifest metadata
"""
# When first initialized, manifest metadata is None
if self.manifest_metadata is None:
self.manifest_metadata = get_project_manifests(
access_token=self.synapse_auth_token,
project_id=self.synapse_project_id,
asset_view=self.synapse_asset_view_id,
)
assert self.manifest_metadata is not None
return self.manifest_metadata
def get_manifest_ids(self, name: str) -> list[str]:
"""Gets the manifest ids for a table(component)
Args:
name (str): The name of the table
Returns:
list[str]: The manifest ids for the table
"""
return self.get_manifest_metadata().get_manifest_ids_for_component(name)
def download_manifest(self, manifest_id: str) -> pandas.DataFrame:
"""Downloads the manifest
Args:
manifest_id (str): The synapse id of the manifest
Returns:
pandas.DataFrame: The manifest in dataframe form
"""
manifest = download_manifest(self.synapse_auth_token, manifest_id)
return manifest
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/manifest_store/api_manifest_store.py | api_manifest_store.py
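A hedged usage sketch for APIManifestStore. All ids, the token, and the schema URL are placeholders; the config validators require a URL ending in .jsonld and well-formed Synapse ids, and the calls below only succeed against a live Schematic API with real Synapse assets.

# Hypothetical values throughout; replace with a real schema URL, Synapse
# ids, and auth token before running.
from schematic_db.manifest_store.api_manifest_store import APIManifestStore
from schematic_db.manifest_store.manifest_store import ManifestStoreConfig

config = ManifestStoreConfig(
    schema_url="https://example.org/model.jsonld",  # must end in .jsonld
    synapse_project_id="syn11111111",
    synapse_asset_view_id="syn22222222",
    synapse_auth_token="placeholder-token",
)
store = APIManifestStore(config)

# Tables are ordered so that referenced tables come before the tables that
# depend on them.
for table_name in store.create_sorted_table_name_list():
    for manifest_id in store.get_manifest_ids(table_name):
        manifest = store.download_manifest(manifest_id)
        print(table_name, manifest_id, manifest.shape)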
from abc import ABC, abstractmethod
import re
import pandas
from pydantic.dataclasses import dataclass
from pydantic import validator
import validators
from schematic_db.api_utils.api_utils import ManifestMetadataList
@dataclass()
class ManifestStoreConfig:
"""
A config for a ManifestStore.
Properties:
schema_url (str): A url to the jsonld schema file
synapse_project_id (str): The synapse id to the project where the manifests are stored.
synapse_asset_view_id (str): The synapse id to the asset view that tracks the manifests.
synapse_auth_token (str): A synapse token with download permissions for both the
synapse_project_id and synapse_asset_view_id
"""
schema_url: str
synapse_project_id: str
synapse_asset_view_id: str
synapse_auth_token: str
@validator("schema_url")
@classmethod
def validate_url(cls, value: str) -> str:
"""Validates that the value is a valid URL"""
valid_url = validators.url(value)
if not valid_url:
raise ValueError(f"{value} is not a valid url")
return value
@validator("schema_url")
@classmethod
def validate_is_jsonld(cls, value: str) -> str:
"""Validates that the value is a jsonld file"""
is_jsonld = value.endswith(".jsonld")
if not is_jsonld:
raise ValueError(f"{value} does not end with '.jsonld'")
return value
@validator("synapse_project_id", "synapse_asset_view_id")
@classmethod
def validate_synapse_id(cls, value: str) -> str:
"""Check if string is a valid synapse id"""
if not re.search("^syn[0-9]+", value):
raise ValueError(f"{value} is not a valid Synapse id")
return value
@validator("synapse_auth_token")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)"""
if len(value) == 0:
raise ValueError(f"{value} is an empty string")
return value
class ManifestStore(ABC):
"""An interface for interacting with manifests"""
@abstractmethod
def create_sorted_table_name_list(self) -> list[str]:
"""
Creates a table name list such that tables always come after ones they
depend on.
This order is how tables in a database should be built and/or updated.
Returns:
list[str]: A list of table names
"""
@abstractmethod
def get_manifest_metadata(self) -> ManifestMetadataList:
"""Gets the current object's manifest metadata."""
@abstractmethod
def get_manifest_ids(self, name: str) -> list[str]:
"""Gets the manifest ids for a table(component)
Args:
name (str): The name of the table
Returns:
list[str]: The manifest ids for the table
"""
@abstractmethod
def download_manifest(self, manifest_id: str) -> pandas.DataFrame:
"""Downloads the manifest
Args:
manifest_id (str): The synapse id of the manifest
Returns:
pandas.DataFrame: The manifest in dataframe form
"""
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/manifest_store/manifest_store.py | manifest_store.py
from typing import Optional, Any
from deprecation import deprecated
from schematic_db.db_schema.db_schema import (
ForeignKeySchema,
ColumnSchema,
ColumnDatatype,
)
DATATYPES = {
"str": ColumnDatatype.TEXT,
"float": ColumnDatatype.FLOAT,
"int": ColumnDatatype.INT,
"date": ColumnDatatype.DATE,
}
@deprecated(
deprecated_in="0.0.27",
details="Functionality will be accomplished with future Schematic API calls.",
)
class DatabaseTableConfig: # pylint: disable=too-few-public-methods
"""A config for database specific items for one table"""
def __init__(
self,
name: str,
primary_key: Optional[str] = None,
foreign_keys: Optional[list[dict[str, str]]] = None,
columns: Optional[list[dict[str, Any]]] = None,
) -> None:
"""
Init
"""
self.name = name
self.primary_key = primary_key
if foreign_keys is None:
self.foreign_keys = None
else:
self.foreign_keys = [
ForeignKeySchema(
name=key["column_name"],
foreign_table_name=key["foreign_table_name"],
foreign_column_name=key["foreign_column_name"],
)
for key in foreign_keys
]
if columns is None:
self.columns = None
else:
self.columns = [
ColumnSchema(
name=column["column_name"],
datatype=DATATYPES[column["datatype"]],
required=column["required"],
index=column["index"],
)
for column in columns
]
def _check_column_names(self) -> None:
"""Checks that column names are not duplicated
Raises:
ValueError: Raised when there are duplicate column names
"""
column_names = self._get_column_names()
if column_names is not None:
if len(column_names) != len(list(set(column_names))):
raise ValueError("There are duplicate column names")
def _get_column_names(self) -> Optional[list[str]]:
"""Gets the list of column names in the config
Returns:
list[str]: A list of column names
"""
if self.columns is not None:
return [column.name for column in self.columns]
return None
def _check_foreign_key_name(self) -> None:
"""Checks that foreign keys are not duplicated
Raises:
ValueError: Raised when there are duplicate foreign keys
"""
foreign_keys_names = self._get_foreign_key_names()
if foreign_keys_names is not None:
if len(foreign_keys_names) != len(list(set(foreign_keys_names))):
raise ValueError("There are duplicate foreign key names")
def _get_foreign_key_names(self) -> Optional[list[str]]:
"""Gets the list of foreign key names in the config
Returns:
list[str]: A list of foreign key names
"""
if self.foreign_keys is not None:
return [key.name for key in self.foreign_keys]
return None
class DatabaseConfig:
"""A config for database specific items"""
def __init__(self, tables: list[dict[str, Any]]) -> None:
"""
Init
"""
self.tables: list[DatabaseTableConfig] = [
DatabaseTableConfig(**table) for table in tables
]
self._check_table_names()
def get_primary_key(self, table_name: str) -> Optional[str]:
"""Gets the primary key for a table
Args:
table_name (str): The name of the table
Returns:
Optional[str]: The primary key
"""
table = self._get_table_by_name(table_name)
return None if table is None else table.primary_key
def get_foreign_keys(self, table_name: str) -> Optional[list[ForeignKeySchema]]:
"""Gets the foreign keys for a table
Args:
table_name (str): The name of the table
Returns:
Optional[list[ForeignKeySchema]]: The foreign keys
"""
table = self._get_table_by_name(table_name)
return None if table is None else table.foreign_keys
def get_columns(self, table_name: str) -> Optional[list[ColumnSchema]]:
"""Gets the columns for a table
Args:
table_name (str): The name of the table
Returns:
Optional[list[ColumnSchema]]: The list of columns
"""
table = self._get_table_by_name(table_name)
return None if table is None else table.columns
def get_column(self, table_name: str, column_name: str) -> Optional[ColumnSchema]:
"""Gets a column for a table
Args:
table_name (str): The name of the table to get the column for
column_name (str): The name of the column to get
Returns:
Optional[list[ColumnSchema]]: The list of columns
"""
columns = self.get_columns(table_name)
if columns is None:
return None
columns = [column for column in columns if column.name == column_name]
if len(columns) == 0:
return None
return columns[0]
def _get_table_by_name(self, table_name: str) -> Optional[DatabaseTableConfig]:
"""Gets the config for the table if it exists
Args:
table_name (str): The name of the table
Returns:
Optional[DatabaseTableConfig]: The config for the table if it exists
"""
tables = [table for table in self.tables if table.name == table_name]
if len(tables) == 0:
return None
return tables[0]
def _get_table_names(self) -> list[str]:
"""Gets the list of tables names in the config
Returns:
list[str]: A list of table names
"""
return [table.name for table in self.tables]
def _check_table_names(self) -> None:
"""Checks that the table names are not duplicated
Raises:
ValueError: Raised when there are duplicate table names
"""
n_table_names = len(self._get_table_names())
n_unique_names = len(list(set(self._get_table_names())))
if n_table_names != n_unique_names:
raise ValueError("There are duplicate table names")
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/schema/database_config.py | database_config.py
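A sketch of the dict shape DatabaseConfig expects, inferred from how DatabaseTableConfig unpacks each entry; the table, column, and key names below are made up.

from schematic_db.schema.database_config import DatabaseConfig

# Each entry maps onto DatabaseTableConfig(name=..., primary_key=...,
# foreign_keys=[...], columns=[...]); all keys except "name" are optional.
database_config = DatabaseConfig(
    [
        {
            "name": "Patient",
            "primary_key": "patient_id",
            "columns": [
                {
                    "column_name": "patient_id",
                    "datatype": "str",
                    "required": True,
                    "index": True,
                }
            ],
        },
        {
            "name": "Biospecimen",
            "primary_key": "biospecimen_id",
            "foreign_keys": [
                {
                    "column_name": "patient_id",
                    "foreign_table_name": "Patient",
                    "foreign_column_name": "patient_id",
                }
            ],
        },
    ]
)

print(database_config.get_primary_key("Patient"))           # "patient_id"
print(database_config.get_foreign_keys("Biospecimen"))      # one ForeignKeySchema
print(database_config.get_column("Patient", "patient_id"))  # ColumnSchema(...)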
# pylint: disable=duplicate-code
from typing import Optional
import warnings
from pydantic.dataclasses import dataclass
from pydantic import validator
import validators
from schematic_db.db_schema.db_schema import (
DatabaseSchema,
TableSchema,
ForeignKeySchema,
ColumnSchema,
ColumnDatatype,
)
from schematic_db.api_utils.api_utils import (
find_class_specific_properties,
get_property_label_from_display_name,
is_node_required,
get_node_validation_rules,
SchematicAPIError,
SchematicAPITimeoutError,
)
from schematic_db.schema_graph.schema_graph import SchemaGraph
from .database_config import DatabaseConfig
class NoColumnsWarning(Warning):
"""
Occurs when a database table has no columns returned from find_class_specific_properties().
"""
def __init__(self, message: str) -> None:
"""
Args:
message (str): A message describing the error
"""
self.message = message
super().__init__(self.message)
class MoreThanOneTypeRule(Exception):
"""Raised when a column has more than one validation type rule"""
def __init__(
self,
column_name: str,
type_rules: list[str],
):
"""
Args:
column_name (str): The name of the column
type_rules (list[str]): A list of the type rules
"""
self.message = "Attribute has more than one validation type rule"
self.column_name = column_name
self.type_rules = type_rules
super().__init__(self.message)
def __str__(self) -> str:
return (
f"{self.message}; column name:{self.column_name}; "
f"type_rules:{self.type_rules}"
)
class ColumnSchematicError(Exception):
"""Raised when there is an issue getting data from the Schematic API for a column"""
def __init__(
self,
column_name: str,
table_name: str,
):
"""
Args:
column_name (str): The name of the column
table_name (str): The name of the table
"""
self.message = (
"There was an issue getting data from the Schematic API for the column"
)
self.column_name = column_name
self.table_name = table_name
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}: column name: {self.column_name}; table_name: {self.table_name}"
@dataclass()
class SchemaConfig:
"""
A config for a Schema.
Properties:
schema_url (str): A url to the jsonld schema file
"""
schema_url: str
@validator("schema_url")
@classmethod
def validate_url(cls, value: str) -> str:
"""Validates that the value is a valid URL"""
valid_url = validators.url(value)
if not valid_url:
raise ValueError(f"{value} is not a valid url")
return value
@validator("schema_url")
@classmethod
def validate_is_jsonld(cls, value: str) -> str:
"""Validates that the value is a jsonld file"""
is_jsonld = value.endswith(".jsonld")
if not is_jsonld:
raise ValueError(f"{value} does not end with '.jsonld'")
return value
class Schema:
"""
The Schema class interacts with the Schematic API to create a DatabaseSchema
object.
"""
def __init__(
self,
config: SchemaConfig,
database_config: DatabaseConfig = DatabaseConfig([]),
use_display_names_as_labels: bool = False,
) -> None:
"""
The Schema class handles interactions with the schematic API.
The main responsibilities are creating the database schema, and retrieving manifests.
Args:
config (SchemaConfig): A config describing the basic inputs for the schema object
database_config (DatabaseConfig): Experimental and will be deprecated in the near
future. A config describing optional database specific columns.
use_display_names_as_labels(bool): Experimental and will be deprecated in the near
future. Use when display names and labels are the same in the schema.
"""
self.database_config = database_config
self.schema_url = config.schema_url
self.use_display_names_as_labels = use_display_names_as_labels
self.schema_graph = SchemaGraph(config.schema_url)
self.database_schema: Optional[DatabaseSchema] = None
def get_database_schema(self) -> DatabaseSchema:
"""Gets the current database schema
Returns:
DatabaseSchema: the current database schema
"""
# When first initialized, database schema is None
if self.database_schema is None:
self.update_database_schema()
assert self.database_schema is not None
return self.database_schema
def update_database_schema(self) -> None:
"""Updates the database schema."""
table_names = self.schema_graph.create_sorted_table_name_list()
table_schemas = [
schema
for schema in [self._create_table_schema(name) for name in table_names]
if schema is not None
]
self.database_schema = DatabaseSchema(table_schemas)
def _create_table_schema(self, table_name: str) -> Optional[TableSchema]:
"""Creates the schema for one table in the database, if any column
schemas can be created.
Args:
table_name (str): The name of the table the schema will be created for.
Returns:
Optional[TableSchema]: The config for the table if the table has columns
otherwise None.
"""
# Some components will not have any columns for various reasons
columns = self._create_column_schemas(table_name)
if not columns:
return None
return TableSchema(
name=table_name,
columns=columns,
primary_key=self._get_primary_key(table_name),
foreign_keys=self._get_foreign_keys(table_name),
)
def _create_column_schemas(
self,
table_name: str,
) -> Optional[list[ColumnSchema]]:
"""Create the column schemas for the table, if any can be created.
Args:
table_name (str): The name of the table to create the column schemas for
Returns:
Optional[list[ColumnSchema]]: A list of columns in ColumnSchema form
"""
# the names of the columns to be created, in label(not display) form
column_names = find_class_specific_properties(self.schema_url, table_name)
columns = [
self._create_column_schema(name, table_name) for name in column_names
]
# Some Tables will not have any columns for various reasons
if not columns:
warnings.warn(
NoColumnsWarning(
f"Table {table_name} has no columns, and will be skipped."
)
)
return None
return columns
def _create_column_schema(self, column_name: str, table_name: str) -> ColumnSchema:
"""Creates a schema for column
Args:
column_name (str): The name of the column
table_name (str): The name of the table
Returns:
ColumnSchema: The schema for the column
"""
column = self.database_config.get_column(table_name, column_name)
# Use column config if provided
if column is not None:
return column
# Create column config if not provided
return ColumnSchema(
name=column_name,
datatype=self._get_column_datatype(column_name, table_name),
required=self._is_column_required(column_name, table_name),
index=False,
)
def _is_column_required(self, column_name: str, table_name: str) -> bool:
"""Determines if the column is required in the schema
Args:
column_name (str): The name of the column
table_name (str): The name of the table
Raises:
ColumnSchematicError: Raised when there is an issue with getting a result from the
schematic API
Returns:
bool: Is the column required?
"""
try:
is_column_required = is_node_required(self.schema_url, column_name)
except (SchematicAPIError, SchematicAPITimeoutError) as exc:
raise ColumnSchematicError(column_name, table_name) from exc
return is_column_required
def _get_column_datatype(self, column_name: str, table_name: str) -> ColumnDatatype:
"""Gets the datatype for the column
Args:
column_name (str): The name of the column
table_name (str): The name of the table
Raises:
ColumnSchematicError: Raised when there is an issue with getting a result from the
schematic API
MoreThanOneTypeRule: Raised when the Schematic API returns more than one rule that
indicate the columns datatype
Returns:
ColumnDatatype: The columns datatype
"""
datatypes = {
"str": ColumnDatatype.TEXT,
"float": ColumnDatatype.FLOAT,
"num": ColumnDatatype.FLOAT,
"int": ColumnDatatype.INT,
"date": ColumnDatatype.DATE,
}
# Try to get validation rules from Schematic API
try:
all_validation_rules = get_node_validation_rules(
self.schema_url, column_name
)
except (SchematicAPIError, SchematicAPITimeoutError) as exc:
raise ColumnSchematicError(column_name, table_name) from exc
# Try to get type from validation rules
type_validation_rules = [
rule for rule in all_validation_rules if rule in datatypes
]
if len(type_validation_rules) > 1:
raise MoreThanOneTypeRule(column_name, type_validation_rules)
if len(type_validation_rules) == 1:
return datatypes[type_validation_rules[0]]
# Default to text if there are no validation type rules
return ColumnDatatype.TEXT
def _get_primary_key(self, table_name: str) -> str:
"""Get the primary key for the table
Args:
table_name (str): The name of the table
Returns:
str: The primary key of the table
"""
# Attempt to get the primary key from the config
primary_key_attempt = self.database_config.get_primary_key(table_name)
# Check if the primary key is in the config, otherwise assume "id"
if primary_key_attempt is None:
return "id"
return primary_key_attempt
def _get_foreign_keys(self, table_name: str) -> list[ForeignKeySchema]:
"""Gets a list of foreign keys for an table in the database
Args:
table_name (str): The name of the table the config will be created for.
Returns:
list[ForeignKeySchema]: A list of foreign keys for the table.
"""
# Attempt to get foreign keys from config
foreign_keys_attempt = self.database_config.get_foreign_keys(table_name)
# If there are no foreign keys in config use schema graph to create foreign keys
if foreign_keys_attempt is None:
return self._create_foreign_keys(table_name)
return foreign_keys_attempt
def _create_foreign_keys(self, table_name: str) -> list[ForeignKeySchema]:
"""Create a list of foreign keys for a table in the database using the schema graph
Args:
table_name (str): The name of the table
Returns:
list[ForeignKeySchema]: A list of foreign keys for the table.
"""
# Uses the schema graph to find tables the current table depends on
parent_table_names = self.schema_graph.get_neighbors(table_name)
# Each parent of the current table needs a foreign key to that parent
return [self._create_foreign_key(name) for name in parent_table_names]
def _create_foreign_key(self, foreign_table_name: str) -> ForeignKeySchema:
"""Creates a foreign key schema
Args:
foreign_table_name (str): The name of the table the foreign key is referring to.
Returns:
            ForeignKeySchema: A foreign key schema.
"""
# Assume the foreign key name is <table_name>_id where the table name is the
# name of the table the column the foreign key is in
column_name = self._get_column_name(f"{foreign_table_name}_id")
attempt = self.database_config.get_primary_key(foreign_table_name)
foreign_column_name = "id" if attempt is None else attempt
return ForeignKeySchema(column_name, foreign_table_name, foreign_column_name)
def _get_column_name(self, column_name: str) -> str:
"""Gets the column name of a manifest column
Args:
column_name (str): The name of the column
Returns:
            str: The name to use for the column
"""
if self.use_display_names_as_labels:
return column_name
return get_property_label_from_display_name(self.schema_url, column_name)
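# A minimal usage sketch, assuming a reachable ".jsonld" data model URL and a running
# Schematic API; the URL below is a hypothetical placeholder.
if __name__ == "__main__":
    example_config = SchemaConfig(
        schema_url="https://example.org/models/example.model.jsonld"  # placeholder URL
    )
    example_schema = Schema(example_config)
    # Walks the schema graph and queries the Schematic API to build a DatabaseSchema
    print(example_schema.get_database_schema())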
schematic-db | /schematic_db-0.0.31-py3-none-any.whl/schematic_db/schema/schema.py | schema.py
from errno import ENOENT
from os import pathsep
from re import split
from pkg_resources import resource_exists, \
resource_filename, \
resource_stream, \
resource_string, \
resource_listdir
class InvalidResourceError(Exception):
"""
Args:
uri {String}: The URI which was requested within the given loader's
        namespace that did not exist or was malformed.
"""
def __init__(self, namespace, requested_uri):
self.namespace = namespace
self.requested_uri = requested_uri
self.message = 'Resource does not exist or is declared incorrectly'
self.errno = ENOENT
super(InvalidResourceError, self).__init__(self.message)
def __str__(self):
return '{}({}), "{}" of {}'.format(self.message, self.errno, self.requested_uri, self.namespace)
def __repr__(self):
return self.__str__()
class Loader(object):
"""
Args:
namespace {String}: The namespace within the package (relative to the package root)
to load resources from. Using the magic variable __name__ is suggested as when the script
is run as "__main__" it will load the most recent local resources instead of the cached
egg resources.
        prefix {String}: Set a prefix for all URIs. Use a prefix if resources are centrally
        located in a single place; the URIs will be prefixed automatically by the loader.
"""
def __init__(self, namespace, **opts):
self.namespace = namespace
self.prefix = opts.get('prefix', '')
self.local = opts.get('local', False)
if not self.local:
self.namespace = split(r'\.|\\|\/', self.namespace)[0]
def _resolve(self, uri):
resource_uri = '/'.join([self.prefix] + uri.split(pathsep))
ns = self.namespace
if not resource_exists(ns, resource_uri):
raise InvalidResourceError(ns, resource_uri)
return ns, resource_uri
def read(self, uri):
"""
Read entire contents of resource. Same as open('path...').read()
Args:
uri {String}: URI of the resource.
"""
ns, uri = self._resolve(uri)
return resource_string(ns, uri)
def open(self, uri):
"""
Open a file object like handle to the resource. Same as open('path...')
Args:
uri {String}: URI of the resource.
"""
ns, uri = self._resolve(uri)
return resource_stream(ns, uri)
def filename(self, uri):
"""
Return the "most correct" filename for a resource. Same as os.path.normpath('path...')
Args:
uri {String}: URI of the resource.
"""
ns, uri = self._resolve(uri)
return resource_filename(ns, uri)
def list(self, url):
"""
Return a list of all resources within the given URL
Args:
url {String}: URL of the resources.
"""
ns, uri = self._resolve(url)
        return [url + '/' + item for item in resource_listdir(ns, uri)]
# call Loader() and pass `schematic`, which is the global package namespace
LOADER = Loader('schematic', prefix='etc')
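# A minimal usage sketch, assuming a resource is bundled under the "etc" prefix of the
# "schematic" package; the resource path below is a hypothetical placeholder.
if __name__ == "__main__":
    try:
        print(LOADER.filename("validation_schemas/model.schema.json"))  # placeholder resource
    except InvalidResourceError as error:
        print(error)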
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/loader.py | loader.py
import os
import yaml
class Configuration(object):
def __init__(self):
# path to config.yml file
self.CONFIG_PATH = None
# path to credentials.json file
self.CREDS_PATH = None
# path to token.pickle file
self.TOKEN_PICKLE = None
# path to service account credentials file
self.SERVICE_ACCT_CREDS = None
# path to synapse config file
self.SYNAPSE_CONFIG_PATH = None
# entire configuration data
self.DATA = None
def __getattribute__(self, name):
value = super().__getattribute__(name)
if value is None and "SCHEMATIC_CONFIG" in os.environ:
self.load_config_from_env()
value = super().__getattribute__(name)
elif value is None and "SCHEMATIC_CONFIG" not in os.environ:
raise AttributeError(
"The '%s' configuration field was accessed, but it hasn't been "
"set yet, presumably because the schematic.CONFIG.load_config() "
"method hasn't been run yet. Alternatively, you can re-run this "
"code with the 'SCHEMATIC_CONFIG' environment variable set to "
"the config.yml file, which will be automatically loaded."
% name
)
return value
def __getitem__(self, key):
return self.DATA[key]
def get(self, key, default):
try:
value = self[key]
        except (AttributeError, KeyError):
value = default
return value
@staticmethod
def load_yaml(file_path: str) -> dict:
with open(file_path, 'r') as stream:
try:
config_data = yaml.safe_load(stream)
except yaml.YAMLError as exc:
print(exc)
return None
return config_data
def normalize_path(self, path):
# Retrieve parent directory of the config to decode relative paths
parent_dir = os.path.dirname(self.CONFIG_PATH)
# Ensure absolute file paths
if not os.path.isabs(path):
path = os.path.join(parent_dir, path)
# And lastly, normalize file paths
return os.path.normpath(path)
def load_config_from_env(self):
schematic_config = os.environ["SCHEMATIC_CONFIG"]
print(
"Loading config YAML file specified in 'SCHEMATIC_CONFIG' "
"environment variable: %s" % schematic_config
)
return self.load_config(schematic_config)
def load_config(self, config_path=None):
# If config_path is None, try loading from environment
if config_path is None and "SCHEMATIC_CONFIG" in os.environ:
return self.load_config_from_env()
# Otherwise, raise an error
elif config_path is None and "SCHEMATIC_CONFIG" not in os.environ:
raise ValueError(
"No configuration file provided to the `config_path` argument "
"in `load_config`()`, nor was one specified in the "
"'SCHEMATIC_CONFIG' environment variable. Quitting now..."
)
# Load configuration YAML file
config_path = os.path.expanduser(config_path)
config_path = os.path.abspath(config_path)
self.DATA = self.load_yaml(config_path)
# Update module-level configuration constants
self.CONFIG_PATH = config_path
self.CREDS_PATH = self.DATA["definitions"]["creds_path"]
self.TOKEN_PICKLE = self.DATA["definitions"]["token_pickle"]
self.SERVICE_ACCT_CREDS = self.DATA["definitions"]["service_acct_creds"]
self.SYNAPSE_CONFIG_PATH = self.DATA["definitions"]["synapse_config"]
# Normalize all file paths as absolute file paths
self.CONFIG_PATH = self.normalize_path(self.CONFIG_PATH)
self.CREDS_PATH = self.normalize_path(self.CREDS_PATH)
self.TOKEN_PICKLE = self.normalize_path(self.TOKEN_PICKLE)
self.SERVICE_ACCT_CREDS = self.normalize_path(self.SERVICE_ACCT_CREDS)
self.SYNAPSE_CONFIG_PATH = self.normalize_path(self.SYNAPSE_CONFIG_PATH)
# Return self.DATA as a side-effect
return self.DATA
CONFIG = Configuration()
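# A minimal usage sketch, assuming a config YAML with the "definitions" keys referenced
# in load_config() above; "config.yml" below is a placeholder path.
if __name__ == "__main__":
    CONFIG.load_config("config.yml")  # alternatively, set the SCHEMATIC_CONFIG env variable
    print(CONFIG.CREDS_PATH)
    print(CONFIG.get("definitions", default={}))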
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/configuration.py | configuration.py
import click
import click_log
import logging
import sys
from jsonschema import ValidationError
from schematic.models.metadata import MetadataModel
from schematic.utils.cli_utils import get_from_config, fill_in_from_config
from schematic import CONFIG
logger = logging.getLogger(__name__)
click_log.basic_config(logger)
CONTEXT_SETTINGS = dict(help_option_names=['--help', '-h']) # help options
# invoke_without_command=True -> run the group callback even when no subcommand is given, instead of just printing help
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
@click_log.simple_verbosity_option(logger)
@click.option('-c', '--config', envvar='SCHEMATIC_CONFIG', help='Path to schematic configuration file.')
@click.pass_context
def model(ctx, config): # use as `schematic model ...`
"""
Sub-commands for Metadata Model related utilities/methods.
"""
try:
logger.debug(f"Loading config file contents in '{config}'")
ctx.obj = CONFIG.load_config(config)
except ValueError as e:
logger.error("'--config' not provided or environment variable not set.")
logger.exception(e)
sys.exit(1)
# prototype based on submit_metadata_manifest()
@model.command('submit', short_help='Validation (optional) and submission of manifest files.')
@click_log.simple_verbosity_option(logger)
@click.option('-mp', '--manifest_path', help='Path to the user-populated manifest file.', required=True)
@click.option('-d', '--dataset_id', help='SynID of existing dataset on Synapse.', required=True)
@click.option('-vc', '--validate_component', help='Component to be used for validation', default=None)
@click.pass_obj
def submit_manifest(ctx, manifest_path, dataset_id, validate_component):
"""
Running CLI with manifest validation (optional) and submission options.
"""
jsonld = get_from_config(CONFIG.DATA, ("model", "input", "location"))
model_file_type = get_from_config(CONFIG.DATA, ("model", "input", "file_type"))
metadata_model = MetadataModel(inputMModelLocation=jsonld,
inputMModelLocationType=model_file_type)
try:
success = metadata_model.submit_metadata_manifest(manifest_path=manifest_path,
dataset_id=dataset_id,
validate_component=validate_component)
if success:
logger.info(f"File at '{manifest_path}' was successfully associated "
f"with dataset '{dataset_id}'.")
except ValueError:
logger.error(f"Component '{validate_component}' is not present in '{jsonld}', or is invalid.")
except ValidationError:
logger.error(f"Validation errors resulted while validating with '{validate_component}'.")
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/models/commands.py | commands.py
## Usage of methods in `synapse.store` module
_Note: Refer to the `store_usage` module within the `examples/` directory here for the snippets._
**Make sure to configure the values of `"username"` and `"password"` in the `.synapseConfig` file as described in the main README.**
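The snippets below assume a `syn_store` object has already been created. A minimal setup sketch, mirroring the `store_usage` module in the `examples/` directory, looks like this:

```python
import pandas as pd
import synapseclient

from schematic.store.synapse import SynapseStorage
from schematic import CONFIG

syn = synapseclient.Synapse(configPath=CONFIG.SYNAPSE_CONFIG_PATH)
syn.login()
syn_store = SynapseStorage(syn=syn)
```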
To retrieve a list of all the Synapse projects accessible to the current user (i.e., the user whose credentials are registered with the instance of `synapseclient.Synapse`), run the following:
```python
projects_list = syn_store.getStorageProjects()
print("Testing retrieval of project list from Synapse...")
projects_df = pd.DataFrame(projects_list, columns=["Synapse ID", "Project Name"])
print(projects_df)
```
The above snippet returns a dataframe with 'Synapse ID' in the first column and 'Project Name' in the second.
From this list, select any project of your choice. Feed the synapse ID of the selected project to the `getStorageDatasetsInProject()` method as follows:
_The below example uses the synapse ID of the `HTAN CenterA` project._
```python
datasets_list = syn_store.getStorageDatasetsInProject(projectId="syn20977135")
print("Testing retrieval of dataset list within a given storage project from Synapse...")
datasets_df = pd.DataFrame(datasets_list, columns=["Synapse ID", "Dataset Name"])
print(datasets_df)
```
Similarly, from the above list of datasets, select any dataset of your choice, and feed the synapse ID of that dataset to the `getFilesInStorageDataset()` method as follows:
_The below example uses the synapse ID of "HTAN_CenterA_BulkRNAseq_AlignmentDataset_1" dataset._
```python
files_list = syn_store.getFilesInStorageDataset(datasetId="syn22125525")
print("Testing retrieval of file list within a given storage dataset from Synapse")
files_df = pd.DataFrame(files_list, columns=["Synapse ID", "File Name"])
print(files_df)
```
Once you have generated/filled out/validated a metadata manifest file, and want to associate it with a synapse dataset/entity, do the following:
```python
print("Testing association of entities with annotation from manifest...")
manifest_syn_id = syn_store.associateMetadataWithFiles(CONFIG["synapse"]["manifest_filename"], "syn21984120")
print(manifest_syn_id)
```
_Note: Make sure you have the right permissions to the project before executing the above block of code._
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/store/README.md | README.md
import synapseclient
import pandas as pd
import os
from schematic.store.synapse import SynapseStorage
from schematic import CONFIG
# create an instance of synapseclient.Synapse() and login
syn = synapseclient.Synapse(configPath=CONFIG.SYNAPSE_CONFIG_PATH)
try:
syn.login()
except synapseclient.core.exceptions.SynapseNoCredentialsError:
print("Please make sure the 'username' and 'password'/'api_key' values have been filled out in .synapseConfig.")
except synapseclient.core.exceptions.SynapseAuthenticationError:
print("Please make sure the credentials in the .synapseConfig file are correct.")
syn_store = SynapseStorage(syn=syn)
# testing the retrieval of list of projects (associated with current user) from synapse
projects_list = syn_store.getStorageProjects()
print("Testing retrieval of project list from Synapse...")
# create pandas df from the list of projects to make results more presentable
projects_df = pd.DataFrame(projects_list, columns=["Synapse ID", "Project Name"])
print(projects_df)
# testing the retrieval of list of datasets (associated with given project) from Synapse
# synapse ID for the "HTAN CenterA" project
datasets_list = syn_store.getStorageDatasetsInProject(projectId="syn20977135")
print("Testing retrieval of dataset list within a given storage project from Synapse...")
datasets_df = pd.DataFrame(datasets_list, columns=["Synapse ID", "Dataset Name"])
print(datasets_df)
# testing the retrieval of list of files (associated with given dataset) from Synapse
# synapse ID of the "HTAN_CenterA_BulkRNAseq_AlignmentDataset_1" dataset
files_list = syn_store.getFilesInStorageDataset(datasetId="syn22125525")
print("Testing retrieval of file list within a given storage dataset from Synapse")
files_df = pd.DataFrame(files_list, columns=["Synapse ID", "File Name"])
print(files_df)
# testing the association of entities with annotation(s) from manifest
# synapse ID of "HTAN_CenterA_FamilyHistory" dataset and associating with it a validated manifest
MANIFEST_LOC = CONFIG["synapse"]["manifest_filename"]
print("Testing association of entities with annotation from manifest...")
manifest_syn_id = syn_store.associateMetadataWithFiles(MANIFEST_LOC, "syn21984120")
print(manifest_syn_id)
# testing the successful retrieval of all manifests associated with a project, accessible by the current user
print("Testing retrieval of all manifests associated with projects accessible by user...")
manifests_list = syn_store.getAllManifests()
manifests_df = pd.DataFrame(manifests_list)
print(manifests_df)
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/store/examples/store_usage.py | store_usage.py
__author__ = "Jaakko Salonen"
__copyright__ = "Copyright 2011-2012, Jaakko Salonen"
__version__ = "0.5.0"
__license__ = "MIT"
__status__ = "Prototype"
from urllib.parse import unquote
from copy import copy
import sys
if sys.version_info.major == 3:
unicode = str
try:
from rdflib import BNode, URIRef
except ImportError:
# Fallback if rdflib is not present
class BNode(object):
def __init__(self, val):
self.val = val
def n3(self):
return unicode('_:'+self.val)
class URIRef(unicode): pass
class Curie(object):
""" Curie Datatype Class
Examples:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
>>> dc_title = Curie('http://purl.org/dc/elements/1.1/title', nss)
>>> dc_title.curie
u'dc:title'
>>> dc_title.uri
u'http://purl.org/dc/elements/1.1/title'
>>> dc_title.curie
u'dc:title'
>>> nss['example'] = 'http://www.example.org/'
>>> iri_test = Curie('http://www.example.org/D%C3%BCrst', nss)
>>> iri_test.uri
u'http://www.example.org/D\\xfcrst'
>>> iri_test.curie
u'example:D%C3%BCrst'
"""
def __init__(self, uri, namespaces=dict()):
self.namespaces = namespaces
        self.uri = unquote(uri) if isinstance(uri, unicode) else unicode(unquote(uri), 'utf-8')  # already text on Python 3; decode bytes on Python 2
self.curie = copy(self.uri)
for ns in self.namespaces:
self.curie = uri.replace(u''+self.namespaces['%s'%ns], u"%s:" % ns)
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return self.curie
def uri2curie(uri, namespaces):
""" Convert URI to CURIE
Define namespaces we want to use:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
Converting a string URI to CURIE
>>> uri2curie('http://purl.org/dc/elements/1.1/title', nss)
u'dc:title'
RDFLib data type conversions:
URIRef to CURIE
>>> uri2curie(URIRef('http://purl.org/dc/elements/1.1/title'), nss)
u'dc:title'
Blank node to CURIE
>>> uri2curie(BNode('blanknode1'), nss)
u'_:blanknode1'
"""
# Use n3() method if BNode
if isinstance(uri, BNode):
result = uri.n3()
else:
result = uri
# result = unicode(uri)
for ns in namespaces:
ns_raw = u'%s' % namespaces['%s'%ns]
if ns_raw == 'http://www.w3.org/2002/07/owl#uri':
ns_raw = 'http://www.w3.org/2002/07/owl#'
result = result.replace(ns_raw, u"%s:" % ns)
result = result.replace(u'http://www.w3.org/2002/07/owl#', 'owl:')
return result
def curie2uri(curie, namespaces):
""" Convert CURIE to URI
TODO: testing
"""
result = unicode(curie)
for ns in namespaces:
result = result.replace(u"%s:" % ns, u''+namespaces['%s'%ns])
return URIRef(result)
if __name__ == "__main__":
import doctest
doctest.testmod()
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/schemas/curie.py | curie.py
## Usage of methods in `schematic.schemas.explorer` module
Path to the data model/schema that you want to load using the `SchemaExplorer` object:
```python
PATH_TO_JSONLD = CONFIG["model"]["input"]["location"]
```
Create an object of the SchemaExplorer() class:
```python
schema_explorer = SchemaExplorer()
```
Check if object has been instantiated or not:
```python
if isinstance(schema_explorer, SchemaExplorer):
print("'schema_explorer' - an object of the SchemaExplorer class has been created successfully.")
else:
print("object of class SchemaExplorer could not be created.")
```
By default the schema explorer loads the biothings schema. To explicitly load a different data model/JSON-LD schema,
use `load_schema()`:
```python
schema_explorer.load_schema(PATH_TO_JSONLD)
print("schema at {} has been loaded.".format(PATH_TO_JSONLD))
```
Get the networkx graph generated from the json-ld:
```python
nx_graph = schema_explorer.get_nx_schema()
```
Check if `nx_graph` has been instantiated correctly:
```python
if isinstance(nx_graph, nx.MultiDiGraph):
print("'nx_graph' - object of class MultiDiGraph has been retreived successfully.")
else:
print("object of class SchemaExplorer could not be retreived.")
```
Check if a particular class is in the current `HTAN JSON-LD` schema (or any schema that has been loaded):
```python
TEST_CLASS = 'Sequencing'
is_or_not = schema_explorer.is_class_in_schema(TEST_CLASS)
if is_or_not == True:
print("The class {} is present in the schema.".format(TEST_CLASS))
else:
print("The class {} is not present in the schema.".format(TEST_CLASS))
```
Generate graph visualization of the entire HTAN JSON-LD schema using `graphviz` package:
```python
gv_digraph = schema_explorer.full_schema_graph()
```
Since the graph is very big, we will generate an svg viz. of it. Please allow some time for the visualization to be rendered:
```python
gv_digraph.format = 'svg'
gv_digraph.render('viz/HTAN-GV', view=True)
print("The svg visualization of the entire schema has been rendered.")
```
_Note: The above visualization is too big to be rendered here, but see the sub-schema below for a small visualization._
Generate graph visualization of a sub-schema:
```python
seq_subgraph = schema_explorer.sub_schema_graph(TEST_CLASS, "up")
seq_subgraph.format = 'svg'
seq_subgraph.render('SUB-GV', view=True)
print("The svg visualization of the sub-schema with {} as the source node has been rendered.".format(TEST_CLASS))
```
_Fig.: Output from the execution of the above block of code: an SVG visualization ("SUB-GV") of the sub-schema with "Sequencing" as the source node (image not reproduced here)._
Returns list of successors of a node:
```python
seq_children = schema_explorer.find_children_classes(TEST_CLASS)
print("These are the children of {} class: {}".format(TEST_CLASS, seq_children))
```
Returns list of parents of a node:
```python
seq_parents = schema_explorer.find_parent_classes(TEST_CLASS)
print("These are the parents of {} class: {}".format(TEST_CLASS, seq_parents))
```
Find the properties that are associated with a class:
```python
PROP_CLASS = 'BiologicalEntity'
class_props = schema_explorer.find_class_specific_properties(PROP_CLASS)
print("The properties associated with class {} are: {}".format(PROP_CLASS, class_props))
```
Find the schema classes that inherit from a given class:
```python
inh_classes = schema_explorer.find_child_classes("Assay")
print("classes that inherit from class 'Assay' are: {}".format(inh_classes))
```
Get all details about a specific class in the schema:
```python
class_details = schema_explorer.explore_class(TEST_CLASS)
print("information/details about class {} : {} ".format(TEST_CLASS, class_details))
```
Get all details about a specific property in the schema:
```python
TEST_PROP = 'increasesActivityOf'
prop_details = schema_explorer.explore_property(TEST_PROP)
print("information/details about property {} : {}".format(TEST_PROP, prop_details))
```
Get name/label of the property associated with a given class' display name:
```python
prop_label = schema_explorer.get_property_label_from_display_name("Basic Statistics")
print("label of the property associated with 'Basic Statistics': {}".format(prop_label))
```
Get name/label of the class associated with a given class' display name:
```python
class_label = schema_explorer.get_property_label_from_display_name("Basic Statistics")
print("label of the class associated with 'Basic Statistics': {}".format(class_label))
```
Generate template of class in schema:
```python
class_temp = schema_explorer.generate_class_template()
print("generic template of a class in the schema/data model: {}".format(class_temp))
```
Modified `TEST_CLASS ("Sequencing")` based on the above generated template:
```python
class_mod = {
"@id": "bts:Sequencing",
"@type": "rdfs:Class",
"rdfs:comment": "Modified Test: Module for next generation sequencing assays",
"rdfs:label": "Sequencing",
"rdfs:subClassOf": [
{
"@id": "bts:Assay"
}
],
"schema:isPartOf": {
"@id": "http://schema.biothings.io"
},
"sms:displayName": "Sequencing",
"sms:required": "sms:false"
}
```
Make edits to `TEST_CLASS` based on the above template and pass it to `edit_class()`:
```python
schema_explorer.edit_class(class_info=class_mod)
```
Verify that the comment associated with `TEST_CLASS` has indeed been changed:
```python
class_details = schema_explorer.explore_class(TEST_CLASS)
print("Modified {} details : {}".format(TEST_CLASS, class_details))
```
## Usage of methods in `schematic.schemas.generator` module
Create an object of the `SchemaGenerator` class:
```python
schema_generator = SchemaGenerator(PATH_TO_JSONLD)
```
Check if object has been properly instantiated or not:
```python
if isinstance(schema_generator, SchemaGenerator):
print("'schema_generator' - an object of the SchemaGenerator class has been created successfully.")
else:
print("object of class SchemaGenerator could not be created.")
```
Get the list of out-edges from a specific node, based on a particular type of relationship:
```python
TEST_NODE = "Sequencing"
TEST_REL = "parentOf"
out_edges = schema_generator.get_edges_by_relationship(TEST_NODE, TEST_REL)
if out_edges:
print("The out-edges from class {}, based on {} relationship are: {}".format(TEST_NODE, TEST_REL, out_edges))
else:
print("The class does not have any out-edges.")
```
Get the list of nodes that are adjacent to a given node, based on a particular type of relationship:
```python
adj_nodes = schema_generator.get_adjacent_nodes_by_relationship(TEST_NODE, TEST_REL)
if adj_nodes:
print("The node(s) adjacent to {}, based on {} relationship are: {}".format(TEST_NODE, TEST_REL, adj_nodes))
else:
print("The class does not have any adjacent nodes.")
```
Get the list of descendants (all nodes that are reachable from a given node) of a node:
```python
desc_nodes = schema_generator.get_descendants_by_edge_type(TEST_NODE, TEST_REL)
if desc_nodes:
print("The descendant(s) from {} are: {}".format(TEST_NODE, desc_nodes))
else:
print("The class does not have descendants.")
```
Get the list of components that are associated with a given component:
```python
TEST_COMP = "Patient"
req_comps = schema_generator.get_component_requirements(TEST_COMP)
if req_comps:
print("The component(s) that are associated with a given component: {}".format(req_comps))
else:
print("There are no components associated with {}".format(TEST_COMP))
```
Get the list of immediate dependencies of a particular node:
```python
node_deps = schema_generator.get_node_dependencies(TEST_COMP)
if node_deps:
print("The immediate dependencies of {} are: {}".format(TEST_COMP, node_deps))
else:
print("The node has no immediate dependencies.")
```
Get the `label` associated with a node, based on the node's display name:
```python
try:
node_label = schema_generator.get_node_label(TEST_NODE)
print("The label name for the node {} is: {}".format(TEST_NODE, node_label))
except KeyError:
print("Please try a valid node name.")
```
Get the `definition/comment` associated with a given node:
```python
try:
node_def = schema_generator.get_node_definition(TEST_NODE)
print("The node definition for node {} is: {}".format(TEST_NODE, node_def))
except KeyError:
print("Please try a valid node name.")
```
Gather all the dependencies and value-constraints associated with a particular node
```python
json_schema = schema_generator.get_json_schema_requirements(TEST_COMP, "Patient-Schema")
print("The JSON schema based on {} as source node is:".format(TEST_COMP))
print(json_schema)
```
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/schemas/README.md | README.md
import os
from jsonschema import validate
from schematic.utils.io_utils import load_schemaorg, load_json, load_default
from schematic.utils.general import str2list, dict2list, find_duplicates
from schematic.utils.curie_utils import expand_curies_in_schema, extract_name_from_uri_or_curie
from schematic.utils.validate_utils import validate_class_schema, validate_property_schema, validate_schema
from schematic import CONFIG
class SchemaValidator():
"""Validate Schema against SchemaOrg standard
    Validation Criteria:
1. Data Structure wise:
> "@id", "@context", "@graph"
> Each element in "@graph" should contain "@id", "@type", "rdfs:comment",
"rdfs:label", "sms:displayName"
> validate against JSON Schema
> Should validate the whole structure, and also validate property and
value separately
2. Data Content wise:
> "@id" field should match with "rdfs:label" field
> all prefixes used in the file should be defined in "@context"
> There should be no duplicate "@id"
> Class specific
> rdfs:label field should be capitalize the first character of each
word for a class;
> the value of "rdfs:subClassOf" should be present in the schema or in
the core vocabulary
> sms:displayName ideally should contain capitalized words separated by space, but that's not enforced by validation
> Property specific
> rdfs:label field should be cammelCase
> the value of "schema:domainIncludes" should be present in the schema
or in the core vocabulary
> the value of "schema:rangeIncludes" should be present in the schema
or in the core vocabulary
> sms:displayName ideally should contain capitalized words separated by space, but that's not enforced by validation
TODO: add dependencies and component dependencies to class structure documentation; as well as value range and required property
"""
def __init__(self, schema):
self.schemaorg = {'schema': load_schemaorg(),
'classes': [],
'properties': []}
for _schema in self.schemaorg['schema']['@graph']:
for _record in _schema["@graph"]:
if "@type" in _record:
_type = str2list(_record["@type"])
if "rdfs:Property" in _type:
self.schemaorg['properties'].append(_record["@id"])
elif "rdfs:Class" in _type:
self.schemaorg['classes'].append(_record["@id"])
self.extension_schema = {'schema': expand_curies_in_schema(schema),
'classes': [],
'properties': []}
for _record in self.extension_schema['schema']["@graph"]:
_type = str2list(_record["@type"])
if "rdfs:Property" in _type:
self.extension_schema['properties'].append(_record["@id"])
elif "rdfs:Class" in _type:
self.extension_schema['classes'].append(_record["@id"])
self.all_classes = self.schemaorg['classes'] + self.extension_schema['classes']
def validate_class_label(self, label_uri):
""" Check if the first character of class label is capitalized
"""
label = extract_name_from_uri_or_curie(label_uri)
assert label[0].isupper()
def validate_property_label(self, label_uri):
""" Check if the first character of property label is lower case
"""
label = extract_name_from_uri_or_curie(label_uri)
assert label[0].islower()
def validate_subclassof_field(self, subclassof_value):
""" Check if the value of "subclassof" is included in the schema file
"""
subclassof_value = dict2list(subclassof_value)
for record in subclassof_value:
assert record["@id"] in self.all_classes
def validate_domainIncludes_field(self, domainincludes_value):
""" Check if the value of "domainincludes" is included in the schema
file
"""
domainincludes_value = dict2list(domainincludes_value)
for record in domainincludes_value:
assert record["@id"] in self.all_classes, "value of domainincludes not recorded in schema: %r" % domainincludes_value
def validate_rangeIncludes_field(self, rangeincludes_value):
""" Check if the value of "rangeincludes" is included in the schema
file
"""
rangeincludes_value = dict2list(rangeincludes_value)
for record in rangeincludes_value:
assert record["@id"] in self.all_classes
def check_whether_atid_and_label_match(self, record):
""" Check if @id field matches with the "rdfs:label" field
"""
_id = extract_name_from_uri_or_curie(record["@id"])
assert _id == record["rdfs:label"], "id and label do not match: %r" % record
def check_duplicate_labels(self):
""" Check for duplication in the schema
"""
labels = [_record['rdfs:label'] for _record in self.extension_schema["schema"]["@graph"]]
duplicates = find_duplicates(labels)
try:
assert len(duplicates) == 0
except AssertionError:
raise Exception('Duplicates detected in graph: ', duplicates)
def validate_schema(self, schema):
"""Validate schema against SchemaORG standard
"""
json_schema_path = os.path.join('validation_schemas', 'schema.json')
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_property_schema(self, schema):
"""Validate schema against SchemaORG property definition standard
"""
json_schema_path = os.path.join('validation_schemas', 'property_json_schema.json')
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_class_schema(self, schema):
"""Validate schema against SchemaORG class definition standard
"""
json_schema_path = os.path.join('validation_schemas', 'class_json_schema.json')
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_full_schema(self):
self.check_duplicate_labels()
for record in self.extension_schema['schema']['@graph']:
self.check_whether_atid_and_label_match(record)
if record['@type'] == "rdf:Class":
self.validate_class_schema(record)
self.validate_class_label(record["@id"])
elif record['@type'] == "rdf:Property":
self.validate_property_schema(record)
self.validate_property_label(record["@id"])
self.validate_domainIncludes_field(record["http://schema.org/domainIncludes"])
if "http://schema.org/rangeIncludes" in record:
self.validate_rangeIncludes_field(record["http://schema.org/rangeIncludes"])
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/schemas/validator.py | validator.py
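A minimal usage sketch of the `SchemaValidator` defined in the file above; the JSON-LD path is a placeholder, and `load_json` is the helper imported at the top of that file:

```python
from schematic.utils.io_utils import load_json
from schematic.schemas.validator import SchemaValidator

# Hypothetical path to an extension schema in JSON-LD form.
schema = load_json("example.model.jsonld")

validator = SchemaValidator(schema)
# Runs the duplicate-label, @id/label, class and property checks described in the
# class docstring; raises on the first violation it encounters.
validator.validate_full_schema()
```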
import os
from schematic.schemas.generator import SchemaGenerator
from schematic import CONFIG
PATH_TO_JSONLD = CONFIG["model"]["input"]["location"]
# create an object of SchemaGenerator() class
schema_generator = SchemaGenerator(PATH_TO_JSONLD)
if isinstance(schema_generator, SchemaGenerator):
print("'schema_generator' - an object of the SchemaGenerator class has been created successfully.")
else:
print("object of class SchemaGenerator could not be created.")
# get list of the out-edges from a node based on a specific relationship
TEST_NODE = "Sequencing"
TEST_REL = "parentOf"
out_edges = schema_generator.get_edges_by_relationship(TEST_NODE, TEST_REL)
if out_edges:
print("The out-edges from class {}, based on {} relationship are: {}".format(TEST_NODE, TEST_REL, out_edges))
else:
print("The class does not have any out-edges.")
# get list of nodes that are adjacent to specified node, based on a given relationship
adj_nodes = schema_generator.get_adjacent_nodes_by_relationship(TEST_NODE, TEST_REL)
if adj_nodes:
print("The node(s) adjacent to {}, based on {} relationship are: {}".format(TEST_NODE, TEST_REL, adj_nodes))
else:
print("The class does not have any adjacent nodes.")
# get list of descendants (nodes) based on a specific type of relationship
desc_nodes = schema_generator.get_descendants_by_edge_type(TEST_NODE, TEST_REL)
if desc_nodes:
print("The descendant(s) from {} are: {}".format(TEST_NODE, desc_nodes))
else:
print("The class does not have descendants.")
# get all components associated with a given component
TEST_COMP = "Patient"
req_comps = schema_generator.get_component_requirements(TEST_COMP)
if req_comps:
print("The component(s) that are associated with a given component: {}".format(req_comps))
else:
print("There are no components associated with {}".format(TEST_COMP))
# get immediate dependencies that are related to a given node
node_deps = schema_generator.get_node_dependencies(TEST_COMP)
if node_deps:
print("The immediate dependencies of {} are: {}".format(TEST_COMP, node_deps))
else:
print("The node has no immediate dependencies.")
# get label for a given node
try:
node_label = schema_generator.get_node_label(TEST_NODE)
print("The label name for the node {} is: {}".format(TEST_NODE, node_label))
except KeyError:
print("Please try a valid node name.")
# get node definition/comment
try:
node_def = schema_generator.get_node_definition(TEST_NODE)
print("The node definition for node {} is: {}".format(TEST_NODE, node_def))
except KeyError:
print("Please try a valid node name.")
# gather dependencies and value-constraints for a particular node
json_schema = schema_generator.get_json_schema_requirements(TEST_COMP, "Patient-Schema")
print("The JSON schema based on {} as source node is:".format(TEST_COMP))
print(json_schema)
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/schemas/examples/generator_usage.py | generator_usage.py
import os
import synapseclient
import pickle
import pygsheets as ps
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2 import service_account
from schematic import CONFIG
# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets', 'https://www.googleapis.com/auth/drive']
# it will create 'token.pickle' based on credentials.json
# TODO: replace by pygsheets calls?
def build_credentials() -> dict:
creds = None
# The file token.pickle stores the user's access and refresh tokens,
# and is created automatically when the authorization flow completes for the first time.
if os.path.exists(CONFIG.TOKEN_PICKLE):
with open(CONFIG.TOKEN_PICKLE, 'rb') as token:
creds = pickle.load(token)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(CONFIG.CREDS_PATH, SCOPES)
creds = flow.run_console() ### don't have to deal with ports
# Save the credentials for the next run
with open(CONFIG.TOKEN_PICKLE, 'wb') as token:
pickle.dump(creds, token)
# get a Google Sheet API service
sheet_service = build('sheets', 'v4', credentials=creds)
# get a Google Drive API service
drive_service = build('drive', 'v3', credentials=creds)
return {
'sheet_service': sheet_service,
'drive_service': drive_service,
'creds': creds
}
def build_service_account_creds():
credentials = service_account.Credentials.from_service_account_file(CONFIG.SERVICE_ACCT_CREDS, scopes=SCOPES)
# get a Google Sheet API service
sheet_service = build('sheets', 'v4', credentials=credentials)
# get a Google Drive API service
drive_service = build('drive', 'v3', credentials=credentials)
return {
'sheet_service': sheet_service,
'drive_service': drive_service,
'creds': credentials
}
def download_creds_file():
if not os.path.exists(CONFIG.CREDS_PATH):
print("Retrieving Google API credentials from Synapse...")
# synapse ID of the 'credentials.json' file, which we need in
# order to establish communication with gAPIs/services
API_CREDS = CONFIG["synapse"]["api_creds"]
syn = synapseclient.Synapse()
syn.login()
# Download in parent directory of CREDS_PATH to
# ensure same file system for os.rename()
creds_dir = os.path.dirname(CONFIG.CREDS_PATH)
creds_file = syn.get(API_CREDS, downloadLocation = creds_dir)
os.rename(creds_file.path, CONFIG.CREDS_PATH)
print("Downloaded Google API credentials file.")
def execute_google_api_requests(service, requests_body, **kwargs):
"""
Execute google API requests batch; attempt to execute in parallel.
Args:
service: google api service; for now assume google sheets service that is instantiated and authorized
service_type: default batchUpdate; TODO: add logic for values update
kwargs: google API service parameters
Return: google API response
"""
if "spreadsheet_id" in kwargs and "service_type" in kwargs and kwargs["service_type"] == "batch_update":
# execute all requests
response = service.spreadsheets().batchUpdate(spreadsheetId=kwargs["spreadsheet_id"], body = requests_body).execute()
return response
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/utils/google_api_utils.py | google_api_utils.py
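An illustrative sketch of how the helpers above might be combined; `SPREADSHEET_ID` is a placeholder, and it assumes valid service account credentials are configured for the package:

```python
from schematic.utils.google_api_utils import (
    build_service_account_creds,
    execute_google_api_requests,
)

# Authenticate with the service account and grab the Sheets service.
services = build_service_account_creds()
sheet_service = services["sheet_service"]

# A batchUpdate body that renames the first tab of the spreadsheet.
requests_body = {
    "requests": [
        {
            "updateSheetProperties": {
                "properties": {"sheetId": 0, "title": "Renamed tab"},
                "fields": "title",
            }
        }
    ]
}

response = execute_google_api_requests(
    sheet_service,
    requests_body,
    spreadsheet_id="SPREADSHEET_ID",  # placeholder spreadsheet ID
    service_type="batch_update",
)
print(response)
```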
## Package-specific files:
#### The files within the `etc` folder are:
`data_models`:
- `biothings.model.jsonld`: Base knowledge graph/vocabulary as specified by the [biolink model](https://biolink.github.io/biolink-model/).
- `schema_org.model.jsonld`: Schema vocabulary as specified by [schema.org](https://schema.org/docs/gs.html#schemaorg_types).
`validation_schemas`:
- `class.schema.json`: JSON Schema used for validation against schema.org class definition standard.
- `property.schema.json`: JSON Schema used for validation against schema.org property definition standard.
- `model.schema.json`: JSON Schema used for validation against schema.org standard.
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/etc/README.md | README.md
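A minimal sketch of loading one of these packaged files, assuming the resource loader from `schematic/loader.py` (shown later in this document) and `load_json` from `schematic.utils.io_utils`; the resource name follows the README above and is an assumption about what ships with the package:

```python
from schematic.loader import LOADER
from schematic.utils.io_utils import load_json

# Resolve the packaged validation schema to an absolute file path, then load it.
model_schema_path = LOADER.filename("validation_schemas/model.schema.json")
model_json_schema = load_json(model_schema_path)
print(list(model_json_schema.keys()))
```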
import click
from schematic.manifest.generator import ManifestGenerator
from schematic.utils.cli_utils import fill_in_from_config, query_dict
from schematic import CONFIG
CONTEXT_SETTINGS = dict(help_option_names=['--help', '-h']) # help options
# invoke_without_command=True -> allow the `manifest` group to be invoked even when no subcommand is given
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
def manifest(): # use as `schematic manifest ...`
"""
Sub-commands with Manifest Generation utilities/methods.
"""
pass
# prototype based on getModelManifest() and get_manifest()
# use as `schematic manifest get positional_args --optional_args`
@manifest.command('get', short_help='Prepares the manifest URL based on provided schema.')
# define the optional arguments
@click.option('-t', '--title', help='Title of generated manifest file.')
@click.option('-dt', '--data_type', help='Data type/component from JSON-LD schema to be used for manifest generation.')
@click.option('-p', '--jsonld', help='Path to JSON-LD schema.')
@click.option('-d', '--dataset_id', help='SynID of existing dataset on Synapse.')
@click.option('-s', '--sheet_url', type=bool, help='Enable/disable URL generation.')
@click.option('-j', '--json_schema', help='Path to JSON Schema (validation schema).')
@click.option('-c', '--config', help='Path to schematic configuration file.', required=True)
def get_manifest(title, data_type, jsonld,
dataset_id, sheet_url, json_schema,
config):
"""
Running CLI with manifest generation options.
"""
config_data = CONFIG.load_config(config)
# optional parameters that need to be passed to ManifestGenerator()
# can be read from config.yml as well
title = fill_in_from_config(
"title", title, ("manifest", "title")
)
data_type = fill_in_from_config(
"data_type", data_type, ("manifest", "data_type")
)
jsonld = fill_in_from_config(
"jsonld", jsonld, ("model", "input", "location")
)
json_schema = fill_in_from_config(
"json_schema", json_schema, ("model", "input", "validation_schema")
)
# create object of type ManifestGenerator
manifest_generator = ManifestGenerator(title=title,
path_to_json_ld=jsonld,
root=data_type)
# call get_manifest() on manifest_generator
click.echo(manifest_generator.get_manifest(dataset_id=dataset_id,
sheet_url=sheet_url,
json_schema=json_schema))
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/manifest/commands.py | commands.py
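A hedged sketch of invoking the `get` command above programmatically with click's test runner; the config path, data model path, and data type are placeholders:

```python
from click.testing import CliRunner
from schematic.manifest.commands import manifest

runner = CliRunner()
result = runner.invoke(
    manifest,
    [
        "get",
        "--config", "config.yml",
        "--title", "Demo Manifest",
        "--data_type", "Patient",
        "--jsonld", "tests/data/example.model.jsonld",
        "--sheet_url", "true",
    ],
)
print(result.output)
```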
## Usage of method(s) in `schematic.manifest.generator` module
An important method in the `manifest.generator` module is the `get_manifest()` method which takes care of generating the manifest link, based on the underlying `JSON-LD schema` (in this case, `HTAN.jsonld`) and an optionally provided `JSON schema`.
First, we need to make sure the google API credentials file (which is required to interact with google services, in this case google docs), is present in the root folder:
```python
try:
download_creds_file()
except synapseclient.core.exceptions.SynapseHTTPError:
print("Make sure the credentials set in the config file are correct.")
```
Create an object of `ManifestGenerator`, and feed the path to the master schema (JSON-LD). In addition, also change the name of the root node (component) based on the custom template type of your choice:
```python
PATH_TO_JSONLD = CONFIG["model"]["input"]["location"]
# create an instance of ManifestGenerator class
TEST_NODE = "FollowUp"
manifest_generator = ManifestGenerator(title="Demo Manifest", path_to_json_ld=PATH_TO_JSONLD, root=TEST_NODE)
```
_Note: Not providing any value for the `root` argument will produce a general manifest file (not specific to any component)._
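Finally, a sketch of requesting the manifest link itself; the exact keyword arguments accepted by `get_manifest()` may differ between versions:

```python
# sheet_url=True asks for a Google Sheet URL rather than a pandas DataFrame.
manifest_url = manifest_generator.get_manifest(sheet_url=True)
print("Manifest link:", manifest_url)
```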
schematic-test | /schematic_test-0.1.11-py3-none-any.whl/schematic/manifest/README.md | README.md
# Schematic
[](https://actions-badge.atrox.dev/Sage-Bionetworks/schematic/goto?ref=develop) [](https://sage-schematic.readthedocs.io/en/develop/?badge=develop) [](https://badge.fury.io/py/schematicpy)
# Table of contents
- [Introduction](#introduction)
- [Installation](#installation)
- [Installation Requirements](#installation-requirements)
- [Installation guide for data curator app](#installation-guide-for-data-curator-app)
- [Installation guide for developers/contributors](#installation-guide-for-developerscontributors)
- [Other Contribution Guidelines](#other-contribution-guidelines)
- [Update readthedocs documentation](#update-readthedocs-documentation)
- [Command Line Usage](#command-line-usage)
- [Testing](#testing)
- [Updating Synapse test resources](#updating-synapse-test-resources)
- [Code Style](#code-style)
- [Contributors](#contributors)
# Introduction
SCHEMATIC is an acronym for _Schema Engine for Manifest Ingress and Curation_. This Python-based infrastructure provides a _novel_ schema-based, metadata ingress ecosystem that is meant to streamline the process of biomedical dataset annotation, metadata validation and submission to a data repository for various data contributors.
# Installation
## Installation Requirements
* Python version 3.9.0≤x<3.11.0
Note: You need to be a registered and certified user on [`synapse.org`](https://www.synapse.org/), and also have the right permissions to download the Google credentials files from Synapse.
## Installation guide for data curator app
Create and activate a virtual environment within which you can install the package:
```
python3 -m venv .venv
source .venv/bin/activate
```
Note: Python 3 has built-in support for virtual environments via [venv](https://docs.python.org/3/library/venv.html#module-venv), so you no longer need to install virtualenv.
Install and update the package using [pip](https://pip.pypa.io/en/stable/quickstart/):
```
python3 -m pip install schematicpy
```
If you run into the error `Failed building wheel for numpy`, you may be able to resolve it by upgrading pip:
```
pip3 install --upgrade pip
```
## Installation guide for developers/contributors
When contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.
Please note we have a [code of conduct](CODE_OF_CONDUCT.md), please follow it in all your interactions with the project.
### Development environment setup
1. Clone the `schematic` package repository.
```
git clone https://github.com/Sage-Bionetworks/schematic.git
```
2. Install `poetry` (version 1.2 or later) using either the [official installer](https://python-poetry.org/docs/#installing-with-the-official-installer) or [pipx](https://python-poetry.org/docs/#installing-with-pipx). If you have an older installation of Poetry, we recommend uninstalling it first.
3. Start the virtual environment by doing:
```
poetry shell
```
4. Install the dependencies by doing:
```
poetry install
```
This command installs the dependencies specified in poetry.lock. If this step is taking a long time, go back to step 2 and check your version of poetry. Alternatively, you could delete the lock file and regenerate it by running `poetry install` (please note this method should be used as a last resort, because it would force other developers to change their development environment).
5. Fill in credential files:
*Note*: If you won't interact with Synapse, please ignore this section.
There are two main configuration files that need to be edited:
config.yml
and [synapseConfig](https://raw.githubusercontent.com/Sage-Bionetworks/synapsePythonClient/v2.3.0-rc/synapseclient/.synapseConfig)
<strong>Configure .synapseConfig File</strong>
Download a copy of the ``.synapseConfig`` file, open the file in the
editor of your choice and edit the `username` and `authtoken` attribute under the `authentication` section
*Note*: You could also visit the [configparser](https://docs.python.org/3/library/configparser.html#module-configparser) doc to see the format that `.synapseConfig` must have. For instance:
> [authentication]
> username = ABC
> authtoken = abc
<strong>Configure config.yml File</strong>
There are some defaults in schematic that can be configured. These fields are in ``config_example.yml``:
```text
# This is an example config for Schematic.
# All listed values are those that are the default if a config is not used.
# Save this as config.yml, this will be gitignored.
# Remove any fields in the config you don't want to change
# Change the values of any fields you do want to change
# This describes where assets such as manifests are stored
asset_store:
# This is when assets are stored in a synapse project
synapse:
# Synapse ID of the file view listing all project data assets.
master_fileview_id: "syn23643253"
# Path to the synapse config file, either absolute or relative to this file
config: ".synapseConfig"
# Base name that manifest files will be saved as
manifest_basename: "synapse_storage_manifest"
# This describes information about manifests as it relates to generation and validation
manifest:
# Location where manifests will saved to
manifest_folder: "manifests"
# Title or title prefix given to generated manifest(s)
title: "example"
# Data types of manifests to be generated or data type (singular) to validate manifest against
data_type:
- "Biospecimen"
- "Patient"
# Describes the location of your schema
model:
# Location of your schema jsonld, it must be a path relative to this file or absolute
location: "tests/data/example.model.jsonld"
# This section is for using google sheets with Schematic
google_sheets:
# The Synapse id of the Google service account credentials.
service_acct_creds_synapse_id: "syn25171627"
# Path to the synapse config file, either absolute or relative to this file
service_acct_creds: "schematic_service_account_creds.json"
# When doing google sheet validation (regex match) with the validation rules.
# true is alerting the user and not allowing entry of bad values.
# false is warning but allowing the entry on to the sheet.
strict_validation: true
```
If you want to change any of these, copy ``config_example.yml`` to ``config.yml``, change any fields you want to, and remove any fields you don't.
For example if you wanted to change the folder where manifests are downloaded your config should look like:
```text
manifest:
manifest_folder: "my_manifest_folder_path"
```
_Note_: `config.yml` is ignored by git.
_Note_: Paths can be specified relative to the `config.yml` file or as absolute paths.
6. Login to Synapse by using the command line
On the CLI in your virtual environment, run the following command:
```
synapse login -u <synapse username> -p <synapse password> --rememberMe
```
Please make sure that you run the command before running `schematic init` below
7. Obtain Google credential Files
To obtain ``schematic_service_account_creds.json``, please run:
```
schematic init --config ~/path/to/config.yml
```
> As of schematic version 22.12.1, using `token` mode of authentication (in other words, using `token.pickle` and `credentials.json`) is no longer supported due to Google's decision to move away from using OAuth out-of-band (OOB) flow. Click [here](https://developers.google.com/identity/protocols/oauth2/resources/oob-migration) to learn more.
*Notes*: Use the ``schematic_service_account_creds.json`` file for the service
account mode of authentication (*for Google services/APIs*). Service accounts
are special Google accounts that can be used by applications to access Google APIs
programmatically via OAuth2.0, with the advantage being that they do not require
human authorization.
*Background*: schematic uses Google’s API to generate google sheet templates that users fill in to provide (meta)data.
Most Google sheet functionality could be authenticated with service account. However, more complex Google sheet functionality
requires token-based authentication. As browser support that requires the token-based authentication diminishes, we are hoping to deprecate
token-based authentication and keep only service account authentication in the future.
### Development process instruction
For new features, bugs, enhancements
1. Pull the latest code from [develop branch in the upstream repo](https://github.com/Sage-Bionetworks/schematic)
2. Checkout a new branch develop-<feature/fix-name> from the develop branch
3. Do development on branch develop-<feature/fix-name>
a. may need to ensure that schematic poetry toml and lock files are compatible with your local environment
4. Add changed files for tracking and commit changes using [best practices](https://www.perforce.com/blog/vcs/git-best-practices-git-commit)
5. Have granular commits: not “too many” file changes, and not hundreds of code lines of changes
6. Commits with work in progress are encouraged:
a. add WIP to the beginning of the commit message for “Work In Progress” commits
7. Keep commit messages descriptive but less than a page long, see best practices
8. Push code to develop-<feature/fix-name> in upstream repo
9. Branch out off develop-<feature/fix-name> if needed to work on multiple features associated with the same code base
10. After feature work is complete and before creating a PR to the develop branch in upstream
a. ensure that code runs locally
b. test for logical correctness locally
c. wait for git workflow to complete (e.g. tests are run) on github
11. Create a PR from develop-<feature/fix-name> into the develop branch of the upstream repo
12. Request a code review on the PR
13. Once code is approved merge in the develop branch
14. Delete the develop-<feature/fix-name> branch
*Note*: Make sure you have the latest version of the `develop` branch on your local machine.
## Installation Guide - Docker
1. Install docker from https://www.docker.com/.
2. Identify the docker image of interest from [Schematic DockerHub](https://hub.docker.com/r/sagebionetworks/schematic/tags), e.g. run `docker pull sagebionetworks/schematic:latest` from the CLI, or run `docker compose up` after cloning the schematic github repo. In this case, `sagebionetworks/schematic:latest` is the name of the image chosen.
3. Run a Schematic command with `docker run <flags> <schematic command and args>`.
   - For more information on flags for `docker run` and what they do, visit the [Docker Documentation](https://docs.docker.com/engine/reference/commandline/run/)
   - These example commands assume that you have navigated to the directory you want to run schematic from. To specify your working directory, use `$(pwd)` on MacOS/Linux or `%cd%` on Windows.
   - If not using the latest image, then the full name should be specified, i.e. `sagebionetworks/schematic:commit-e611e4a`
   - If using a local image created by `docker compose up`, then the docker image name should be changed, i.e. `schematic_schematic`
   - Using the `--name` flag sets the name of the container running locally on your machine
### Example For REST API
#### Use file path of `config.yml` to run API endpoints:
```
docker run --rm -p 3001:3001 \
-v $(pwd):/schematic -w /schematic --name schematic \
-e SCHEMATIC_CONFIG=/schematic/config.yml \
-e GE_HOME=/usr/src/app/great_expectations/ \
sagebionetworks/schematic \
python /usr/src/app/run_api.py
```
#### Use the contents of `config.yml` and `schematic_service_account_creds.json` as environment variables to run API endpoints:
1. Save the content of `config.yml` to the environment variable `SCHEMATIC_CONFIG_CONTENT` by doing: `export SCHEMATIC_CONFIG_CONTENT=$(cat /path/to/config.yml)`
2. Similarly, save the content of `schematic_service_account_creds.json` to `SERVICE_ACCOUNT_CREDS` by doing: `export SERVICE_ACCOUNT_CREDS=$(cat /path/to/schematic_service_account_creds.json)`
3. Pass `SCHEMATIC_CONFIG_CONTENT` and `SERVICE_ACCOUNT_CREDS` as environment variables by using `docker run`
```
docker run --rm -p 3001:3001 \
-v $(pwd):/schematic -w /schematic --name schematic \
-e GE_HOME=/usr/src/app/great_expectations/ \
-e SCHEMATIC_CONFIG_CONTENT=$SCHEMATIC_CONFIG_CONTENT \
-e SERVICE_ACCOUNT_CREDS=$SERVICE_ACCOUNT_CREDS \
sagebionetworks/schematic \
python /usr/src/app/run_api.py
```
### Example For Schematic on mac/linux
To run the example below, first clone schematic into your home directory: `git clone https://github.com/sage-bionetworks/schematic ~/schematic`
Then update `.synapseConfig` with your credentials.
```
docker run \
-v ~/schematic:/schematic \
-w /schematic \
-e SCHEMATIC_CONFIG=/schematic/config.yml \
-e GE_HOME=/usr/src/app/great_expectations/ \
sagebionetworks/schematic schematic model \
-c /schematic/config.yml validate \
-mp /schematic/tests/data/mock_manifests/Valid_Test_Manifest.csv \
-dt MockComponent \
-js /schematic/tests/data/example.model.jsonld
```
### Example For Schematic on Windows
```
docker run -v %cd%:/schematic \
-w /schematic \
-e GE_HOME=/usr/src/app/great_expectations/ \
sagebionetworks/schematic \
schematic model \
-c config.yml validate -mp tests/data/mock_manifests/inValid_Test_Manifest.csv -dt MockComponent -js /schematic/data/example.model.jsonld
```
# Other Contribution Guidelines
## Updating readthedocs documentation
1. `cd docs`
2. After making relevant changes, you could run the `make html` command to re-generate the `build` folder.
3. Please contact the dev team to publish your updates
*Other helpful resources*:
1. [Getting started with Sphinx](https://haha.readthedocs.io/en/latest/intro/getting-started-with-sphinx.html)
2. [Installing Sphinx](https://haha.readthedocs.io/en/latest/intro/getting-started-with-sphinx.html)
## Update toml file and lock file
If you install external libraries by using `poetry add <name of library>`, please make sure that you include `pyproject.toml` and `poetry.lock` file in your commit.
## Reporting bugs or feature requests
You can **create bug and feature requests** through [Sage Bionetworks' FAIR Data service desk](https://sagebionetworks.jira.com/servicedesk/customer/portal/5/group/8). Providing enough details for the developers to verify and troubleshoot your issue is paramount:
- **Provide a clear and descriptive title as well as a concise summary** of the issue to identify the problem.
- **Describe the exact steps which reproduce the problem** in as many details as possible.
- **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
- **Explain which behavior you expected to see** instead and why.
- **Provide screenshots of the expected or actual behaviour** where applicable.
# Command Line Usage
Please visit more documentation [here](https://sage-schematic.readthedocs.io/en/develop/cli_reference.html)
# Testing
All code added to the client must have tests. The Python client uses pytest to run tests. The test code is located in the [tests](https://github.com/Sage-Bionetworks/schematic/tree/develop-docs-update/tests) subdirectory.
You can run the test suite in the following way:
```
pytest -vs tests/
```
## Updating Synapse test resources
1. Duplicate the entity being updated (or folder if applicable).
2. Edit the duplicates (_e.g._ annotations, contents, name).
3. Update the test suite in your branch to use these duplicates, including the expected values in the test assertions.
4. Open a PR as per the usual process (see above).
5. Once the PR is merged, leave the original copies on Synapse to maintain support for feature branches that were forked from `develop` before your update.
- If the old copies are problematic and need to be removed immediately (_e.g._ contain sensitive data), proceed with the deletion and alert the other contributors that they need to merge the latest `develop` branch into their feature branches for their tests to work.
# Code style
* Please consult the [Google Python style guide](http://google.github.io/styleguide/pyguide.html) prior to contributing code to this project.
* Be consistent and follow existing code conventions and spirit.
# Contributors
Main contributors and developers:
- [Milen Nikolov](https://github.com/milen-sage)
- [Mialy DeFelice](https://github.com/mialy-defelice)
- [Sujay Patil](https://github.com/sujaypatil96)
- [Bruno Grande](https://github.com/BrunoGrandePhD)
- [Robert Allaway](https://github.com/allaway)
- [Gianna Jordan](https://github.com/giajordan)
- [Lingling Peng](https://github.com/linglp)
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/README.md | README.md
from typing import Any
from collections.abc import Iterable
from errno import ENOENT
from os import pathsep
from re import split
from pkg_resources import (
resource_exists,
resource_filename,
resource_stream,
resource_string,
resource_listdir,
)
class InvalidResourceError(Exception):
"""
Args:
uri {String}: The URI which was requested within the given loader's
that did not exist or was malformed.
"""
def __init__(self, namespace: str, requested_uri: str) -> None:
self.namespace = namespace
self.requested_uri = requested_uri
self.message = "Resource does not exist or is declared incorrectly"
self.errno = ENOENT
super().__init__(self.message)
def __str__(self) -> str:
return (
f'{self.message}({self.errno}), "{self.requested_uri}" of {self.namespace}'
)
def __repr__(self) -> str:
return self.__str__()
class Loader:
"""
Args:
namespace {String}: The namespace within the package (relative to the package root)
to load resources from. Using the magic variable __name__ is suggested as when the script
is run as "__main__" it will load the most recent local resources instead of the cached
egg resources.
prefix {String}: Set a prefix for all URIs. Use a prefix if resources are centrally
located in a single place the uri's will be prefixed automatically by the loader.
"""
def __init__(self, namespace: str, **opts: Any) -> None:
self.namespace = namespace
self.prefix = opts.get("prefix", "")
self.local = opts.get("local", False)
if not self.local:
self.namespace = split(r"\.|\\|\/", self.namespace)[0]
def _resolve(self, uri: str) -> tuple[str, str]:
resource_uri = "/".join([self.prefix] + uri.split(pathsep))
namespace = self.namespace
if not resource_exists(namespace, resource_uri):
raise InvalidResourceError(namespace, resource_uri)
return namespace, resource_uri
def read(self, uri: str) -> Any:
"""
Read entire contents of resource. Same as open('path...').read()
Args:
uri {String}: URI of the resource.
"""
namespace, uri = self._resolve(uri)
return resource_string(namespace, uri)
def open(self, uri: str) -> Any:
"""
Open a file object like handle to the resource. Same as open('path...')
Args:
uri {String}: URI of the resource.
"""
namespace, uri = self._resolve(uri)
return resource_stream(namespace, uri)
def filename(self, uri: str) -> str:
"""
Return the "most correct" filename for a resource. Same as os.path.normpath('path...')
Args:
uri {String}: URI of the resource.
"""
namespace, uri = self._resolve(uri)
return resource_filename(namespace, uri)
def list(self, url: str) -> Iterable[str]:
"""
Return a list of all resources within the given URL
Args:
url {String}: URL of the resources.
"""
namespace, uri = self._resolve(url)
return map(lambda x: url + "/" + x, resource_listdir(namespace, uri))
# call Loader() and pass `schematic`, which is the global package namespace
LOADER = Loader("schematic", prefix="etc")
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/loader.py | loader.py
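A small sketch of the loader in use; the resource names follow the `etc` README earlier in this document and are assumptions about what ships with the package:

```python
from schematic.loader import LOADER, InvalidResourceError

try:
    # Resolve a packaged JSON-LD data model to a filesystem path.
    jsonld_path = LOADER.filename("data_models/biothings.model.jsonld")
    print("Resolved resource path:", jsonld_path)

    # List everything packaged under the validation_schemas directory.
    for resource in LOADER.list("validation_schemas"):
        print(resource)
except InvalidResourceError as err:
    print(err)
```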
#!/usr/bin/env python3
# pylint: disable=line-too-long
# `schematic manifest` related sub-commands description
manifest_commands = {
"manifest": {
"config": (
"Specify the path to the `config.yml` using this option. This is a required argument."
),
"get": {
"short_help": (
"Specify the path to the `config.yml` using this option. "
"This is a required argument."
),
"title": (
"Specify the title of the manifest (or title prefix of multiple manifests) that "
"will be created at the end of the run. You can either explicitly pass the "
"title of the manifest here or provide it in the `config.yml` "
"file as a value for the `(manifest > title)` key."
),
"data_type": (
"Specify the component(s) (data type) from the data model that is to be used "
"for generating the metadata manifest file. To make all available manifests enter 'all manifests'. "
"You can either explicitly pass the data type here or provide "
"it in the `config.yml` file as a value for the `(manifest > data_type)` key."
),
"jsonld": (
"Specify the path to the JSON-LD data model (schema) using this option. You can either explicitly pass the "
"schema here or provide a value for the `(model > input > location)` key."
),
"dataset_id": (
"Specify the synID of a dataset folder on Synapse. If there is an exisiting manifest already present "
"in that folder, then it will be pulled with the existing annotations for further annotation/modification. "
),
"sheet_url": (
"This is a boolean flag. If flag is provided when command line utility is executed, result will be a link/URL "
"to the metadata manifest file. If not it will produce a pandas dataframe for the same."
),
"output_csv": ("Path to where the CSV manifest template should be stored."),
"output_xlsx": (
"Path to where the Excel manifest template should be stored."
),
"use_annotations": (
"This is a boolean flag. If flag is provided when command line utility is executed, it will prepopulate template "
"with existing annotations from Synapse."
),
"json_schema": (
"Specify the path to the JSON Validation Schema for this argument. "
"You can either explicitly pass the `.json` file here or provide it in the `config.yml` file "
"as a value for the `(model > location)` key."
),
"alphabetize_valid_values": (
"Specify to alphabetize valid attribute values either ascending (a) or descending (d)."
"Optional"
),
},
"migrate": {
"short_help": (
"Specify the path to the `config.yml` using this option. "
"This is a required argument."
),
"project_scope": (
"Specify a comma-separated list of projects where manifest entities will be migrated to tables."
),
"archive_project": (
"Specify a single project where legacy manifest entities will be stored after migration to table."
),
"return_entities": (
"This is a boolean flag. If flag is provided when command line utility is executed, "
"entities that have been transferred to an archive project will be returned to their original folders."
),
"dry_run": (
"This is a boolean flag. If flag is provided when command line utility is executed, "
"a dry run will be performed. No manifests will be re-uploaded and no entities will be migrated, "
"but archival folders will still be created. "
"Migration information for testing purposes will be logged to the INFO level."
),
},
}
}
# `schematic model` related sub-commands description
model_commands = {
"model": {
"config": (
"Specify the path to the `config.yml` using this option. This is a required argument."
),
"submit": {
"short_help": ("Validation (optional) and submission of manifest files."),
"manifest_path": (
"Specify the path to the metadata manifest file that you want to submit to a dataset on Synapse. "
"This is a required argument."
),
"dataset_id": (
"Specify the synID of the dataset folder on Synapse to which you intend to submit "
"the metadata manifest file. This is a required argument."
),
"validate_component": (
"The component or data type from the data model which you can use to validate the "
"data filled in your manifest template."
),
"use_schema_label": (
"Store attributes using the schema label (--use_schema_label, default) or store attributes using the display label "
"(--use_display_label). Attribute display names in the schema must not only include characters that are "
"not accepted by Synapse. Annotation names may only contain: letters, numbers, '_' and '.'"
),
"hide_blanks": (
"This is a boolean flag. If flag is provided when command line utility is executed, annotations with blank values will be hidden from a dataset's annotation list in Synaspe."
"If not, annotations with blank values will be displayed."
),
"manifest_record_type": (
"Specify the way the manifest should be store as on Synapse. Options are 'file_only', 'file_and_entities', 'table_and_file' and "
"'table_file_and_entities'. 'file_and_entities' will store the manifest as a csv and create Synapse files for each row in the manifest. "
"'table_and_file' will store the manifest as a table and a csv on Synapse. "
"'file_only' will store the manifest as a csv only on Synapse."
"'table_file_and_entities' will perform the options file_with_entites and table in combination."
"Default value is 'table_file_and_entities'."
),
"table_manipulation": (
"Specify the way the manifest tables should be store as on Synapse when one with the same name already exists. Options are 'replace' and 'upsert'. "
"'replace' will remove the rows and columns from the existing table and store the new rows and columns, preserving the name and synID. "
"'upsert' will add the new rows to the table and preserve the exisitng rows and columns in the existing table. "
"Default value is 'replace'. "
"Upsert specific requirements: {\n}"
"'upsert' should be used for initial table uploads if users intend to upsert into them at a later time."
"Using 'upsert' at creation will generate the metadata necessary for upsert functionality."
"Upsert functionality requires primary keys to be specified in the data model and manfiest as <component>_id."
"Currently it is required to use -dl/--use_display_label with table upserts."
),
},
"validate": {
"short_help": ("Validation of manifest files."),
"manifest_path": (
"Specify the path to the metadata manifest file that you want to submit to a dataset on Synapse. "
"This is a required argument."
),
"data_type": (
"Specify the component (data type) from the data model that is to be used "
"for validating the metadata manifest file. You can either explicitly pass the data type here or provide "
"it in the `config.yml` file as a value for the `(manifest > data_type)` key."
),
"json_schema": (
"Specify the path to the JSON Validation Schema for this argument. "
"You can either explicitly pass the `.json` file here or provide it in the `config.yml` file "
"as a value for the `(model > input > validation_schema)` key."
),
"restrict_rules": (
"This is a boolean flag. If flag is provided when command line utility is executed, validation suite will only run with in-house validation rules, "
"and Great Expectations rules and suite will not be utilized."
"If not, the Great Expectations suite will be utilized and all rules will be available."
),
"project_scope": (
"Specify a comma-separated list of projects to search through for cross manifest validation."
),
},
}
}
# `schematic schema` related sub-commands description
schema_commands = {
"schema": {
"convert": {
"short_help": (
"Convert specification from CSV data model to JSON-LD data model."
),
"base_schema": (
"Path to base data model. BioThings data model is loaded by default."
),
"output_jsonld": (
"Path to where the generated JSON-LD file needs to be outputted."
),
}
}
}
# `schematic init` command description
init_command = {
"init": {
"short_help": ("Initialize authentication for schematic."),
"config": (
"Specify the path to the `config.yml` using this option. This is a required argument."
),
}
}
viz_commands = {
"visualization": {
"config": (
"Specify the path to the `config.yml` using this option. This is a required argument."
),
"tangled_tree": {
"figure_type": (
"Specify the type of schema visualization to make. Either 'dependency' or 'component'."
),
"text_format": (
"Specify the type of text to gather for tangled tree visualization, either 'plain' or 'highlighted'."
),
},
}
}
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/help.py | help.py
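These nested help dictionaries are consumed by the CLI modules later in this dump via query_dict from schematic/utils/cli_utils.py; a minimal sketch of that lookup:
# Pulling a help string out of the nested dictionaries above.
from schematic.help import model_commands
from schematic.utils.cli_utils import query_dict

short_help = query_dict(model_commands, ("model", "submit", "short_help"))
print(short_help)  # -> "Validation (optional) and submission of manifest files."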
from typing import Optional, Any, Sequence
class MissingConfigValueError(Exception):
"""Exception raised when configuration value not provided in config file.
Args:
config_keys: tuple of keys as present in config file.
message: custom/pre-defined error message to be returned.
Returns:
message.
"""
def __init__(
self, config_keys: Sequence[Any], message: Optional[str] = None
) -> None:
config_keys_str = " > ".join(config_keys)
self.message = (
"The configuration value corresponding to the argument "
f"({config_keys_str}) doesn't exist. "
"Please provide a value in the configuration file."
)
if message:
self.message = message
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}"
class WrongEntityTypeError(Exception):
"""Exception raised when the entity type is not desired
Args:
syn_id: For Synapse, the synID of the entity.
message: custom/pre-defined error message to be returned.
Returns:
message.
"""
def __init__(self, syn_id: str, message: Optional[str] = None) -> None:
self.message = (
f"'{syn_id}'' is not a desired entity type"
"Please ensure that you put in the right syn_id"
)
if message:
self.message = message
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}"
class MissingConfigAndArgumentValueError(Exception):
"""Exception raised when configuration value not provided in config file.
Args:
arg_name: CLI argument name.
config_keys: tuple of keys as present in config file.
message: custom/pre-defined error message to be returned.
Returns:
message.
"""
def __init__(
self, arg_name: str, config_keys: Sequence[Any], message: Optional[str] = None
) -> None:
config_keys_str = " > ".join(config_keys)
self.message = (
f"The value corresponding to the CLI argument '--{arg_name}'"
" doesn't exist. "
"Please provide a value for either the CLI argument or "
f"({config_keys_str}) in the configuration file."
)
if message:
self.message = message
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}"
class AccessCredentialsError(Exception):
"""Exception raised when provided access credentials cannot be resolved.
Args:
project: Platform/project (e.g., synID of a project)
message: custom/pre-defined error message to be returned.
Returns:
message.
"""
def __init__(self, project: str, message: Optional[str] = None) -> None:
self.message = (
f"Your access to '{project}'' could not be resolved. "
"Please check your credentials and try again."
)
if message:
self.message = message
super().__init__(self.message)
def __str__(self) -> str:
return f"{self.message}"
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/exceptions.py | exceptions.py
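A short sketch of how these exceptions behave; the config keys and argument name shown are example values only:
# Constructing the exceptions above; the generated message embeds the keys,
# and an explicit `message` argument overrides it.
from schematic.exceptions import (
    MissingConfigValueError,
    MissingConfigAndArgumentValueError,
)

try:
    raise MissingConfigValueError(("model", "input", "location"))
except MissingConfigValueError as err:
    print(err)  # "...corresponding to the argument (model > input > location) doesn't exist..."

err = MissingConfigAndArgumentValueError("jsonld", ("model", "input", "location"))
print(err)      # mentions both the '--jsonld' CLI argument and the config keys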
import logging
import sys
from time import perf_counter
import click
import click_log
from jsonschema import ValidationError
from schematic.models.metadata import MetadataModel
from schematic.utils.cli_utils import log_value_from_config, query_dict, parse_synIDs, parse_comma_str_to_list
from schematic.help import model_commands
from schematic.exceptions import MissingConfigValueError
from schematic.configuration.configuration import CONFIG
logger = logging.getLogger('schematic')
click_log.basic_config(logger)
CONTEXT_SETTINGS = dict(help_option_names=["--help", "-h"]) # help options
# invoke_without_command=True -> allow invoking the group without a subcommand (e.g., just to show help with -h)
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
@click_log.simple_verbosity_option(logger)
@click.option(
"-c",
"--config",
type=click.Path(),
envvar="SCHEMATIC_CONFIG",
help=query_dict(model_commands, ("model", "config")),
)
@click.pass_context
def model(ctx, config): # use as `schematic model ...`
"""
Sub-commands for Metadata Model related utilities/methods.
"""
try:
logger.debug(f"Loading config file contents in '{config}'")
CONFIG.load_config(config)
ctx.obj = CONFIG
except ValueError as e:
logger.error("'--config' not provided or environment variable not set.")
logger.exception(e)
sys.exit(1)
# prototype based on submit_metadata_manifest()
@model.command(
"submit", short_help=query_dict(model_commands, ("model", "submit", "short_help"))
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-mp",
"--manifest_path",
help=query_dict(model_commands, ("model", "submit", "manifest_path")),
)
@click.option(
"-d",
"--dataset_id",
help=query_dict(model_commands, ("model", "submit", "dataset_id")),
)
@click.option(
"-vc",
"--validate_component",
help=query_dict(model_commands, ("model", "submit", "validate_component")),
)
@click.option(
"--use_schema_label/--use_display_label",
"-sl/-dl",
default=True,
help=query_dict(model_commands, ("model", "submit", "use_schema_label")),
)
@click.option(
"--hide_blanks",
"-hb",
is_flag=True,
help=query_dict(model_commands,("model","submit","hide_blanks")),
)
@click.option(
"--manifest_record_type",
"-mrt",
default='table_file_and_entities',
type=click.Choice(['table_and_file', 'file_only', 'file_and_entities', 'table_file_and_entities'], case_sensitive=True),
help=query_dict(model_commands, ("model", "submit", "manifest_record_type")))
@click.option(
"-rr",
"--restrict_rules",
is_flag=True,
help=query_dict(model_commands,("model","validate","restrict_rules")),
)
@click.option(
"-ps",
"--project_scope",
default=None,
callback=parse_synIDs,
help=query_dict(model_commands, ("model", "validate", "project_scope")),
)
@click.option(
"--table_manipulation",
"-tm",
default='replace',
type=click.Choice(['replace', 'upsert'], case_sensitive=True),
help=query_dict(model_commands, ("model", "submit", "table_manipulation")))
@click.pass_obj
def submit_manifest(
ctx, manifest_path, dataset_id, validate_component, manifest_record_type, use_schema_label, hide_blanks, restrict_rules, project_scope, table_manipulation,
):
"""
Running CLI with manifest validation (optional) and submission options.
"""
jsonld = CONFIG.model_location
log_value_from_config("jsonld", jsonld)
metadata_model = MetadataModel(
inputMModelLocation=jsonld, inputMModelLocationType="local"
)
manifest_id = metadata_model.submit_metadata_manifest(
path_to_json_ld = jsonld,
manifest_path=manifest_path,
dataset_id=dataset_id,
validate_component=validate_component,
manifest_record_type=manifest_record_type,
restrict_rules=restrict_rules,
use_schema_label=use_schema_label,
hide_blanks=hide_blanks,
project_scope=project_scope,
table_manipulation=table_manipulation,
)
if manifest_id:
logger.info(
f"File at '{manifest_path}' was successfully associated "
f"with dataset '{dataset_id}'."
)
# prototype based on validateModelManifest()
@model.command(
"validate",
short_help=query_dict(model_commands, ("model", "validate", "short_help")),
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-mp",
"--manifest_path",
type=click.Path(exists=True),
required=True,
help=query_dict(model_commands, ("model", "validate", "manifest_path")),
)
@click.option(
"-dt",
"--data_type",
callback=parse_comma_str_to_list,
help=query_dict(model_commands, ("model", "validate", "data_type")),
)
@click.option(
"-js",
"--json_schema",
help=query_dict(model_commands, ("model", "validate", "json_schema")),
)
@click.option(
"-rr",
"--restrict_rules",
is_flag=True,
help=query_dict(model_commands,("model","validate","restrict_rules")),
)
@click.option(
"-ps",
"--project_scope",
default=None,
callback=parse_synIDs,
help=query_dict(model_commands, ("model", "validate", "project_scope")),
)
@click.pass_obj
def validate_manifest(ctx, manifest_path, data_type, json_schema, restrict_rules,project_scope):
"""
Running CLI for manifest validation.
"""
if data_type is None:
data_type = CONFIG.manifest_data_type
log_value_from_config("data_type", data_type)
if len(data_type) > 1:
logger.error(
"Can only validate a single data_type at a time. Please provide a single data_type"
)
data_type = data_type[0]
t_validate = perf_counter()
jsonld = CONFIG.model_location
log_value_from_config("jsonld", jsonld)
metadata_model = MetadataModel(
inputMModelLocation=jsonld, inputMModelLocationType="local"
)
errors, warnings = metadata_model.validateModelManifest(
manifestPath=manifest_path, rootNode=data_type, jsonSchema=json_schema, restrict_rules=restrict_rules, project_scope=project_scope,
)
if not errors:
click.echo(
"Your manifest has been validated successfully. "
"There are no errors in your manifest, and it can "
"be submitted without any modifications."
)
else:
click.echo(errors)
logger.debug(
f"Total elapsed time {perf_counter()-t_validate} seconds"
)
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/models/commands.py | commands.py
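A hedged sketch of exercising the model CLI group above with click's test runner; the config and manifest paths are placeholders, not shipped fixtures:
# Invoking `schematic model validate` programmatically via click's CliRunner.
from click.testing import CliRunner
from schematic.models.commands import model

runner = CliRunner()
result = runner.invoke(
    model,
    [
        "--config", "config.yml",           # placeholder config path
        "validate",
        "--manifest_path", "manifest.csv",  # placeholder manifest path
        "--data_type", "Patient",           # example component name
    ],
)
print(result.exit_code)
print(result.output)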
__author__ = "Jaakko Salonen"
__copyright__ = "Copyright 2011-2012, Jaakko Salonen"
__version__ = "0.5.0"
__license__ = "MIT"
__status__ = "Prototype"
from urllib.parse import unquote
from copy import copy
import sys
if sys.version_info.major == 3:
unicode = str
try:
from rdflib import BNode, URIRef
except ImportError:
# Fallback if rdflib is not present
class BNode(object):
def __init__(self, val):
self.val = val
def n3(self):
return unicode("_:" + self.val)
class URIRef(unicode):
pass
class Curie(object):
"""Curie Datatype Class
Examples:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
>>> dc_title = Curie('http://purl.org/dc/elements/1.1/title', nss)
>>> dc_title.curie
u'dc:title'
>>> dc_title.uri
u'http://purl.org/dc/elements/1.1/title'
>>> dc_title.curie
u'dc:title'
>>> nss['example'] = 'http://www.example.org/'
>>> iri_test = Curie('http://www.example.org/D%C3%BCrst', nss)
>>> iri_test.uri
u'http://www.example.org/D\\xfcrst'
>>> iri_test.curie
u'example:D%C3%BCrst'
"""
def __init__(self, uri, namespaces=dict()):
self.namespaces = namespaces
self.uri = unicode(unquote(uri), "utf-8")
self.curie = copy(self.uri)
for ns in self.namespaces:
self.curie = uri.replace("" + self.namespaces["%s" % ns], "%s:" % ns)
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return self.curie
def uri2curie(uri, namespaces):
"""Convert URI to CURIE
Define namespaces we want to use:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
Converting a string URI to CURIE
>>> uri2curie('http://purl.org/dc/elements/1.1/title', nss)
u'dc:title'
RDFLib data type conversions:
URIRef to CURIE
>>> uri2curie(URIRef('http://purl.org/dc/elements/1.1/title'), nss)
u'dc:title'
Blank node to CURIE
>>> uri2curie(BNode('blanknode1'), nss)
u'_:blanknode1'
"""
# Use n3() method if BNode
if isinstance(uri, BNode):
result = uri.n3()
else:
result = uri
# result = unicode(uri)
for ns in namespaces:
ns_raw = "%s" % namespaces["%s" % ns]
if ns_raw == "http://www.w3.org/2002/07/owl#uri":
ns_raw = "http://www.w3.org/2002/07/owl#"
result = result.replace(ns_raw, "%s:" % ns)
result = result.replace("http://www.w3.org/2002/07/owl#", "owl:")
return result
def curie2uri(curie, namespaces):
"""Convert CURIE to URI
TODO: testing
"""
result = unicode(curie)
for ns in namespaces:
result = result.replace("%s:" % ns, "" + namespaces["%s" % ns])
return URIRef(result)
if __name__ == "__main__":
import doctest
doctest.testmod()
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/schemas/curie.py | curie.py
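Doctest-style usage of the CURIE helpers above (Python 3, with rdflib installed):
# Round-tripping between URIs and CURIEs with the helpers defined above.
from schematic.schemas.curie import uri2curie, curie2uri

nss = {"dc": "http://purl.org/dc/elements/1.1/"}
print(uri2curie("http://purl.org/dc/elements/1.1/title", nss))  # dc:title
print(curie2uri("dc:title", nss))                               # http://purl.org/dc/elements/1.1/title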
__author__ = "Jaakko Salonen"
__copyright__ = "Copyright 2011-2012, Jaakko Salonen"
__version__ = "0.5.0"
__license__ = "MIT"
__status__ = "Prototype"
from urllib.parse import unquote
from copy import copy
from rdflib import BNode, URIRef
import sys
if sys.version_info.major == 3:
unicode = str
try:
from rdflib import BNode, URIRef
except:
# Fallback if rdflib is not present
class BNode(object):
def __init__(self, val):
self.val = val
def n3(self):
return unicode("_:" + self.val)
class URIRef(unicode):
pass
class Curie(object):
"""Curie Datatype Class
Examples:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
>>> dc_title = Curie('http://purl.org/dc/elements/1.1/title', nss)
>>> dc_title.curie
u'dc:title'
>>> dc_title.uri
u'http://purl.org/dc/elements/1.1/title'
>>> dc_title.curie
u'dc:title'
>>> nss['example'] = 'http://www.example.org/'
>>> iri_test = Curie('http://www.example.org/D%C3%BCrst', nss)
>>> iri_test.uri
u'http://www.example.org/D\\xfcrst'
>>> iri_test.curie
u'example:D%C3%BCrst'
"""
def __init__(self, uri, namespaces=dict()):
self.namespaces = namespaces
self.uri = unicode(unquote(uri), "utf-8")
self.curie = copy(self.uri)
for ns in self.namespaces:
self.curie = uri.replace("" + self.namespaces["%s" % ns], "%s:" % ns)
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return self.curie
def uri2curie(uri, namespaces):
"""Convert URI to CURIE
Define namespaces we want to use:
>>> nss = dict(dc='http://purl.org/dc/elements/1.1/')
Converting a string URI to CURIE
>>> uri2curie('http://purl.org/dc/elements/1.1/title', nss)
u'dc:title'
RDFLib data type conversions:
URIRef to CURIE
>>> uri2curie(URIRef('http://purl.org/dc/elements/1.1/title'), nss)
u'dc:title'
Blank node to CURIE
>>> uri2curie(BNode('blanknode1'), nss)
u'_:blanknode1'
"""
# Use n3() method if BNode
if isinstance(uri, BNode):
result = uri.n3()
else:
result = uri
# result = unicode(uri)
for ns in namespaces:
ns_raw = "%s" % namespaces["%s" % ns]
if ns_raw == "http://www.w3.org/2002/07/owl#uri":
ns_raw = "http://www.w3.org/2002/07/owl#"
result = result.replace(ns_raw, "%s:" % ns)
result = result.replace("http://www.w3.org/2002/07/owl#", "owl:")
return result
def curie2uri(curie, namespaces):
"""Convert CURIE to URI
TODO: testing
"""
result = unicode(curie)
for ns in namespaces:
result = result.replace("%s:" % ns, "" + namespaces["%s" % ns])
return URIRef(result)
if __name__ == "__main__":
import doctest
doctest.testmod()
| 0.478285 | 0.160562 |
import click
import click_log
import logging
import sys
import re
from schematic.schemas.df_parser import _convert_csv_to_data_model
from schematic.utils.cli_utils import query_dict
from schematic.help import schema_commands
logger = logging.getLogger('schematic')
click_log.basic_config(logger)
CONTEXT_SETTINGS = dict(help_option_names=["--help", "-h"]) # help options
# invoke_without_command=True -> allow invoking the group without a subcommand (e.g., just to show help with -h)
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
def schema(): # use as `schematic model ...`
"""
Sub-commands for Schema related utilities/methods.
"""
pass
# prototype based on submit_metadata_manifest()
@schema.command(
"convert",
options_metavar="<options>",
short_help=query_dict(schema_commands, ("schema", "convert", "short_help")),
)
@click_log.simple_verbosity_option(logger)
@click.argument(
"schema_csv", type=click.Path(exists=True), metavar="<DATA_MODEL_CSV>", nargs=1
)
@click.option(
"--base_schema",
"-b",
type=click.Path(exists=True),
metavar="<JSON-LD_SCHEMA>",
help=query_dict(schema_commands, ("schema", "convert", "base_schema")),
)
@click.option(
"--output_jsonld",
"-o",
metavar="<OUTPUT_PATH>",
help=query_dict(schema_commands, ("schema", "convert", "output_jsonld")),
)
def convert(schema_csv, base_schema, output_jsonld):
"""
Running CLI to convert data model specification in CSV format to
data model in JSON-LD format.
"""
# convert RFC to Data Model
base_se = _convert_csv_to_data_model(schema_csv, base_schema)
# output JSON-LD file alongside CSV file by default
if output_jsonld is None:
csv_no_ext = re.sub("[.]csv$", "", schema_csv)
output_jsonld = csv_no_ext + ".jsonld"
logger.info(
"By default, the JSON-LD output will be stored alongside the first "
f"input CSV file. In this case, it will appear here: '{output_jsonld}'. "
"You can use the `--output_jsonld` argument to specify another file path."
)
# saving updated schema.org schema
try:
base_se.export_schema(output_jsonld)
click.echo(f"The Data Model was created and saved to '{output_jsonld}' location.")
except:
click.echo(f"The Data Model could not be created by using '{output_jsonld}' location. Please check your file path again")
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/schemas/commands.py | commands.py
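A hedged sketch of driving the `schema convert` command above; "example_model.csv" is a placeholder data-model CSV, and the JSON-LD lands next to it unless -o is given:
# Converting a CSV data model to JSON-LD through the CLI group defined above.
from click.testing import CliRunner
from schematic.schemas.commands import schema

runner = CliRunner()
result = runner.invoke(
    schema,
    ["convert", "example_model.csv", "-o", "example_model.jsonld"],
)
print(result.exit_code)
print(result.output)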
import os
from jsonschema import validate
from schematic.utils.io_utils import load_schemaorg, load_json, load_default
from schematic.utils.general import str2list, dict2list, find_duplicates
from schematic.utils.curie_utils import (
expand_curies_in_schema,
extract_name_from_uri_or_curie,
)
from schematic.utils.validate_utils import (
validate_class_schema,
validate_property_schema,
validate_schema,
)
class SchemaValidator:
"""Validate Schema against SchemaOrg standard
Validation criteria:
1. Data Structure wise:
> "@id", "@context", "@graph"
> Each element in "@graph" should contain "@id", "@type", "rdfs:comment",
"rdfs:label", "sms:displayName"
> validate against JSON Schema
> Should validate the whole structure, and also validate property and
value separately
2. Data Content wise:
> "@id" field should match with "rdfs:label" field
> all prefixes used in the file should be defined in "@context"
> There should be no duplicate "@id"
> Class specific
> rdfs:label field should capitalize the first character of each
word for a class;
> the value of "rdfs:subClassOf" should be present in the schema or in
the core vocabulary
> sms:displayName ideally should contain capitalized words separated by space, but that's not enforced by validation
> Property specific
> rdfs:label field should be camelCase
> the value of "schema:domainIncludes" should be present in the schema
or in the core vocabulary
> the value of "schema:rangeIncludes" should be present in the schema
or in the core vocabulary
> sms:displayName ideally should contain capitalized words separated by space, but that's not enforced by validation
TODO: add dependencies and component dependencies to class structure documentation; as well as value range and required property
"""
def __init__(self, schema):
self.schemaorg = {"schema": load_schemaorg(), "classes": [], "properties": []}
for _schema in self.schemaorg["schema"]["@graph"]:
for _record in _schema["@graph"]:
if "@type" in _record:
_type = str2list(_record["@type"])
if "rdfs:Property" in _type:
self.schemaorg["properties"].append(_record["@id"])
elif "rdfs:Class" in _type:
self.schemaorg["classes"].append(_record["@id"])
self.extension_schema = {
"schema": expand_curies_in_schema(schema),
"classes": [],
"properties": [],
}
for _record in self.extension_schema["schema"]["@graph"]:
_type = str2list(_record["@type"])
if "rdfs:Property" in _type:
self.extension_schema["properties"].append(_record["@id"])
elif "rdfs:Class" in _type:
self.extension_schema["classes"].append(_record["@id"])
self.all_classes = self.schemaorg["classes"] + self.extension_schema["classes"]
def validate_class_label(self, label_uri):
"""Check if the first character of class label is capitalized"""
label = extract_name_from_uri_or_curie(label_uri)
assert label[0].isupper()
def validate_property_label(self, label_uri):
"""Check if the first character of property label is lower case"""
label = extract_name_from_uri_or_curie(label_uri)
assert label[0].islower()
def validate_subclassof_field(self, subclassof_value):
"""Check if the value of "subclassof" is included in the schema file"""
subclassof_value = dict2list(subclassof_value)
for record in subclassof_value:
assert record["@id"] in self.all_classes
def validate_domainIncludes_field(self, domainincludes_value):
"""Check if the value of "domainincludes" is included in the schema
file
"""
domainincludes_value = dict2list(domainincludes_value)
for record in domainincludes_value:
assert record["@id"] in self.all_classes, (
"value of domainincludes not recorded in schema: %r"
% domainincludes_value
)
def validate_rangeIncludes_field(self, rangeincludes_value):
"""Check if the value of "rangeincludes" is included in the schema
file
"""
rangeincludes_value = dict2list(rangeincludes_value)
for record in rangeincludes_value:
assert record["@id"] in self.all_classes
def check_whether_atid_and_label_match(self, record):
"""Check if @id field matches with the "rdfs:label" field"""
_id = extract_name_from_uri_or_curie(record["@id"])
assert _id == record["rdfs:label"], "id and label not match: %r" % record
def check_duplicate_labels(self):
"""Check for duplication in the schema"""
labels = [
_record["rdfs:label"]
for _record in self.extension_schema["schema"]["@graph"]
]
duplicates = find_duplicates(labels)
try:
assert len(duplicates) == 0
except:
raise Exception("Duplicates detected in graph: ", duplicates)
def validate_schema(self, schema):
"""Validate schema against SchemaORG standard"""
json_schema_path = os.path.join("validation_schemas", "schema.json")
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_property_schema(self, schema):
"""Validate schema against SchemaORG property definition standard"""
json_schema_path = os.path.join(
"validation_schemas", "property_json_schema.json"
)
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_class_schema(self, schema):
"""Validate schema against SchemaORG class definition standard"""
json_schema_path = os.path.join("validation_schemas", "class_json_schema.json")
json_schema = load_json(json_schema_path)
return validate(schema, json_schema)
def validate_full_schema(self):
self.check_duplicate_labels()
for record in self.extension_schema["schema"]["@graph"]:
self.check_whether_atid_and_label_match(record)
if record["@type"] == "rdf:Class":
self.validate_class_schema(record)
self.validate_class_label(record["@id"])
elif record["@type"] == "rdf:Property":
self.validate_property_schema(record)
self.validate_property_label(record["@id"])
self.validate_domainIncludes_field(
record["http://schema.org/domainIncludes"]
)
if "http://schema.org/rangeIncludes" in record:
self.validate_rangeIncludes_field(
record["http://schema.org/rangeIncludes"]
)
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/schemas/validator.py | validator.py
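A minimal sketch of running the validator above against a JSON-LD data model; the input path is an assumed example:
# Validating a local JSON-LD schema with SchemaValidator; validate_full_schema
# raises (AssertionError / jsonschema.ValidationError) on the first violation.
from schematic.utils.io_utils import load_json
from schematic.schemas.validator import SchemaValidator

schema_jsonld = load_json("example.model.jsonld")  # placeholder path
validator = SchemaValidator(schema_jsonld)
validator.validate_full_schema()
print("Schema passed SchemaOrg-style validation.")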
import inspect
import logging
from typing import Any, Mapping, Sequence, Union, List
from functools import reduce
import re
logger = logging.getLogger(__name__)
# We are using fstrings in logger methods
# pylint: disable=logging-fstring-interpolation
def query_dict(dictionary: Mapping[Any, Any], keys: Sequence[Any]) -> Union[Any, None]:
"""Access a nested value in a dictionary corresponding
to a series of keys.
Args:
dictionary: A dictionary containing anything.
keys: A sequence of values corresponding to keys
in `dictionary`
Returns:
The nested value corresponding to the given series
of keys, or `None` if such a value doesn't exist.
"""
def extract(dictionary: Any, key: Any) -> Union[Any, None]:
"""Get value associated with key, defaulting to None."""
if dictionary is None or not isinstance(dictionary, dict):
return None
return dictionary.get(key)
return reduce(extract, keys, dictionary)
def log_value_from_config(arg_name: str, config_value: Any):
"""Logs when getting a value from the config
Args:
arg_name (str): Name of the argument. Used for logging.
config_value (Any): The value in the config
"""
logger.info(
f"The {arg_name} argument is being taken from configuration file, i.e., {config_value}."
)
def parse_synIDs(
ctx, param, synIDs,
) -> List[str]:
"""Parse and validate a comma separated string of synIDs
Args:
ctx:
click option context
param:
click option argument name
synIDs:
comma separated string of synIDs
Returns:
List of synID strings
Raises:
ValueError: If the entire string does not match a regex for
a valid comma separated string of SynIDs
"""
if synIDs:
project_regex = re.compile(r"(syn\d+,?)+")
valid=project_regex.fullmatch(synIDs)
if valid:
synIDs = synIDs.split(",")
return synIDs
else:
raise ValueError(
f"The provided list of project synID(s): {synIDs}, is not formatted correctly. "
"\nPlease check your list of projects for errors."
)
else:
return
def parse_comma_str_to_list(
ctx, param, comma_string,
) -> List[str]:
if comma_string:
return comma_string.split(",")
else:
return None
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/utils/cli_utils.py | cli_utils.py
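Usage sketch for the CLI helpers above; the click ctx/param arguments are only required by the callback signature, so None is passed here:
# query_dict walks nested dictionaries; the parse_* helpers back click callbacks.
from schematic.utils.cli_utils import query_dict, parse_synIDs, parse_comma_str_to_list

nested = {"model": {"submit": {"short_help": "Submit a manifest."}}}
print(query_dict(nested, ("model", "submit", "short_help")))         # Submit a manifest.

print(parse_synIDs(None, None, "syn123,syn456"))                     # ['syn123', 'syn456']
print(parse_comma_str_to_list(None, None, "Patient,Biospecimen"))    # ['Patient', 'Biospecimen']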
import inspect
import logging
from typing import Any, Mapping, Sequence, Union, List
from functools import reduce
import re
logger = logging.getLogger(__name__)
# We are using fstrings in logger methods
# pylint: disable=logging-fstring-interpolation
def query_dict(dictionary: Mapping[Any, Any], keys: Sequence[Any]) -> Union[Any, None]:
"""Access a nested value in a dictionary corresponding
to a series of keys.
Args:
dictionary: A dictionary containing anything.
keys: A sequence of values corresponding to keys
in `dictionary`
Returns:
The nested value corresponding to the given series
of keys, or `None` is such a value doesn't exist.
"""
def extract(dictionary: Any, key: Any) -> Union[Any, None]:
"""Get value associated with key, defaulting to None."""
if dictionary is None or not isinstance(dictionary, dict):
return None
return dictionary.get(key)
return reduce(extract, keys, dictionary)
def log_value_from_config(arg_name: str, config_value: Any):
"""Logs when getting a value from the config
Args:
arg_name (str): Name of the argument. Used for logging.
config_value (Any): The value in the config
"""
logger.info(
f"The {arg_name} argument is being taken from configuration file, i.e., {config_value}."
)
def parse_synIDs(
ctx, param, synIDs,
) -> List[str]:
"""Parse and validate a comma separated string of synIDs
Args:
ctx:
click option context
param:
click option argument name
synIDs:
comma separated string of synIDs
Returns:
List of synID strings
Raises:
ValueError: If the entire string does not match a regex for
a valid comma separated string of SynIDs
"""
if synIDs:
project_regex = re.compile("(syn\d+\,?)+")
valid=project_regex.fullmatch(synIDs)
if valid:
synIDs = synIDs.split(",")
return synIDs
else:
raise ValueError(
f"The provided list of project synID(s): {synIDs}, is not formatted correctly. "
"\nPlease check your list of projects for errors."
)
else:
        return None
def parse_comma_str_to_list(
ctx, param, comma_string,
) -> Union[List[str], None]:
    """Split a comma separated string into a list of strings; return None if the string is empty or not provided."""
if comma_string:
return comma_string.split(",")
else:
return None
| 0.928676 | 0.346099 |
import logging
import sys
import click
import click_log
from schematic.visualization.attributes_explorer import AttributesExplorer
from schematic.visualization.tangled_tree import TangledTree
from schematic.utils.cli_utils import log_value_from_config, query_dict
from schematic.help import viz_commands
from schematic.help import model_commands
from schematic.configuration.configuration import CONFIG
logger = logging.getLogger(__name__)
click_log.basic_config(logger)
CONTEXT_SETTINGS = dict(help_option_names=["--help", "-h"]) # help options
# invoke_without_command=True -> allow the group to be invoked without a subcommand (e.g. just to show help via -h/--help)
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
@click_log.simple_verbosity_option(logger)
@click.option(
"-c",
"--config",
type=click.Path(),
envvar="SCHEMATIC_CONFIG",
help=query_dict(model_commands, ("model", "config")),
)
@click.pass_context
def viz(ctx, config):  # use as `schematic viz ...`
    """
    Sub-commands for Visualization methods.
    """
    try:
        logger.debug(f"Loading config file contents from '{config}'")
CONFIG.load_config(config)
ctx.obj = CONFIG
except ValueError as e:
logger.error("'--config' not provided or environment variable not set.")
logger.exception(e)
sys.exit(1)
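# Note (illustrative, not part of the original module): the configuration path is resolved either
# from the -c/--config flag or from the SCHEMATIC_CONFIG environment variable, e.g.
#     SCHEMATIC_CONFIG=config.yml schematic viz attributes
# assuming this group is mounted on the main `schematic` CLI under the name `viz`.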
@viz.command(
"attributes",
)
@click_log.simple_verbosity_option(logger)
@click.pass_obj
def get_attributes(ctx):
"""
"""
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Run attributes explorer
AttributesExplorer(path_to_jsonld).parse_attributes(save_file=True)
return
@viz.command(
"tangled_tree_text"
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-ft",
"--figure_type",
type=click.Choice(['component', 'dependency'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "figure_type")),
)
@click.option(
"-tf",
"--text_format",
type=click.Choice(['plain', 'highlighted'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "text_format")),
)
@click.pass_obj
def get_tangled_tree_text(ctx, figure_type, text_format):
""" Get text to be placed on the tangled tree visualization.
"""
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Initialize TangledTree
tangled_tree = TangledTree(path_to_jsonld, figure_type)
# Get text for tangled tree.
text_df = tangled_tree.get_text_for_tangled_tree(text_format, save_file=True)
return
@viz.command(
"tangled_tree_layers"
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-ft",
"--figure_type",
type=click.Choice(['component', 'dependency'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "figure_type")),
)
@click.pass_obj
def get_tangled_tree_component_layers(ctx, figure_type):
''' Get the components that belong in each layer of the tangled tree visualization.
'''
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Initialize Tangled Tree
tangled_tree = TangledTree(path_to_jsonld, figure_type)
# Get tangled trees layers JSON.
layers = tangled_tree.get_tangled_tree_layers(save_file=True)
return
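# Illustrative usage sketch (assumption: the group is registered on the main CLI as `schematic viz`):
#     schematic viz attributes -c config.yml
#     schematic viz tangled_tree_text -c config.yml -ft component -tf plain
#     schematic viz tangled_tree_layers -c config.yml -ft dependency
# Each command reads the JSON-LD model location from the loaded configuration and writes its
# outputs via AttributesExplorer / TangledTree.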
|
schematicpy
|
/schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/visualization/commands.py
|
commands.py
|
import logging
import sys
import click
import click_log
from schematic.visualization.attributes_explorer import AttributesExplorer
from schematic.visualization.tangled_tree import TangledTree
from schematic.utils.cli_utils import log_value_from_config, query_dict
from schematic.help import viz_commands
from schematic.help import model_commands
from schematic.configuration.configuration import CONFIG
logger = logging.getLogger(__name__)
click_log.basic_config(logger)
CONTEXT_SETTINGS = dict(help_option_names=["--help", "-h"]) # help options
# invoke_without_command=True -> allow the group to be invoked without a subcommand (e.g. just to show help via -h/--help)
@click.group(context_settings=CONTEXT_SETTINGS, invoke_without_command=True)
@click_log.simple_verbosity_option(logger)
@click.option(
"-c",
"--config",
type=click.Path(),
envvar="SCHEMATIC_CONFIG",
help=query_dict(model_commands, ("model", "config")),
)
@click.pass_context
def viz(ctx, config):  # use as `schematic viz ...`
    """
    Sub-commands for Visualization methods.
    """
    try:
        logger.debug(f"Loading config file contents from '{config}'")
CONFIG.load_config(config)
ctx.obj = CONFIG
except ValueError as e:
logger.error("'--config' not provided or environment variable not set.")
logger.exception(e)
sys.exit(1)
@viz.command(
"attributes",
)
@click_log.simple_verbosity_option(logger)
@click.pass_obj
def get_attributes(ctx):
"""
"""
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Run attributes explorer
AttributesExplorer(path_to_jsonld).parse_attributes(save_file=True)
return
@viz.command(
"tangled_tree_text"
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-ft",
"--figure_type",
type=click.Choice(['component', 'dependency'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "figure_type")),
)
@click.option(
"-tf",
"--text_format",
type=click.Choice(['plain', 'highlighted'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "text_format")),
)
@click.pass_obj
def get_tangled_tree_text(ctx, figure_type, text_format):
""" Get text to be placed on the tangled tree visualization.
"""
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Initialize TangledTree
tangled_tree = TangledTree(path_to_jsonld, figure_type)
# Get text for tangled tree.
text_df = tangled_tree.get_text_for_tangled_tree(text_format, save_file=True)
return
@viz.command(
"tangled_tree_layers"
)
@click_log.simple_verbosity_option(logger)
@click.option(
"-ft",
"--figure_type",
type=click.Choice(['component', 'dependency'], case_sensitive=False),
help=query_dict(viz_commands, ("visualization", "tangled_tree", "figure_type")),
)
@click.pass_obj
def get_tangled_tree_component_layers(ctx, figure_type):
''' Get the components that belong in each layer of the tangled tree visualization.
'''
# Get JSONLD file path
path_to_jsonld = CONFIG.model_location
log_value_from_config("jsonld", path_to_jsonld)
# Initialize Tangled Tree
tangled_tree = TangledTree(path_to_jsonld, figure_type)
# Get tangled trees layers JSON.
layers = tangled_tree.get_tangled_tree_layers(save_file=True)
return
| 0.445288 | 0.139924 |
from io import StringIO
import json
import logging
import networkx as nx
import numpy as np
import os
from os import path
import pandas as pd
# allows specifying explicit variable types
from typing import Any, Dict, Optional, Text, List
from schematic.utils.viz_utils import visualize
from schematic.visualization.attributes_explorer import AttributesExplorer
from schematic.schemas.explorer import SchemaExplorer
from schematic.schemas.generator import SchemaGenerator
from schematic import LOADER
from schematic.utils.io_utils import load_json
from copy import deepcopy
# Make sure to have newest version of decorator
logger = logging.getLogger(__name__)
#OUTPUT_DATA_DIR = str(Path('tests/data/visualization/AMPAD').resolve())
#DATA_DIR = str(Path('tests/data').resolve())
class TangledTree(object):
"""
"""
def __init__(self,
path_to_json_ld: str,
figure_type: str,
) -> None:
# Load jsonld
self.path_to_json_ld = path_to_json_ld
self.json_data_model = load_json(self.path_to_json_ld)
# Parse schema name
self.schema_name = path.basename(self.path_to_json_ld).split(".model.jsonld")[0]
# Instantiate a schema generator to retrieve db schema graph from metadata model graph
self.sg = SchemaGenerator(self.path_to_json_ld)
# Get metadata model schema graph
self.G = self.sg.se.get_nx_schema()
# Set Parameters
self.figure_type = figure_type.lower()
self.dependency_type = ''.join(('requires', self.figure_type.capitalize()))
# Get names
self.schema = load_json(self.path_to_json_ld)
self.schema_abbr = self.schema_name.split('_')[0]
# Initialize AttributesExplorer
self.ae = AttributesExplorer(self.path_to_json_ld)
# Create output paths.
self.text_csv_output_path = self.ae.create_output_path('text_csv')
self.json_output_path = self.ae.create_output_path('tangled_tree_json')
def strip_double_quotes(self, string):
# Remove double quotes from beginning and end of string.
if string.startswith('"') and string.endswith('"'):
string = string[1:-1]
# now remove whitespace
string = "".join(string.split())
return string
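    # Illustrative sketch (not part of the original class): strip_double_quotes removes surrounding
    # quotes and all internal whitespace. Assuming tt is an initialized TangledTree:
    # >>> tt.strip_double_quotes('"CSV/TSV"')
    # 'CSV/TSV'
    # >>> tt.strip_double_quotes('"Genome Build"')
    # 'GenomeBuild'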
def get_text_for_tangled_tree(self, text_type, save_file=False):
        '''Gather the text that needs to be either highlighted or plain for the tangled tree visualization.
Args:
text_type (str): Choices = ['highlighted', 'plain'], determines the type of text
rendering to return.
save_file (bool): Determines if the outputs should be saved to disk or returned.
Returns:
If save_file==True: Saves plain or highlighted text as a CSV (to disk).
save_file==False: Returns plain or highlighted text as a csv string.
'''
# Get nodes in the digraph, many more nodes returned if figure type is dependency
cdg = self.sg.se.get_digraph_by_edge_type(self.dependency_type)
nodes = cdg.nodes()
if self.dependency_type == 'requiresComponent':
component_nodes = nodes
else:
# get component nodes if making dependency figure
component_dg = self.sg.se.get_digraph_by_edge_type('requiresComponent')
component_nodes = component_dg.nodes()
# Initialize lists
highlighted = []
plain = []
        # For each component node in the tangled tree gather the plain and highlighted text.
for node in component_nodes:
# Get the highlighted components based on figure_type
if self.figure_type == 'component':
highlight_descendants = self.sg.se.get_descendants_by_edge_type(node, 'requiresComponent')
elif self.figure_type == 'dependency':
highlight_descendants = [node]
            # Format text to be highlighted and gather text to be formatted plain.
if not highlight_descendants:
# If there are no highlighted descendants just highlight the selected node (format for observable.)
highlighted.append([node, "id", node])
# Gather all the text as plain text.
plain_descendants = [n for n in nodes if n != node]
else:
                # Format highlighted text for Observable.
for hd in highlight_descendants:
highlighted.append([node, "id", hd])
                # Gather the non-highlighted text as plain text descendants.
plain_descendants = [node for node in nodes if node not in highlight_descendants]
# Format all the plain text for observable.
for nd in plain_descendants:
plain.append([node, "id", nd])
# Prepare df depending on what type of text we need.
df = pd.DataFrame(locals()[text_type.lower()], columns = ['Component', 'type', 'name'])
# Depending on input either export csv locally to disk or as a string.
if save_file==True:
file_name = f"{self.schema_abbr}_{self.figure_type}_{text_type}.csv"
df.to_csv(os.path.join(self.text_csv_output_path, file_name))
return
elif save_file==False:
return df.to_csv()
def get_topological_generations(self):
''' Gather topological_gen, nodes and edges based on figure type.
Outputs:
            topological_gen (List(list)): list of lists. Indicates layers of nodes.
            nodes: (Networkx NodeView) Nodes of the component or dependency graph. When iterated over it functions like a list.
            edges: (Networkx EdgeDataView) Edges of component or dependency graph. When iterated over it works like a list of tuples.
            subg: (Networkx graph) Subgraph of the schema graph restricted to the relevant edge type.
'''
# Get nodes in the digraph
digraph = self.sg.se.get_digraph_by_edge_type(self.dependency_type)
nodes = digraph.nodes()
# Get subgraph
mm_graph = self.sg.se.get_nx_schema()
subg = self.sg.get_subgraph_by_edge_type(mm_graph, self.dependency_type)
# Get edges and topological_gen based on figure type.
if self.figure_type == 'component':
edges = digraph.edges()
topological_gen = list(reversed(list(nx.topological_generations(subg))))
elif self.figure_type == 'dependency':
rev_digraph = nx.DiGraph.reverse(digraph)
edges = rev_digraph.edges()
topological_gen = list(nx.topological_generations(subg))
return topological_gen, nodes, edges, subg
def remove_unwanted_characters_from_conditional_statement(self, cond_req: str) -> str:
'''Remove unwanted characters from conditional statement
Example of conditional requirement: If File Format IS "BAM" OR "CRAM" OR "CSV/TSV" then Genome Build is required
Example output: File Format IS "BAM" OR "CRAM" OR "CSV/TSV"
'''
if "then" in cond_req:
# remove everything after "then"
cond_req_new = cond_req.split('then')[0]
# remove "If" and empty space
cond_req = cond_req_new.replace("If", "").lstrip().rstrip()
return cond_req
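    # Illustrative sketch (not part of the original class), assuming tt is an initialized TangledTree:
    # >>> tt.remove_unwanted_characters_from_conditional_statement(
    # ...     'If File Format is "BAM" OR "CRAM" then Genome Build is required')
    # 'File Format is "BAM" OR "CRAM"'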
def get_ca_alias(self, conditional_requirements: list) -> dict:
'''Get the alias for each conditional attribute.
NOTE: Obtaining attributes(attr) and aliases(ali) in this function is specific to how formatting
is set in AttributesExplorer. If that formatting changes, this section
will likely break or in the worst case have a silent error.
Input:
            conditional_requirements (list): list of strings of conditional requirements from outputs of AttributesExplorer.
Output:
ca_alias (dict):
key: alias (attribute response)
value: attribute
'''
ca_alias = {}
# clean up conditional requirements
conditional_requirements = [self.remove_unwanted_characters_from_conditional_statement(req) for req in conditional_requirements]
for i, req in enumerate(conditional_requirements):
if "OR" not in req:
attr, ali = req.split(' is ')
attr = "".join(attr.split())
ali = self.strip_double_quotes(ali)
ca_alias[ali] = attr
else:
attr, alias_str = req.split(' is ')
alias_lst = alias_str.split(' OR ')
for elem in alias_lst:
elem = self.strip_double_quotes(elem)
ca_alias[elem] = attr
return ca_alias
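    # Illustrative sketch (not part of the original class), assuming tt is an initialized TangledTree:
    # >>> tt.get_ca_alias(['If File Format is "BAM" OR "CRAM" then Genome Build is required'])
    # {'BAM': 'File Format', 'CRAM': 'File Format'}
    # Each quoted response value is mapped back to the attribute it belongs to.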
def gather_component_dependency_info(self, cn, attributes_df):
'''Gather all component dependency information.
Inputs:
cn: (str) component name
attributes_df: (Pandas DataFrame) Details for all attributes across all components. From AttributesExplorer.
Outputs:
conditional_attributes (list): List of conditional attributes for a particular component
ca_alias (dict):
key: alias (attribute response)
value: attribute
all_attributes (list): all attributes associated with a particular component.
'''
# Gather all component dependency information
component_attributes = self.sg.get_descendants_by_edge_type(
cn,
self.dependency_type,
connected=True
)
# Dont want to display `Component` in the figure so remove
if 'Component' in component_attributes:
component_attributes.remove('Component')
# Gather conditional attributes so they can be added to the figure.
if 'Cond_Req' in attributes_df.columns:
conditional_attributes = list(attributes_df[(attributes_df['Cond_Req']==True)
&(attributes_df['Component']==cn)]['Label'])
ca_df = attributes_df[(attributes_df['Cond_Req']==True)&(attributes_df['Component']==cn)]
conditional_requirements = list(attributes_df[(attributes_df['Cond_Req']==True)
&(attributes_df['Component']==cn)]['Conditional Requirements'])
ca_alias = self.get_ca_alias(conditional_requirements)
else:
# If there are no conditional attributes/requirements, initialize blank lists.
conditional_attributes = []
ca_alias = {}
# Gather a list of all attributes for the current component.
all_attributes = list(np.append(component_attributes,conditional_attributes))
return conditional_attributes, ca_alias, all_attributes
def find_source_nodes(self, nodes, edges, all_attributes=[]):
'''Find all nodes in the graph that do not have a parent node.
Inputs:
nodes: (Networkx NodeView) Nodes of the component or dependency graph. When iterated over it functions like a list.
edges: (Networkx EdgeDataView) Edges of component or dependency graph. When iterated over it works like a list of tuples.
            all_attributes (list): all attributes associated with a particular component (used only for dependency figures).
        Outputs:
            source_nodes (list(str)): List of parentless nodes in the graph.
        '''
        # Find nodes that are not source nodes.
not_source = []
for node in nodes:
for edge_pair in edges:
if node == edge_pair[0]:
not_source.append(node)
# Find source nodes as nodes that are not in not_source.
source_nodes = []
for node in nodes:
if self.figure_type == 'dependency':
if node not in not_source and node in all_attributes:
source_nodes.append(node)
else:
if node not in not_source:
source_nodes.append(node)
return source_nodes
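    # Illustrative sketch (not part of the original class): for a tree built with figure_type='component'
    # (tt below), nodes that never appear as the first element of an edge are treated as sources.
    # With a hypothetical graph:
    # >>> tt.find_source_nodes(['A', 'B', 'C'], [('B', 'A'), ('C', 'B')])
    # ['A']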
def get_parent_child_dictionary(self, nodes, edges, all_attributes=[]):
'''Based on the dependency type, create dictionaries between parent and child and child and parent attributes.
Input:
nodes: (Networkx NodeView) Nodes of the component or dependency graph.
edges: (Networkx EdgeDataView (component figure) or List(list) (dependency figure))
Edges of component or dependency graph.
all_attributes:
Output:
child_parents (dict):
key: child
value: list of the childs parents
parent_children (dict):
key: parent
value: list of the parents children
'''
child_parents = {}
parent_children = {}
if self.dependency_type == 'requiresComponent':
# Construct child_parents dictionary
for edge in edges:
# Add child as a key
if edge[0] not in child_parents.keys():
child_parents[edge[0]] = []
# Add parents to list
child_parents[edge[0]].append(edge[1])
# Construct parent_children dictionary
for edge in edges:
# Add parent as a key
if edge[1] not in parent_children.keys():
parent_children[edge[1]] = []
# Add children to list
parent_children[edge[1]].append(edge[0])
elif self.dependency_type == 'requiresDependency':
# Construct child_parents dictionary
for edge in edges:
# Check if child is an attribute for the current component
if edge[0] in all_attributes:
# Add child as a key
if edge[0] not in child_parents.keys():
child_parents[edge[0]] = []
                    # Add parent to list if it is an attribute for the current component
if edge[1] in all_attributes:
child_parents[edge[0]].append(edge[1])
# Construct parent_children dictionary
for edge in edges:
# Check if parent is an attribute for the current component
if edge[1] in all_attributes:
# Add parent as a key
if edge[1] not in parent_children.keys():
parent_children[edge[1]] = []
                    # Add child to list if it is an attribute for the current component
if edge[0] in all_attributes:
parent_children[edge[1]].append(edge[0])
return child_parents, parent_children
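    # Illustrative sketch (not part of the original class): for a component graph
    # (dependency_type == 'requiresComponent'), each edge is read as (child, parent), so edges
    # like [('Biospecimen', 'Patient')] would yield roughly
    #     child_parents   == {'Biospecimen': ['Patient']}
    #     parent_children == {'Patient': ['Biospecimen']}
    # where Biospecimen and Patient are hypothetical component names.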
def alias_edges(self, ca_alias:dict, edges) -> List[list]:
'''Create new edges based on aliasing between an attribute and its response.
Purpose:
Create aliased edges.
For example:
If BiospecimenType (attribute) is AnalyteBiospecimenType (response)
Then ShippingConditionType (conditional requirement) is now required.
In the model the edges that connect these options are:
(AnalyteBiospecimenType, BiospecimenType)
(ShippingConditionType, AnalyteBiospecimenType)
Use alias defined in self.get_ca_alias along to define new edges that would
directly link attributes to their conditional requirements, in this
example the new edge would be:
[ShippingConditionType, BiospecimenType]
Inputs:
ca_alias (dict):
key: alias (attribute response)
value: attribute
edges (Networkx EdgeDataView): Edges of component or dependency graph. When iterated over it works like a list of tuples.
Output:
aliased_edges (List[lists]) of aliased edges.
'''
aliased_edges = []
for i, edge in enumerate(edges):
# construct one set of edges at a time
edge_set = []
            # If the first node of the edge has an alias, add the alias to the first position in the current edge set
if edge[0] in ca_alias.keys():
edge_set.append(ca_alias[edge[0]])
# Else add the non-aliased edge
else:
edge_set.append(edge[0])
            # If the second node of the edge has an alias, add the alias to the second position in the current edge set
if edge[1] in ca_alias.keys():
edge_set.append(ca_alias[edge[1]])
# Else add the non-aliased edge
else:
edge_set.append(edge[1])
            # Add the new edge set to the list of aliased edges.
aliased_edges.append(edge_set)
return aliased_edges
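    # Illustrative sketch (not part of the original class), reusing the docstring example and
    # assuming tt is an initialized TangledTree:
    # >>> tt.alias_edges({'AnalyteBiospecimenType': 'BiospecimenType'},
    # ...                [('ShippingConditionType', 'AnalyteBiospecimenType')])
    # [['ShippingConditionType', 'BiospecimenType']]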
def prune_expand_topological_gen(self, topological_gen, all_attributes, conditional_attributes):
'''
Purpose:
Remake topological_gen with only relevant nodes.
This is necessary since for the figure this function is being used in we
only want to display a portion of the graph data.
In addition to only displaying relevant nodes, we want to add conditional
attributes to topological_gen so we can visualize them in the tangled tree
as well.
Input:
topological_gen (List[list]): Indicates layers of nodes.
all_attributes (list): all attributes associated with a particular component.
conditional_attributes (list): List of conditional attributes for a particular component
Output:
            pruned_topological_gen (List[list]): mimics the structure of topological_gen but only
                includes the nodes we want.
'''
pruned_topological_gen = []
# For each layer(gen) in the topological generation list
for i, layer in enumerate(topological_gen):
current_layer = []
next_layer = []
# For each node in the layer
for node in layer:
# If the node is relevant to this component and is not a conditional attribute add it to the current layer.
if node in all_attributes and node not in conditional_attributes:
current_layer.append(node)
# If its a conditional attribute add it to a followup layer.
if node in conditional_attributes:
next_layer.append(node)
# Added layers to new pruned_topological_gen list
if current_layer:
pruned_topological_gen.append(current_layer)
if next_layer:
pruned_topological_gen.append(next_layer)
return pruned_topological_gen
def get_base_layers(self, topological_gen, child_parents, source_nodes, cn):
'''
Purpose:
Reconfigure topological gen to move things back appropriate layers if
they would have a back reference.
            The Tangled Tree figure requires an acyclic directed graph that has additional
layering rules between connected nodes.
- If there is a backward connection then the line connecting them will
break (this would suggest a cyclic connection.)
- Additionally if two or more nodes are connecting to a downstream node it is
best to put both parent nodes at the same level, if possible, to
prevent line breaks.
- Also want to move any children nodes one layer below
the parent node(s). If there are multiple parents, put one layer below the
parent that is furthest from the origin.
This is an iterative process that needs to run twice to move all the nodes to their
appropriate positions.
Input:
topological_gen: list of lists. Indicates layers of nodes.
child_parents (dict):
key: child
value: list of the childs parents
source_nodes: list, list of nodes that do not have a parent.
cn: str, component name, default=''
Output:
base_layers: dict, key: component name, value: layer
                represents the initial layering of topological_gen
            base_layers_copy_copy: dict, key: component name, value: layer
                represents the final layering after moving the components/attributes to
                their desired layer.
'''
# Convert topological_gen to a dictionary
base_layers = {com:i for i, lev in enumerate(topological_gen)
for com in lev}
# Make another version to iterate on -- Cant set to equal or will overwrite the original.
base_layers_copy = {com:i for i, lev in enumerate(topological_gen)
for com in lev}
# Move child nodes one node downstream of their parents.
for level in topological_gen:
for node in level:
# Check if node has a parent.
if node in child_parents.keys():
#node_level = base_layers[node]
# Look at the parents for the node.
parent_levels = []
for par in child_parents[node]:
# Get the layer the parent is located at.
parent_levels.append(base_layers[par])
# Get the max layer a parent of the node can be found.
max_parent_level = max(parent_levels)
# Move the node one layer beyond the max parent node position, so it will be downstream of its parents.
base_layers_copy[node] = max_parent_level + 1
# Make another version of updated positions iterate on further.
base_layers_copy_copy = base_layers_copy
# Move parental source nodes if necessary.
for level in topological_gen:
for node in level:
# Check if node has any parents.
if node in child_parents.keys():
parent_levels = []
modify_par = []
# For each parent get their position.
for par in child_parents[node]:
parent_levels.append(base_layers_copy[par])
# If one of the parents is a source node move
# it to the same level as the other nodes the child connects to so
# that the connections will not be backwards (and result in a broken line)
for par in child_parents[node]:
# For a given parent determine if its a source node and that the parents
# are not already at level 0, and the parent is not the current component node.
if (par in source_nodes and
(parent_levels.count(parent_levels[0]) != len(parent_levels))
and par != cn):
# If so, remove its position from parent_levels
parent_levels.remove(base_layers_copy[par])
# Add this parent to a list of parental positions to modify later.
modify_par.append(par)
# Get the new max parent level for this node.
max_parent_level = max(parent_levels)
# Move the node one position downstream of its max parent level.
base_layers_copy_copy[node] = max_parent_level + 1
# For each parental position to modify, move the parents level up to the max_parent_level.
for par in modify_par:
base_layers_copy_copy[par] = max_parent_level
return base_layers, base_layers_copy_copy
def adjust_node_placement(self, base_layers_copy_copy, base_layers, topological_gen):
'''Reorder nodes within topological_generations to match how they were ordered in base_layers_copy_copy
Input:
topological_gen: list of lists. Indicates layers of nodes.
base_layers: dict, key: component name, value: layer
                represents the initial layering of topological_gen
base_layers_copy_copy: dict, key: component name, value: layer
represents the final layering after moving the components/attributes to
their desired layer.
Output:
            topological_gen: same format as the incoming topological_gen but
ordered to match base_layers_copy_copy.
'''
if self.figure_type == 'component':
# For each node get its new layer in the tangled tree
for node, i in base_layers_copy_copy.items():
# Check if node is not already in the proper layer
if node not in topological_gen[i]:
# If not put it in the appropriate layer
topological_gen[i].append(node)
# Remove from inappropriate layer.
topological_gen[base_layers[node]].remove(node)
elif self.figure_type == 'dependency':
for node, i in base_layers_copy_copy.items():
# Check if the location of the node is more than the number of
# layers topological gen current handles
if i > len(topological_gen) - 1:
# If so, add node to new node at the end of topological_gen
topological_gen.append([node])
# Remove the node from its previous position.
topological_gen[base_layers[node]].remove(node)
# Else, check if node is not already in the proper layer
elif node not in topological_gen[i]:
# If not put it in the appropriate layer
topological_gen[i].append(node)
# Remove from inappropriate layer.
topological_gen[base_layers[node]].remove(node)
return topological_gen
def move_source_nodes_to_bottom_of_layer(self, node_layers, source_nodes):
'''For aesthetic purposes move source nodes to the bottom of their respective layers.
Input:
node_layers (List(list)): Lists of lists of each layer and the nodes contained in that layer as strings.
source_nodes (list): list of nodes that do not have a parent.
Output:
node_layers (List(list)): modified to move source nodes to the bottom of each layer.
'''
for i, layer in enumerate(node_layers):
nodes_to_move = []
for node in layer:
if node in source_nodes:
nodes_to_move.append(node)
for node in nodes_to_move:
node_layers[i].remove(node)
node_layers[i].append(node)
return node_layers
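    # Illustrative sketch (not part of the original class), assuming tt is an initialized TangledTree:
    # >>> tt.move_source_nodes_to_bottom_of_layer([['Source', 'Other']], ['Source'])
    # [['Other', 'Source']]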
def get_layers_dict_list(self, node_layers, child_parents, parent_children, all_parent_children):
'''Convert node_layers to a list of lists of dictionaries that specifies each node and its parents (if applicable).
Inputs:
node_layers: list of lists of each layer and the nodes contained in that layer as strings.
child_parents (dict):
key: child
value: list of the childs parents
parent_children (dict):
key: parent
                value: list of the parents children
            all_parent_children (dict):
                key: parent
                value: list of the parents children, including all downstream nodes
        Outputs:
            layers_list (List(list)): list of lists of dictionaries that specify each node and its parents (if applicable).
'''
num_layers = len(node_layers)
layers_list = [[] for i in range(0, num_layers)]
for i, layer in enumerate(node_layers):
for node in layer:
if node in child_parents.keys():
parents = child_parents[node]
else:
parents = []
if node in parent_children.keys():
direct_children = parent_children[node]
else:
direct_children = []
if node in all_parent_children.keys():
all_children = all_parent_children[node]
else:
all_children = []
layers_list[i].append({'id': node, 'parents': parents, 'direct_children': direct_children, 'children': all_children})
return layers_list
def get_node_layers_json(self, topological_gen, source_nodes, child_parents, parent_children, cn='', all_parent_children=None):
'''Return all the layers of a single tangled tree as a JSON String.
Inputs:
topological_gen:list of lists. Indicates layers of nodes.
source_nodes: list of nodes that do not have a parent.
child_parents (dict):
key: child
value: list of the childs parents
parent_children (dict):
key: parent
value: list of the parents children
all_parent_children (dict):
key: parent
value: list of the parents children (including all downstream nodes). Default to an empty dictionary
Outputs:
layers_json (JSON String): Layers of nodes in the tangled tree as a json string.
'''
base_layers, base_layers_copy_copy = self.get_base_layers(topological_gen,
child_parents, source_nodes, cn)
# Rearrange node_layers to follow the pattern laid out in component layers.
node_layers = self.adjust_node_placement(base_layers_copy_copy,
base_layers, topological_gen)
# Move source nodes to the bottom of each layer.
node_layers = self.move_source_nodes_to_bottom_of_layer(node_layers, source_nodes)
# Convert layers to a list of dictionaries
if not all_parent_children:
# default to an empty dictionary
all_parent_children = dict()
layers_dicts = self.get_layers_dict_list(node_layers, child_parents, parent_children, all_parent_children)
# Convert dictionary to a JSON string
layers_json = json.dumps(layers_dicts)
return layers_json
    def save_outputs(self, save_file, layers_json, cn='', all_layers=None):
        '''
        Inputs:
            save_file (bool): Indicates whether to save a file locally or not.
            layers_json (JSON String): Layers of nodes in the tangled tree as a json string.
            cn (str): component name, default=''
            all_layers (list of json strings): Each string contains the layers for a single tangled tree.
                If a dependency figure the list is added to each time this function is called, so starts incomplete.
                default=None (treated as an empty list).
        Outputs:
            all_layers:
                If save_file == False: list of json strings; each string contains the layers for a single tangled tree.
                If save_file == True: the JSON string that was written to disk.
        '''
        # Use None as the default to avoid a mutable default list accumulating state across calls.
        if all_layers is None:
            all_layers = []
        if save_file == True:
if cn:
output_file_name = f"{self.schema_abbr}_{self.figure_type}_{cn}_tangled_tree.json"
else:
output_file_name = f"{self.schema_abbr}_{self.figure_type}_tangled_tree.json"
with open(os.path.join(self.json_output_path, output_file_name), 'w') as outfile:
outfile.write(layers_json)
logger.info(f"Tangled Tree JSON String saved to {os.path.join(self.json_output_path, output_file_name)}.")
all_layers = layers_json
elif save_file == False:
all_layers.append(layers_json)
return all_layers
def get_ancestors_nodes(self, subgraph, components):
"""
Inputs:
subgraph: networkX graph object
components: a list of nodes
outputs:
all_parent_children: a dictionary that indicates a list of children (including all the intermediate children) of a given node
"""
all_parent_children = {}
for component in components:
all_ancestors = self.sg.se.get_nodes_ancestors(subgraph, component)
all_parent_children[component] = all_ancestors
return all_parent_children
def get_tangled_tree_layers(self, save_file=True):
'''Based on user indicated figure type, construct the layers of nodes of a tangled tree.
Inputs:
save_file (bool): Indicates whether to save a file locally or not.
Outputs:
all_layers (list of json strings):
                If save_file == False: Each string contains the layers for a single tangled tree.
                If save_file == True: the layers are written to disk instead.
Note on Dependency Tangled Tree:
            If there are many conditional requirements associated with a dependency, and those
conditional requirements have overlapping attributes associated with them
                the tangled tree will only report one of them.
        '''
# Gather the data model's, topological generations, nodes and edges
topological_gen, nodes, edges, subg = self.get_topological_generations()
if self.figure_type == 'component':
# Gather all source nodes
source_nodes = self.find_source_nodes(nodes, edges)
# Map all children to their parents and vice versa
child_parents, parent_children = self.get_parent_child_dictionary(nodes, edges)
# find all the downstream nodes
all_parent_children = self.get_ancestors_nodes(subg, parent_children.keys())
# Get the layers that each node belongs to.
layers_json = self.get_node_layers_json(topological_gen, source_nodes, child_parents, parent_children, all_parent_children=all_parent_children)
# If indicated save outputs locally else gather all layers.
all_layers = self.save_outputs(save_file, layers_json)
if self.figure_type == 'dependency':
# Get component digraph and nodes.
component_dg = self.sg.se.get_digraph_by_edge_type('requiresComponent')
component_nodes = component_dg.nodes()
# Get table of attributes.
attributes_csv_str = self.ae.parse_attributes(save_file=False)
attributes_df = pd.read_table(StringIO(attributes_csv_str), sep=",")
all_layers =[]
for cn in component_nodes:
# Gather attribute and dependency information per node
conditional_attributes, ca_alias, all_attributes = self.gather_component_dependency_info(cn, attributes_df)
# Gather all source nodes
source_nodes = self.find_source_nodes(component_nodes, edges, all_attributes)
# Alias the conditional requirement edge back to its actual parent label,
# then apply aliasing back to the edges
aliased_edges = self.alias_edges(ca_alias, edges)
# Gather relationships between children and their parents.
child_parents, parent_children = self.get_parent_child_dictionary(nodes,
aliased_edges, all_attributes)
# Remake topological_gen so it has only relevant nodes.
pruned_topological_gen = self.prune_expand_topological_gen(topological_gen, all_attributes, conditional_attributes)
# Get the layers that each node belongs to.
layers_json = self.get_node_layers_json(pruned_topological_gen, source_nodes, child_parents, parent_children, cn)
# If indicated save outputs locally else, gather all layers.
all_layers = self.save_outputs(save_file, layers_json, cn, all_layers)
return all_layers
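# Minimal usage sketch (illustrative; the path below is an assumption, not part of the original
# module). The class is normally driven through the `viz` CLI commands, but it can also be used
# directly:
if __name__ == "__main__":
    # Build the layer JSON and the highlighted text CSV for a component figure without saving files.
    tree = TangledTree("tests/data/example.model.jsonld", figure_type="component")
    layers = tree.get_tangled_tree_layers(save_file=False)  # list with a single JSON string
    text_csv = tree.get_text_for_tangled_tree("highlighted", save_file=False)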
|
schematicpy
|
/schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/visualization/tangled_tree.py
|
tangled_tree.py
|
from io import StringIO
import json
import logging
import networkx as nx
import numpy as np
import os
from os import path
import pandas as pd
# allows specifying explicit variable types
from typing import Any, Dict, Optional, Text, List
from schematic.utils.viz_utils import visualize
from schematic.visualization.attributes_explorer import AttributesExplorer
from schematic.schemas.explorer import SchemaExplorer
from schematic.schemas.generator import SchemaGenerator
from schematic import LOADER
from schematic.utils.io_utils import load_json
from copy import deepcopy
# Make sure to have newest version of decorator
logger = logging.getLogger(__name__)
#OUTPUT_DATA_DIR = str(Path('tests/data/visualization/AMPAD').resolve())
#DATA_DIR = str(Path('tests/data').resolve())
class TangledTree(object):
"""
"""
def __init__(self,
path_to_json_ld: str,
figure_type: str,
) -> None:
# Load jsonld
self.path_to_json_ld = path_to_json_ld
self.json_data_model = load_json(self.path_to_json_ld)
# Parse schema name
self.schema_name = path.basename(self.path_to_json_ld).split(".model.jsonld")[0]
# Instantiate a schema generator to retrieve db schema graph from metadata model graph
self.sg = SchemaGenerator(self.path_to_json_ld)
# Get metadata model schema graph
self.G = self.sg.se.get_nx_schema()
# Set Parameters
self.figure_type = figure_type.lower()
self.dependency_type = ''.join(('requires', self.figure_type.capitalize()))
# Get names
self.schema = load_json(self.path_to_json_ld)
self.schema_abbr = self.schema_name.split('_')[0]
# Initialize AttributesExplorer
self.ae = AttributesExplorer(self.path_to_json_ld)
# Create output paths.
self.text_csv_output_path = self.ae.create_output_path('text_csv')
self.json_output_path = self.ae.create_output_path('tangled_tree_json')
def strip_double_quotes(self, string):
# Remove double quotes from beginning and end of string.
if string.startswith('"') and string.endswith('"'):
string = string[1:-1]
# now remove whitespace
string = "".join(string.split())
return string
def get_text_for_tangled_tree(self, text_type, save_file=False):
        '''Gather the text that needs to be either highlighted or plain for the tangled tree visualization.
Args:
text_type (str): Choices = ['highlighted', 'plain'], determines the type of text
rendering to return.
save_file (bool): Determines if the outputs should be saved to disk or returned.
Returns:
If save_file==True: Saves plain or highlighted text as a CSV (to disk).
save_file==False: Returns plain or highlighted text as a csv string.
'''
# Get nodes in the digraph, many more nodes returned if figure type is dependency
cdg = self.sg.se.get_digraph_by_edge_type(self.dependency_type)
nodes = cdg.nodes()
if self.dependency_type == 'requiresComponent':
component_nodes = nodes
else:
# get component nodes if making dependency figure
component_dg = self.sg.se.get_digraph_by_edge_type('requiresComponent')
component_nodes = component_dg.nodes()
# Initialize lists
highlighted = []
plain = []
        # For each component node in the tangled tree gather the plain and highlighted text.
for node in component_nodes:
# Get the highlighted components based on figure_type
if self.figure_type == 'component':
highlight_descendants = self.sg.se.get_descendants_by_edge_type(node, 'requiresComponent')
elif self.figure_type == 'dependency':
highlight_descendants = [node]
            # Format text to be highlighted and gather text to be formatted plain.
if not highlight_descendants:
# If there are no highlighted descendants just highlight the selected node (format for observable.)
highlighted.append([node, "id", node])
# Gather all the text as plain text.
plain_descendants = [n for n in nodes if n != node]
else:
                # Format highlighted text for Observable.
for hd in highlight_descendants:
highlighted.append([node, "id", hd])
                # Gather the non-highlighted text as plain text descendants.
plain_descendants = [node for node in nodes if node not in highlight_descendants]
# Format all the plain text for observable.
for nd in plain_descendants:
plain.append([node, "id", nd])
# Prepare df depending on what type of text we need.
df = pd.DataFrame(locals()[text_type.lower()], columns = ['Component', 'type', 'name'])
# Depending on input either export csv locally to disk or as a string.
if save_file==True:
file_name = f"{self.schema_abbr}_{self.figure_type}_{text_type}.csv"
df.to_csv(os.path.join(self.text_csv_output_path, file_name))
return
elif save_file==False:
return df.to_csv()
def get_topological_generations(self):
''' Gather topological_gen, nodes and edges based on figure type.
Outputs:
            topological_gen (List(list)): list of lists. Indicates layers of nodes.
            nodes: (Networkx NodeView) Nodes of the component or dependency graph. When iterated over it functions like a list.
            edges: (Networkx EdgeDataView) Edges of component or dependency graph. When iterated over it works like a list of tuples.
            subg: (Networkx graph) Subgraph of the schema graph restricted to the relevant edge type.
'''
# Get nodes in the digraph
digraph = self.sg.se.get_digraph_by_edge_type(self.dependency_type)
nodes = digraph.nodes()
# Get subgraph
mm_graph = self.sg.se.get_nx_schema()
subg = self.sg.get_subgraph_by_edge_type(mm_graph, self.dependency_type)
# Get edges and topological_gen based on figure type.
if self.figure_type == 'component':
edges = digraph.edges()
topological_gen = list(reversed(list(nx.topological_generations(subg))))
elif self.figure_type == 'dependency':
rev_digraph = nx.DiGraph.reverse(digraph)
edges = rev_digraph.edges()
topological_gen = list(nx.topological_generations(subg))
return topological_gen, nodes, edges, subg
def remove_unwanted_characters_from_conditional_statement(self, cond_req: str) -> str:
'''Remove unwanted characters from conditional statement
Example of conditional requirement: If File Format IS "BAM" OR "CRAM" OR "CSV/TSV" then Genome Build is required
Example output: File Format IS "BAM" OR "CRAM" OR "CSV/TSV"
'''
if "then" in cond_req:
# remove everything after "then"
cond_req_new = cond_req.split('then')[0]
# remove "If" and empty space
cond_req = cond_req_new.replace("If", "").lstrip().rstrip()
return cond_req
def get_ca_alias(self, conditional_requirements: list) -> dict:
'''Get the alias for each conditional attribute.
NOTE: Obtaining attributes(attr) and aliases(ali) in this function is specific to how formatting
is set in AttributesExplorer. If that formatting changes, this section
will likely break or in the worst case have a silent error.
Input:
            conditional_requirements (list): list of strings of conditional requirements from outputs of AttributesExplorer.
Output:
ca_alias (dict):
key: alias (attribute response)
value: attribute
'''
ca_alias = {}
# clean up conditional requirements
conditional_requirements = [self.remove_unwanted_characters_from_conditional_statement(req) for req in conditional_requirements]
for i, req in enumerate(conditional_requirements):
if "OR" not in req:
attr, ali = req.split(' is ')
attr = "".join(attr.split())
ali = self.strip_double_quotes(ali)
ca_alias[ali] = attr
else:
attr, alias_str = req.split(' is ')
alias_lst = alias_str.split(' OR ')
for elem in alias_lst:
elem = self.strip_double_quotes(elem)
ca_alias[elem] = attr
return ca_alias
def gather_component_dependency_info(self, cn, attributes_df):
'''Gather all component dependency information.
Inputs:
cn: (str) component name
attributes_df: (Pandas DataFrame) Details for all attributes across all components. From AttributesExplorer.
Outputs:
conditional_attributes (list): List of conditional attributes for a particular component
ca_alias (dict):
key: alias (attribute response)
value: attribute
all_attributes (list): all attributes associated with a particular component.
'''
# Gather all component dependency information
component_attributes = self.sg.get_descendants_by_edge_type(
cn,
self.dependency_type,
connected=True
)
# Dont want to display `Component` in the figure so remove
if 'Component' in component_attributes:
component_attributes.remove('Component')
# Gather conditional attributes so they can be added to the figure.
if 'Cond_Req' in attributes_df.columns:
conditional_attributes = list(attributes_df[(attributes_df['Cond_Req']==True)
&(attributes_df['Component']==cn)]['Label'])
ca_df = attributes_df[(attributes_df['Cond_Req']==True)&(attributes_df['Component']==cn)]
conditional_requirements = list(attributes_df[(attributes_df['Cond_Req']==True)
&(attributes_df['Component']==cn)]['Conditional Requirements'])
ca_alias = self.get_ca_alias(conditional_requirements)
else:
# If there are no conditional attributes/requirements, initialize blank lists.
conditional_attributes = []
ca_alias = {}
# Gather a list of all attributes for the current component.
all_attributes = list(np.append(component_attributes,conditional_attributes))
return conditional_attributes, ca_alias, all_attributes
def find_source_nodes(self, nodes, edges, all_attributes=[]):
'''Find all nodes in the graph that do not have a parent node.
Inputs:
nodes: (Networkx NodeView) Nodes of the component or dependency graph. When iterated over it functions like a list.
edges: (Networkx EdgeDataView) Edges of component or dependency graph. When iterated over it works like a list of tuples.
            all_attributes (list): all attributes associated with a particular component (used only for dependency figures).
        Outputs:
            source_nodes (list(str)): List of parentless nodes in the graph.
        '''
        # Find nodes that are not source nodes.
not_source = []
for node in nodes:
for edge_pair in edges:
if node == edge_pair[0]:
not_source.append(node)
# Find source nodes as nodes that are not in not_source.
source_nodes = []
for node in nodes:
if self.figure_type == 'dependency':
if node not in not_source and node in all_attributes:
source_nodes.append(node)
else:
if node not in not_source:
source_nodes.append(node)
return source_nodes
def get_parent_child_dictionary(self, nodes, edges, all_attributes=[]):
'''Based on the dependency type, create dictionaries between parent and child and child and parent attributes.
Input:
nodes: (Networkx NodeView) Nodes of the component or dependency graph.
edges: (Networkx EdgeDataView (component figure) or List(list) (dependency figure))
Edges of component or dependency graph.
all_attributes:
Output:
child_parents (dict):
key: child
value: list of the childs parents
parent_children (dict):
key: parent
value: list of the parents children
'''
child_parents = {}
parent_children = {}
if self.dependency_type == 'requiresComponent':
# Construct child_parents dictionary
for edge in edges:
# Add child as a key
if edge[0] not in child_parents.keys():
child_parents[edge[0]] = []
# Add parents to list
child_parents[edge[0]].append(edge[1])
# Construct parent_children dictionary
for edge in edges:
# Add parent as a key
if edge[1] not in parent_children.keys():
parent_children[edge[1]] = []
# Add children to list
parent_children[edge[1]].append(edge[0])
elif self.dependency_type == 'requiresDependency':
# Construct child_parents dictionary
for edge in edges:
# Check if child is an attribute for the current component
if edge[0] in all_attributes:
# Add child as a key
if edge[0] not in child_parents.keys():
child_parents[edge[0]] = []
                    # Add parent to list if it is an attribute for the current component
if edge[1] in all_attributes:
child_parents[edge[0]].append(edge[1])
# Construct parent_children dictionary
for edge in edges:
# Check if parent is an attribute for the current component
if edge[1] in all_attributes:
# Add parent as a key
if edge[1] not in parent_children.keys():
parent_children[edge[1]] = []
                    # Add child to list if it is an attribute for the current component
if edge[0] in all_attributes:
parent_children[edge[1]].append(edge[0])
return child_parents, parent_children
def alias_edges(self, ca_alias:dict, edges) -> List[list]:
'''Create new edges based on aliasing between an attribute and its response.
Purpose:
Create aliased edges.
For example:
If BiospecimenType (attribute) is AnalyteBiospecimenType (response)
Then ShippingConditionType (conditional requirement) is now required.
In the model the edges that connect these options are:
(AnalyteBiospecimenType, BiospecimenType)
(ShippingConditionType, AnalyteBiospecimenType)
Use alias defined in self.get_ca_alias along to define new edges that would
directly link attributes to their conditional requirements, in this
example the new edge would be:
[ShippingConditionType, BiospecimenType]
Inputs:
ca_alias (dict):
key: alias (attribute response)
value: attribute
edges (Networkx EdgeDataView): Edges of component or dependency graph. When iterated over it works like a list of tuples.
Output:
aliased_edges (List[lists]) of aliased edges.
'''
aliased_edges = []
for i, edge in enumerate(edges):
# construct one set of edges at a time
edge_set = []
            # If the first node of the edge has an alias, add the alias to the first position in the current edge set
if edge[0] in ca_alias.keys():
edge_set.append(ca_alias[edge[0]])
# Else add the non-aliased edge
else:
edge_set.append(edge[0])
            # If the second node of the edge has an alias, add the alias to the second position in the current edge set
if edge[1] in ca_alias.keys():
edge_set.append(ca_alias[edge[1]])
# Else add the non-aliased edge
else:
edge_set.append(edge[1])
            # Add the new edge set to the list of aliased edges.
aliased_edges.append(edge_set)
return aliased_edges
def prune_expand_topological_gen(self, topological_gen, all_attributes, conditional_attributes):
'''
Purpose:
Remake topological_gen with only relevant nodes.
This is necessary since for the figure this function is being used in we
only want to display a portion of the graph data.
In addition to only displaying relevant nodes, we want to add conditional
attributes to topological_gen so we can visualize them in the tangled tree
as well.
Input:
topological_gen (List[list]): Indicates layers of nodes.
all_attributes (list): all attributes associated with a particular component.
conditional_attributes (list): List of conditional attributes for a particular component
Output:
            pruned_topological_gen (List[list]): mimics the structure of topological_gen but only
                includes the nodes we want.
'''
pruned_topological_gen = []
# For each layer(gen) in the topological generation list
for i, layer in enumerate(topological_gen):
current_layer = []
next_layer = []
# For each node in the layer
for node in layer:
# If the node is relevant to this component and is not a conditional attribute add it to the current layer.
if node in all_attributes and node not in conditional_attributes:
current_layer.append(node)
# If its a conditional attribute add it to a followup layer.
if node in conditional_attributes:
next_layer.append(node)
# Added layers to new pruned_topological_gen list
if current_layer:
pruned_topological_gen.append(current_layer)
if next_layer:
pruned_topological_gen.append(next_layer)
return pruned_topological_gen
def get_base_layers(self, topological_gen, child_parents, source_nodes, cn):
'''
Purpose:
Reconfigure topological gen to move things back appropriate layers if
they would have a back reference.
            The Tangled Tree figure requires an acyclic directed graph that has additional
layering rules between connected nodes.
- If there is a backward connection then the line connecting them will
break (this would suggest a cyclic connection.)
- Additionally if two or more nodes are connecting to a downstream node it is
best to put both parent nodes at the same level, if possible, to
prevent line breaks.
- Also want to move any children nodes one layer below
the parent node(s). If there are multiple parents, put one layer below the
parent that is furthest from the origin.
This is an iterative process that needs to run twice to move all the nodes to their
appropriate positions.
Input:
topological_gen: list of lists. Indicates layers of nodes.
child_parents (dict):
key: child
value: list of the childs parents
source_nodes: list, list of nodes that do not have a parent.
cn: str, component name, default=''
Output:
base_layers: dict, key: component name, value: layer
                represents the initial layering of topological_gen
            base_layers_copy_copy: dict, key: component name, value: layer
                represents the final layering after moving the components/attributes to
                their desired layer.
'''
# Convert topological_gen to a dictionary
base_layers = {com:i for i, lev in enumerate(topological_gen)
for com in lev}
# Make another version to iterate on -- Cant set to equal or will overwrite the original.
base_layers_copy = {com:i for i, lev in enumerate(topological_gen)
for com in lev}
# Move child nodes one node downstream of their parents.
for level in topological_gen:
for node in level:
# Check if node has a parent.
if node in child_parents.keys():
#node_level = base_layers[node]
# Look at the parents for the node.
parent_levels = []
for par in child_parents[node]:
# Get the layer the parent is located at.
parent_levels.append(base_layers[par])
# Get the max layer a parent of the node can be found.
max_parent_level = max(parent_levels)
# Move the node one layer beyond the max parent node position, so it will be downstream of its parents.
base_layers_copy[node] = max_parent_level + 1
# Make another version of updated positions iterate on further.
base_layers_copy_copy = base_layers_copy
# Move parental source nodes if necessary.
for level in topological_gen:
for node in level:
# Check if node has any parents.
if node in child_parents.keys():
parent_levels = []
modify_par = []
# For each parent get their position.
for par in child_parents[node]:
parent_levels.append(base_layers_copy[par])
# If one of the parents is a source node move
# it to the same level as the other nodes the child connects to so
# that the connections will not be backwards (and result in a broken line)
for par in child_parents[node]:
# For a given parent determine if its a source node and that the parents
# are not already at level 0, and the parent is not the current component node.
if (par in source_nodes and
(parent_levels.count(parent_levels[0]) != len(parent_levels))
and par != cn):
# If so, remove its position from parent_levels
parent_levels.remove(base_layers_copy[par])
# Add this parent to a list of parental positions to modify later.
modify_par.append(par)
# Get the new max parent level for this node.
max_parent_level = max(parent_levels)
# Move the node one position downstream of its max parent level.
base_layers_copy_copy[node] = max_parent_level + 1
# For each parental position to modify, move the parents level up to the max_parent_level.
for par in modify_par:
base_layers_copy_copy[par] = max_parent_level
return base_layers, base_layers_copy_copy
def adjust_node_placement(self, base_layers_copy_copy, base_layers, topological_gen):
'''Reorder nodes within topological_generations to match how they were ordered in base_layers_copy_copy
Input:
topological_gen: list of lists. Indicates layers of nodes.
base_layers: dict, key: component name, value: layer
                represents the initial layering of topological_gen
base_layers_copy_copy: dict, key: component name, value: layer
represents the final layering after moving the components/attributes to
their desired layer.
Output:
            topological_gen: same format as the incoming topological_gen but
ordered to match base_layers_copy_copy.
'''
if self.figure_type == 'component':
# For each node get its new layer in the tangled tree
for node, i in base_layers_copy_copy.items():
# Check if node is not already in the proper layer
if node not in topological_gen[i]:
# If not put it in the appropriate layer
topological_gen[i].append(node)
# Remove from inappropriate layer.
topological_gen[base_layers[node]].remove(node)
elif self.figure_type == 'dependency':
for node, i in base_layers_copy_copy.items():
# Check if the location of the node is more than the number of
# layers topological gen current handles
if i > len(topological_gen) - 1:
# If so, add node to new node at the end of topological_gen
topological_gen.append([node])
# Remove the node from its previous position.
topological_gen[base_layers[node]].remove(node)
# Else, check if node is not already in the proper layer
elif node not in topological_gen[i]:
# If not put it in the appropriate layer
topological_gen[i].append(node)
# Remove from inappropriate layer.
topological_gen[base_layers[node]].remove(node)
return topological_gen
def move_source_nodes_to_bottom_of_layer(self, node_layers, source_nodes):
'''For aesthetic purposes move source nodes to the bottom of their respective layers.
Input:
node_layers (List(list)): Lists of lists of each layer and the nodes contained in that layer as strings.
source_nodes (list): list of nodes that do not have a parent.
Output:
node_layers (List(list)): modified to move source nodes to the bottom of each layer.
'''
for i, layer in enumerate(node_layers):
nodes_to_move = []
for node in layer:
if node in source_nodes:
nodes_to_move.append(node)
for node in nodes_to_move:
node_layers[i].remove(node)
node_layers[i].append(node)
return node_layers
def get_layers_dict_list(self, node_layers, child_parents, parent_children, all_parent_children):
'''Convert node_layers to a list of lists of dictionaries that specifies each node and its parents (if applicable).
Inputs:
node_layers: list of lists of each layer and the nodes contained in that layer as strings.
child_parents (dict):
key: child
value: list of the childs parents
parent_children (dict):
key: parent
                value: list of the parents children
            all_parent_children (dict):
                key: parent
                value: list of the parents children, including all downstream nodes
        Outputs:
            layers_list (List(list)): list of lists of dictionaries that specify each node and its parents (if applicable).
'''
num_layers = len(node_layers)
layers_list = [[] for i in range(0, num_layers)]
for i, layer in enumerate(node_layers):
for node in layer:
if node in child_parents.keys():
parents = child_parents[node]
else:
parents = []
if node in parent_children.keys():
direct_children = parent_children[node]
else:
direct_children = []
if node in all_parent_children.keys():
all_children = all_parent_children[node]
else:
all_children = []
layers_list[i].append({'id': node, 'parents': parents, 'direct_children': direct_children, 'children': all_children})
return layers_list
def get_node_layers_json(self, topological_gen, source_nodes, child_parents, parent_children, cn='', all_parent_children=None):
'''Return all the layers of a single tangled tree as a JSON String.
Inputs:
topological_gen:list of lists. Indicates layers of nodes.
source_nodes: list of nodes that do not have a parent.
child_parents (dict):
key: child
value: list of the child's parents
parent_children (dict):
key: parent
value: list of the parent's children
all_parent_children (dict):
key: parent
value: list of the parent's children (including all downstream nodes). Defaults to an empty dictionary.
Outputs:
layers_json (JSON String): Layers of nodes in the tangled tree as a json string.
'''
base_layers, base_layers_copy_copy = self.get_base_layers(topological_gen,
child_parents, source_nodes, cn)
# Rearrange node_layers to follow the pattern laid out in component layers.
node_layers = self.adjust_node_placement(base_layers_copy_copy,
base_layers, topological_gen)
# Move source nodes to the bottom of each layer.
node_layers = self.move_source_nodes_to_bottom_of_layer(node_layers, source_nodes)
# Convert layers to a list of dictionaries
if not all_parent_children:
# default to an empty dictionary
all_parent_children = dict()
layers_dicts = self.get_layers_dict_list(node_layers, child_parents, parent_children, all_parent_children)
# Convert dictionary to a JSON string
layers_json = json.dumps(layers_dicts)
return layers_json
def save_outputs(self, save_file, layers_json, cn='', all_layers=[]):
'''
Inputs:
save_file (bool): Indicates whether to save a file locally or not.
layers_json (JSON String): Layers of nodes in the tangled tree as a json string.
cn (str): component name, default=''
all_layers (list of json strings): Each string contains the layers for a single tangled tree.
For a dependency figure the list is appended to on each call of this function, so it starts out incomplete.
default=[].
Outputs:
all_layers (list of json strings):
If save_file == False: each string contains the layers for a single tangled tree.
If save_file == True: the layers are written to disk and only the most recently saved layers_json is returned.
'''
if save_file == True:
if cn:
output_file_name = f"{self.schema_abbr}_{self.figure_type}_{cn}_tangled_tree.json"
else:
output_file_name = f"{self.schema_abbr}_{self.figure_type}_tangled_tree.json"
with open(os.path.join(self.json_output_path, output_file_name), 'w') as outfile:
outfile.write(layers_json)
logger.info(f"Tangled Tree JSON String saved to {os.path.join(self.json_output_path, output_file_name)}.")
all_layers = layers_json
elif save_file == False:
all_layers.append(layers_json)
return all_layers
def get_ancestors_nodes(self, subgraph, components):
"""
Inputs:
subgraph: networkX graph object
components: a list of nodes
outputs:
all_parent_children: a dictionary that indicates a list of children (including all the intermediate children) of a given node
"""
all_parent_children = {}
for component in components:
all_ancestors = self.sg.se.get_nodes_ancestors(subgraph, component)
all_parent_children[component] = all_ancestors
return all_parent_children
def get_tangled_tree_layers(self, save_file=True):
'''Based on user indicated figure type, construct the layers of nodes of a tangled tree.
Inputs:
save_file (bool): Indicates whether to save a file locally or not.
Outputs:
all_layers (list of json strings):
If save_file == False: each string contains the layers for a single tangled tree.
If save_file == True: the layers are written to disk and only the most recently saved layers_json is returned.
Note on Dependency Tangled Tree:
If there are many conditional requirements associated with a dependency, and those
conditional requirements have overlapping attributes associated with them,
the tangled tree will only report one.
'''
        # Gather the data model's topological generations, nodes, and edges
topological_gen, nodes, edges, subg = self.get_topological_generations()
if self.figure_type == 'component':
# Gather all source nodes
source_nodes = self.find_source_nodes(nodes, edges)
# Map all children to their parents and vice versa
child_parents, parent_children = self.get_parent_child_dictionary(nodes, edges)
# find all the downstream nodes
all_parent_children = self.get_ancestors_nodes(subg, parent_children.keys())
# Get the layers that each node belongs to.
layers_json = self.get_node_layers_json(topological_gen, source_nodes, child_parents, parent_children, all_parent_children=all_parent_children)
# If indicated save outputs locally else gather all layers.
all_layers = self.save_outputs(save_file, layers_json)
if self.figure_type == 'dependency':
# Get component digraph and nodes.
component_dg = self.sg.se.get_digraph_by_edge_type('requiresComponent')
component_nodes = component_dg.nodes()
# Get table of attributes.
attributes_csv_str = self.ae.parse_attributes(save_file=False)
attributes_df = pd.read_table(StringIO(attributes_csv_str), sep=",")
all_layers =[]
for cn in component_nodes:
# Gather attribute and dependency information per node
conditional_attributes, ca_alias, all_attributes = self.gather_component_dependency_info(cn, attributes_df)
# Gather all source nodes
source_nodes = self.find_source_nodes(component_nodes, edges, all_attributes)
# Alias the conditional requirement edge back to its actual parent label,
# then apply aliasing back to the edges
aliased_edges = self.alias_edges(ca_alias, edges)
# Gather relationships between children and their parents.
child_parents, parent_children = self.get_parent_child_dictionary(nodes,
aliased_edges, all_attributes)
# Remake topological_gen so it has only relevant nodes.
pruned_topological_gen = self.prune_expand_topological_gen(topological_gen, all_attributes, conditional_attributes)
# Get the layers that each node belongs to.
layers_json = self.get_node_layers_json(pruned_topological_gen, source_nodes, child_parents, parent_children, cn)
# If indicated save outputs locally else, gather all layers.
all_layers = self.save_outputs(save_file, layers_json, cn, all_layers)
return all_layers
| 0.76895 | 0.264103 |
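A minimal usage sketch for the layer-generation code above. The class name, constructor signature, and model path below are assumptions (the class header sits outside this excerpt); the sketch mirrors schematic's TangledTree, which is built from a JSON-LD data model path plus a figure type.

# Hypothetical usage sketch; class name, constructor signature, and model path are assumed.
from schematic.visualization.tangled_tree import TangledTree

tangled_tree = TangledTree(
    path_to_json_ld="tests/data/example.model.jsonld",  # any schematic JSON-LD data model
    figure_type="component",                            # or "dependency"
)

# With save_file=False the layers come back as a list of JSON strings instead of being
# written to <schema>_<figure_type>_tangled_tree.json under json_output_path.
layers = tangled_tree.get_tangled_tree_layers(save_file=False)
print(layers[0])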
import gc
import json
import logging
import numpy as np
import os
import pandas as pd
from typing import Any, Dict, Optional, Text, List
from schematic.schemas import SchemaGenerator
from schematic.utils.io_utils import load_json
logger = logging.getLogger(__name__)
class AttributesExplorer():
    def __init__(self, path_to_jsonld: str) -> None:
self.path_to_jsonld = path_to_jsonld
self.json_data_model = load_json(self.path_to_jsonld)
self.jsonld = load_json(self.path_to_jsonld)
# instantiate a schema generator to retrieve db schema graph from metadata model graph
self.sg = SchemaGenerator(self.path_to_jsonld)
self.output_path = self.create_output_path('merged_csv')
def create_output_path(self, terminal_folder):
''' Create output path to store Observable visualization data if it does not already exist.
Args: self.path_to_jsonld
Returns: output_path (str): path to store outputs
'''
base_dir = os.path.dirname(self.path_to_jsonld)
self.schema_name = self.path_to_jsonld.split('/')[-1].split('.model.jsonld')[0]
output_path = os.path.join(base_dir, 'visualization', self.schema_name, terminal_folder)
if not os.path.exists(output_path):
os.makedirs(output_path)
return output_path
def convert_string_cols_to_json(self, df: pd.DataFrame, cols_to_modify: list):
"""Converts values in a column from strings to JSON list
for upload to Synapse.
"""
for col in df.columns:
if col in cols_to_modify:
                # Only JSON-encode non-empty list cells; missing values (NaN) pass through unchanged.
                df[col] = df[col].apply(lambda x: json.dumps([y.strip() for y in x]) if isinstance(x, list) and x else x)
return df
def parse_attributes(self, save_file=True):
'''
Args: save_file (bool):
True: merged_df is saved locally to output_path.
False: merged_df is returned.
Returns:
merged_df (pd.DataFrame): dataframe containing data relating to attributes
for the provided data model for all components in the data model.
Dataframe is saved locally as a csv if save_file == True, or returned if
save_file == False.
'''
# get all components
component_dg = self.sg.se.get_digraph_by_edge_type('requiresComponent')
components = component_dg.nodes()
        # For each data type to be loaded gather all attributes the user would
# have to provide.
return self._parse_attributes(components, save_file)
def parse_component_attributes(self, component=None, save_file=True, include_index=True):
'''
Args: save_file (bool):
True: merged_df is saved locally to output_path.
False: merged_df is returned.
include_index (bool):
Whether to include the index in the returned dataframe (True) or not (False)
Returns:
merged_df (pd.DataFrame): dataframe containing data relating to attributes
for the provided data model for the specified component in the data model.
Dataframe is saved locally as a csv if save_file == True, or returned if
save_file == False.
'''
if not component:
raise ValueError("You must provide a component to visualize.")
else:
return self._parse_attributes([component], save_file, include_index)
def _parse_attributes(self, components, save_file=True, include_index=True):
'''
Args: save_file (bool):
True: merged_df is saved locally to output_path.
False: merged_df is returned.
components (list):
list of components to parse attributes for
include_index (bool):
Whether to include the index in the returned dataframe (True) or not (False)
Returns:
merged_df (pd.DataFrame): dataframe containing data relating to attributes
for the provided data model for specified components in the data model.
Dataframe is saved locally as a csv if save_file == True, or returned if
save_file == False.
Raises:
ValueError:
If an error is encountered while attempting to get conditional requirements.
This error most likely indicates a mismatch in naming.
'''
        # For each data type to be loaded gather all attributes the user would
# have to provide.
df_store = []
for component in components:
data_dict = {}
# get the json schema
json_schema = self.sg.get_json_schema_requirements(
source_node=component, schema_name=self.path_to_jsonld)
            # Gather all attributes, their valid values and requirements
for key, value in json_schema['properties'].items():
data_dict[key] = {}
for k, v in value.items():
if k == 'enum':
data_dict[key]['Valid Values'] = value['enum']
if key in json_schema['required']:
data_dict[key]['Required'] = True
else:
data_dict[key]['Required'] = False
data_dict[key]['Component'] = component
# Add additional details per key (from the JSON-ld)
for dic in self.jsonld['@graph']:
if 'sms:displayName' in dic.keys():
key = dic['sms:displayName']
if key in data_dict.keys():
data_dict[key]['Attribute'] = dic['sms:displayName']
data_dict[key]['Label'] = dic['rdfs:label']
data_dict[key]['Description'] = dic['rdfs:comment']
if 'validationRules' in dic.keys():
data_dict[key]['Validation Rules'] = dic['validationRules']
# Find conditional dependencies
if 'allOf' in json_schema.keys():
for conditional_dependencies in json_schema['allOf']:
key = list(conditional_dependencies['then']['properties'])[0]
try:
if key in data_dict.keys():
if 'Cond_Req' not in data_dict[key].keys():
data_dict[key]['Cond_Req'] = []
data_dict[key]['Conditional Requirements'] = []
attribute = list(conditional_dependencies['if']['properties'])[0]
value = conditional_dependencies['if']['properties'][attribute]['enum']
# Capitalize attribute if it begins with a lowercase letter, for aesthetics.
if attribute[0].islower():
attribute = attribute.capitalize()
# Remove "Type" (i.e. turn "Biospecimen Type" to "Biospcimen")
if "Type" in attribute:
attribute = attribute.split(" ")[0]
# Remove "Type" (i.e. turn "Tissue Type" to "Tissue")
if "Type" in value[0]:
value[0] = value[0].split(" ")[0]
conditional_statement = f'{attribute} is "{value[0]}"'
if conditional_statement not in data_dict[key]['Conditional Requirements']:
data_dict[key]['Cond_Req'] = True
data_dict[key]['Conditional Requirements'].extend([conditional_statement])
                    except Exception as exc:
                        raise ValueError(
                            "There is an error getting conditional requirements related "
                            f"to the attribute: {key}. The error is likely caused by naming "
                            "inconsistencies (e.g. uppercase, camelcase, ...)"
                        ) from exc
for key, value in data_dict.items():
if 'Conditional Requirements' in value.keys():
## reformat conditional requirement
# get all attributes
attr_lst = [i.split(" is ")[-1] for i in data_dict[key]['Conditional Requirements']]
# join a list of attributes by using OR
attr_str = " OR ".join(attr_lst)
# reformat the conditional requirement
component_name = data_dict[key]['Conditional Requirements'][0].split(' is ')[0]
conditional_statement_str = f' If {component_name} is {attr_str} then "{key}" is required'
data_dict[key]['Conditional Requirements'] = conditional_statement_str
df = pd.DataFrame(data_dict)
df = df.T
cols = ['Attribute', 'Label', 'Description', 'Required', 'Cond_Req', 'Valid Values', 'Conditional Requirements', 'Validation Rules', 'Component']
cols = [col for col in cols if col in df.columns]
df = df[cols]
df = self.convert_string_cols_to_json(df, ['Valid Values'])
#df.to_csv(os.path.join(csv_output_path, data_type + '.vis_data.csv'))
df_store.append(df)
merged_attributes_df = pd.concat(df_store, join='outer')
cols = ['Attribute', 'Label', 'Description', 'Required', 'Cond_Req', 'Valid Values', 'Conditional Requirements', 'Validation Rules', 'Component']
cols = [col for col in cols if col in merged_attributes_df.columns]
merged_attributes_df = merged_attributes_df[cols]
if save_file == True:
return merged_attributes_df.to_csv(os.path.join(self.output_path, self.schema_name + 'attributes_data.vis_data.csv'), index=include_index)
elif save_file == False:
return merged_attributes_df.to_csv(index=include_index)
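A short usage sketch for the explorer above; the model path and component name are example values only.

from schematic.visualization.attributes_explorer import AttributesExplorer

explorer = AttributesExplorer("tests/data/example.model.jsonld")

# Return the merged attribute table for all components as a CSV string rather than writing it to disk.
csv_text = explorer.parse_attributes(save_file=False)

# Or restrict the table to a single component and drop the index column.
patient_csv = explorer.parse_component_attributes(
    component="Patient", save_file=False, include_index=False
)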
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/visualization/attributes_explorer.py | attributes_explorer.py | 0.735167 | 0.214568
## Package-specific files:
#### The files within the `etc` folder are:
`data_models`:
- `biothings.model.jsonld`: Base knowledge graph/vocabulary as specified by the [biolink model](https://biolink.github.io/biolink-model/).
- `schema_org.model.jsonld`: Schema vocabulary as specified by [schema.org](https://schema.org/docs/gs.html#schemaorg_types).
`validation_schemas`:
- `class.schema.json`: JSON Schema used for validation against schema.org class definition standard.
- `property.schema.json`: JSON Schema used for validation against schema.org property definition standard.
- `model.schema.json`: JSON Schema used for validation against schema.org standard.
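As a rough sketch of how these validation schemas can be applied (this is not schematic's own validation path; the file location and the sample definition below are assumptions), any JSON Schema validator such as the `jsonschema` package can check a candidate definition:

```
import json
from jsonschema import validate  # third-party "jsonschema" package

# Load the schema.org class-definition schema shipped under etc/validation_schemas.
with open("schematic/etc/validation_schemas/class.schema.json", encoding="utf-8") as f:
    class_schema = json.load(f)

# A minimal, hypothetical class definition to check against the schema.
candidate = {
    "@id": "bts:ExampleClass",
    "@type": "rdfs:Class",
    "rdfs:label": "ExampleClass",
    "rdfs:comment": "An example class definition.",
}

validate(instance=candidate, schema=class_schema)  # raises ValidationError if the definition is malformed
```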
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/etc/README.md | README.md | 0.683208 | 0.436682
import re
from dataclasses import field
from pydantic.dataclasses import dataclass
from pydantic import validator, ConfigDict, Extra
# This turns on validation for value assignments after creation
pydantic_config = ConfigDict(validate_assignment=True, extra=Extra.forbid)
@dataclass(config=pydantic_config)
class SynapseConfig:
"""
    config: Path to the synapse config file, either absolute or relative to this file
manifest_basename: the name of downloaded manifest files
master_fileview_id: Synapse ID of the file view listing all project data assets.
"""
config: str = ".synapseConfig"
manifest_basename: str = "synapse_storage_manifest"
master_fileview_id: str = "syn23643253"
@validator("master_fileview_id")
@classmethod
def validate_synapse_id(cls, value: str) -> str:
"""Check if string is a valid synapse id
Args:
value (str): A string
Raises:
ValueError: If the value isn't a valid Synapse id
Returns:
(str): The input value
"""
if not re.search("^syn[0-9]+", value):
raise ValueError(f"{value} is not a valid Synapse id")
return value
@validator("config", "manifest_basename")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if not value:
raise ValueError(f"{value} is an empty string")
return value
@dataclass(config=pydantic_config)
class ManifestConfig:
"""
manifest_folder: name of the folder manifests will be saved to locally
title: Title or title prefix given to generated manifest(s)
data_type: Data types of manifests to be generated or data type (singular) to validate
manifest against
"""
manifest_folder: str = "manifests"
title: str = "example"
data_type: list[str] = field(default_factory=lambda: ["Biospecimen", "Patient"])
@validator("title", "manifest_folder")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if not value:
raise ValueError(f"{value} is an empty string")
return value
@dataclass(config=pydantic_config)
class ModelConfig:
"""
location: location of the schema jsonld
"""
location: str = "tests/data/example.model.jsonld"
@validator("location")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if not value:
raise ValueError(f"{value} is an empty string")
return value
@dataclass(config=pydantic_config)
class GoogleSheetsConfig:
"""
master_template_id: The template id of the google sheet.
    strict_validation: Used when doing google sheet validation (regex match) with the validation rules.
        True alerts the user and does not allow entry of bad values.
        False warns the user but allows the entry onto the sheet.
service_acct_creds_synapse_id: The Synapse id of the Google service account credentials.
service_acct_creds: Path to the Google service account credentials,
either absolute or relative to this file
"""
service_acct_creds_synapse_id: str = "syn25171627"
service_acct_creds: str = "schematic_service_account_creds.json"
strict_validation: bool = True
@validator("service_acct_creds")
@classmethod
def validate_string_is_not_empty(cls, value: str) -> str:
"""Check if string is not empty(has at least one char)
Args:
value (str): A string
Raises:
ValueError: If the value is zero characters long
Returns:
(str): The input value
"""
if not value:
raise ValueError(f"{value} is an empty string")
return value
@validator("service_acct_creds_synapse_id")
@classmethod
def validate_synapse_id(cls, value: str) -> str:
"""Check if string is a valid synapse id
Args:
value (str): A string
Raises:
ValueError: If the value isn't a valid Synapse id
Returns:
(str): The input value
"""
if not re.search("^syn[0-9]+", value):
raise ValueError(f"{value} is not a valid Synapse id")
return value
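A small sketch of how these dataclasses behave at runtime; the Synapse IDs below are made-up example values.

from schematic.configuration.dataclasses import ManifestConfig, SynapseConfig

config = SynapseConfig()                     # defaults pass validation
config.master_fileview_id = "syn12345678"    # validate_assignment=True re-runs validators on assignment

try:
    SynapseConfig(master_fileview_id="not-a-synapse-id")
except ValueError as err:
    print(err)                               # "... is not a valid Synapse id"

try:
    ManifestConfig(unknown_field="oops")     # unknown fields are rejected (the config sets extra=Extra.forbid)
except Exception as err:
    print(err)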
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/configuration/dataclasses.py | dataclasses.py | 0.85738 | 0.400017
from typing import Optional, Any
import os
import yaml
from schematic.utils.general import normalize_path
from .dataclasses import (
SynapseConfig,
ManifestConfig,
ModelConfig,
GoogleSheetsConfig,
)
class ConfigNonAllowedFieldError(Exception):
"""Raised when a user submitted config file contains non allowed fields"""
def __init__(
self, message: str, fields: list[str], allowed_fields: list[str]
) -> None:
"""
Args:
message (str): A message describing the error
fields (list[str]): The fields in the config
allowed_fields (list[str]): The allowed fields in the config
"""
self.message = message
self.fields = fields
self.allowed_fields = allowed_fields
super().__init__(self.message)
def __str__(self) -> str:
"""String representation"""
return (
f"{self.message}; "
f"config contains fields: {self.fields}; "
f"allowed fields: {self.allowed_fields}"
)
class Configuration:
"""
This class is used as a singleton by the rest of the package.
It is instantiated only once at the bottom of this file, and that
instance is imported by other modules
"""
def __init__(self) -> None:
self.config_path: Optional[str] = None
self._parent_directory = os.getcwd()
self._synapse_config = SynapseConfig()
self._manifest_config = ManifestConfig()
self._model_config = ModelConfig()
self._google_sheets_config = GoogleSheetsConfig()
def load_config(self, config_path: str) -> None:
"""Loads a user created config file and overwrites any defaults listed in the file
Args:
config_path (str): The path to the config file
Raises:
ConfigNonAllowedFieldError: If there are non allowed fields in the config file
"""
allowed_config_fields = {"asset_store", "manifest", "model", "google_sheets"}
config_path = os.path.expanduser(config_path)
config_path = os.path.abspath(config_path)
self.config_path = config_path
self._parent_directory = os.path.dirname(config_path)
with open(config_path, "r", encoding="utf-8") as file:
config: dict[str, Any] = yaml.safe_load(file)
if not set(config.keys()).issubset(allowed_config_fields):
raise ConfigNonAllowedFieldError(
"Non allowed fields in top level of configuration file.",
list(config.keys()),
list(allowed_config_fields),
)
self._manifest_config = ManifestConfig(**config.get("manifest", {}))
self._model_config = ModelConfig(**config.get("model", {}))
self._google_sheets_config = GoogleSheetsConfig(
**config.get("google_sheets", {})
)
self._set_asset_store(config.get("asset_store", {}))
    def _set_asset_store(self, config: dict[str, Any]) -> None:
        """Sets the asset store section of the configuration; currently only Synapse is supported.

        Args:
            config (dict[str, Any]): The asset_store section of the config file

        Raises:
            ConfigNonAllowedFieldError: If there are non allowed fields in the asset_store section
        """
        allowed_config_fields = {"synapse"}
        if not config:
            return
if not set(config.keys()).issubset(allowed_config_fields):
raise ConfigNonAllowedFieldError(
"Non allowed fields in asset_store of configuration file.",
list(config.keys()),
list(allowed_config_fields),
)
self._synapse_config = SynapseConfig(**config["synapse"])
@property
def synapse_configuration_path(self) -> str:
"""
Returns:
str: The path to the synapse configuration file
"""
return normalize_path(self._synapse_config.config, self._parent_directory)
@property
def synapse_manifest_basename(self) -> str:
"""
Returns:
            str: The name of downloaded manifest files
"""
return self._synapse_config.manifest_basename
@property
def synapse_master_fileview_id(self) -> str:
"""
Returns:
            str: Synapse ID of the file view listing all project data assets
"""
return self._synapse_config.master_fileview_id
@synapse_master_fileview_id.setter
def synapse_master_fileview_id(self, synapse_id: str) -> None:
"""Sets the Synapse master fileview ID
Args:
synapse_id (str): The synapse id to set
"""
self._synapse_config.master_fileview_id = synapse_id
@property
def manifest_folder(self) -> str:
"""
Returns:
            str: Location where manifests will be saved to
"""
return self._manifest_config.manifest_folder
@property
def manifest_title(self) -> str:
"""
Returns:
str: Title or title prefix given to generated manifest(s)
"""
return self._manifest_config.title
@property
def manifest_data_type(self) -> list[str]:
"""
Returns:
list[str]: Data types of manifests to be generated or data type (singular) to validate
manifest against
"""
return self._manifest_config.data_type
@property
def model_location(self) -> str:
"""
Returns:
str: The path to the model.jsonld
"""
return self._model_config.location
@property
def service_account_credentials_synapse_id(self) -> str:
"""
Returns:
str: The Synapse id of the Google service account credentials.
"""
return self._google_sheets_config.service_acct_creds_synapse_id
@property
def service_account_credentials_path(self) -> str:
"""
Returns:
str: The path of the Google service account credentials.
"""
return normalize_path(
self._google_sheets_config.service_acct_creds, self._parent_directory
)
@property
def google_sheets_master_template_id(self) -> str:
"""
Returns:
str: The template id of the google sheet.
"""
return "1LYS5qE4nV9jzcYw5sXwCza25slDfRA1CIg3cs-hCdpU"
@property
def google_sheets_strict_validation(self) -> bool:
"""
Returns:
            bool: Whether or not to disallow bad values in the google sheet
"""
return self._google_sheets_config.strict_validation
@property
def google_required_background_color(self) -> dict[str, float]:
"""
Returns:
dict[str, float]: Background color for google sheet
"""
return {
"red": 0.9215,
"green": 0.9725,
"blue": 0.9803,
}
@property
def google_optional_background_color(self) -> dict[str, float]:
"""
Returns:
dict[str, float]: Background color for google sheet
"""
return {
"red": 1.0,
"green": 1.0,
"blue": 0.9019,
}
# This instantiates the singleton for the rest of the package
CONFIG = Configuration()
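A brief usage sketch for the singleton defined above; the YAML file name and its contents are assumed example values that use only allowed top-level fields.

# Assumed contents of config.yml:
#
#   asset_store:
#     synapse:
#       master_fileview_id: "syn23643253"
#   model:
#     location: "tests/data/example.model.jsonld"
#
from schematic.configuration.configuration import CONFIG, ConfigNonAllowedFieldError

try:
    CONFIG.load_config("config.yml")
except ConfigNonAllowedFieldError as err:
    print(err)  # raised only if the file contains keys outside the allowed set

print(CONFIG.synapse_master_fileview_id)  # "syn23643253"
print(CONFIG.model_location)              # "tests/data/example.model.jsonld"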
schematicpy | /schematicpy-23.8.1.tar.gz/schematicpy-23.8.1/schematic/configuration/configuration.py | configuration.py | 0.909914 | 0.174199
Schematics Factory
==================
Inspired by [Voluptuous](https://github.com/alecthomas/voluptuous).
It's sometimes inconvenient to define
named [Schematics](https://github.com/schematics/schematics)
Models, especially when those models are deeply nested.
Example:
```
from schematics.models import Model
from schematics.types import BooleanType, IntType, StringType
from schematics.types.compound import ModelType


class InnerModel(Model):
inner_bool = BooleanType()
class MiddleModel(Model):
middle_int = IntType()
middle_nested = ModelType(InnerModel)
class OuterModel(Model):
outer_str = StringType()
outer_nested = ModelType(MiddleModel)
model_instance = OuterModel(input_)
model_instance.validate()
```
So, this library provides a convenient syntax for defining
deeply nested Models.
```
from schematics.types import BooleanType, IntType, StringType
from schematics.types.compound import ModelType
from schematics_factory import model
OuterModel = model({
'outer_str': StringType(),
'outer_nested': ModelType(model({
'middle_int': IntType(),
'middle_nested': ModelType(model({
'inner_bool': BooleanType()
}))
}))
})
model_instance = OuterModel(input_)
model_instance.validate()
```
The model() function can also be imported as _model_factory_.
Alternative Syntax
------------------
Schema factory arguments can also be supplied as keyword
arguments rather than a dictionary.
```
Person = model(name=StringType(), age=IntType())
person = Person(dict(name='Test', age=27))
person.validate()
```
For nested Models, a concise __nested()__ convenience function
is provided to replace ModelType(model(...)) with nested(...).
The nested() function can also be imported as _nested_model_.
```
from schematics_factory import model, nested
Person = model(name=StringType(), pet=nested(name=StringType()))
person = Person(dict(name='Test', pet=dict(name='Rover')))
person.validate()
```
Nested models can also be provided as plain dictionary literals.
```
Person = model(name=StringType(), pet=dict(name=StringType()))
person = Person(dict(name='Test', pet=dict(name='Rover')))
person.validate()
```
Or equivalently...
```
Person = model({
'name': StringType(),
'pet': {
'name': StringType()
}
})
person = Person({
'name': 'Test',
'pet': {
'name': 'Rover'
}
})
person.validate()
```
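Factory-produced classes are ordinary Schematics Models, so they can also be mixed into
conventionally defined Models (a small sketch; the imports shown are the usual schematics modules).

```
from schematics.models import Model
from schematics.types import StringType
from schematics.types.compound import ModelType
from schematics_factory import model

Address = model(street=StringType(), city=StringType())

class Person(Model):
    name = StringType(required=True)
    address = ModelType(Address)

person = Person(dict(name='Test', address=dict(street='1 Main St', city='Springfield')))
person.validate()
```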
schematics-factory | /schematics-factory-0.1.0.tar.gz/schematics-factory-0.1.0/README.md | README.md | 0.572484 | 0.890818
2.1.0 / Unreleased
==================
**[BREAKING CHANGE]**
- Drop Python 2.6 support
`#517 <https://github.com/schematics/schematics/pull/517>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
Other changes:
- Add TimedeltaType
`#540 <https://github.com/schematics/schematics/pull/540>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Allow to create Model fields dynamically
`#512 <https://github.com/schematics/schematics/pull/512>`__
(`lkraider <https://github.com/lkraider>`__)
- Allow ModelOptions to have extra parameters
`#449 <https://github.com/schematics/schematics/pull/449>`__
(`rmb938 <https://github.com/rmb938>`__)
`#506 <https://github.com/schematics/schematics/pull/506>`__
(`ekampf <https://github.com/ekampf>`__)
- Accept callables as serialize roles
`#508 <https://github.com/schematics/schematics/pull/508>`__
(`lkraider <https://github.com/lkraider>`__)
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Simplify PolyModelType.find_model for readability
`#537 <https://github.com/schematics/schematics/pull/537>`__
(`kstrauser <https://github.com/kstrauser>`__)
- Enable PolyModelType recursive validation
`#535 <https://github.com/schematics/schematics/pull/535>`__
(`javiertejero <https://github.com/javiertejero>`__)
- Documentation fixes
`#509 <https://github.com/schematics/schematics/pull/509>`__
(`Tuoris <https://github.com/Tuoris>`__)
`#514 <https://github.com/schematics/schematics/pull/514>`__
(`tommyzli <https://github.com/tommyzli>`__)
`#518 <https://github.com/schematics/schematics/pull/518>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#546 <https://github.com/schematics/schematics/pull/546>`__
(`harveyslash <https://github.com/harveyslash>`__)
- Fix Model.init validation when partial is True
`#531 <https://github.com/schematics/schematics/issues/531>`__
(`lkraider <https://github.com/lkraider>`__)
- Minor number types refactor and mocking fixes
`#519 <https://github.com/schematics/schematics/pull/519>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#520 <https://github.com/schematics/schematics/pull/520>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
- Add ability to import models as strings
`#496 <https://github.com/schematics/schematics/pull/496>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Add EnumType
`#504 <https://github.com/schematics/schematics/pull/504>`__
(`ekamil <https://github.com/ekamil>`__)
- Dynamic models: Possible memory issues because of _subclasses
`#502 <https://github.com/schematics/schematics/pull/502>`__
(`mjrk <https://github.com/mjrk>`__)
- Add type hints to constructors of field type classes
`#488 <https://github.com/schematics/schematics/pull/488>`__
(`KonishchevDmitry <https://github.com/KonishchevDmitry>`__)
- Regression: Do not call field validator if field has not been set
`#499 <https://github.com/schematics/schematics/pull/499>`__
(`cmonfort <https://github.com/cmonfort>`__)
- Add possibility to translate strings and add initial pt_BR translations
`#495 <https://github.com/schematics/schematics/pull/495>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.1 / 2017-05-30
==================
- Support for raising DataError inside custom validate_fieldname methods.
`#441 <https://github.com/schematics/schematics/pull/441>`__
(`alexhayes <https://github.com/alexhayes>`__)
- Add specialized SchematicsDeprecationWarning.
(`lkraider <https://github.com/lkraider>`__)
- DateTimeType to_native method should handle type errors gracefully.
`#491 <https://github.com/schematics/schematics/pull/491>`__
(`e271828- <https://github.com/e271828->`__)
- Allow fields names to override the mapping-interface methods.
`#489 <https://github.com/schematics/schematics/pull/489>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.0 / 2017-05-22
==================
**[BREAKING CHANGE]**
Version 2.0 introduces many API changes, and it is not fully backwards-compatible with 1.x code.
`Full Changelog <https://github.com/schematics/schematics/compare/v1.1.2...v2.0.0>`_
- Add syntax highlighting to README examples
`#486 <https://github.com/schematics/schematics/pull/486>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Encode Unsafe data state in Model
`#484 <https://github.com/schematics/schematics/pull/484>`__
(`lkraider <https://github.com/lkraider>`__)
- Add MACAddressType
`#482 <https://github.com/schematics/schematics/pull/482>`__
(`aleksej-paschenko <https://github.com/aleksej-paschenko>`__)
2.0.0.b1 / 2017-04-06
=====================
- Enhancing and addressing some issues around exceptions:
`#477 <https://github.com/schematics/schematics/pull/477>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
- Allow primitive and native types to be inspected
`#431 <https://github.com/schematics/schematics/pull/431>`__
(`chadrik <https://github.com/chadrik>`__)
- Atoms iterator performance improvement
`#476 <https://github.com/schematics/schematics/pull/476>`__
(`vovanbo <https://github.com/vovanbo>`__)
- Fixes 453: Recursive import\_loop with ListType
`#475 <https://github.com/schematics/schematics/pull/475>`__
(`lkraider <https://github.com/lkraider>`__)
- Schema API
`#466 <https://github.com/schematics/schematics/pull/466>`__
(`lkraider <https://github.com/lkraider>`__)
- Tweak code example to avoid sql injection
`#462 <https://github.com/schematics/schematics/pull/462>`__
(`Ian-Foote <https://github.com/Ian-Foote>`__)
- Convert readthedocs links for their .org -> .io migration for hosted
projects `#454 <https://github.com/schematics/schematics/pull/454>`__
(`adamchainz <https://github.com/adamchainz>`__)
- Support all non-string Iterables as choices (dev branch)
`#436 <https://github.com/schematics/schematics/pull/436>`__
(`di <https://github.com/di>`__)
- When testing if a values is None or Undefined, use 'is'.
`#425 <https://github.com/schematics/schematics/pull/425>`__
(`chadrik <https://github.com/chadrik>`__)
2.0.0a1 / 2016-05-03
====================
- Restore v1 to\_native behavior; simplify converter code
`#412 <https://github.com/schematics/schematics/pull/412>`__
(`bintoro <https://github.com/bintoro>`__)
- Change conversion rules for booleans
`#407 <https://github.com/schematics/schematics/pull/407>`__
(`bintoro <https://github.com/bintoro>`__)
- Test for Model.\_\_init\_\_ context passing to types
`#399 <https://github.com/schematics/schematics/pull/399>`__
(`sheilatron <https://github.com/sheilatron>`__)
- Code normalization for Python 3 + general cleanup
`#391 <https://github.com/schematics/schematics/pull/391>`__
(`bintoro <https://github.com/bintoro>`__)
- Add support for arbitrary field metadata.
`#390 <https://github.com/schematics/schematics/pull/390>`__
(`chadrik <https://github.com/chadrik>`__)
- Introduce MixedType
`#380 <https://github.com/schematics/schematics/pull/380>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev2 / 2016-02-06
=======================
- Type maintenance
`#383 <https://github.com/schematics/schematics/pull/383>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev1 / 2016-02-01
=======================
- Performance optimizations
`#378 <https://github.com/schematics/schematics/pull/378>`__
(`bintoro <https://github.com/bintoro>`__)
- Validation refactoring + exception redesign
`#374 <https://github.com/schematics/schematics/pull/374>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix typo: serilaizataion --> serialization
`#373 <https://github.com/schematics/schematics/pull/373>`__
(`jeffwidman <https://github.com/jeffwidman>`__)
- Add support for undefined values
`#372 <https://github.com/schematics/schematics/pull/372>`__
(`bintoro <https://github.com/bintoro>`__)
- Serializable improvements
`#371 <https://github.com/schematics/schematics/pull/371>`__
(`bintoro <https://github.com/bintoro>`__)
- Unify import/export interface across all types
`#368 <https://github.com/schematics/schematics/pull/368>`__
(`bintoro <https://github.com/bintoro>`__)
- Correctly decode bytestrings in Python 3
`#365 <https://github.com/schematics/schematics/pull/365>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix NumberType.to\_native()
`#364 <https://github.com/schematics/schematics/pull/364>`__
(`bintoro <https://github.com/bintoro>`__)
- Make sure field.validate() uses a native type
`#363 <https://github.com/schematics/schematics/pull/363>`__
(`bintoro <https://github.com/bintoro>`__)
- Don't validate ListType items twice
`#362 <https://github.com/schematics/schematics/pull/362>`__
(`bintoro <https://github.com/bintoro>`__)
- Collect field validators as bound methods
`#361 <https://github.com/schematics/schematics/pull/361>`__
(`bintoro <https://github.com/bintoro>`__)
- Propagate environment during recursive import/export/validation
`#359 <https://github.com/schematics/schematics/pull/359>`__
(`bintoro <https://github.com/bintoro>`__)
- DateTimeType & TimestampType major rewrite
`#358 <https://github.com/schematics/schematics/pull/358>`__
(`bintoro <https://github.com/bintoro>`__)
- Always export empty compound objects as {} / []
`#351 <https://github.com/schematics/schematics/pull/351>`__
(`bintoro <https://github.com/bintoro>`__)
- export\_loop cleanup
`#350 <https://github.com/schematics/schematics/pull/350>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix FieldDescriptor.\_\_delete\_\_ to not touch model
`#349 <https://github.com/schematics/schematics/pull/349>`__
(`bintoro <https://github.com/bintoro>`__)
- Add validation method for latitude and longitude ranges in
GeoPointType
`#347 <https://github.com/schematics/schematics/pull/347>`__
(`wraziens <https://github.com/wraziens>`__)
- Fix longitude values for GeoPointType mock and add tests
`#344 <https://github.com/schematics/schematics/pull/344>`__
(`wraziens <https://github.com/wraziens>`__)
- Add support for self-referential ModelType fields
`#335 <https://github.com/schematics/schematics/pull/335>`__
(`bintoro <https://github.com/bintoro>`__)
- avoid unnecessary code path through try/except
`#327 <https://github.com/schematics/schematics/pull/327>`__
(`scavpy <https://github.com/scavpy>`__)
- Get mock object for ModelType and ListType
`#306 <https://github.com/schematics/schematics/pull/306>`__
(`kaiix <https://github.com/kaiix>`__)
1.1.3 / 2017-06-27
==================
* [Maintenance] (`#501 <https://github.com/schematics/schematics/issues/501>`_) Dynamic models: Possible memory issues because of _subclasses
1.1.2 / 2017-03-27
==================
* [Bug] (`#478 <https://github.com/schematics/schematics/pull/478>`_) Fix dangerous performance issue with ModelConversionError in nested models
1.1.1 / 2015-11-03
==================
* [Bug] (`befa202 <https://github.com/schematics/schematics/commit/befa202c3b3202aca89fb7ef985bdca06f9da37c>`_) Fix Unicode issue with DecimalType
* [Documentation] (`41157a1 <https://github.com/schematics/schematics/commit/41157a13896bd32a337c5503c04c5e9cc30ba4c7>`_) Documentation overhaul
* [Bug] (`860d717 <https://github.com/schematics/schematics/commit/860d71778421981f284c0612aec665ebf0cfcba2>`_) Fix import that was negatively affecting performance
* [Feature] (`93b554f <https://github.com/schematics/schematics/commit/93b554fd6a4e7b38133c4da5592b1843101792f0>`_) Add DataObject to datastructures.py
* [Bug] (`#236 <https://github.com/schematics/schematics/pull/236>`_) Set `None` on a field that's a compound type should honour that semantics
* [Maintenance] (`#348 <https://github.com/schematics/schematics/pull/348>`_) Update requirements
* [Maintenance] (`#346 <https://github.com/schematics/schematics/pull/346>`_) Combining Requirements
* [Maintenance] (`#342 <https://github.com/schematics/schematics/pull/342>`_) Remove to_primitive() method from compound types
* [Bug] (`#339 <https://github.com/schematics/schematics/pull/339>`_) Basic number validation
* [Bug] (`#336 <https://github.com/schematics/schematics/pull/336>`_) Don't evaluate serializable when accessed through class
* [Bug] (`#321 <https://github.com/schematics/schematics/pull/321>`_) Do not compile regex
* [Maintenance] (`#319 <https://github.com/schematics/schematics/pull/319>`_) Remove mock from install_requires
1.1.0 / 2015-07-12
==================
* [Feature] (`#303 <https://github.com/schematics/schematics/pull/303>`_) fix ListType, validate_items adds to errors list just field name without...
* [Feature] (`#304 <https://github.com/schematics/schematics/pull/304>`_) Include Partial Data when Raising ModelConversionError
* [Feature] (`#305 <https://github.com/schematics/schematics/pull/305>`_) Updated domain verifications to fit to RFC/working standards
* [Feature] (`#308 <https://github.com/schematics/schematics/pull/308>`_) Grennady ordered validation
* [Feature] (`#309 <https://github.com/schematics/schematics/pull/309>`_) improves date_time_type error message for custom formats
* [Feature] (`#310 <https://github.com/schematics/schematics/pull/310>`_) accept optional 'Z' suffix for UTC date_time_type format
* [Feature] (`#311 <https://github.com/schematics/schematics/pull/311>`_) Remove commented lines from models.py
* [Feature] (`#230 <https://github.com/schematics/schematics/pull/230>`_) Message normalization
1.0.4 / 2015-04-13
==================
* [Example] (`#286 <https://github.com/schematics/schematics/pull/286>`_) Add schematics usage with Django
* [Feature] (`#292 <https://github.com/schematics/schematics/pull/292>`_) increase domain length to 10 for .holiday, .vacations
* [Feature] (`#297 <https://github.com/schematics/schematics/pull/297>`_) Support for fields order in serialized format
* [Feature] (`#300 <https://github.com/schematics/schematics/pull/300>`_) increase domain length to 32
1.0.3 / 2015-03-07
==================
* [Feature] (`#284 <https://github.com/schematics/schematics/pull/284>`_) Add missing requirement for `six`
* [Feature] (`#283 <https://github.com/schematics/schematics/pull/283>`_) Update error msgs to print out invalid values in base.py
* [Feature] (`#281 <https://github.com/schematics/schematics/pull/281>`_) Update Model.__eq__
* [Feature] (`#267 <https://github.com/schematics/schematics/pull/267>`_) Type choices should be list or tuple
1.0.2 / 2015-02-12
==================
* [Bug] (`#280 <https://github.com/schematics/schematics/issues/280>`_) Fix the circular import issue.
1.0.1 / 2015-02-01
==================
* [Feature] (`#184 <https://github.com/schematics/schematics/issues/184>`_ / `03b2fd9 <https://github.com/schematics/schematics/commit/03b2fd97fb47c00e8d667cc8ea7254cc64d0f0a0>`_) Support for polymorphic model fields
* [Bug] (`#233 <https://github.com/schematics/schematics/pull/233>`_) Set field.owner_model recursively and honor ListType.field.serialize_when_none
* [Bug] (`#252 <https://github.com/schematics/schematics/pull/252>`_) Fixed project URL
* [Feature] (`#259 <https://github.com/schematics/schematics/pull/259>`_) Give export loop to serializable when type has one
* [Feature] (`#262 <https://github.com/schematics/schematics/pull/262>`_) Make copies of inherited meta attributes when setting up a Model
* [Documentation] (`#276 <https://github.com/schematics/schematics/pull/276>`_) Improve the documentation of get_mock_object
1.0.0 / 2014-10-16
==================
* [Documentation] (`#239 <https://github.com/schematics/schematics/issues/239>`_) Fix typo with wording suggestion
* [Documentation] (`#244 <https://github.com/schematics/schematics/issues/244>`_) fix wrong reference in docs
* [Documentation] (`#246 <https://github.com/schematics/schematics/issues/246>`_) Using the correct function name in the docstring
* [Documentation] (`#245 <https://github.com/schematics/schematics/issues/245>`_) Making the docstring match actual parameter names
* [Feature] (`#241 <https://github.com/schematics/schematics/issues/241>`_) Py3k support
0.9.5 / 2014-07-19
==================
* [Feature] (`#191 <https://github.com/schematics/schematics/pull/191>`_) Updated import_data to avoid overwriting existing data. deserialize_mapping can now support partial and nested models.
* [Documentation] (`#192 <https://github.com/schematics/schematics/pull/192>`_) Document the creation of custom types
* [Feature] (`#193 <https://github.com/schematics/schematics/pull/193>`_) Add primitive types accepting values of any simple or compound primitive JSON type.
* [Bug] (`#194 <https://github.com/schematics/schematics/pull/194>`_) Change standard coerce_key function to unicode
* [Tests] (`#196 <https://github.com/schematics/schematics/pull/196>`_) Test fixes and cleanup
* [Feature] (`#197 <https://github.com/schematics/schematics/pull/197>`_) Giving context to serialization
* [Bug] (`#198 <https://github.com/schematics/schematics/pull/198>`_) Fixed typo in variable name in DateTimeType
* [Feature] (`#200 <https://github.com/schematics/schematics/pull/200>`_) Added the option to turn off strict conversion when creating a Model from a dict
* [Feature] (`#212 <https://github.com/schematics/schematics/pull/212>`_) Support exporting ModelType fields with subclassed model instances
* [Feature] (`#214 <https://github.com/schematics/schematics/pull/214>`_) Create mock objects using a class's fields as a template
* [Bug] (`#215 <https://github.com/schematics/schematics/pull/215>`_) PEP 8 FTW
* [Feature] (`#216 <https://github.com/schematics/schematics/pull/216>`_) Datastructures cleanup
* [Feature] (`#217 <https://github.com/schematics/schematics/pull/217>`_) Models cleanup pt 1
* [Feature] (`#218 <https://github.com/schematics/schematics/pull/218>`_) Models cleanup pt 2
* [Feature] (`#219 <https://github.com/schematics/schematics/pull/219>`_) Mongo cleanup
* [Feature] (`#220 <https://github.com/schematics/schematics/pull/220>`_) Temporal cleanup
* [Feature] (`#221 <https://github.com/schematics/schematics/pull/221>`_) Base cleanup
* [Feature] (`#224 <https://github.com/schematics/schematics/pull/224>`_) Exceptions cleanup
* [Feature] (`#225 <https://github.com/schematics/schematics/pull/225>`_) Validate cleanup
* [Feature] (`#226 <https://github.com/schematics/schematics/pull/226>`_) Serializable cleanup
* [Feature] (`#227 <https://github.com/schematics/schematics/pull/227>`_) Transforms cleanup
* [Feature] (`#228 <https://github.com/schematics/schematics/pull/228>`_) Compound cleanup
* [Feature] (`#229 <https://github.com/schematics/schematics/pull/229>`_) UUID cleanup
* [Feature] (`#231 <https://github.com/schematics/schematics/pull/231>`_) Booleans as numbers
0.9.4 / 2013-12-08
==================
* [Feature] (`#178 <https://github.com/schematics/schematics/pull/178>`_) Added deserialize_from flag to BaseType for alternate field names on import
* [Bug] (`#186 <https://github.com/schematics/schematics/pull/186>`_) Compoundtype support in ListTypes
* [Bug] (`#181 <https://github.com/schematics/schematics/pull/181>`_) Removed that stupid print statement!
* [Feature] (`#182 <https://github.com/schematics/schematics/pull/182>`_) Default roles system
* [Documentation] (`#190 <https://github.com/schematics/schematics/pull/190>`_) Typos
* [Bug] (`#177 <https://github.com/schematics/schematics/pull/177>`_) Removed `__iter__` from ModelMeta
* [Documentation] (`#188 <https://github.com/schematics/schematics/pull/188>`_) Typos
0.9.3 / 2013-10-20
==================
* [Documentation] More improvements
* [Feature] (`#147 <https://github.com/schematics/schematics/pull/147>`_) Complete conversion over to py.test
* [Bug] (`#176 <https://github.com/schematics/schematics/pull/176>`_) Fixed bug preventing clean override of options class
* [Bug] (`#174 <https://github.com/schematics/schematics/pull/174>`_) Python 2.6 support
0.9.2 / 2013-09-13
==================
* [Documentation] New History file!
* [Documentation] Major improvements to documentation
* [Feature] Renamed ``check_value`` to ``validate_range``
* [Feature] Changed ``serialize`` to ``to_native``
* [Bug] (`#155 <https://github.com/schematics/schematics/pull/155>`_) NumberType number range validation bugfix
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/HISTORY.rst | HISTORY.rst |
==========
Schematics
==========
.. rubric:: Python Data Structures for Humans™.
.. image:: https://travis-ci.org/schematics/schematics.svg?branch=master
:target: https://travis-ci.org/schematics/schematics
:alt: Build Status
.. image:: https://coveralls.io/repos/github/schematics/schematics/badge.svg?branch=master
:target: https://coveralls.io/github/schematics/schematics?branch=master
:alt: Coverage
About
=====
**Project documentation:** https://schematics.readthedocs.io/en/latest/
Schematics is a Python library to combine types into structures, validate them,
and transform the shapes of your data based on simple descriptions.
The internals are similar to ORM type systems, but there is no database layer
in Schematics. Instead, we believe that building a database
layer is made significantly easier when Schematics handles everything but
writing the query.
Further, it can be used for a range of tasks where having a database involved
may not make sense.
Some common use cases:
+ Design and document specific `data structures <https://schematics.readthedocs.io/en/latest/usage/models.html>`_
+ `Convert structures <https://schematics.readthedocs.io/en/latest/usage/exporting.html#converting-data>`_ to and from different formats such as JSON or MsgPack
+ `Validate <https://schematics.readthedocs.io/en/latest/usage/validation.html>`_ API inputs
+ `Remove fields based on access rights <https://schematics.readthedocs.io/en/latest/usage/exporting.html>`_ of some data's recipient
+ Define message formats for communications protocols, like an RPC
+ Custom `persistence layers <https://schematics.readthedocs.io/en/latest/usage/models.html#model-configuration>`_
Example
=======
This is a simple Model.
.. code:: python
>>> from schematics.models import Model
>>> from schematics.types import StringType, URLType
>>> class Person(Model):
... name = StringType(required=True)
... website = URLType()
...
>>> person = Person({'name': u'Joe Strummer',
... 'website': 'http://soundcloud.com/joestrummer'})
>>> person.name
u'Joe Strummer'
Serializing the data to JSON.
.. code:: python
>>> import json
>>> json.dumps(person.to_primitive())
{"name": "Joe Strummer", "website": "http://soundcloud.com/joestrummer"}
Let's try validating without a name value, since it's required.
.. code:: python
>>> person = Person()
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "schematics/models.py", line 231, in validate
raise DataError(e.messages)
schematics.exceptions.DataError: {'name': ['This field is required.']}
Add the field and validation passes.
.. code:: python
>>> person = Person()
>>> person.name = 'Amon Tobin'
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
>>>
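Roles can filter fields on export, which is how the "remove fields based on
access rights" use case above is handled. A minimal sketch, assuming a
hypothetical ``secret`` field and an illustrative role named ``public``:

.. code:: python

    >>> from schematics.models import Model
    >>> from schematics.types import StringType
    >>> from schematics.transforms import blacklist
    >>> class Account(Model):
    ...     name = StringType()
    ...     secret = StringType()
    ...     class Options:
    ...         roles = {'public': blacklist('secret')}
    ...
    >>> account = Account({'name': u'Joe', 'secret': u'hunter2'})
    >>> account.to_primitive(role='public')
    {'name': u'Joe'}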
.. _coverage:
Testing & Coverage support
==========================
Run coverage and check the missing statements. ::
$ coverage run --source schematics -m py.test && coverage report
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/README.rst | README.rst |
from __future__ import absolute_import
import functools
import operator
import sys
__all__ = ['PY2', 'PY3', 'string_type', 'iteritems', 'metaclass', 'py_native_string', 'reraise', 'str_compat']
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY2:
__all__ += ['bytes', 'str', 'map', 'zip', 'range']
bytes = str
str = unicode
string_type = basestring
range = xrange
from itertools import imap as map
from itertools import izip as zip
iteritems = operator.methodcaller('iteritems')
itervalues = operator.methodcaller('itervalues')
    # reraise code taken from werkzeug (BSD license): https://github.com/pallets/werkzeug/blob/master/LICENSE
exec('def reraise(tp, value, tb=None):\n raise tp, value, tb')
else:
string_type = str
iteritems = operator.methodcaller('items')
itervalues = operator.methodcaller('values')
    # reraise code taken from werkzeug (BSD license): https://github.com/pallets/werkzeug/blob/master/LICENSE
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
def metaclass(metaclass):
def make_class(cls):
attrs = cls.__dict__.copy()
if attrs.get('__dict__'):
del attrs['__dict__']
del attrs['__weakref__']
return metaclass(cls.__name__, cls.__bases__, attrs)
return make_class
def py_native_string(source):
"""
Converts Unicode strings to bytestrings on Python 2. The intended usage is to
wrap a function or a string in cases where Python 2 expects a native string.
"""
if PY2:
if isinstance(source, str):
return source.encode('ascii')
elif callable(source):
@functools.wraps(source)
def new_func(*args, **kwargs):
rv = source(*args, **kwargs)
if isinstance(rv, str):
rv = rv.encode('unicode-escape')
return rv
return new_func
return source
def str_compat(class_):
"""
    On Python 2, patches the ``__str__`` and ``__unicode__`` methods on the given class
so that the class can be written for Python 3 and Unicode.
"""
if PY2:
if '__str__' in class_.__dict__ and '__unicode__' not in class_.__dict__:
class_.__unicode__ = class_.__str__
class_.__str__ = py_native_string(class_.__unicode__)
return class_
def repr_compat(class_):
if PY2:
if '__repr__' in class_.__dict__:
class_.__repr__ = py_native_string(class_.__repr__)
return class_
def _dict(mapping):
return dict((key, mapping[key]) for key in mapping)
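# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): ``str_compat`` lets a class
# define a single text-returning ``__str__`` that behaves correctly on both
# Python 2 and 3, and ``iteritems`` iterates a mapping lazily on Python 2.
# The class below is purely illustrative.
# ---------------------------------------------------------------------------
if __name__ == '__main__':  # pragma: no cover - illustrative only
    @str_compat
    class Greeting(object):
        def __str__(self):
            return u'hello from compat'

    print(str(Greeting()))              # native str on PY2, text on PY3
    for key, value in iteritems({'answer': 42}):
        print(key, value)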
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/compat.py | compat.py |
from __future__ import unicode_literals, absolute_import
import inspect
import functools
from .common import *
from .datastructures import Context
from .exceptions import FieldError, DataError
from .transforms import import_loop, validation_converter
from .undefined import Undefined
from .iteration import atoms
__all__ = []
def validate(schema, mutable, raw_data=None, trusted_data=None,
partial=False, strict=False, convert=True, context=None, **kwargs):
"""
Validate some untrusted data using a model. Trusted data can be passed in
the `trusted_data` parameter.
:param schema:
The Schema to use as source for validation.
:param mutable:
A mapping or instance that can be changed during validation by Schema
functions.
:param raw_data:
A mapping or instance containing new data to be validated.
:param partial:
Allow partial data to validate; useful for PATCH requests.
Essentially drops the ``required=True`` arguments from field
definitions. Default: False
:param strict:
Complain about unrecognized keys. Default: False
:param trusted_data:
A ``dict``-like structure that may contain already validated data.
:param convert:
Controls whether to perform import conversion before validating.
Can be turned off to skip an unnecessary conversion step if all values
are known to have the right datatypes (e.g., when validating immediately
after the initial import). Default: True
:returns: data
``dict`` containing the valid raw_data plus ``trusted_data``.
If errors are found, they are raised as a ValidationError with a list
of errors attached.
"""
if raw_data is None:
raw_data = mutable
context = context or get_validation_context(partial=partial, strict=strict,
convert=convert)
errors = {}
try:
data = import_loop(schema, mutable, raw_data, trusted_data=trusted_data,
context=context, **kwargs)
except DataError as exc:
errors = dict(exc.errors)
data = exc.partial_data
errors.update(_validate_model(schema, mutable, data, context))
if errors:
raise DataError(errors, data)
return data
def _validate_model(schema, mutable, data, context):
"""
Validate data using model level methods.
:param schema:
The Schema to validate ``data`` against.
:param mutable:
A mapping or instance that will be passed to the validator containing
the original data and that can be mutated.
:param data:
A dict with data to validate. Invalid items are removed from it.
:returns:
Errors of the fields that did not pass validation.
"""
errors = {}
invalid_fields = []
has_validator = lambda atom: (
atom.value is not Undefined and
atom.name in schema._validator_functions
)
for field_name, field, value in atoms(schema, data, filter=has_validator):
try:
schema._validator_functions[field_name](mutable, data, value, context)
except (FieldError, DataError) as exc:
serialized_field_name = field.serialized_name or field_name
errors[serialized_field_name] = exc.errors
invalid_fields.append(field_name)
for field_name in invalid_fields:
data.pop(field_name)
return errors
def get_validation_context(**options):
validation_options = {
'field_converter': validation_converter,
'partial': False,
'strict': False,
'convert': True,
'validate': True,
'new': False,
}
validation_options.update(options)
return Context(**validation_options)
def prepare_validator(func, argcount):
if isinstance(func, classmethod):
func = func.__get__(object).__func__
try:
func_args = inspect.getfullargspec(func).args # PY3
except AttributeError:
func_args = inspect.getargspec(func).args # PY2
if len(func_args) < argcount:
@functools.wraps(func)
def newfunc(*args, **kwargs):
if not kwargs or kwargs.pop('context', 0) is 0:
args = args[:-1]
return func(*args, **kwargs)
return newfunc
return func
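# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): ``validate`` is normally
# reached through the Model layer, as below. The model and field names are
# illustrative, and the block assumes the schematics package is installed
# (this module uses relative imports, so run it via ``python -m``).
# ---------------------------------------------------------------------------
if __name__ == '__main__':  # pragma: no cover - illustrative only
    from schematics.models import Model
    from schematics.types import StringType, IntType

    class Player(Model):
        name = StringType(required=True)
        level = IntType()

    player = Player({'level': 3})
    try:
        player.validate()               # delegates to validate() above
    except DataError as exc:
        print(exc.errors)               # {'name': [...]}: 'name' is required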
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/validate.py | validate.py |
from __future__ import unicode_literals, absolute_import
from collections import namedtuple
from .compat import iteritems
from .undefined import Undefined
try:
# optional type checking
import typing
if typing.TYPE_CHECKING:
from typing import Mapping, Tuple, Callable, Optional, Any, Iterable
from .schema import Schema
except ImportError:
pass
Atom = namedtuple('Atom', ('name', 'field', 'value'))
Atom.__new__.__defaults__ = (None,) * len(Atom._fields)
def atoms(schema, mapping, keys=tuple(Atom._fields), filter=None):
# type: (Schema, Mapping, Tuple[str, str, str], Optional[Callable[[Atom], bool]]) -> Iterable[Atom]
"""
    Iterator over the atomic components of a model definition and the relevant
    data, yielding a 3-tuple of the field's name, its type instance and
    its value.
:type schema: schematics.schema.Schema
:param schema:
The Schema definition.
:type mapping: Mapping
:param mapping:
The structure where fields from schema are mapped to values. The only
expectation for this structure is that it implements a ``Mapping``
interface.
:type keys: Tuple[str, str, str]
:param keys:
Tuple specifying the output of the iterator. Valid keys are:
`name`: the field name
`field`: the field descriptor object
`value`: the current value set on the field
Specifying invalid keys will raise an exception.
:type filter: Optional[Callable[[Atom], bool]]
:param filter:
Function to filter out atoms from the iteration.
:rtype: Iterable[Atom]
"""
if not set(keys).issubset(Atom._fields):
raise TypeError('invalid key specified')
has_name = 'name' in keys
has_field = 'field' in keys
has_value = (mapping is not None) and ('value' in keys)
for field_name, field in iteritems(schema.fields):
value = Undefined
if has_value:
try:
value = mapping[field_name]
except Exception:
value = Undefined
atom_tuple = Atom(
name=field_name if has_name else None,
field=field if has_field else None,
value=value)
if filter is None:
yield atom_tuple
elif filter(atom_tuple):
yield atom_tuple
class atom_filter:
"""Group for the default filter functions."""
@staticmethod
def has_setter(atom):
return getattr(atom.field, 'fset', None) is not None
@staticmethod
def not_setter(atom):
return not atom_filter.has_setter(atom)
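# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): iterating a model's fields
# and their current values with ``atoms``. The model and field names are
# illustrative, and the block assumes the schematics package is installed
# (this module uses relative imports, so run it via ``python -m``).
# ---------------------------------------------------------------------------
if __name__ == '__main__':  # pragma: no cover - illustrative only
    from schematics.models import Model
    from schematics.types import StringType, IntType

    class Book(Model):
        title = StringType()
        pages = IntType()

    book = Book({'title': 'Domain Modeling'})
    for atom in atoms(book._schema, book):
        # Each item is an Atom(name, field, value) namedtuple; unset fields
        # come through as Undefined.
        print(atom.name, type(atom.field).__name__, atom.value)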
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/iteration.py | iteration.py |
from .compat import str_compat, repr_compat
try:
from collections.abc import Set # PY3
except ImportError:
from collections import Set # PY2
@repr_compat
@str_compat
class Role(Set):
"""
A ``Role`` object can be used to filter specific fields against a sequence.
The ``Role`` contains two things: a set of names and a function.
    The function describes how to filter, taking a field name and its value as
    input and returning ``True`` or ``False`` to indicate whether that field
    should be skipped.
A ``Role`` can be operated on as a ``Set`` object representing the fields
it has an opinion on. When Roles are combined with other roles, only the
filtering behavior of the first role is used.
"""
def __init__(self, function, fields):
self.function = function
self.fields = set(fields)
def _from_iterable(self, iterable):
return Role(self.function, iterable)
def __contains__(self, value):
return value in self.fields
def __iter__(self):
return iter(self.fields)
def __len__(self):
return len(self.fields)
def __eq__(self, other):
return (self.function.__name__ == other.function.__name__ and
self.fields == other.fields)
def __str__(self):
return '%s(%s)' % (self.function.__name__,
', '.join("'%s'" % f for f in self.fields))
def __repr__(self):
return '<Role %s>' % str(self)
# edit role fields
def __add__(self, other):
fields = self.fields.union(other)
return self._from_iterable(fields)
def __sub__(self, other):
fields = self.fields.difference(other)
return self._from_iterable(fields)
# apply role to field
def __call__(self, name, value):
return self.function(name, value, self.fields)
# static filter functions
@staticmethod
def wholelist(name, value, seq):
"""
Accepts a field name, value, and a field list. This function
implements acceptance of all fields by never requesting a field be
skipped, thus returns False for all input.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
return False
@staticmethod
def whitelist(name, value, seq):
"""
Implements the behavior of a whitelist by requesting a field be skipped
whenever its name is not in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name not in seq
return True
@staticmethod
def blacklist(name, value, seq):
"""
Implements the behavior of a blacklist by requesting a field be skipped
whenever its name is found in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name in seq
return False
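# ---------------------------------------------------------------------------
# Usage sketch (not part of the original module): building whitelist and
# blacklist roles and combining their field sets. The field names are
# illustrative; a True result means "skip this field" during export.
# ---------------------------------------------------------------------------
if __name__ == '__main__':  # pragma: no cover - illustrative only
    public = Role(Role.whitelist, ['name', 'email'])
    internal = Role(Role.blacklist, ['password'])

    print(public('name', 'Joe'))        # False - whitelisted, so kept
    print(public('password', 'x'))      # True  - not whitelisted, so skipped
    print(internal('password', 'x'))    # True  - blacklisted, so skipped

    # Set-style operators edit the field set but keep the filter function.
    wider = public + ['nickname']
    print('nickname' in wider)          # True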
| schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/role.py | role.py |
| 0.857872 | 0.576482 |
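A quick, hedged usage sketch for the ``Role`` filters above (the ``schematics.role`` import path and the field names are assumptions for illustration). A ``Role`` pairs one of the static filter functions with a set of field names, and calling it answers whether a given field should be skipped.
from schematics.role import Role
# Whitelist: keep only the listed fields, skip everything else.
public = Role(Role.whitelist, ['name', 'email'])
print(public('name', 'Mr. Pink'))     # False -> 'name' is kept
print(public('password', 'hunter2'))  # True  -> 'password' is skipped
# A Role behaves like a set of field names and can be extended with +.
wider = public + ['phone']
print(sorted(wider.fields))           # ['email', 'name', 'phone']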
import warnings
import functools
from collections import OrderedDict
from .compat import iteritems
from .types.serializable import Serializable
from . import transforms
class SchematicsDeprecationWarning(DeprecationWarning):
pass
def deprecated(func):
@functools.wraps(func)
def new_func(*args, **kwargs):
warnings.warn(
"Call to deprecated function {0}.".format(func.__name__),
category=SchematicsDeprecationWarning,
stacklevel=2
)
return func(*args, **kwargs)
return new_func
class SchemaCompatibilityMixin(object):
"""Compatibility layer for previous deprecated Schematics Model API."""
@property
@deprecated
def __name__(self):
return self.name
@property
@deprecated
def _options(self):
return self.options
@property
@deprecated
def _validator_functions(self):
return self.validators
@property
@deprecated
def _fields(self):
return self.fields
@property
@deprecated
def _valid_input_keys(self):
return self.valid_input_keys
@property
@deprecated
def _serializables(self):
return OrderedDict((k, t) for k, t in iteritems(self.fields) if isinstance(t, Serializable))
class class_property(property):
def __get__(self, instance, type=None):
if instance is None:
return super(class_property, self).__get__(type, type)
return super(class_property, self).__get__(instance, type)
class ModelCompatibilityMixin(object):
"""Compatibility layer for previous deprecated Schematics Model API."""
@class_property
@deprecated
def _valid_input_keys(cls):
return cls._schema.valid_input_keys
@class_property
@deprecated
def _options(cls):
return cls._schema.options
@class_property
@deprecated
def fields(cls):
return cls._schema.fields
@class_property
@deprecated
def _fields(cls):
return cls._schema.fields
@class_property
@deprecated
def _field_list(cls):
return list(iteritems(cls._schema.fields))
@class_property
@deprecated
def _serializables(cls):
return cls._schema._serializables
@class_property
@deprecated
def _validator_functions(cls):
return cls._schema.validators
@classmethod
@deprecated
def convert(cls, raw_data, context=None, **kw):
return transforms.convert(cls._schema, raw_data, oo=True,
context=context, **kw)
class BaseErrorV1Mixin(object):
@property
@deprecated
def messages(self):
""" an alias for errors, provided for compatibility with V1. """
return self.errors
def patch_models():
global models_Model
from . import schema
from . import models
models_Model = models.Model
class Model(ModelCompatibilityMixin, models.Model):
__doc__ = models.Model.__doc__
models.Model = Model
models.ModelOptions = schema.SchemaOptions # deprecated alias
def patch_schema():
global schema_Schema
from . import schema
schema_Schema = schema.Schema
class Schema(SchemaCompatibilityMixin, schema.Schema):
__doc__ = schema.Schema.__doc__
schema.Schema = Schema
def patch_exceptions():
from . import exceptions
exceptions.BaseError.messages = BaseErrorV1Mixin.messages
exceptions.ModelConversionError = exceptions.DataError # v1
exceptions.ModelValidationError = exceptions.DataError # v1
exceptions.StopValidation = exceptions.StopValidationError # v1
def patch_all():
patch_schema()
patch_models()
patch_exceptions()
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/deprecated.py
|
deprecated.py
|
| 0.834238 | 0.111507 |
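A minimal sketch of how the compatibility layer above is meant to be applied (assuming the fork is importable as ``schematics``). Calling ``patch_all()`` monkey-patches the v1-era names back onto the v2 modules, so the old exception aliases and the underscore-prefixed model attributes keep working.
from schematics import deprecated, exceptions
deprecated.patch_all()
# v1 exception names now resolve to their v2 counterparts.
assert exceptions.ModelConversionError is exceptions.DataError
assert exceptions.ModelValidationError is exceptions.DataError
assert exceptions.StopValidation is exceptions.StopValidationError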
from __future__ import unicode_literals, absolute_import
from .compat import *
try:
from collections.abc import Mapping, Sequence # PY3
except ImportError:
from collections import Mapping, Sequence # PY2
__all__ = []
class DataObject(object):
"""
An object for holding data as attributes.
``DataObject`` can be instantiated like ``dict``::
>>> d = DataObject({'one': 1, 'two': 2}, three=3)
>>> d.__dict__
{'one': 1, 'two': 2, 'three': 3}
Attributes are accessible via the regular dot notation (``d.x``) as well as
the subscription syntax (``d['x']``)::
>>> d.one == d['one'] == 1
True
To convert a ``DataObject`` into a dictionary, use ``d._to_dict()``.
``DataObject`` implements the following collection-like operations:
* iteration through attributes as name-value pairs
* ``'x' in d`` for membership tests
* ``len(d)`` to get the number of attributes
    Additionally, the following methods are equivalent to their ``dict`` counterparts:
``_clear``, ``_get``, ``_keys``, ``_items``, ``_pop``, ``_setdefault``, ``_update``.
    An advantage of ``DataObject`` over ``dict`` subclasses is that every method name
in ``DataObject`` begins with an underscore, so attributes like ``"update"`` or
``"values"`` are valid.
"""
def __init__(self, *args, **kwargs):
source = args[0] if args else {}
self._update(source, **kwargs)
def __repr__(self):
return self.__class__.__name__ + '(%s)' % repr(self.__dict__)
def _copy(self):
return self.__class__(self)
__copy__ = _copy
def __eq__(self, other):
return isinstance(other, DataObject) and self.__dict__ == other.__dict__
def __iter__(self):
return iter(self.__dict__.items())
def _update(self, source=None, **kwargs):
if isinstance(source, DataObject):
source = source.__dict__
self.__dict__.update(source, **kwargs)
def _setdefaults(self, source):
if isinstance(source, dict):
source = source.items()
for name, value in source:
self._setdefault(name, value)
return self
def _to_dict(self):
d = dict(self.__dict__)
for k, v in d.items():
if isinstance(v, DataObject):
d[k] = v._to_dict()
return d
def __setitem__(self, key, value): self.__dict__[key] = value
def __getitem__(self, key): return self.__dict__[key]
def __delitem__(self, key): del self.__dict__[key]
def __len__(self): return len(self.__dict__)
def __contains__(self, key): return key in self.__dict__
def _clear(self): return self.__dict__.clear()
def _get(self, *args): return self.__dict__.get(*args)
def _items(self): return self.__dict__.items()
def _keys(self): return self.__dict__.keys()
def _pop(self, *args): return self.__dict__.pop(*args)
def _setdefault(self, *args): return self.__dict__.setdefault(*args)
class Context(DataObject):
_fields = ()
def __init__(self, *args, **kwargs):
super(Context, self).__init__(*args, **kwargs)
if self._fields:
unknowns = [name for name in self._keys() if name not in self._fields]
if unknowns:
raise ValueError('Unexpected field names: %r' % unknowns)
@classmethod
def _new(cls, *args, **kwargs):
if len(args) > len(cls._fields):
raise TypeError('Too many positional arguments')
return cls(zip(cls._fields, args), **kwargs)
@classmethod
def _make(cls, obj):
if obj is None:
return cls()
elif isinstance(obj, cls):
return obj
else:
return cls(obj)
def __setattr__(self, name, value):
if name in self:
raise TypeError("Field '{0}' already set".format(name))
super(Context, self).__setattr__(name, value)
def _branch(self, **kwargs):
if not kwargs:
return self
items = dict(((k, v) for k, v in kwargs.items() if v is not None and v != self[k]))
if items:
return self.__class__(self, **items)
else:
return self
def _setdefaults(self, source):
if not isinstance(source, dict):
source = source.__dict__
new_values = source.copy()
new_values.update(self.__dict__)
self.__dict__.update(new_values)
return self
def __bool__(self):
return True
__nonzero__ = __bool__
try:
from collections import ChainMap
except ImportError:
""" Code extracted from CPython 3 stdlib:
https://github.com/python/cpython/blob/85f2c89ee8223590ba08e3aea97476f76c7e3734/Lib/collections/__init__.py#L852
"""
from collections import MutableMapping
class ChainMap(MutableMapping):
''' A ChainMap groups multiple dicts (or other mappings) together
to create a single, updateable view.
The underlying mappings are stored in a list. That list is public and can
be accessed or updated using the *maps* attribute. There is no other
state.
Lookups search the underlying mappings successively until a key is found.
In contrast, writes, updates, and deletions only operate on the first
mapping.
'''
def __init__(self, *maps):
'''Initialize a ChainMap by setting *maps* to the given mappings.
If no mappings are provided, a single empty dictionary is used.
'''
self.maps = list(maps) or [{}] # always at least one map
def __missing__(self, key):
raise KeyError(key)
def __getitem__(self, key):
for mapping in self.maps:
try:
return mapping[key] # can't use 'key in mapping' with defaultdict
except KeyError:
pass
return self.__missing__(key) # support subclasses that define __missing__
def get(self, key, default=None):
return self[key] if key in self else default
def __len__(self):
return len(set().union(*self.maps)) # reuses stored hash values if possible
def __iter__(self):
return iter(set().union(*self.maps))
def __contains__(self, key):
return any(key in m for m in self.maps)
def __bool__(self):
return any(self.maps)
# @_recursive_repr()
def __repr__(self):
return '{0.__class__.__name__}({1})'.format(
self, ', '.join(map(repr, self.maps)))
@classmethod
def fromkeys(cls, iterable, *args):
'Create a ChainMap with a single dict created from the iterable.'
return cls(dict.fromkeys(iterable, *args))
def copy(self):
'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]'
return self.__class__(self.maps[0].copy(), *self.maps[1:])
__copy__ = copy
def new_child(self, m=None): # like Django's Context.push()
'''New ChainMap with a new map followed by all previous maps.
If no map is provided, an empty dict is used.
'''
if m is None:
m = {}
return self.__class__(m, *self.maps)
@property
def parents(self): # like Django's Context.pop()
'New ChainMap from maps[1:].'
return self.__class__(*self.maps[1:])
def __setitem__(self, key, value):
self.maps[0][key] = value
def __delitem__(self, key):
try:
del self.maps[0][key]
except KeyError:
raise KeyError('Key not found in the first mapping: {!r}'.format(key))
def popitem(self):
            'Remove and return an item pair from maps[0]. Raise KeyError if maps[0] is empty.'
try:
return self.maps[0].popitem()
except KeyError:
raise KeyError('No keys found in the first mapping.')
def pop(self, key, *args):
'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].'
try:
return self.maps[0].pop(key, *args)
except KeyError:
raise KeyError('Key not found in the first mapping: {!r}'.format(key))
def clear(self):
'Clear maps[0], leaving maps[1:] intact.'
self.maps[0].clear()
try:
from types import MappingProxyType
except ImportError:
from collections import Mapping
class MappingProxyType(Mapping):
def __init__(self, map):
self._map = map
def __len__(self):
return len(self._map)
def __iter__(self):
return iter(self._map)
def __getitem__(self, key):
return self._map[key]
def __repr__(self):
return '{0.__class__.__name__}({1})'.format(self, self._map)
class FrozenDict(Mapping):
def __init__(self, value):
self._value = dict(value)
def __getitem__(self, key):
return self._value[key]
def __iter__(self):
return iter(self._value)
def __len__(self):
return len(self._value)
def __hash__(self):
if not hasattr(self, "_hash"):
_hash = 0
for k, v in self._value.items():
_hash ^= hash(k)
_hash ^= hash(v)
self._hash = _hash
return self._hash
def __repr__(self):
return repr(self._value)
def __str__(self):
return str(self._value)
class FrozenList(Sequence):
def __init__(self, value):
self._list = list(value)
def __getitem__(self, index):
return self._list[index]
def __len__(self):
return len(self._list)
def __hash__(self):
if not hasattr(self, "_hash"):
_hash = 0
for e in self._list:
_hash ^= hash(e)
self._hash = _hash
return self._hash
def __repr__(self):
return repr(self._list)
def __str__(self):
return str(self._list)
def __eq__(self, other):
if len(self) != len(other):
return False
for i in range(len(self)):
if self[i] != other[i]:
return False
return True
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/datastructures.py
|
datastructures.py
|
| 0.85166 | 0.312632 |
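A small sketch of the two most reusable pieces above, assuming the standard ``schematics.datastructures`` import path: ``DataObject`` holds arbitrary data as attributes, while ``FrozenDict`` and ``FrozenList`` provide hashable, read-only wrappers, which is what the exception classes elsewhere in the package rely on to stay immutable.
from schematics.datastructures import DataObject, FrozenDict
d = DataObject({'one': 1, 'two': 2}, three=3)
print(d.one, d['two'], d.three)   # 1 2 3
print(d._to_dict())               # {'one': 1, 'two': 2, 'three': 3}
errors = FrozenDict({'name': 'This field is required.'})
cache = {errors: 'reported'}      # hashable, so usable as a dict key
print(cache[errors])              # reported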
from __future__ import unicode_literals, absolute_import
from copy import deepcopy
import inspect
from collections import OrderedDict
from types import FunctionType
from .common import *
from .compat import str_compat, repr_compat, _dict
from .datastructures import Context, ChainMap, MappingProxyType
from .exceptions import *
from .iteration import atoms
from .transforms import (
export_loop, convert,
to_native, to_primitive,
)
from .validate import validate, prepare_validator
from .types import BaseType
from .types.serializable import Serializable
from .undefined import Undefined
from .util import get_ident
from . import schema
__all__ = []
class FieldDescriptor(object):
"""
``FieldDescriptor`` instances serve as field accessors on models.
"""
def __init__(self, name):
"""
:param name:
The field's name
"""
self.name = name
def __get__(self, instance, cls):
"""
For a model instance, returns the field's current value.
For a model class, returns the field's type object.
"""
if instance is None:
return cls._fields[self.name]
else:
value = instance._data.get(self.name, Undefined)
if value is Undefined:
raise UndefinedValueError(instance, self.name)
else:
return value
def __set__(self, instance, value):
"""
Sets the field's value.
"""
field = instance._fields[self.name]
value = field.pre_setattr(value)
instance._data.converted[self.name] = value
def __delete__(self, instance):
"""
Deletes the field's value.
"""
del instance._data[self.name]
class ModelMeta(type):
"""
Metaclass for Models.
"""
def __new__(mcs, name, bases, attrs):
"""
        This metaclass parses the declarative Model into a corresponding Schema,
        then adds it to the host class as the `_schema` attribute.
"""
# Structures used to accumulate meta info
fields = OrderedDict()
validator_functions = {} # Model level
options_members = {}
        # Accumulate meta info from parent classes
for base in reversed(bases):
if hasattr(base, '_schema'):
fields.update(deepcopy(base._schema.fields))
options_members.update(dict(base._schema.options))
validator_functions.update(base._schema.validators)
# Parse this class's attributes into schema structures
for key, value in iteritems(attrs):
if key.startswith('validate_') and isinstance(value, (FunctionType, classmethod)):
validator_functions[key[9:]] = prepare_validator(value, 4)
if isinstance(value, BaseType):
fields[key] = value
elif isinstance(value, Serializable):
fields[key] = value
# Convert declared fields into descriptors for new class
fields = OrderedDict(sorted(
(kv for kv in fields.items()),
key=lambda i: i[1]._position_hint,
))
for key, field in iteritems(fields):
if isinstance(field, BaseType):
attrs[key] = FieldDescriptor(key)
elif isinstance(field, Serializable):
attrs[key] = field
klass = type.__new__(mcs, name, bases, attrs)
klass = repr_compat(str_compat(klass))
# Parse schema options
options = mcs._read_options(name, bases, attrs, options_members)
        # Parse metadata into the new schema
klass._schema = schema.Schema(name, model=klass, options=options,
validators=validator_functions, *(schema.Field(k, t) for k, t in iteritems(fields)))
return klass
@classmethod
def _read_options(mcs, name, bases, attrs, options_members):
"""
Parses model `Options` class into a `SchemaOptions` instance.
"""
options_class = attrs.get('__optionsclass__', schema.SchemaOptions)
if 'Options' in attrs:
for key, value in inspect.getmembers(attrs['Options']):
if key.startswith("__"):
continue
elif key.startswith("_"):
extras = options_members.get("extras", {}).copy()
extras.update({key: value})
options_members["extras"] = extras
elif key == "roles":
roles = options_members.get("roles", {}).copy()
roles.update(value)
options_members[key] = roles
else:
options_members[key] = value
return options_class(**options_members)
class ModelDict(ChainMap):
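    # Layered view of model data as a three-map ChainMap: 'unsafe' holds raw,
    # not-yet-converted input (used by lazy imports), 'converted' holds imported
    # values, and 'valid' is a read-only proxy over data that passed validation;
    # lookups fall through the maps in that order.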
__slots__ = ['_unsafe', '_converted', '__valid', '_valid']
def __init__(self, unsafe=None, converted=None, valid=None):
self._unsafe = unsafe if unsafe is not None else {}
self._converted = converted if converted is not None else {}
self.__valid = valid if valid is not None else {}
self._valid = MappingProxyType(self.__valid)
super(ModelDict, self).__init__(self._unsafe, self._converted, self._valid)
@property
def unsafe(self):
return self._unsafe
@unsafe.setter
def unsafe(self, value):
self._unsafe = value
self.maps[0] = self._unsafe
@property
def converted(self):
return self._converted
@converted.setter
def converted(self, value):
self._converted = value
self.maps[1] = self._converted
@property
def valid(self):
return self._valid
@valid.setter
def valid(self, value):
self._valid = MappingProxyType(value)
self.maps[2] = self._valid
def __delitem__(self, key):
did_delete = False
for data in [self.__valid, self._converted, self._unsafe]:
try:
del data[key]
did_delete = True
except KeyError:
pass
if not did_delete:
raise KeyError(key)
def __repr__(self):
return repr(dict(self))
@metaclass(ModelMeta)
class Model(object):
"""
Enclosure for fields and validation. Same pattern deployed by Django
models, SQLAlchemy declarative extension and other developer friendly
libraries.
:param Mapping raw_data:
The data to be imported into the model instance.
:param Mapping deserialize_mapping:
Can be used to provide alternative input names for fields. Values may be
strings or lists of strings, keyed by the actual field name.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: True
:param bool strict:
Complain about unrecognized keys. Default: True
"""
def __init__(self, raw_data=None, trusted_data=None, deserialize_mapping=None,
init=True, partial=True, strict=True, validate=False, app_data=None,
lazy=False, **kwargs):
kwargs.setdefault('init_values', init)
kwargs.setdefault('apply_defaults', init)
if lazy:
self._data = ModelDict(unsafe=raw_data, valid=trusted_data)
return
self._data = ModelDict(valid=trusted_data)
data = self._convert(raw_data,
trusted_data=trusted_data, mapping=deserialize_mapping,
partial=partial, strict=strict, validate=validate, new=True,
app_data=app_data, **kwargs)
self._data.converted = data
if validate:
self.validate(partial=partial, app_data=app_data, **kwargs)
def validate(self, partial=False, convert=True, app_data=None, **kwargs):
"""
Validates the state of the model. If the data is invalid, raises a ``DataError``
with error messages.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: False
:param convert:
Controls whether to perform import conversion before validating.
Can be turned off to skip an unnecessary conversion step if all values
are known to have the right datatypes (e.g., when validating immediately
after the initial import). Default: True
"""
if not self._data.converted and partial:
return # no new input data to validate
try:
data = self._convert(validate=True,
partial=partial, convert=convert, app_data=app_data, **kwargs)
self._data.valid = data
except DataError as e:
valid = dict(self._data.valid)
valid.update(e.partial_data)
self._data.valid = valid
raise
finally:
self._data.converted = {}
def import_data(self, raw_data, recursive=False, **kwargs):
"""
Converts and imports the raw data into an existing model instance.
:param raw_data:
The data to be imported.
"""
data = self._convert(raw_data, trusted_data=_dict(self), recursive=recursive, **kwargs)
self._data.converted.update(data)
if kwargs.get('validate'):
self.validate(convert=False)
return self
def _convert(self, raw_data=None, context=None, **kwargs):
"""
Converts the instance raw data into richer Python constructs according
to the fields on the model, validating data if requested.
:param raw_data:
New data to be imported and converted
"""
raw_data = _dict(raw_data) if raw_data else self._data.converted
kwargs['trusted_data'] = kwargs.get('trusted_data') or {}
kwargs['convert'] = getattr(context, 'convert', kwargs.get('convert', True))
if self._data.unsafe:
self._data.unsafe.update(raw_data)
raw_data = self._data.unsafe
self._data.unsafe = {}
kwargs['convert'] = True
should_validate = getattr(context, 'validate', kwargs.get('validate', False))
func = validate if should_validate else convert
return func(self._schema, self, raw_data=raw_data, oo=True, context=context, **kwargs)
def export(self, field_converter=None, role=None, app_data=None, **kwargs):
return export_loop(self._schema, self, field_converter=field_converter,
role=role, app_data=app_data, **kwargs)
def to_native(self, role=None, app_data=None, **kwargs):
return to_native(self._schema, self, role=role, app_data=app_data, **kwargs)
def to_primitive(self, role=None, app_data=None, **kwargs):
return to_primitive(self._schema, self, role=role, app_data=app_data, **kwargs)
def serialize(self, *args, **kwargs):
raw_data = self._data.converted
try:
self.validate(apply_defaults=True)
except DataError:
pass
data = self.to_primitive(*args, **kwargs)
self._data.converted = raw_data
return data
def atoms(self):
"""
Iterator for the atomic components of a model definition and relevant
data that creates a 3-tuple of the field's name, its type instance and
its value.
"""
return atoms(self._schema, self)
def __iter__(self):
return (k for k in self._schema.fields if k in self._data
and getattr(self._schema.fields[k], 'fset', None) is None)
def keys(self):
return list(iter(self))
def items(self):
return [(k, self._data[k]) for k in self]
def values(self):
return [self._data[k] for k in self]
def get(self, key, default=None):
return getattr(self, key, default)
@classmethod
def _append_field(cls, field_name, field_type):
"""
Add a new field to this class.
:type field_name: str
:param field_name:
The name of the field to add.
:type field_type: BaseType
:param field_type:
The type to use for the field.
"""
cls._schema.append_field(schema.Field(field_name, field_type))
setattr(cls, field_name, FieldDescriptor(field_name))
@classmethod
def get_mock_object(cls, context=None, overrides={}):
"""Get a mock object.
:param dict context:
:param dict overrides: overrides for the model
"""
context = Context._make(context)
context._setdefault('memo', set())
context.memo.add(cls)
values = {}
for name, field in cls.fields.items():
if name in overrides:
continue
if getattr(field, 'model_class', None) in context.memo:
continue
try:
values[name] = field.mock(context)
except MockCreationError as exc:
raise MockCreationError('%s: %s' % (name, exc.message))
values.update(overrides)
return cls(values)
def __getitem__(self, name):
if name in self._schema.fields:
return getattr(self, name)
else:
raise UnknownFieldError(self, name)
def __setitem__(self, name, value):
if name in self._schema.fields:
return setattr(self, name, value)
else:
raise UnknownFieldError(self, name)
def __delitem__(self, name):
if name in self._schema.fields:
return delattr(self, name)
else:
raise UnknownFieldError(self, name)
def __contains__(self, name):
return (name in self._data and getattr(self, name, Undefined) is not Undefined) \
or name in self._serializables
def __len__(self):
return len(self._data)
def __eq__(self, other, memo=set()):
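        # The shared default 'memo' set deliberately persists across calls: it
        # records (id(self), id(other), thread-ident) triples so that models
        # which reference each other recursively do not compare forever.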
if self is other:
return True
if type(self) is not type(other):
return NotImplemented
key = (id(self), id(other), get_ident())
if key in memo:
return True
else:
memo.add(key)
try:
for k in self:
if self.get(k) != other.get(k):
return False
return True
finally:
memo.remove(key)
def __ne__(self, other):
return not self == other
def __repr__(self):
model = self.__class__.__name__
info = self._repr_info()
if info:
return '<%s: %s>' % (model, info)
else:
return '<%s instance>' % model
def _repr_info(self):
"""
Subclasses may implement this method to augment the ``__repr__()`` output for the instance::
class Person(Model):
...
def _repr_info(self):
return self.name
>>> Person({'name': 'Mr. Pink'})
<Person: Mr. Pink>
"""
return None
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/models.py
|
models.py
|
| 0.849238 | 0.143818 |
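A hedged end-to-end sketch of the ``Model`` API defined above. It assumes the fork installs as ``schematics`` and that the usual field types are available from ``schematics.types``; the ``Person`` model and its fields are illustrative only.
from schematics.models import Model
from schematics.types import StringType, IntType
from schematics.exceptions import DataError
class Person(Model):
    name = StringType(required=True)
    age = IntType()
person = Person({'name': 'Mr. Pink', 'age': 33})
person.validate()                 # raises DataError if the data is invalid
print(person.to_primitive())      # {'name': 'Mr. Pink', 'age': 33}
try:
    Person({'age': 'not-a-number'}, validate=True)
except DataError as exc:
    print(exc.to_primitive())     # per-field error messages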
from __future__ import unicode_literals, absolute_import
import json
from .common import *
from .compat import string_type, str_compat
from .datastructures import FrozenDict, FrozenList
from .translator import LazyText
try:
from collections.abc import Mapping, Sequence # PY3
except ImportError:
from collections import Mapping, Sequence # PY2
__all__ = [
'BaseError', 'ErrorMessage', 'FieldError', 'ConversionError',
'ValidationError', 'StopValidationError', 'CompoundError', 'DataError',
'MockCreationError', 'UndefinedValueError', 'UnknownFieldError']
@str_compat
class BaseError(Exception):
def __init__(self, errors):
"""
The base class for all Schematics errors.
        message should be a human-readable message, while errors is a
        machine-readable list or dictionary.
        If None is passed as the message and errors is populated, the
        primitive representation will be serialized.
        The Python logging module expects exceptions to be hashable and
        therefore immutable. As a result, it is not possible to mutate
        BaseError's error list or dict after initialization.
"""
errors = self._freeze(errors)
super(BaseError, self).__init__(errors)
@property
def errors(self):
return self.args[0]
def to_primitive(self):
"""
        Converts the errors dict to a primitive representation of dicts,
        lists and strings.
"""
if not hasattr(self, "_primitive"):
self._primitive = self._to_primitive(self.errors)
return self._primitive
@staticmethod
def _freeze(obj):
""" freeze common data structures to something immutable. """
if isinstance(obj, dict):
return FrozenDict(obj)
elif isinstance(obj, list):
return FrozenList(obj)
else:
return obj
@classmethod
def _to_primitive(cls, obj):
""" recursive to_primitive for basic data types. """
if isinstance(obj, string_type):
return obj
if isinstance(obj, Sequence):
return [cls._to_primitive(e) for e in obj]
elif isinstance(obj, Mapping):
return dict(
(k, cls._to_primitive(v)) for k, v in obj.items()
)
else:
return str(obj)
def __str__(self):
return json.dumps(self.to_primitive())
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, repr(self.errors))
def __hash__(self):
return hash(self.errors)
def __eq__(self, other):
if type(self) is type(other):
return self.errors == other.errors
else:
return self.errors == other
return False
def __ne__(self, other):
return not (self == other)
@str_compat
class ErrorMessage(object):
def __init__(self, summary, info=None):
self.type = None
self.summary = summary
self.info = info
def __repr__(self):
return "%s(%s, %s)" % (
self.__class__.__name__,
repr(self.summary),
repr(self.info)
)
def __str__(self):
if self.info:
return '%s: %s' % (self.summary, self._info_as_str())
else:
return '%s' % self.summary
def _info_as_str(self):
if isinstance(self.info, int):
return str(self.info)
elif isinstance(self.info, string_type):
return '"%s"' % self.info
else:
return str(self.info)
def __eq__(self, other):
if isinstance(other, ErrorMessage):
return (
self.summary == other.summary and
self.type == other.type and
self.info == other.info
)
elif isinstance(other, string_type):
return self.summary == other
else:
return False
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash((self.summary, self.type, self.info))
class FieldError(BaseError, Sequence):
type = None
def __init__(self, *args, **kwargs):
if type(self) is FieldError:
raise NotImplementedError("Please raise either ConversionError or ValidationError.")
if len(args) == 0:
raise TypeError("Please provide at least one error or error message.")
if kwargs:
items = [ErrorMessage(*args, **kwargs)]
elif len(args) == 1:
arg = args[0]
if isinstance(arg, list):
items = list(arg)
else:
items = [arg]
else:
items = args
errors = []
for item in items:
if isinstance(item, (string_type, LazyText)):
errors.append(ErrorMessage(str(item)))
elif isinstance(item, tuple):
errors.append(ErrorMessage(*item))
elif isinstance(item, ErrorMessage):
errors.append(item)
elif isinstance(item, self.__class__):
errors.extend(item.errors)
else:
raise TypeError("'{0}()' object is neither a {1} nor an error message."\
.format(type(item).__name__, type(self).__name__))
for error in errors:
error.type = self.type or type(self)
super(FieldError, self).__init__(errors)
def __contains__(self, value):
return value in self.errors
def __getitem__(self, index):
return self.errors[index]
def __iter__(self):
return iter(self.errors)
def __len__(self):
return len(self.errors)
class ConversionError(FieldError, TypeError):
""" Exception raised when data cannot be converted to the correct python type """
pass
class ValidationError(FieldError, ValueError):
"""Exception raised when invalid data is encountered."""
pass
class StopValidationError(ValidationError):
"""Exception raised when no more validation need occur."""
type = ValidationError
class CompoundError(BaseError):
def __init__(self, errors):
if not isinstance(errors, dict):
raise TypeError("Compound errors must be reported as a dictionary.")
for key, value in errors.items():
if isinstance(value, CompoundError):
errors[key] = value.errors
else:
errors[key] = value
super(CompoundError, self).__init__(errors)
class DataError(CompoundError):
def __init__(self, errors, partial_data=None):
super(DataError, self).__init__(errors)
self.partial_data = partial_data
class MockCreationError(ValueError):
"""Exception raised when a mock value cannot be generated."""
pass
class UndefinedValueError(AttributeError, KeyError):
"""Exception raised when accessing a field with an undefined value."""
def __init__(self, model, name):
msg = "'%s' instance has no value for field '%s'" % (model.__class__.__name__, name)
super(UndefinedValueError, self).__init__(msg)
class UnknownFieldError(KeyError):
"""Exception raised when attempting to access a nonexistent field using the subscription syntax."""
def __init__(self, model, name):
msg = "Model '%s' has no field named '%s'" % (model.__class__.__name__, name)
super(UnknownFieldError, self).__init__(msg)
if PY2:
# Python 2 names cannot be unicode
__all__ = [n.encode('ascii') for n in __all__]
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/exceptions.py
|
exceptions.py
|
| 0.834036 | 0.178633 |
import functools
from ..transforms import convert, to_primitive
from ..validate import validate
def _callback_wrap(data, schema, transform, *args, **kwargs):
return transform(schema, data, *args, **kwargs)
class Machine(object):
""" A poor man's state machine. """
states = ('raw', 'converted', 'validated', 'serialized')
transitions = (
{'trigger': 'init', 'to': 'raw'},
{'trigger': 'convert', 'from': 'raw', 'to': 'converted'},
{'trigger': 'validate', 'from': 'converted', 'to': 'validated'},
{'trigger': 'serialize', 'from': 'validated', 'to': 'serialized'}
)
callbacks = {
'convert': functools.partial(_callback_wrap, transform=convert, partial=True),
'validate': functools.partial(_callback_wrap, transform=validate, convert=False, partial=False),
'serialize': functools.partial(_callback_wrap, transform=to_primitive)
}
def __init__(self, data, *args):
self.state = self._transition(trigger='init')['to']
self.data = data
self.args = args
def __getattr__(self, name):
return functools.partial(self.trigger, name)
def _transition(self, trigger=None, src_state=None, dst_state=None):
try:
return next(self._transitions(trigger=trigger, src_state=src_state,
dst_state=dst_state))
except StopIteration:
return None
def _transitions(self, trigger=None, src_state=None, dst_state=None):
def pred(d, key, var):
return d.get(key) == var if var is not None else True
return (d for d in self.transitions if
pred(d, 'trigger', trigger) and
pred(d, 'from', src_state) and
pred(d, 'to', dst_state)
)
def trigger(self, trigger):
transition = self._transition(trigger=trigger, src_state=self.state)
if not transition:
raise AttributeError(trigger)
callback = self.callbacks.get(trigger)
self.data = callback(self.data, *self.args) if callback else self.data
self.state = transition['to']
def can(self, state):
return bool(self._transition(src_state=self.state, dst_state=state))
def cannot(self, state):
return not self.can(state)
schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/contrib/machine.py | machine.py
from __future__ import unicode_literals, absolute_import
try:
from enum import Enum
except ImportError:
pass
from ..exceptions import ConversionError
from ..translator import _
from ..types import BaseType
from ..compat import string_type
class EnumType(BaseType):
"""A field type allowing to use native enums as values.
Restricts values to enum members and (optionally) enum values.
    `use_values` - if set to True, allows assigning the enumerated values
    (right-hand side) to the field.
>>> import enum
>>> class E(enum.Enum):
... A = 1
... B = 2
>>> from schematics import Model
>>> class AModel(Model):
... foo = EnumType(E)
>>> a = AModel()
>>> a.foo = E.A
    >>> a.foo.value == 1
    True
    """
MESSAGES = {
'convert': _("Couldn't interpret '{0}' as member of {1}."),
}
def __init__(self, enum, use_values=False, **kwargs):
"""
:param enum: Enum class to which restrict values assigned to the field.
:param use_values: If true, also values of the enum (right-hand side) can be assigned here.
Other args are passed to superclass.
"""
self._enum_class = enum
self._use_values = use_values
super(EnumType, self).__init__(**kwargs)
def to_native(self, value, context=None):
if isinstance(value, self._enum_class):
return value
else:
by_name = self._find_by_name(value)
if by_name:
return by_name
by_value = self._find_by_value(value)
if by_value:
return by_value
raise ConversionError(self.messages['convert'].format(value, self._enum_class))
def _find_by_name(self, value):
if isinstance(value, string_type):
try:
return self._enum_class[value]
except KeyError:
pass
def _find_by_value(self, value):
if not self._use_values:
return
for member in self._enum_class:
if member.value == value:
return member
def to_primitive(self, value, context=None):
if isinstance(value, Enum):
if self._use_values:
return value.value
else:
return value.name
else:
return str(value)
schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/contrib/enum_type.py | enum_type.py
from __future__ import unicode_literals, absolute_import
import itertools
from ..common import *
from ..exceptions import *
from ..transforms import (
export_loop,
get_import_context, get_export_context,
to_native_converter, to_primitive_converter)
from ..translator import _
from ..util import get_all_subclasses, import_string
from .base import BaseType, get_value_in
try:
import typing
except ImportError:
pass
else:
T = typing.TypeVar("T")
try:
from collections.abc import Iterable, Sequence, Mapping # PY3
except ImportError:
from collections import Iterable, Sequence, Mapping # PY2
__all__ = ['CompoundType', 'MultiType', 'ModelType', 'ListType', 'DictType',
'PolyModelType']
class CompoundType(BaseType):
def __init__(self, **kwargs):
super(CompoundType, self).__init__(**kwargs)
self.is_compound = True
try:
self.field.parent_field = self
except AttributeError:
pass
def _setup(self, field_name, owner_model):
# Recursively set up inner fields.
if hasattr(self, 'field'):
self.field._setup(None, owner_model)
super(CompoundType, self)._setup(field_name, owner_model)
def convert(self, value, context=None):
context = context or get_import_context()
return self._convert(value, context)
def _convert(self, value, context):
raise NotImplementedError
def export(self, value, format, context=None):
context = context or get_export_context()
return self._export(value, format, context)
def _export(self, value, format, context):
raise NotImplementedError
def to_native(self, value, context=None):
context = context or get_export_context(to_native_converter)
return to_native_converter(self, value, context)
def to_primitive(self, value, context=None):
context = context or get_export_context(to_primitive_converter)
return to_primitive_converter(self, value, context)
def _init_field(self, field, options):
"""
Instantiate the inner field that represents each element within this compound type.
In case the inner field is itself a compound type, its inner field can be provided
as the ``nested_field`` keyword argument.
"""
if not isinstance(field, BaseType):
nested_field = options.pop('nested_field', None) or options.pop('compound_field', None)
if nested_field:
field = field(field=nested_field, **options)
else:
field = field(**options)
return field
MultiType = CompoundType
class ModelType(CompoundType):
"""A field that can hold an instance of the specified model."""
primitive_type = dict
@property
def native_type(self):
return self.model_class
@property
def fields(self):
return self.model_class.fields
@property
def model_class(self):
if self._model_class:
return self._model_class
model_class = import_string(self.model_name)
self._model_class = model_class
return model_class
def __init__(self,
model_spec, # type: typing.Type[T]
**kwargs):
# type: (...) -> T
if isinstance(model_spec, ModelMeta):
self._model_class = model_spec
self.model_name = self.model_class.__name__
elif isinstance(model_spec, string_type):
self._model_class = None
self.model_name = model_spec
else:
raise TypeError("ModelType: Expected a model, got an argument "
"of the type '{}'.".format(model_spec.__class__.__name__))
super(ModelType, self).__init__(**kwargs)
def _repr_info(self):
return self.model_class.__name__
def _mock(self, context=None):
return self.model_class.get_mock_object(context)
def _setup(self, field_name, owner_model):
# Resolve possible name-based model reference.
if not self._model_class:
if self.model_name == owner_model.__name__:
self._model_class = owner_model
else:
                pass  # Intentionally left blank; it will be set up later.
super(ModelType, self)._setup(field_name, owner_model)
def pre_setattr(self, value):
if value is not None \
and not isinstance(value, Model):
if not isinstance(value, dict):
raise ConversionError(_('Model conversion requires a model or dict'))
value = self.model_class(value)
return value
def _convert(self, value, context):
field_model_class = self.model_class
if isinstance(value, field_model_class):
model_class = type(value)
elif isinstance(value, dict):
model_class = field_model_class
else:
raise ConversionError(
_("Input must be a mapping or '%s' instance") % field_model_class.__name__)
if context.convert and context.oo:
return model_class(value, context=context)
else:
return model_class.convert(value, context=context)
def _export(self, value, format, context):
if isinstance(value, Model):
model_class = type(value)
else:
model_class = self.model_class
return export_loop(model_class, value, context=context)
class ListType(CompoundType):
"""A field for storing a list of items, all of which must conform to the type
specified by the ``field`` parameter.
Use it like this::
...
categories = ListType(StringType)
"""
primitive_type = list
native_type = list
def __init__(self,
field, # type: T
min_size=None, max_size=None, **kwargs):
# type: (...) -> typing.List[T]
self.field = self._init_field(field, kwargs)
self.min_size = min_size
self.max_size = max_size
validators = [self.check_length] + kwargs.pop("validators", [])
super(ListType, self).__init__(validators=validators, **kwargs)
@property
def model_class(self):
return self.field.model_class
def _repr_info(self):
return self.field.__class__.__name__
def _mock(self, context=None):
random_length = get_value_in(self.min_size, self.max_size)
return [self.field._mock(context) for dummy in range(random_length)]
def _coerce(self, value):
if isinstance(value, list):
return value
elif isinstance(value, (string_type, Mapping)): # unacceptable iterables
pass
elif isinstance(value, Sequence):
return value
elif isinstance(value, Iterable):
return value
raise ConversionError(_('Could not interpret the value as a list'))
def _convert(self, value, context):
value = self._coerce(value)
data = []
errors = {}
for index, item in enumerate(value):
try:
data.append(context.field_converter(self.field, item, context))
except BaseError as exc:
errors[index] = exc
if errors:
raise CompoundError(errors)
return data
def check_length(self, value, context):
list_length = len(value) if value else 0
if self.min_size is not None and list_length < self.min_size:
message = ({
True: _('Please provide at least %d item.'),
False: _('Please provide at least %d items.'),
}[self.min_size == 1]) % self.min_size
raise ValidationError(message)
if self.max_size is not None and list_length > self.max_size:
message = ({
True: _('Please provide no more than %d item.'),
False: _('Please provide no more than %d items.'),
}[self.max_size == 1]) % self.max_size
raise ValidationError(message)
def _export(self, list_instance, format, context):
"""Loops over each item in the model and applies either the field
transform or the multitype transform. Essentially functions the same
as `transforms.export_loop`.
"""
data = []
_export_level = self.field.get_export_level(context)
if _export_level == DROP:
return data
for value in list_instance:
shaped = self.field.export(value, format, context)
if shaped is None:
if _export_level <= NOT_NONE:
continue
elif self.field.is_compound and len(shaped) == 0:
if _export_level <= NONEMPTY:
continue
data.append(shaped)
return data
class DictType(CompoundType):
"""A field for storing a mapping of items, the values of which must conform to the type
specified by the ``field`` parameter.
Use it like this::
...
categories = DictType(StringType)
"""
primitive_type = dict
native_type = dict
def __init__(self, field, coerce_key=None, **kwargs):
# type: (...) -> typing.Dict[str, T]
self.field = self._init_field(field, kwargs)
self.coerce_key = coerce_key or str
super(DictType, self).__init__(**kwargs)
@property
def model_class(self):
return self.field.model_class
def _repr_info(self):
return self.field.__class__.__name__
def _convert(self, value, context, safe=False):
if not isinstance(value, Mapping):
raise ConversionError(_('Only mappings may be used in a DictType'))
data = {}
errors = {}
for k, v in iteritems(value):
try:
data[self.coerce_key(k)] = context.field_converter(self.field, v, context)
except BaseError as exc:
errors[k] = exc
if errors:
raise CompoundError(errors)
return data
def _export(self, dict_instance, format, context):
"""Loops over each item in the model and applies either the field
transform or the multitype transform. Essentially functions the same
as `transforms.export_loop`.
"""
data = {}
_export_level = self.field.get_export_level(context)
if _export_level == DROP:
return data
for key, value in iteritems(dict_instance):
shaped = self.field.export(value, format, context)
if shaped is None:
if _export_level <= NOT_NONE:
continue
elif self.field.is_compound and len(shaped) == 0:
if _export_level <= NONEMPTY:
continue
data[key] = shaped
return data
class PolyModelType(CompoundType):
"""A field that accepts an instance of any of the specified models."""
primitive_type = dict
native_type = None # cannot be determined from a PolyModelType instance
def __init__(self, model_spec, **kwargs):
if isinstance(model_spec, (ModelMeta, string_type)):
self.model_classes = (model_spec,)
allow_subclasses = True
elif isinstance(model_spec, Iterable):
self.model_classes = tuple(model_spec)
allow_subclasses = False
else:
raise Exception("The first argument to PolyModelType.__init__() "
"must be a model or an iterable.")
self.claim_function = kwargs.pop("claim_function", None)
self.allow_subclasses = kwargs.pop("allow_subclasses", allow_subclasses)
CompoundType.__init__(self, **kwargs)
def _setup(self, field_name, owner_model):
# Resolve possible name-based model references.
resolved_classes = []
for m in self.model_classes:
if isinstance(m, string_type):
if m == owner_model.__name__:
resolved_classes.append(owner_model)
else:
raise Exception("PolyModelType: Unable to resolve model '{}'.".format(m))
else:
resolved_classes.append(m)
self.model_classes = tuple(resolved_classes)
super(PolyModelType, self)._setup(field_name, owner_model)
def is_allowed_model(self, model_instance):
if self.allow_subclasses:
if isinstance(model_instance, self.model_classes):
return True
else:
if model_instance.__class__ in self.model_classes:
return True
return False
def _convert(self, value, context):
if value is None:
return None
if not context.validate:
if self.is_allowed_model(value):
return value
if not isinstance(value, dict):
if len(self.model_classes) > 1:
instanceof_msg = 'one of: {}'.format(', '.join(
cls.__name__ for cls in self.model_classes))
else:
instanceof_msg = self.model_classes[0].__name__
raise ConversionError(_('Please use a mapping for this field or '
'an instance of {}').format(instanceof_msg))
model_class = self.find_model(value)
return model_class(value, context=context)
def find_model(self, data):
"""Finds the intended type by consulting potential classes or `claim_function`."""
if self.claim_function:
kls = self.claim_function(self, data)
if not kls:
raise Exception("Input for polymorphic field did not match any model")
return kls
fallback = None
matching_classes = []
for kls in self._get_candidates():
try:
# If a model defines a _claim_polymorphic method, use
# it to see if the model matches the data.
kls_claim = kls._claim_polymorphic
except AttributeError:
# The first model that doesn't define the hook can be
# used as a default if there's no match.
if not fallback:
fallback = kls
else:
if kls_claim(data):
matching_classes.append(kls)
if not matching_classes and fallback:
return fallback
elif len(matching_classes) != 1:
raise Exception("Got ambiguous input for polymorphic field")
return matching_classes[0]
def _export(self, model_instance, format, context):
model_class = model_instance.__class__
if not self.is_allowed_model(model_instance):
raise Exception("Cannot export: {} is not an allowed type".format(model_class))
return model_instance.export(context=context)
def _get_candidates(self):
candidates = self.model_classes
if self.allow_subclasses:
candidates = itertools.chain.from_iterable(
([m] + get_all_subclasses(m) for m in candidates)
)
return candidates
if PY2:
# Python 2 names cannot be unicode
__all__ = [n.encode('ascii') for n in __all__]
schematics-fork | /schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/types/compound.py | compound.py
from __future__ import unicode_literals, absolute_import
import copy
import datetime
import decimal
import itertools
import numbers
import random
import re
import string
import uuid
from collections import OrderedDict
from ..common import *
from ..exceptions import *
from ..translator import _
from ..undefined import Undefined
from ..util import listify
from ..validate import prepare_validator, get_validation_context
try:
import typing
except ImportError:
pass
try:
from collections.abc import Iterable # PY3
except ImportError:
from collections import Iterable # PY2
__all__ = [
'BaseType', 'UUIDType', 'StringType', 'MultilingualStringType',
'NumberType', 'IntType', 'LongType', 'FloatType', 'DecimalType',
'HashType', 'MD5Type', 'SHA1Type', 'BooleanType', 'GeoPointType',
'DateType', 'DateTimeType', 'UTCDateTimeType', 'TimestampType',
'TimedeltaType']
def fill_template(template, min_length, max_length):
return template % random_string(
get_value_in(
min_length,
max_length,
padding=len(template) - 2,
required_length=1))
def get_range_endpoints(min_length, max_length, padding=0, required_length=0):
if min_length is None:
min_length = 0
if max_length is None:
max_length = max(min_length * 2, 16)
if padding:
max_length = max_length - padding
min_length = max(min_length - padding, 0)
if max_length < required_length:
raise MockCreationError(
'This field is too short to hold the mock data')
min_length = max(min_length, required_length)
if max_length < min_length:
raise MockCreationError('Minimum is greater than maximum')
return min_length, max_length
def get_value_in(min_length, max_length, padding=0, required_length=0):
return random.randint(
*get_range_endpoints(min_length, max_length, padding, required_length))
_alphanumeric = string.ascii_letters + string.digits
def random_string(length, chars=_alphanumeric):
return ''.join(random.choice(chars) for _ in range(length))
_last_position_hint = -1
_next_position_hint = itertools.count()
class TypeMeta(type):
"""
Meta class for BaseType. Merges `MESSAGES` dict and accumulates
validator methods.
"""
def __new__(mcs, name, bases, attrs):
messages = {}
validators = OrderedDict()
for base in reversed(bases):
if hasattr(base, 'MESSAGES'):
messages.update(base.MESSAGES)
if hasattr(base, "_validators"):
validators.update(base._validators)
if 'MESSAGES' in attrs:
messages.update(attrs['MESSAGES'])
attrs['MESSAGES'] = messages
for attr_name, attr in attrs.items():
if attr_name.startswith("validate_"):
validators[attr_name] = 1
attrs[attr_name] = prepare_validator(attr, 3)
attrs["_validators"] = validators
return type.__new__(mcs, name, bases, attrs)
@metaclass(TypeMeta)
class BaseType(object):
"""A base class for Types in a Schematics model. Instances of this
class may be added to subclasses of ``Model`` to define a model schema.
Validators that need to access variables on the instance
can be defined be implementing methods whose names start with ``validate_``
and accept one parameter (in addition to ``self``)
:param required:
Invalidate field when value is None or is not supplied. Default:
False.
:param default:
When no data is provided default to this value. May be a callable.
Default: None.
:param serialized_name:
        The name of this field defaults to the class attribute used in the
        model. However, if the field has another name in foreign data, set
        this argument. Serialized data will use this value for the key name too.
:param deserialize_from:
A name or list of named fields for which foreign data sets are
        searched to provide a value for the given field. This only affects
        inbound data.
:param choices:
A list of valid choices. This is the last step of the validator
chain.
:param validators:
A list of callables. Each callable receives the value after it has been
converted into a rich python type. Default: []
:param serialize_when_none:
Dictates if the field should appear in the serialized data even if the
value is None. Default: None.
:param messages:
Override the error messages with a dict. You can also do this by
subclassing the Type and defining a `MESSAGES` dict attribute on the
class. A metaclass will merge all the `MESSAGES` and override the
resulting dict with instance level `messages` and assign to
`self.messages`.
:param metadata:
Dictionary for storing custom metadata associated with the field.
To encourage compatibility with external tools, we suggest these keys
for common metadata:
- *label* : Brief human-readable label
- *description* : Explanation of the purpose of the field. Used for
help, tooltips, documentation, etc.
"""
primitive_type = None
native_type = None
MESSAGES = {
'required': _("This field is required."),
'choices': _("Value must be one of {0}."),
}
EXPORT_METHODS = {
NATIVE: 'to_native',
PRIMITIVE: 'to_primitive',
}
def __init__(self, required=False, default=Undefined, serialized_name=None,
choices=None, validators=None, deserialize_from=None,
export_level=None, serialize_when_none=None,
messages=None, metadata=None):
super(BaseType, self).__init__()
self.required = required
self._default = default
self.serialized_name = serialized_name
if choices and (isinstance(choices, string_type) or not isinstance(choices, Iterable)):
raise TypeError('"choices" must be a non-string Iterable')
self.choices = choices
self.deserialize_from = listify(deserialize_from)
self.validators = [getattr(self, validator_name) for validator_name in self._validators]
if validators:
self.validators += (prepare_validator(func, 2) for func in validators)
self._set_export_level(export_level, serialize_when_none)
self.messages = dict(self.MESSAGES, **(messages or {}))
self.metadata = metadata or {}
self._position_hint = next(_next_position_hint) # For ordering of fields
self.name = None
self.owner_model = None
self.parent_field = None
self.typeclass = self.__class__
self.is_compound = False
self.export_mapping = dict(
(format, getattr(self, fname)) for format, fname in self.EXPORT_METHODS.items())
def __repr__(self):
type_ = "%s(%s) instance" % (self.__class__.__name__, self._repr_info() or '')
model = " on %s" % self.owner_model.__name__ if self.owner_model else ''
field = " as '%s'" % self.name if self.name else ''
return "<%s>" % (type_ + model + field)
def _repr_info(self):
return None
def __call__(self, value, context=None):
return self.convert(value, context)
def __deepcopy__(self, memo):
return copy.copy(self)
def _mock(self, context=None):
return None
def _setup(self, field_name, owner_model):
"""Perform late-stage setup tasks that are run after the containing model
has been created.
"""
self.name = field_name
self.owner_model = owner_model
self._input_keys = self._get_input_keys()
def _set_export_level(self, export_level, serialize_when_none):
if export_level is not None:
self.export_level = export_level
elif serialize_when_none is True:
self.export_level = DEFAULT
elif serialize_when_none is False:
self.export_level = NONEMPTY
else:
self.export_level = None
def get_export_level(self, context):
if self.owner_model:
level = self.owner_model._options.export_level
else:
level = DEFAULT
if self.export_level is not None:
level = self.export_level
if context.export_level is not None:
level = context.export_level
return level
def get_input_keys(self, mapping=None):
if mapping:
return self._get_input_keys(mapping)
else:
return self._input_keys
def _get_input_keys(self, mapping=None):
input_keys = [self.name]
if self.serialized_name:
input_keys.append(self.serialized_name)
if mapping and self.name in mapping:
input_keys.extend(listify(mapping[self.name]))
if self.deserialize_from:
input_keys.extend(self.deserialize_from)
return input_keys
@property
def default(self):
default = self._default
if callable(default):
default = default()
return default
def pre_setattr(self, value):
return value
def convert(self, value, context=None):
return self.to_native(value, context)
def export(self, value, format, context=None):
return self.export_mapping[format](value, context)
def to_primitive(self, value, context=None):
"""Convert internal data to a value safe to serialize.
"""
return value
def to_native(self, value, context=None):
"""
Convert untrusted data to a richer Python construct.
"""
return value
def validate(self, value, context=None):
"""
Validate the field and return a converted value or raise a
``ValidationError`` with a list of errors raised by the validation
chain. Stop the validation process from continuing through the
validators by raising ``StopValidationError`` instead of ``ValidationError``.
"""
context = context or get_validation_context()
if context.convert:
value = self.convert(value, context)
elif self.is_compound:
self.convert(value, context)
errors = []
for validator in self.validators:
try:
validator(value, context)
except ValidationError as exc:
errors.append(exc)
if isinstance(exc, StopValidationError):
break
if errors:
raise ValidationError(errors)
return value
def check_required(self, value, context):
if self.required and (value is None or value is Undefined):
if self.name is None or context and not context.partial:
raise ConversionError(self.messages['required'])
def validate_choices(self, value, context):
if self.choices is not None:
if value not in self.choices:
raise ValidationError(self.messages['choices'].format(str(self.choices)))
def mock(self, context=None):
if not self.required and not random.choice([True, False]):
return self.default
if self.choices is not None:
return random.choice(self.choices)
return self._mock(context)
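# Illustrative aside (not part of the original module): a minimal custom field
# built on BaseType. TypeMeta collects any method whose name starts with
# "validate_", so validate_evenness below runs automatically after conversion.
# ConversionError and ValidationError are already in scope via the wildcard
# import of ..exceptions above; everything else here is made up for demonstration.
class EvenIntType(BaseType):
    primitive_type = int
    native_type = int
    MESSAGES = {'convert': 'Not an integer.', 'even': 'Value must be even.'}

    def to_native(self, value, context=None):
        try:
            return int(value)
        except (TypeError, ValueError):
            raise ConversionError(self.messages['convert'])

    def validate_evenness(self, value, context=None):
        # Runs after to_native(); raising ValidationError adds to the error list.
        if value % 2:
            raise ValidationError(self.messages['even'])

# EvenIntType().validate(4) -> 4
# EvenIntType().validate(3) -> raises ValidationError (odd value)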
class UUIDType(BaseType):
"""A field that stores a valid UUID value.
"""
primitive_type = str
native_type = uuid.UUID
MESSAGES = {
'convert': _("Couldn't interpret '{0}' value as UUID."),
}
def __init__(self, **kwargs):
# type: (...) -> uuid.UUID
super(UUIDType, self).__init__(**kwargs)
def _mock(self, context=None):
return uuid.uuid4()
def to_native(self, value, context=None):
if not isinstance(value, uuid.UUID):
try:
value = uuid.UUID(value)
except (TypeError, ValueError):
raise ConversionError(self.messages['convert'].format(value))
return value
def to_primitive(self, value, context=None):
return str(value)
class StringType(BaseType):
"""A Unicode string field."""
primitive_type = str
native_type = str
allow_casts = (int, bytes)
MESSAGES = {
'convert': _("Couldn't interpret '{0}' as string."),
'decode': _("Invalid UTF-8 data."),
'max_length': _("String value is too long."),
'min_length': _("String value is too short."),
'regex': _("String value did not match validation regex."),
}
def __init__(self, regex=None, max_length=None, min_length=None, **kwargs):
# type: (...) -> typing.Text
self.regex = re.compile(regex) if regex else None
self.max_length = max_length
self.min_length = min_length
super(StringType, self).__init__(**kwargs)
def _mock(self, context=None):
return random_string(get_value_in(self.min_length, self.max_length))
def to_native(self, value, context=None):
if isinstance(value, str):
return value
if isinstance(value, self.allow_casts):
if isinstance(value, bytes):
try:
return str(value, 'utf-8')
except UnicodeError:
raise ConversionError(self.messages['decode'].format(value))
elif isinstance(value, bool):
pass
else:
return str(value)
raise ConversionError(self.messages['convert'].format(value))
def validate_length(self, value, context=None):
length = len(value)
if self.max_length is not None and length > self.max_length:
raise ValidationError(self.messages['max_length'])
if self.min_length is not None and length < self.min_length:
raise ValidationError(self.messages['min_length'])
def validate_regex(self, value, context=None):
if self.regex is not None and self.regex.match(value) is None:
raise ValidationError(self.messages['regex'])
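# Illustrative aside: how StringType's constraints compose. The regex, length
# bounds, and sample values below are made up for demonstration.
_username = StringType(regex=r'^[a-z_]+$', min_length=3, max_length=12)
_username.validate('ada_lovelace')   # passes: matches regex, length within bounds
try:
    _username.validate('x')          # too short -> validate_length raises
except ValidationError:
    pass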
class NumberType(BaseType):
"""A generic number field.
Converts to and validates against `number_type` parameter.
"""
primitive_type = None
native_type = None
number_type = None
MESSAGES = {
'number_coerce': _("Value '{0}' is not {1}."),
'number_min': _("{0} value should be greater than or equal to {1}."),
'number_max': _("{0} value should be less than or equal to {1}."),
}
def __init__(self, min_value=None, max_value=None, strict=False, **kwargs):
# type: (...) -> typing.Union[int, float]
self.min_value = min_value
self.max_value = max_value
self.strict = strict
super(NumberType, self).__init__(**kwargs)
def _mock(self, context=None):
number = random.uniform(
*get_range_endpoints(self.min_value, self.max_value)
)
return self.native_type(number) if self.native_type else number
def to_native(self, value, context=None):
if isinstance(value, bool):
value = int(value)
if isinstance(value, self.native_type):
return value
try:
native_value = self.native_type(value)
except (TypeError, ValueError):
pass
else:
if self.native_type is float: # Float conversion is strict enough.
return native_value
if not self.strict and native_value == value: # Match numeric types.
return native_value
if isinstance(value, (string_type, numbers.Integral)):
return native_value
raise ConversionError(self.messages['number_coerce']
.format(value, self.number_type.lower()))
def validate_range(self, value, context=None):
if self.min_value is not None and value < self.min_value:
raise ValidationError(self.messages['number_min']
.format(self.number_type, self.min_value))
if self.max_value is not None and value > self.max_value:
raise ValidationError(self.messages['number_max']
.format(self.number_type, self.max_value))
return value
class IntType(NumberType):
"""A field that validates input as an Integer
"""
primitive_type = int
native_type = int
number_type = 'Int'
def __init__(self, **kwargs):
# type: (...) -> int
super(IntType, self).__init__(**kwargs)
LongType = IntType
class FloatType(NumberType):
"""A field that validates input as a Float
"""
primitive_type = float
native_type = float
number_type = 'Float'
def __init__(self, **kwargs):
# type: (...) -> float
super(FloatType, self).__init__(**kwargs)
class DecimalType(NumberType):
"""A fixed-point decimal number field.
"""
primitive_type = str
native_type = decimal.Decimal
number_type = 'Decimal'
def to_primitive(self, value, context=None):
return str(value)
def to_native(self, value, context=None):
if isinstance(value, decimal.Decimal):
return value
if not isinstance(value, (string_type, bool)):
value = str(value)
try:
value = decimal.Decimal(value)
except (TypeError, decimal.InvalidOperation):
raise ConversionError(self.messages['number_coerce'].format(
value, self.number_type.lower()))
return value
class HashType(StringType):
MESSAGES = {
'hash_length': _("Hash value is wrong length."),
'hash_hex': _("Hash value is not hexadecimal."),
}
def _mock(self, context=None):
return random_string(self.LENGTH, string.hexdigits)
def to_native(self, value, context=None):
value = super(HashType, self).to_native(value, context)
if len(value) != self.LENGTH:
raise ValidationError(self.messages['hash_length'])
try:
int(value, 16)
except ValueError:
raise ConversionError(self.messages['hash_hex'])
return value
class MD5Type(HashType):
"""A field that validates input as resembling an MD5 hash.
"""
LENGTH = 32
class SHA1Type(HashType):
"""A field that validates input as resembling an SHA1 hash.
"""
LENGTH = 40
class BooleanType(BaseType):
"""A boolean field type. In addition to ``True`` and ``False``, coerces these
values:
+ For ``True``: "True", "true", "1"
+ For ``False``: "False", "false", "0"
"""
primitive_type = bool
native_type = bool
TRUE_VALUES = ('True', 'true', '1')
FALSE_VALUES = ('False', 'false', '0')
def __init__(self, **kwargs):
# type: (...) -> bool
super(BooleanType, self).__init__(**kwargs)
def _mock(self, context=None):
return random.choice([True, False])
def to_native(self, value, context=None):
if isinstance(value, string_type):
if value in self.TRUE_VALUES:
value = True
elif value in self.FALSE_VALUES:
value = False
elif isinstance(value, int) and value in [0, 1]:
value = bool(value)
if not isinstance(value, bool):
raise ConversionError(_("Must be either true or false."))
return value
class DateType(BaseType):
"""Defaults to converting to and from ISO8601 date values.
"""
primitive_type = str
native_type = datetime.date
SERIALIZED_FORMAT = '%Y-%m-%d'
MESSAGES = {
'parse': _("Could not parse {0}. Should be ISO 8601 (YYYY-MM-DD)."),
'parse_formats': _('Could not parse {0}. Valid formats: {1}'),
}
def __init__(self, formats=None, **kwargs):
# type: (...) -> datetime.date
if formats:
self.formats = listify(formats)
self.conversion_errmsg = self.MESSAGES['parse_formats']
else:
self.formats = ['%Y-%m-%d']
self.conversion_errmsg = self.MESSAGES['parse']
self.serialized_format = self.SERIALIZED_FORMAT
super(DateType, self).__init__(**kwargs)
def _mock(self, context=None):
return datetime.date(
year=random.randrange(600) + 1900,
month=random.randrange(12) + 1,
day=random.randrange(28) + 1,
)
def to_native(self, value, context=None):
if isinstance(value, datetime.datetime):
return value.date()
if isinstance(value, datetime.date):
return value
for fmt in self.formats:
try:
return datetime.datetime.strptime(value, fmt).date()
except (ValueError, TypeError):
continue
else:
raise ConversionError(self.conversion_errmsg.format(value, ", ".join(self.formats)))
def to_primitive(self, value, context=None):
return value.strftime(self.serialized_format)
class DateTimeType(BaseType):
"""A field that holds a combined date and time value.
The built-in parser accepts input values conforming to the ISO 8601 format
``<YYYY>-<MM>-<DD>T<hh>:<mm>[:<ss.ssssss>][<z>]``. A space may be substituted
for the delimiter ``T``. The time zone designator ``<z>`` may be either ``Z``
or ``±<hh>[:][<mm>]``.
Values are stored as standard ``datetime.datetime`` instances with the time zone
offset in the ``tzinfo`` component if available. Raw values that do not specify a time
zone will be converted to naive ``datetime`` objects unless ``tzd='utc'`` is in effect.
Unix timestamps are also valid input values and will be converted to UTC datetimes.
:param formats:
(Optional) A value or iterable of values suitable as ``datetime.datetime.strptime`` format
strings, for example ``('%Y-%m-%dT%H:%M:%S', '%Y-%m-%dT%H:%M:%S.%f')``. If the parameter is
present, ``strptime()`` will be used for parsing instead of the built-in parser.
:param serialized_format:
The output format suitable for Python ``strftime``. Default: ``'%Y-%m-%dT%H:%M:%S.%f%z'``
:param parser:
(Optional) An external function to use for parsing instead of the built-in parser. It should
return a ``datetime.datetime`` instance.
:param tzd:
Sets the time zone policy.
Default: ``'allow'``
============== ======================================================================
``'require'`` Values must specify a time zone.
``'allow'`` Values both with and without a time zone designator are allowed.
``'utc'`` Like ``allow``, but values with no time zone information are assumed
to be in UTC.
``'reject'`` Values must not specify a time zone. This also prohibits timestamps.
============== ======================================================================
:param convert_tz:
Indicates whether values with a time zone designator should be automatically converted to UTC.
Default: ``False``
* ``True``: Convert the datetime to UTC based on its time zone offset.
* ``False``: Don't convert. Keep the original time and offset intact.
:param drop_tzinfo:
Can be set to automatically remove the ``tzinfo`` objects. This option should generally
be used in conjunction with the ``convert_tz`` option unless you only care about local
wall clock times. Default: ``False``
* ``True``: Discard the ``tzinfo`` components and make naive ``datetime`` objects instead.
* ``False``: Preserve the ``tzinfo`` components if present.
"""
primitive_type = str
native_type = datetime.datetime
SERIALIZED_FORMAT = '%Y-%m-%dT%H:%M:%S.%f%z'
MESSAGES = {
'parse': _('Could not parse {0}. Should be ISO 8601 or timestamp.'),
'parse_formats': _('Could not parse {0}. Valid formats: {1}'),
'parse_external': _('Could not parse {0}.'),
'parse_tzd_require': _('Could not parse {0}. Time zone offset required.'),
'parse_tzd_reject': _('Could not parse {0}. Time zone offset not allowed.'),
'tzd_require': _('Could not convert {0}. Time zone required but not found.'),
'tzd_reject': _('Could not convert {0}. Time zone offsets not allowed.'),
'validate_tzd_require': _('Time zone information required but not found.'),
'validate_tzd_reject': _('Time zone information not allowed.'),
'validate_utc_none': _('Time zone must be UTC but was None.'),
'validate_utc_wrong': _('Time zone must be UTC.'),
}
REGEX = re.compile(r"""
(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)(?:T|\ )
(?P<hour>\d\d):(?P<minute>\d\d)
(?::(?P<second>\d\d)(?:(?:\.|,)(?P<sec_frac>\d{1,6}))?)?
(?:(?P<tzd_offset>(?P<tzd_sign>[+−-])(?P<tzd_hour>\d\d):?(?P<tzd_minute>\d\d)?)
|(?P<tzd_utc>Z))?$""", re.X)
TIMEDELTA_ZERO = datetime.timedelta(0)
class fixed_timezone(datetime.tzinfo):
def utcoffset(self, dt): return self.offset
def fromutc(self, dt): return dt + self.offset
def dst(self, dt): return None
def tzname(self, dt): return self.str
def __str__(self): return self.str
def __repr__(self, info=''): return '{0}({1})'.format(type(self).__name__, info)
class utc_timezone(fixed_timezone):
offset = datetime.timedelta(0)
name = str = 'UTC'
class offset_timezone(fixed_timezone):
def __init__(self, hours=0, minutes=0):
self.offset = datetime.timedelta(hours=hours, minutes=minutes)
total_seconds = self.offset.days * 86400 + self.offset.seconds
self.str = '{0:s}{1:02d}:{2:02d}'.format(
'+' if total_seconds >= 0 else '-',
int(abs(total_seconds) / 3600),
int(abs(total_seconds) % 3600 / 60))
def __repr__(self):
return DateTimeType.fixed_timezone.__repr__(self, self.str)
UTC = utc_timezone()
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=UTC)
def __init__(self, formats=None, serialized_format=None, parser=None,
tzd='allow', convert_tz=False, drop_tzinfo=False, **kwargs):
# type: (...) -> datetime.datetime
if tzd not in ('require', 'allow', 'utc', 'reject'):
raise ValueError("DateTimeType.__init__() got an invalid value for parameter 'tzd'")
self.formats = listify(formats)
self.serialized_format = serialized_format or self.SERIALIZED_FORMAT
self.parser = parser
self.tzd = tzd
self.convert_tz = convert_tz
self.drop_tzinfo = drop_tzinfo
super(DateTimeType, self).__init__(**kwargs)
def _mock(self, context=None):
dt = datetime.datetime(
year=random.randrange(600) + 1900,
month=random.randrange(12) + 1,
day=random.randrange(28) + 1,
hour=random.randrange(24),
minute=random.randrange(60),
second=random.randrange(60),
microsecond=random.randrange(1000000))
if self.tzd == 'reject' or \
self.drop_tzinfo or \
self.tzd == 'allow' and random.randrange(2):
return dt
elif self.convert_tz:
return dt.replace(tzinfo=self.UTC)
else:
return dt.replace(tzinfo=self.offset_timezone(hours=random.randrange(-12, 15),
minutes=random.choice([0, 30, 45])))
def to_native(self, value, context=None):
if isinstance(value, datetime.datetime):
if value.tzinfo is None:
if not self.drop_tzinfo:
if self.tzd == 'require':
raise ConversionError(self.messages['tzd_require'].format(value))
if self.tzd == 'utc':
value = value.replace(tzinfo=self.UTC)
else:
if self.tzd == 'reject':
raise ConversionError(self.messages['tzd_reject'].format(value))
if self.convert_tz:
value = value.astimezone(self.UTC)
if self.drop_tzinfo:
value = value.replace(tzinfo=None)
return value
if self.formats:
# Delegate to datetime.datetime.strptime() using provided format strings.
for fmt in self.formats:
try:
dt = datetime.datetime.strptime(value, fmt)
break
except (ValueError, TypeError):
continue
else:
raise ConversionError(self.messages['parse_formats'].format(value, ", ".join(self.formats)))
elif self.parser:
# Delegate to external parser.
try:
dt = self.parser(value)
except Exception:  # avoid swallowing SystemExit/KeyboardInterrupt raised inside the external parser
raise ConversionError(self.messages['parse_external'].format(value))
else:
# Use built-in parser.
try:
value = float(value)
except ValueError:
dt = self.from_string(value)
except TypeError:
raise ConversionError(self.messages['parse'].format(value))
else:
dt = self.from_timestamp(value)
if not dt:
raise ConversionError(self.messages['parse'].format(value))
if dt.tzinfo is None:
if self.tzd == 'require':
raise ConversionError(self.messages['parse_tzd_require'].format(value))
if self.tzd == 'utc' and not self.drop_tzinfo:
dt = dt.replace(tzinfo=self.UTC)
else:
if self.tzd == 'reject':
raise ConversionError(self.messages['parse_tzd_reject'].format(value))
if self.convert_tz:
dt = dt.astimezone(self.UTC)
if self.drop_tzinfo:
dt = dt.replace(tzinfo=None)
return dt
def from_string(self, value):
match = self.REGEX.match(value)
if not match:
return None
parts = dict(((k, v) for k, v in match.groupdict().items() if v is not None))
p = lambda name: int(parts.get(name, 0))
microsecond = p('sec_frac') and p('sec_frac') * 10 ** (6 - len(parts['sec_frac']))
if 'tzd_utc' in parts:
tz = self.UTC
elif 'tzd_offset' in parts:
tz_sign = 1 if parts['tzd_sign'] == '+' else -1
tz_offset = (p('tzd_hour') * 60 + p('tzd_minute')) * tz_sign
if tz_offset == 0:
tz = self.UTC
else:
tz = self.offset_timezone(minutes=tz_offset)
else:
tz = None
try:
return datetime.datetime(p('year'), p('month'), p('day'),
p('hour'), p('minute'), p('second'),
microsecond, tz)
except (ValueError, TypeError):
return None
def from_timestamp(self, value):
try:
return datetime.datetime(1970, 1, 1, tzinfo=self.UTC) + datetime.timedelta(seconds=value)
except (ValueError, TypeError):
return None
def to_primitive(self, value, context=None):
if callable(self.serialized_format):
return self.serialized_format(value)
return value.strftime(self.serialized_format)
def validate_tz(self, value, context=None):
if value.tzinfo is None:
if not self.drop_tzinfo:
if self.tzd == 'require':
raise ValidationError(self.messages['validate_tzd_require'])
if self.tzd == 'utc':
raise ValidationError(self.messages['validate_utc_none'])
else:
if self.drop_tzinfo:
raise ValidationError(self.messages['validate_tzd_reject'])
if self.tzd == 'reject':
raise ValidationError(self.messages['validate_tzd_reject'])
if self.convert_tz \
and value.tzinfo.utcoffset(value) != self.TIMEDELTA_ZERO:
raise ValidationError(self.messages['validate_utc_wrong'])
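# Illustrative usage sketch (added comment, not part of the original module):
# how the ``tzd``, ``convert_tz`` and ``drop_tzinfo`` options interact when the
# built-in parser handles ISO 8601 input.
#
#   >>> field = DateTimeType(tzd='utc', convert_tz=True, drop_tzinfo=True)
#   >>> field.to_native('2020-01-01T12:00:00+02:00')  # offset folded into UTC, tzinfo dropped
#   datetime.datetime(2020, 1, 1, 10, 0)
#   >>> field.to_native('2020-01-01T12:00:00')         # naive input is kept as-is
#   datetime.datetime(2020, 1, 1, 12, 0)
#   >>> DateTimeType(tzd='require').to_native('2020-01-01T12:00:00')  # raises ConversionError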
class UTCDateTimeType(DateTimeType):
"""A variant of ``DateTimeType`` that normalizes everything to UTC and stores values
as naive ``datetime`` instances. By default sets ``tzd='utc'``, ``convert_tz=True``,
and ``drop_tzinfo=True``. The standard export format always includes the UTC time
zone designator ``"Z"``.
"""
SERIALIZED_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
def __init__(self, formats=None, parser=None, tzd='utc', convert_tz=True, drop_tzinfo=True, **kwargs):
# type: (...) -> datetime.datetime
super(UTCDateTimeType, self).__init__(formats=formats, parser=parser, tzd=tzd,
convert_tz=convert_tz, drop_tzinfo=drop_tzinfo, **kwargs)
class TimestampType(DateTimeType):
"""A variant of ``DateTimeType`` that exports itself as a Unix timestamp
instead of an ISO 8601 string. Always sets ``tzd='require'`` and
``convert_tz=True``.
"""
primitive_type = float
def __init__(self, formats=None, parser=None, drop_tzinfo=False, **kwargs):
# type: (...) -> datetime.datetime
super(TimestampType, self).__init__(formats=formats, parser=parser, tzd='require',
convert_tz=True, drop_tzinfo=drop_tzinfo, **kwargs)
def to_primitive(self, value, context=None):
if value.tzinfo is None:
value = value.replace(tzinfo=self.UTC)
else:
value = value.astimezone(self.UTC)
delta = value - self.EPOCH
return delta.total_seconds()
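# Illustrative round-trip sketch (added comment, not part of the original module):
# TimestampType requires a time zone on input and exports seconds since the Unix epoch.
#
#   >>> ts = TimestampType()
#   >>> dt = ts.to_native('1970-01-02T00:00:00Z')
#   >>> ts.to_primitive(dt)
#   86400.0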
class TimedeltaType(BaseType):
"""Converts Python Timedelta objects into the corresponding value in seconds.
"""
primitive_type = float
native_type = datetime.timedelta
MESSAGES = {
'convert': _("Couldn't interpret '{0}' value as Timedelta."),
}
DAYS = 'days'
SECONDS = 'seconds'
MICROSECONDS = 'microseconds'
MILLISECONDS = 'milliseconds'
MINUTES = 'minutes'
HOURS = 'hours'
WEEKS = 'weeks'
def __init__(self, precision='seconds', **kwargs):
# type: (...) -> datetime.timedelta
precision = precision.lower()
units = (self.DAYS, self.SECONDS, self.MICROSECONDS, self.MILLISECONDS,
self.MINUTES, self.HOURS, self.WEEKS)
if precision not in units:
raise ValueError("TimedeltaType.__init__() got an invalid value for parameter 'precision'")
self.precision = precision
super(TimedeltaType, self).__init__(**kwargs)
def _mock(self, context=None):
return datetime.timedelta(seconds=random.random() * 1000)
def to_native(self, value, context=None):
if isinstance(value, datetime.timedelta):
return value
try:
return datetime.timedelta(**{self.precision: float(value)})
except (ValueError, TypeError):
raise ConversionError(self.messages['convert'].format(value))
def to_primitive(self, value, context=None):
base_unit = datetime.timedelta(**{self.precision: 1})
return int(value.total_seconds() / base_unit.total_seconds())
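# Illustrative sketch (added comment, not part of the original module): ``precision``
# selects the unit used both when coercing plain numbers and when exporting.
#
#   >>> minutes = TimedeltaType(precision='minutes')
#   >>> minutes.to_native(90).total_seconds()
#   5400.0
#   >>> minutes.to_primitive(datetime.timedelta(hours=2))
#   120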
class GeoPointType(BaseType):
"""A list storing a latitude and longitude.
"""
primitive_type = list
native_type = list
MESSAGES = {
'point_min': _("{0} value {1} should be greater than or equal to {2}."),
'point_max': _("{0} value {1} should be less than or equal to {2}."),
}
def _mock(self, context=None):
return (random.randrange(-90, 90), random.randrange(-180, 180))
@classmethod
def _normalize(cls, value):
if isinstance(value, dict):
# py3: ensure list and not view
return list(value.values())
else:
return list(value)
def to_native(self, value, context=None):
"""Make sure that a geo-value is of type (x, y)
"""
if not isinstance(value, (tuple, list, dict)):
raise ConversionError(_('GeoPointType can only accept tuples, lists, or dicts'))
elements = self._normalize(value)
if not len(elements) == 2:
raise ConversionError(_('Value must be a two-dimensional point'))
if not all(isinstance(v, (float, int)) for v in elements):
raise ConversionError(_('Both values in point must be float or int'))
return value
def validate_range(self, value, context=None):
latitude, longitude = self._normalize(value)
if latitude < -90:
raise ValidationError(
self.messages['point_min'].format('Latitude', latitude, '-90')
)
if latitude > 90:
raise ValidationError(
self.messages['point_max'].format('Latitude', latitude, '90')
)
if longitude < -180:
raise ValidationError(
self.messages['point_min'].format('Longitude', longitude, -180)
)
if longitude > 180:
raise ValidationError(
self.messages['point_max'].format('Longitude', longitude, 180)
)
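# Illustrative sketch (added comment, not part of the original module): a point may be
# given as a tuple, list, or dict of two numbers; range checks run as validators.
#
#   >>> point = GeoPointType()
#   >>> point.to_native((51.5074, -0.1278))
#   (51.5074, -0.1278)
#   >>> point.validate_range((91, 0))  # latitude out of range, raises ValidationError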
class MultilingualStringType(BaseType):
"""
A multilanguage string field, stored as a dict with {'locale': 'localized_value'}.
Minimum and maximum lengths apply to each of the localized values.
At least one of ``default_locale`` or ``context.app_data['locale']`` must be defined
when calling ``.to_primitive``.
"""
primitive_type = str
native_type = str
allow_casts = (int, bytes)
MESSAGES = {
'convert': _("Couldn't interpret value as string."),
'max_length': _("String value in locale {0} is too long."),
'min_length': _("String value in locale {0} is too short."),
'locale_not_found': _("No requested locale was available."),
'no_locale': _("No default or explicit locales were given."),
'regex_locale': _("Name of locale {0} did not match validation regex."),
'regex_localized': _("String value in locale {0} did not match validation regex."),
}
LOCALE_REGEX = r'^[a-z]{2}(?:_[A-Z]{2})?$'  # e.g. 'en' or 'en_US'
def __init__(self, regex=None, max_length=None, min_length=None,
default_locale=None, locale_regex=LOCALE_REGEX, **kwargs):
self.regex = re.compile(regex) if regex else None
self.max_length = max_length
self.min_length = min_length
self.default_locale = default_locale
self.locale_regex = re.compile(locale_regex) if locale_regex else None
super(MultilingualStringType, self).__init__(**kwargs)
def _mock(self, context=None):
return random_string(get_value_in(self.min_length, self.max_length))
def to_native(self, value, context=None):
"""Make sure a MultilingualStringType value is a dict or None."""
if not (value is None or isinstance(value, dict)):
raise ConversionError(_('Value must be a dict or None'))
return value
def to_primitive(self, value, context=None):
"""
Use a combination of ``default_locale`` and ``context.app_data['locale']`` to return
the best localized string.
"""
if value is None:
return None
context_locale = None
if context and 'locale' in context.app_data:
context_locale = context.app_data['locale']
# Build a list of all possible locales to try
possible_locales = []
for locale in (context_locale, self.default_locale):
if not locale:
continue
if isinstance(locale, string_type):
possible_locales.append(locale)
else:
possible_locales.extend(locale)
if not possible_locales:
raise ConversionError(self.messages['no_locale'])
for locale in possible_locales:
if locale in value:
localized = value[locale]
break
else:
raise ConversionError(self.messages['locale_not_found'])
if not isinstance(localized, str):
if isinstance(localized, self.allow_casts):
if isinstance(localized, bytes):
localized = str(localized, 'utf-8')
else:
localized = str(localized)
else:
raise ConversionError(self.messages['convert'])
return localized
def validate_length(self, value, context=None):
for locale, localized in value.items():
len_of_value = len(localized) if localized else 0
if self.max_length is not None and len_of_value > self.max_length:
raise ValidationError(self.messages['max_length'].format(locale))
if self.min_length is not None and len_of_value < self.min_length:
raise ValidationError(self.messages['min_length'].format(locale))
def validate_regex(self, value, context=None):
if self.regex is None and self.locale_regex is None:
return
for locale, localized in value.items():
if self.regex is not None and self.regex.match(localized) is None:
raise ValidationError(
self.messages['regex_localized'].format(locale))
if self.locale_regex is not None and self.locale_regex.match(locale) is None:
raise ValidationError(
self.messages['regex_locale'].format(locale))
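# Illustrative sketch (added comment, not part of the original module): with no export
# context, ``default_locale`` decides which localized value is returned.
#
#   >>> field = MultilingualStringType(default_locale='en')
#   >>> field.to_primitive({'en': 'Hello', 'fr': 'Bonjour'})
#   'Hello'
#   >>> field.to_primitive({'fr': 'Bonjour'})  # no matching locale, raises ConversionError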
if PY2:
# Python 2 names cannot be unicode
__all__ = [n.encode('ascii') for n in __all__]
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/types/base.py
|
base.py
| 0.789761 | 0.175644 |
from __future__ import unicode_literals, absolute_import
import copy
from functools import partial
from types import FunctionType
from ..common import *
from ..exceptions import *
from ..undefined import Undefined
from ..transforms import get_import_context
from .base import BaseType, TypeMeta
__all__ = ['calculated', 'serializable', 'Serializable']
def serializable(arg=None, **kwargs):
"""A serializable is a way to define dynamic serializable fields that are
derived from other fields.
>>> from schematics.models import serializable
>>> class Location(Model):
... country_code = StringType()
... @serializable
... def country_name(self):
... return {'us': 'United States'}[self.country_code]
...
>>> location = Location({'country_code': 'us'})
>>> location.serialize()
{'country_name': 'United States', 'country_code': 'us'}
>>>
:param type:
A `BaseType` subclass or instance used to enforce a certain type
on serialization.
:param serialized_name:
The name of this field in the serialized output.
"""
if isinstance(arg, FunctionType):
decorator = True
func = arg
serialized_type = BaseType
elif arg is None or isinstance(arg, (BaseType, TypeMeta)):
decorator = False
serialized_type = arg or kwargs.pop("type", BaseType)
else:
raise TypeError("The argument to 'serializable' must be a function or a type.")
if isinstance(serialized_type, BaseType):
# `serialized_type` is already a type instance,
# so update it with the options found in `kwargs`.
serialized_type._set_export_level(kwargs.pop('export_level', None),
kwargs.pop("serialize_when_none", None))
for name, value in kwargs.items():
setattr(serialized_type, name, value)
else:
serialized_type = serialized_type(**kwargs)
if decorator:
return Serializable(type=serialized_type, fget=func)
else:
return partial(Serializable, type=serialized_type)
def calculated(type, fget, fset=None):
return Serializable(type=type, fget=fget, fset=fset)
class Serializable(object):
def __init__(self, fget, type, fset=None):
self.type = type
self.fget = fget
self.fset = fset
def __getattr__(self, name):
return getattr(self.type, name)
def __get__(self, instance, cls):
if instance is None:
return self
else:
value = self.fget(instance)
if value is Undefined:
raise UndefinedValueError(instance, self.name)
else:
return value
def __set__(self, instance, value):
if self.fset is None:
raise AttributeError("can't set attribute %s" % self.name)
value = self.type.pre_setattr(value)
self.fset(instance, value)
def setter(self, fset):
self.fset = fset
return self
def _repr_info(self):
return self.type.__class__.__name__
def __deepcopy__(self, memo):
return self.__class__(self.fget, type=copy.deepcopy(self.type), fset=self.fset)
def __repr__(self):
type_ = "%s(%s) instance" % (self.__class__.__name__, self._repr_info() or '')
model = " on %s" % self.owner_model.__name__ if self.owner_model else ''
field = " as '%s'" % self.name if self.name else ''
return "<%s>" % (type_ + model + field)
if PY2:
# Python 2 names cannot be unicode
__all__ = [n.encode('ascii') for n in __all__]
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/types/serializable.py
|
serializable.py
| 0.778523 | 0.118896 |
from __future__ import unicode_literals, absolute_import
import inspect
from collections import OrderedDict
from ..common import *
from ..exceptions import ConversionError
from ..translator import _
from ..transforms import get_import_context, get_export_context
from .base import BaseType
__all__ = ['UnionType']
def _valid_init_args(type_):
args = set()
for cls in type_.__mro__:
try:
init_args = inspect.getfullargspec(cls.__init__).args[1:] # PY3
except AttributeError:
init_args = inspect.getargspec(cls.__init__).args[1:] # PY2
args.update(init_args)
if cls is BaseType:
break
return args
def _filter_kwargs(valid_args, kwargs):
return dict((k, v) for k, v in kwargs.items() if k in valid_args)
class UnionType(BaseType):
types = None
MESSAGES = {
'convert': _("Couldn't interpret value '{0}' as any of {1}."),
}
_baseclass_args = _valid_init_args(BaseType)
def __init__(self, types=None, resolver=None, **kwargs):
self._types = OrderedDict()
types = types or self.types
if resolver:
self.resolve = resolver
for type_ in types:
if isinstance(type_, type) and issubclass(type_, BaseType):
type_ = type_(**_filter_kwargs(_valid_init_args(type_), kwargs))
elif not isinstance(type_, BaseType):
raise TypeError("Got '%s' instance instead of a Schematics type" % type_.__class__.__name__)
self._types[type_.__class__] = type_
self.typenames = tuple((cls.__name__ for cls in self._types))
super(UnionType, self).__init__(**_filter_kwargs(self._baseclass_args, kwargs))
def resolve(self, value, context):
for field in self._types.values():
try:
value = field.convert(value, context)
except ConversionError:
pass
else:
return field, value
return None
def _resolve(self, value, context):
response = self.resolve(value, context)
if isinstance(response, type):
field = self._types[response]
try:
response = field, field.convert(value, context)
except ConversionError:
pass
if isinstance(response, tuple):
return response
raise ConversionError(self.messages['convert'].format(value, self.typenames))
def convert(self, value, context=None):
context = context or get_import_context()
field, native_value = self._resolve(value, context)
return native_value
def validate(self, value, context=None):
field, _ = self._resolve(value, context)
return field.validate(value, context)
def _export(self, value, format, context=None):
field, _ = self._resolve(value, context)
return field._export(value, format, context)
def to_native(self, value, context=None):
field, _ = self._resolve(value, context)
return field.to_native(value, context)
def to_primitive(self, value, context=None):
field, _ = self._resolve(value, context)
return field.to_primitive(value, context)
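# Illustrative sketch (added comment, not part of the original module): member types
# are tried in the order given and the first successful conversion wins.
#
#   >>> from schematics.types import IntType, StringType
#   >>> field = UnionType((IntType, StringType))
#   >>> field.convert('42')
#   42
#   >>> field.convert('forty-two')
#   'forty-two'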
if PY2:
# Python 2 names cannot be unicode
__all__ = [n.encode('ascii') for n in __all__]
|
schematics-fork
|
/schematics-fork-2.1.1.tar.gz/schematics-fork-2.1.1/schematics/types/union.py
|
union.py
| 0.648578 | 0.137504 |
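For orientation, a minimal usage sketch of the UnionType defined above (not part of the package sources). It assumes the stock schematics Model and field types and imports UnionType from the module path shown for this file:

from schematics.models import Model
from schematics.types import IntType, StringType
from schematics.types.union import UnionType


class Event(Model):
    # Candidate types are tried in declaration order; the first whose
    # convert() does not raise ConversionError wins. A custom `resolver`
    # callable may be passed instead to pick the variant explicitly.
    ref = UnionType((IntType, StringType))


event = Event({'ref': '42'})
print(event.ref)  # -> 42; IntType is tried first and converts the string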
from dataclasses import dataclass
from typing import Type
import schematics
from google.protobuf.message import Message
from schematics_proto3.types import OneOfType
from schematics_proto3.types.wrappers import WrapperTypeMixin
from schematics_proto3.utils import get_value_fallback
class _Ignore:
"""
Sentinel class to denote `protobuf_enum` argument in ProtobufEnum base
class.
"""
# pylint: disable=too-few-public-methods
__slots__ = []
@dataclass(frozen=True)
class ModelOptions:
message_class: Type[Message]
class ModelMeta(schematics.ModelMeta):
def __new__(mcs, name, bases, attrs, protobuf_message=None):
cls = super().__new__(mcs, name, bases, attrs)
if protobuf_message is _Ignore:
return cls
if protobuf_message is None:
raise RuntimeError(f'protobuf_message argument of class {name} must be set')
if not issubclass(protobuf_message, Message):
raise RuntimeError('protobuf_message must be a subclass of Protobuf message')
# TODO: Validate fields against protobuf message definition
cls.protobuf_options = ModelOptions(
message_class=protobuf_message,
)
return cls
class Model(schematics.Model, metaclass=ModelMeta, protobuf_message=_Ignore):
"""
Base class for models operating with protobuf messages.
"""
# pylint: disable=no-member
protobuf_options: ModelOptions
@classmethod
def load_protobuf(cls, msg):
field_names = {descriptor.name for descriptor, _ in msg.ListFields()}
values = {}
for name, field in cls.fields.items():
pb_field_name = field.metadata.get('protobuf_field', name)
value_getter_func = getattr(field, 'convert_protobuf', get_value_fallback)
values[name] = value_getter_func(msg, pb_field_name, field_names)
return cls(values)
def to_protobuf(self: 'Model') -> Message:
assert isinstance(self, schematics.Model)
msg = self.protobuf_options.message_class()
for name, field in self.fields.items():
pb_name = field.metadata.get('protobuf_field', name)
if isinstance(field, WrapperTypeMixin):
# This is a wrapped value, assign it iff not Unset.
val = getattr(self, name)
field.export_protobuf(msg, pb_name, val)
elif isinstance(field, Model):
# Compound, nested field, delegate serialisation to model
# instance.
setattr(msg, pb_name, field.to_protobuf())
elif isinstance(field, OneOfType):
val = getattr(self, name)
field.export_protobuf(msg, pb_name, val)
else:
# Primitive value, just assign it.
val = getattr(self, name)
if val is not None:
setattr(msg, pb_name, val)
return msg
def __hash__(self):
return hash(tuple(field for field in self.fields))
|
schematics-proto3
|
/schematics_proto3-0.1.3-py3-none-any.whl/schematics_proto3/models.py
|
models.py
|
| 0.706494 | 0.14685 |
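A hedged usage sketch for the Model base class above. The example_pb2 module and its User message (a plain string `name` field plus a google.protobuf.Int32Value `age` wrapper) are hypothetical stand-ins for a protoc-generated schema:

from schematics.types import StringType

from schematics_proto3.models import Model
from schematics_proto3.types.wrappers import IntWrapperType

import example_pb2  # hypothetical module generated by protoc


class User(Model, protobuf_message=example_pb2.User):
    name = StringType()     # plain proto3 scalar, assigned directly
    age = IntWrapperType()  # google.protobuf.Int32Value wrapper field


msg = example_pb2.User(name='Ada')
msg.age.value = 36

user = User.load_protobuf(msg)    # protobuf message -> model instance
round_trip = user.to_protobuf()   # model instance -> protobuf message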
from typing import Type
from schematics.common import NOT_NONE
from schematics.exceptions import ConversionError
from schematics.types import BaseType
from schematics.undefined import Undefined
from schematics_proto3.enum import ProtobufEnum
from schematics_proto3.types.base import ProtobufTypeMixin
from schematics_proto3.unset import Unset
__all__ = ['EnumType']
class EnumType(ProtobufTypeMixin, BaseType):
def __init__(self, enum_class: Type[ProtobufEnum], *, unset_variant=Unset, **kwargs):
super().__init__(**kwargs)
self.enum_class: Type[ProtobufEnum] = enum_class
self.unset_variant = unset_variant
def check_required(self: BaseType, value, context):
# Treat Unset as required rule violation.
if self.required and value in {Unset, self.unset_variant}:
raise ConversionError(self.messages['required'])
super().check_required(value, context)
def convert(self, value, context):
if value in {Unset, self.unset_variant}:
return Unset
if isinstance(value, str):
return self.enum_class[value]
if isinstance(value, int):
return self.enum_class(value)
raise AttributeError(f'Expected int or str, got {type(value)}')
def export(self, value, format, context): # pylint:disable=redefined-builtin
if value is Unset:
export_level = self.get_export_level(context)
if export_level <= NOT_NONE:
return Undefined
return Unset
return value.name
def convert_protobuf(self, msg, field_name, field_names):
# pylint:disable=unused-argument
# TODO: Catch AttributeError and raise proper exception.
value = getattr(msg, field_name)
if value in {Unset, self.unset_variant}:
return Unset
return value
def export_protobuf(self, msg, field_name, value):
# pylint: disable=no-self-use
# TODO: Check that model_class is an instance of Model
if value is Unset:
return
setattr(
msg,
field_name,
value.value,
)
|
schematics-proto3
|
/schematics_proto3-0.1.3-py3-none-any.whl/schematics_proto3/types/enum.py
|
enum.py
|
| 0.692538 | 0.225918 |
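A hedged sketch of how EnumType resolves raw values. Color stands in for a hypothetical ProtobufEnum subclass (the base class lives in schematics_proto3.enum, not shown in this dump) that mirrors a proto3 enum with a RED = 1 member:

from schematics_proto3.types.enum import EnumType
from schematics_proto3.unset import Unset

# from myproject.enums import Color  # hypothetical ProtobufEnum subclass

field = EnumType(Color)

field.convert('RED', None)   # by name   -> Color['RED']
field.convert(1, None)       # by number -> Color(1)
field.convert(Unset, None)   # missing   -> the Unset sentinel passes through

# export() serialises a member back to its name (e.g. Color.RED -> 'RED');
# whether an Unset value is dropped depends on the export level in the context.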
from schematics.common import NOT_NONE
from schematics.exceptions import ValidationError, DataError, CompoundError, StopValidationError
from schematics.types import CompoundType, BaseType
from schematics.undefined import Undefined
from schematics_proto3.oneof import OneOfVariant
from schematics_proto3.types.base import ProtobufTypeMixin
from schematics_proto3.unset import Unset
from schematics_proto3.utils import get_value_fallback, set_value_fallback
__all__ = ['OneOfType']
class OneOfType(ProtobufTypeMixin, CompoundType):
def __init__(self, variants_spec, *args, **kwargs):
# TODO: Check that each:
# 1) key in variants_spec exists in protobuf message
# (with respect to renaming)
# 2) value in variants_spec is a subclass of BaseType
super().__init__(*args, **kwargs)
self.variants_spec = variants_spec
self._variant = None
self._variant_type = None
self._protobuf_renames = {}
self._default = Unset
for name, spec in variants_spec.items():
pb_name = spec.metadata.get('protobuf_field', None)
if pb_name is not None:
if pb_name in variants_spec:
raise RuntimeError(f'Duplicated variant name `{pb_name}`')
self._protobuf_renames[pb_name] = name
@property
def variant(self):
return self._variant
@variant.setter
def variant(self, name):
if name in self.variants_spec:
self._variant = name
self._variant_type = self.variants_spec[name]
elif name in self._protobuf_renames:
self._variant = self._protobuf_renames[name]
self._variant_type = self.variants_spec[self._variant]
else:
raise KeyError(name)
@property
def variant_type(self):
return self._variant_type
def pre_setattr(self, value):
# TODO: Raise proper exceptions
variant = None
if isinstance(value, OneOfVariant):
variant = value
if isinstance(value, tuple):
if len(value) != 2:
raise RuntimeError(
f'OneOfVariant tuple must have 2 items, got {len(value)}'
)
variant = OneOfVariant(value[0], value[1])
if isinstance(value, dict):
if 'variant' not in value or 'value' not in value:
raise RuntimeError(
'OneOfVariant dict must have `variant` and `value` keys.'
)
variant = OneOfVariant(value['variant'], value['value'])
if variant is None:
raise RuntimeError('Unknown value')
self.variant = variant.variant
return variant
def convert(self, value, context):
# TODO: Raise proper exception (ConversionError)
if value is Unset:
return Unset
if self.variant is None:
raise RuntimeError('Variant is unset')
val = self.variant_type.convert(value, context)
return OneOfVariant(self.variant, val)
def validate(self: BaseType, value, context=None):
if value is Unset:
return Unset
# Run validation of inner variant field.
try:
self.variant_type.validate(value.value, context)
except (ValidationError, DataError) as ex:
raise CompoundError({
self.variant: ex,
})
# Run validation for this field itself.
# Following is basically a copy of the code in BaseType :/
errors = []
for validator in self.validators:
try:
validator(value, context)
except ValidationError as exc:
errors.append(exc)
if isinstance(exc, StopValidationError):
break
if errors:
raise ValidationError(errors)
return value
def export(self, value, format, context): # pylint:disable=redefined-builtin
if value in {Unset, None}:
export_level = self.get_export_level(context)
if export_level <= NOT_NONE:
return Undefined
return Unset
return {
'variant': value.variant,
'value': self.variant_type.export(value.value, format, context),
}
# Those methods are abstract in CompoundType class, override them to
# silence linters.
# Raising NotImplementedError does not matter as we already override
# convert and export (without underscores) which are called earlier.
def _convert(self, value, context):
raise NotImplementedError()
def _export(self, value, format, context): # pylint:disable=redefined-builtin
raise NotImplementedError()
def convert_protobuf(self, msg, field_name, field_names):
# TODO: Handle value error:
# ValueError: Protocol message has no oneof "X" field.
variant_name = msg.WhichOneof(field_name)
if variant_name is None:
return Unset
self.variant = variant_name
convert_func = getattr(self.variant_type, 'convert_protobuf', get_value_fallback)
return convert_func(msg, variant_name, field_names)
def export_protobuf(self, msg, field_name, value): # pylint: disable=unused-argument
# TODO: Check that model_class is an instance of Model
if value in {Unset, None}:
return
# self.variant = field_name
set_value = getattr(self.variant_type, 'export_protobuf', set_value_fallback)
set_value(msg, self.variant, value.value)
|
schematics-proto3
|
/schematics_proto3-0.1.3-py3-none-any.whl/schematics_proto3/types/oneof.py
|
oneof.py
|
| 0.602062 | 0.244036 |
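A hedged sketch for the OneOfType above. example_pb2.Envelope is a hypothetical message declaring oneof payload { string text = 1; int32 number = 2; }; the variants spec maps variant names to ordinary schematics field instances:

from schematics.types import IntType, StringType

from schematics_proto3.models import Model
from schematics_proto3.types import OneOfType

import example_pb2  # hypothetical module generated by protoc


class Envelope(Model, protobuf_message=example_pb2.Envelope):
    payload = OneOfType({
        'text': StringType(),
        'number': IntType(),
    })


msg = example_pb2.Envelope(text='hello')
env = Envelope.load_protobuf(msg)
env.payload.variant   # -> 'text', taken from msg.WhichOneof('payload')
env.payload.value     # -> 'hello'

# pre_setattr() also accepts a ('variant', value) tuple or a
# {'variant': ..., 'value': ...} dict and normalises either form into an
# OneOfVariant before it is stored on the model.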
import os
import random
from datetime import datetime, timedelta, timezone
from google.protobuf import wrappers_pb2
from schematics.exceptions import ValidationError
from schematics.types import IntType, FloatType, BooleanType, StringType, BaseType
from schematics_proto3.types.base import ProtobufTypeMixin
from schematics_proto3.unset import Unset
__all__ = ['IntWrapperType', 'FloatWrapperType', 'BoolWrapperType',
'StringWrapperType', 'BytesWrapperType', 'TimestampType']
WRAPPER_TYPES = (
wrappers_pb2.Int32Value,
wrappers_pb2.Int64Value,
wrappers_pb2.BytesValue,
wrappers_pb2.StringValue,
wrappers_pb2.BoolValue,
wrappers_pb2.UInt32Value,
wrappers_pb2.UInt64Value,
wrappers_pb2.FloatValue,
wrappers_pb2.DoubleValue,
)
class WrapperTypeMixin(ProtobufTypeMixin):
def convert(self, value, context):
if value is Unset:
return Unset
# TODO: Is it avoidable to use this?
if isinstance(value, WRAPPER_TYPES):
value = value.value
return super().convert(value, context)
def convert_protobuf(self, msg, field_name, field_names):
# pylint: disable=no-self-use
if field_name not in field_names:
return Unset
value = getattr(msg, field_name)
return value.value
def export_protobuf(self, msg, field_name, value):
# pylint: disable=no-self-use
# TODO: Check that model_class is an instance of Model
if value is Unset or value is None:
return
field = getattr(msg, field_name)
field.value = value
class IntWrapperType(WrapperTypeMixin, IntType):
pass
class FloatWrapperType(WrapperTypeMixin, FloatType):
pass
class BoolWrapperType(WrapperTypeMixin, BooleanType):
pass
class StringWrapperType(WrapperTypeMixin, StringType):
pass
class BytesWrapperType(WrapperTypeMixin, BaseType):
MESSAGES = {
'max_length': "Bytes value is too long.",
'min_length': "Bytes value is too short.",
}
def __init__(self, max_length=None, min_length=None, **kwargs):
# TODO: Validate boundaries.
self.max_length = max_length
self.min_length = min_length
super().__init__(**kwargs)
def validate_length(self, value, context=None):
# pylint: disable=unused-argument
length = len(value)
if self.max_length is not None and length > self.max_length:
raise ValidationError(self.messages['max_length'])
if self.min_length is not None and length < self.min_length:
raise ValidationError(self.messages['min_length'])
def _mock(self, context=None):
length = random.randint(
self.min_length if self.min_length is not None else 5,
self.max_length if self.max_length is not None else 256,
)
return os.urandom(length)
class TimestampType(ProtobufTypeMixin, BaseType):
def convert_protobuf(self, msg, field_name, field_names):
# pylint: disable=no-self-use
if field_name not in field_names:
return Unset
value = getattr(msg, field_name)
return value
def to_native(self, value, context=None):
if isinstance(value, datetime):
return value
try:
return (
datetime(1970, 1, 1, tzinfo=timezone.utc)
+ timedelta(seconds=value.seconds, microseconds=value.nanos // 1000)
)
except (ValueError, TypeError):
# TODO: Informative error or Unset?
return None
|
schematics-proto3
|
/schematics_proto3-0.1.3-py3-none-any.whl/schematics_proto3/types/wrappers.py
|
wrappers.py
|
| 0.257765 | 0.176104 |
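A hedged sketch of the wrapper types above, showing why the Unset sentinel matters: proto3 wrapper messages can distinguish "never set" from "set to the default". example_pb2.Profile is hypothetical, with a google.protobuf.Int32Value `age` field and a google.protobuf.Timestamp `created_at` field:

from schematics_proto3.models import Model
from schematics_proto3.types.wrappers import IntWrapperType, TimestampType
from schematics_proto3.unset import Unset

import example_pb2  # hypothetical module generated by protoc


class Profile(Model, protobuf_message=example_pb2.Profile):
    age = IntWrapperType()
    created_at = TimestampType()


msg = example_pb2.Profile()            # nothing set on the wire
profile = Profile.load_protobuf(msg)
# Absent wrapper fields never appear in ListFields(), so convert_protobuf()
# returns Unset rather than proto3's default of 0.
assert profile.age is Unset

msg.age.value = 30
profile = Profile.load_protobuf(msg)
assert profile.age == 30               # the wrapped value, unwrapped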
This changelog is not maintained in this fork.
2.1.0 / Unreleased
==================
**[BREAKING CHANGE]**
- Drop Python 2.6 support
`#517 <https://github.com/schematics/schematics/pull/517>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
Other changes:
- Add TimedeltaType
`#540 <https://github.com/schematics/schematics/pull/540>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Allow to create Model fields dynamically
`#512 <https://github.com/schematics/schematics/pull/512>`__
(`lkraider <https://github.com/lkraider>`__)
- Allow ModelOptions to have extra parameters
`#449 <https://github.com/schematics/schematics/pull/449>`__
(`rmb938 <https://github.com/rmb938>`__)
`#506 <https://github.com/schematics/schematics/pull/506>`__
(`ekampf <https://github.com/ekampf>`__)
- Accept callables as serialize roles
`#508 <https://github.com/schematics/schematics/pull/508>`__
(`lkraider <https://github.com/lkraider>`__)
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Simplify PolyModelType.find_model for readability
`#537 <https://github.com/schematics/schematics/pull/537>`__
(`kstrauser <https://github.com/kstrauser>`__)
- Enable PolyModelType recursive validation
`#535 <https://github.com/schematics/schematics/pull/535>`__
(`javiertejero <https://github.com/javiertejero>`__)
- Documentation fixes
`#509 <https://github.com/schematics/schematics/pull/509>`__
(`Tuoris <https://github.com/Tuoris>`__)
`#514 <https://github.com/schematics/schematics/pull/514>`__
(`tommyzli <https://github.com/tommyzli>`__)
`#518 <https://github.com/schematics/schematics/pull/518>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#546 <https://github.com/schematics/schematics/pull/546>`__
(`harveyslash <https://github.com/harveyslash>`__)
- Fix Model.init validation when partial is True
`#531 <https://github.com/schematics/schematics/issues/531>`__
(`lkraider <https://github.com/lkraider>`__)
- Minor number types refactor and mocking fixes
`#519 <https://github.com/schematics/schematics/pull/519>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#520 <https://github.com/schematics/schematics/pull/520>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
- Add ability to import models as strings
`#496 <https://github.com/schematics/schematics/pull/496>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Add EnumType
`#504 <https://github.com/schematics/schematics/pull/504>`__
(`ekamil <https://github.com/ekamil>`__)
- Dynamic models: Possible memory issues because of _subclasses
`#502 <https://github.com/schematics/schematics/pull/502>`__
(`mjrk <https://github.com/mjrk>`__)
- Add type hints to constructors of field type classes
`#488 <https://github.com/schematics/schematics/pull/488>`__
(`KonishchevDmitry <https://github.com/KonishchevDmitry>`__)
- Regression: Do not call field validator if field has not been set
`#499 <https://github.com/schematics/schematics/pull/499>`__
(`cmonfort <https://github.com/cmonfort>`__)
- Add possibility to translate strings and add initial pt_BR translations
`#495 <https://github.com/schematics/schematics/pull/495>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.1 / 2017-05-30
==================
- Support for raising DataError inside custom validate_fieldname methods.
`#441 <https://github.com/schematics/schematics/pull/441>`__
(`alexhayes <https://github.com/alexhayes>`__)
- Add specialized SchematicsDeprecationWarning.
(`lkraider <https://github.com/lkraider>`__)
- DateTimeType to_native method should handle type errors gracefully.
`#491 <https://github.com/schematics/schematics/pull/491>`__
(`e271828- <https://github.com/e271828->`__)
- Allow fields names to override the mapping-interface methods.
`#489 <https://github.com/schematics/schematics/pull/489>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.0 / 2017-05-22
==================
**[BREAKING CHANGE]**
Version 2.0 introduces many API changes, and it is not fully backwards-compatible with 1.x code.
`Full Changelog <https://github.com/schematics/schematics/compare/v1.1.2...v2.0.0>`_
- Add syntax highlighting to README examples
`#486 <https://github.com/schematics/schematics/pull/486>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Encode Unsafe data state in Model
`#484 <https://github.com/schematics/schematics/pull/484>`__
(`lkraider <https://github.com/lkraider>`__)
- Add MACAddressType
`#482 <https://github.com/schematics/schematics/pull/482>`__
(`aleksej-paschenko <https://github.com/aleksej-paschenko>`__)
2.0.0.b1 / 2017-04-06
=====================
- Enhancing and addressing some issues around exceptions:
`#477 <https://github.com/schematics/schematics/pull/477>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
- Allow primitive and native types to be inspected
`#431 <https://github.com/schematics/schematics/pull/431>`__
(`chadrik <https://github.com/chadrik>`__)
- Atoms iterator performance improvement
`#476 <https://github.com/schematics/schematics/pull/476>`__
(`vovanbo <https://github.com/vovanbo>`__)
- Fixes 453: Recursive import\_loop with ListType
`#475 <https://github.com/schematics/schematics/pull/475>`__
(`lkraider <https://github.com/lkraider>`__)
- Schema API
`#466 <https://github.com/schematics/schematics/pull/466>`__
(`lkraider <https://github.com/lkraider>`__)
- Tweak code example to avoid sql injection
`#462 <https://github.com/schematics/schematics/pull/462>`__
(`Ian-Foote <https://github.com/Ian-Foote>`__)
- Convert readthedocs links for their .org -> .io migration for hosted
projects `#454 <https://github.com/schematics/schematics/pull/454>`__
(`adamchainz <https://github.com/adamchainz>`__)
- Support all non-string Iterables as choices (dev branch)
`#436 <https://github.com/schematics/schematics/pull/436>`__
(`di <https://github.com/di>`__)
- When testing if a values is None or Undefined, use 'is'.
`#425 <https://github.com/schematics/schematics/pull/425>`__
(`chadrik <https://github.com/chadrik>`__)
2.0.0a1 / 2016-05-03
====================
- Restore v1 to\_native behavior; simplify converter code
`#412 <https://github.com/schematics/schematics/pull/412>`__
(`bintoro <https://github.com/bintoro>`__)
- Change conversion rules for booleans
`#407 <https://github.com/schematics/schematics/pull/407>`__
(`bintoro <https://github.com/bintoro>`__)
- Test for Model.\_\_init\_\_ context passing to types
`#399 <https://github.com/schematics/schematics/pull/399>`__
(`sheilatron <https://github.com/sheilatron>`__)
- Code normalization for Python 3 + general cleanup
`#391 <https://github.com/schematics/schematics/pull/391>`__
(`bintoro <https://github.com/bintoro>`__)
- Add support for arbitrary field metadata.
`#390 <https://github.com/schematics/schematics/pull/390>`__
(`chadrik <https://github.com/chadrik>`__)
- Introduce MixedType
`#380 <https://github.com/schematics/schematics/pull/380>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev2 / 2016-02-06
=======================
- Type maintenance
`#383 <https://github.com/schematics/schematics/pull/383>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev1 / 2016-02-01
=======================
- Performance optimizations
`#378 <https://github.com/schematics/schematics/pull/378>`__
(`bintoro <https://github.com/bintoro>`__)
- Validation refactoring + exception redesign
`#374 <https://github.com/schematics/schematics/pull/374>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix typo: serilaizataion --> serialization
`#373 <https://github.com/schematics/schematics/pull/373>`__
(`jeffwidman <https://github.com/jeffwidman>`__)
- Add support for undefined values
`#372 <https://github.com/schematics/schematics/pull/372>`__
(`bintoro <https://github.com/bintoro>`__)
- Serializable improvements
`#371 <https://github.com/schematics/schematics/pull/371>`__
(`bintoro <https://github.com/bintoro>`__)
- Unify import/export interface across all types
`#368 <https://github.com/schematics/schematics/pull/368>`__
(`bintoro <https://github.com/bintoro>`__)
- Correctly decode bytestrings in Python 3
`#365 <https://github.com/schematics/schematics/pull/365>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix NumberType.to\_native()
`#364 <https://github.com/schematics/schematics/pull/364>`__
(`bintoro <https://github.com/bintoro>`__)
- Make sure field.validate() uses a native type
`#363 <https://github.com/schematics/schematics/pull/363>`__
(`bintoro <https://github.com/bintoro>`__)
- Don't validate ListType items twice
`#362 <https://github.com/schematics/schematics/pull/362>`__
(`bintoro <https://github.com/bintoro>`__)
- Collect field validators as bound methods
`#361 <https://github.com/schematics/schematics/pull/361>`__
(`bintoro <https://github.com/bintoro>`__)
- Propagate environment during recursive import/export/validation
`#359 <https://github.com/schematics/schematics/pull/359>`__
(`bintoro <https://github.com/bintoro>`__)
- DateTimeType & TimestampType major rewrite
`#358 <https://github.com/schematics/schematics/pull/358>`__
(`bintoro <https://github.com/bintoro>`__)
- Always export empty compound objects as {} / []
`#351 <https://github.com/schematics/schematics/pull/351>`__
(`bintoro <https://github.com/bintoro>`__)
- export\_loop cleanup
`#350 <https://github.com/schematics/schematics/pull/350>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix FieldDescriptor.\_\_delete\_\_ to not touch model
`#349 <https://github.com/schematics/schematics/pull/349>`__
(`bintoro <https://github.com/bintoro>`__)
- Add validation method for latitude and longitude ranges in
GeoPointType
`#347 <https://github.com/schematics/schematics/pull/347>`__
(`wraziens <https://github.com/wraziens>`__)
- Fix longitude values for GeoPointType mock and add tests
`#344 <https://github.com/schematics/schematics/pull/344>`__
(`wraziens <https://github.com/wraziens>`__)
- Add support for self-referential ModelType fields
`#335 <https://github.com/schematics/schematics/pull/335>`__
(`bintoro <https://github.com/bintoro>`__)
- avoid unnecessary code path through try/except
`#327 <https://github.com/schematics/schematics/pull/327>`__
(`scavpy <https://github.com/scavpy>`__)
- Get mock object for ModelType and ListType
`#306 <https://github.com/schematics/schematics/pull/306>`__
(`kaiix <https://github.com/kaiix>`__)
1.1.3 / 2017-06-27
==================
* [Maintenance] (`#501 <https://github.com/schematics/schematics/issues/501>`_) Dynamic models: Possible memory issues because of _subclasses
1.1.2 / 2017-03-27
==================
* [Bug] (`#478 <https://github.com/schematics/schematics/pull/478>`_) Fix dangerous performance issue with ModelConversionError in nested models
1.1.1 / 2015-11-03
==================
* [Bug] (`befa202 <https://github.com/schematics/schematics/commit/befa202c3b3202aca89fb7ef985bdca06f9da37c>`_) Fix Unicode issue with DecimalType
* [Documentation] (`41157a1 <https://github.com/schematics/schematics/commit/41157a13896bd32a337c5503c04c5e9cc30ba4c7>`_) Documentation overhaul
* [Bug] (`860d717 <https://github.com/schematics/schematics/commit/860d71778421981f284c0612aec665ebf0cfcba2>`_) Fix import that was negatively affecting performance
* [Feature] (`93b554f <https://github.com/schematics/schematics/commit/93b554fd6a4e7b38133c4da5592b1843101792f0>`_) Add DataObject to datastructures.py
* [Bug] (`#236 <https://github.com/schematics/schematics/pull/236>`_) Set `None` on a field that's a compound type should honour that semantics
* [Maintenance] (`#348 <https://github.com/schematics/schematics/pull/348>`_) Update requirements
* [Maintenance] (`#346 <https://github.com/schematics/schematics/pull/346>`_) Combining Requirements
* [Maintenance] (`#342 <https://github.com/schematics/schematics/pull/342>`_) Remove to_primitive() method from compound types
* [Bug] (`#339 <https://github.com/schematics/schematics/pull/339>`_) Basic number validation
* [Bug] (`#336 <https://github.com/schematics/schematics/pull/336>`_) Don't evaluate serializable when accessed through class
* [Bug] (`#321 <https://github.com/schematics/schematics/pull/321>`_) Do not compile regex
* [Maintenance] (`#319 <https://github.com/schematics/schematics/pull/319>`_) Remove mock from install_requires
1.1.0 / 2015-07-12
==================
* [Feature] (`#303 <https://github.com/schematics/schematics/pull/303>`_) fix ListType, validate_items adds to errors list just field name without...
* [Feature] (`#304 <https://github.com/schematics/schematics/pull/304>`_) Include Partial Data when Raising ModelConversionError
* [Feature] (`#305 <https://github.com/schematics/schematics/pull/305>`_) Updated domain verifications to fit to RFC/working standards
* [Feature] (`#308 <https://github.com/schematics/schematics/pull/308>`_) Grennady ordered validation
* [Feature] (`#309 <https://github.com/schematics/schematics/pull/309>`_) improves date_time_type error message for custom formats
* [Feature] (`#310 <https://github.com/schematics/schematics/pull/310>`_) accept optional 'Z' suffix for UTC date_time_type format
* [Feature] (`#311 <https://github.com/schematics/schematics/pull/311>`_) Remove commented lines from models.py
* [Feature] (`#230 <https://github.com/schematics/schematics/pull/230>`_) Message normalization
1.0.4 / 2015-04-13
==================
* [Example] (`#286 <https://github.com/schematics/schematics/pull/286>`_) Add schematics usage with Django
* [Feature] (`#292 <https://github.com/schematics/schematics/pull/292>`_) increase domain length to 10 for .holiday, .vacations
* [Feature] (`#297 <https://github.com/schematics/schematics/pull/297>`_) Support for fields order in serialized format
* [Feature] (`#300 <https://github.com/schematics/schematics/pull/300>`_) increase domain length to 32
1.0.3 / 2015-03-07
==================
* [Feature] (`#284 <https://github.com/schematics/schematics/pull/284>`_) Add missing requirement for `six`
* [Feature] (`#283 <https://github.com/schematics/schematics/pull/283>`_) Update error msgs to print out invalid values in base.py
* [Feature] (`#281 <https://github.com/schematics/schematics/pull/281>`_) Update Model.__eq__
* [Feature] (`#267 <https://github.com/schematics/schematics/pull/267>`_) Type choices should be list or tuple
1.0.2 / 2015-02-12
==================
* [Bug] (`#280 <https://github.com/schematics/schematics/issues/280>`_) Fix the circular import issue.
1.0.1 / 2015-02-01
==================
* [Feature] (`#184 <https://github.com/schematics/schematics/issues/184>`_ / `03b2fd9 <https://github.com/schematics/schematics/commit/03b2fd97fb47c00e8d667cc8ea7254cc64d0f0a0>`_) Support for polymorphic model fields
* [Bug] (`#233 <https://github.com/schematics/schematics/pull/233>`_) Set field.owner_model recursively and honor ListType.field.serialize_when_none
* [Bug](`#252 <https://github.com/schematics/schematics/pull/252>`_) Fixed project URL
* [Feature] (`#259 <https://github.com/schematics/schematics/pull/259>`_) Give export loop to serializable when type has one
* [Feature] (`#262 <https://github.com/schematics/schematics/pull/262>`_) Make copies of inherited meta attributes when setting up a Model
* [Documentation] (`#276 <https://github.com/schematics/schematics/pull/276>`_) Improve the documentation of get_mock_object
1.0.0 / 2014-10-16
==================
* [Documentation] (`#239 <https://github.com/schematics/schematics/issues/239>`_) Fix typo with wording suggestion
* [Documentation] (`#244 <https://github.com/schematics/schematics/issues/244>`_) fix wrong reference in docs
* [Documentation] (`#246 <https://github.com/schematics/schematics/issues/246>`_) Using the correct function name in the docstring
* [Documentation] (`#245 <https://github.com/schematics/schematics/issues/245>`_) Making the docstring match actual parameter names
* [Feature] (`#241 <https://github.com/schematics/schematics/issues/241>`_) Py3k support
0.9.5 / 2014-07-19
==================
* [Feature] (`#191 <https://github.com/schematics/schematics/pull/191>`_) Updated import_data to avoid overwriting existing data. deserialize_mapping can now support partial and nested models.
* [Documentation] (`#192 <https://github.com/schematics/schematics/pull/192>`_) Document the creation of custom types
* [Feature] (`#193 <https://github.com/schematics/schematics/pull/193>`_) Add primitive types accepting values of any simple or compound primitive JSON type.
* [Bug] (`#194 <https://github.com/schematics/schematics/pull/194>`_) Change standard coerce_key function to unicode
* [Tests] (`#196 <https://github.com/schematics/schematics/pull/196>`_) Test fixes and cleanup
* [Feature] (`#197 <https://github.com/schematics/schematics/pull/197>`_) Giving context to serialization
* [Bug] (`#198 <https://github.com/schematics/schematics/pull/198>`_) Fixed typo in variable name in DateTimeType
* [Feature] (`#200 <https://github.com/schematics/schematics/pull/200>`_) Added the option to turn of strict conversion when creating a Model from a dict
* [Feature] (`#212 <https://github.com/schematics/schematics/pull/212>`_) Support exporting ModelType fields with subclassed model instances
* [Feature] (`#214 <https://github.com/schematics/schematics/pull/214>`_) Create mock objects using a class's fields as a template
* [Bug] (`#215 <https://github.com/schematics/schematics/pull/215>`_) PEP 8 FTW
* [Feature] (`#216 <https://github.com/schematics/schematics/pull/216>`_) Datastructures cleanup
* [Feature] (`#217 <https://github.com/schematics/schematics/pull/217>`_) Models cleanup pt 1
* [Feature] (`#218 <https://github.com/schematics/schematics/pull/218>`_) Models cleanup pt 2
* [Feature] (`#219 <https://github.com/schematics/schematics/pull/219>`_) Mongo cleanup
* [Feature] (`#220 <https://github.com/schematics/schematics/pull/220>`_) Temporal cleanup
* [Feature] (`#221 <https://github.com/schematics/schematics/pull/221>`_) Base cleanup
* [Feature] (`#224 <https://github.com/schematics/schematics/pull/224>`_) Exceptions cleanup
* [Feature] (`#225 <https://github.com/schematics/schematics/pull/225>`_) Validate cleanup
* [Feature] (`#226 <https://github.com/schematics/schematics/pull/226>`_) Serializable cleanup
* [Feature] (`#227 <https://github.com/schematics/schematics/pull/227>`_) Transforms cleanup
* [Feature] (`#228 <https://github.com/schematics/schematics/pull/228>`_) Compound cleanup
* [Feature] (`#229 <https://github.com/schematics/schematics/pull/229>`_) UUID cleanup
* [Feature] (`#231 <https://github.com/schematics/schematics/pull/231>`_) Booleans as numbers
0.9.4 / 2013-12-08
==================
* [Feature] (`#178 <https://github.com/schematics/schematics/pull/178>`_) Added deserialize_from flag to BaseType for alternate field names on import
* [Bug] (`#186 <https://github.com/schematics/schematics/pull/186>`_) Compoundtype support in ListTypes
* [Bug] (`#181 <https://github.com/schematics/schematics/pull/181>`_) Removed that stupid print statement!
* [Feature] (`#182 <https://github.com/schematics/schematics/pull/182>`_) Default roles system
* [Documentation] (`#190 <https://github.com/schematics/schematics/pull/190>`_) Typos
* [Bug] (`#177 <https://github.com/schematics/schematics/pull/177>`_) Removed `__iter__` from ModelMeta
* [Documentation] (`#188 <https://github.com/schematics/schematics/pull/188>`_) Typos
0.9.3 / 2013-10-20
==================
* [Documentation] More improvements
* [Feature] (`#147 <https://github.com/schematics/schematics/pull/147>`_) Complete conversion over to py.test
* [Bug] (`#176 <https://github.com/schematics/schematics/pull/176>`_) Fixed bug preventing clean override of options class
* [Bug] (`#174 <https://github.com/schematics/schematics/pull/174>`_) Python 2.6 support
0.9.2 / 2013-09-13
==================
* [Documentation] New History file!
* [Documentation] Major improvements to documentation
* [Feature] Renamed ``check_value`` to ``validate_range``
* [Feature] Changed ``serialize`` to ``to_native``
* [Bug] (`#155 <https://github.com/schematics/schematics/pull/155>`_) NumberType number range validation bugfix
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/HISTORY.rst
|
HISTORY.rst
|
This changelog is not maintained in this fork.
2.1.0 / Unreleased
==================
**[BREAKING CHANGE]**
- Drop Python 2.6 support
`#517 <https://github.com/schematics/schematics/pull/517>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
Other changes:
- Add TimedeltaType
`#540 <https://github.com/schematics/schematics/pull/540>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Allow to create Model fields dynamically
`#512 <https://github.com/schematics/schematics/pull/512>`__
(`lkraider <https://github.com/lkraider>`__)
- Allow ModelOptions to have extra parameters
`#449 <https://github.com/schematics/schematics/pull/449>`__
(`rmb938 <https://github.com/rmb938>`__)
`#506 <https://github.com/schematics/schematics/pull/506>`__
(`ekampf <https://github.com/ekampf>`__)
- Accept callables as serialize roles
`#508 <https://github.com/schematics/schematics/pull/508>`__
(`lkraider <https://github.com/lkraider>`__)
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Simplify PolyModelType.find_model for readability
`#537 <https://github.com/schematics/schematics/pull/537>`__
(`kstrauser <https://github.com/kstrauser>`__)
- Enable PolyModelType recursive validation
`#535 <https://github.com/schematics/schematics/pull/535>`__
(`javiertejero <https://github.com/javiertejero>`__)
- Documentation fixes
`#509 <https://github.com/schematics/schematics/pull/509>`__
(`Tuoris <https://github.com/Tuoris>`__)
`#514 <https://github.com/schematics/schematics/pull/514>`__
(`tommyzli <https://github.com/tommyzli>`__)
`#518 <https://github.com/schematics/schematics/pull/518>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#546 <https://github.com/schematics/schematics/pull/546>`__
(`harveyslash <https://github.com/harveyslash>`__)
- Fix Model.init validation when partial is True
`#531 <https://github.com/schematics/schematics/issues/531>`__
(`lkraider <https://github.com/lkraider>`__)
- Minor number types refactor and mocking fixes
`#519 <https://github.com/schematics/schematics/pull/519>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
`#520 <https://github.com/schematics/schematics/pull/520>`__
(`rooterkyberian <https://github.com/rooterkyberian>`__)
- Add ability to import models as strings
`#496 <https://github.com/schematics/schematics/pull/496>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
- Add EnumType
`#504 <https://github.com/schematics/schematics/pull/504>`__
(`ekamil <https://github.com/ekamil>`__)
- Dynamic models: Possible memory issues because of _subclasses
`#502 <https://github.com/schematics/schematics/pull/502>`__
(`mjrk <https://github.com/mjrk>`__)
- Add type hints to constructors of field type classes
`#488 <https://github.com/schematics/schematics/pull/488>`__
(`KonishchevDmitry <https://github.com/KonishchevDmitry>`__)
- Regression: Do not call field validator if field has not been set
`#499 <https://github.com/schematics/schematics/pull/499>`__
(`cmonfort <https://github.com/cmonfort>`__)
- Add possibility to translate strings and add initial pt_BR translations
`#495 <https://github.com/schematics/schematics/pull/495>`__
(`jaysonsantos <https://github.com/jaysonsantos>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.1 / 2017-05-30
==================
- Support for raising DataError inside custom validate_fieldname methods.
`#441 <https://github.com/schematics/schematics/pull/441>`__
(`alexhayes <https://github.com/alexhayes>`__)
- Add specialized SchematicsDeprecationWarning.
(`lkraider <https://github.com/lkraider>`__)
- DateTimeType to_native method should handle type errors gracefully.
`#491 <https://github.com/schematics/schematics/pull/491>`__
(`e271828- <https://github.com/e271828->`__)
- Allow fields names to override the mapping-interface methods.
`#489 <https://github.com/schematics/schematics/pull/489>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
(`lkraider <https://github.com/lkraider>`__)
2.0.0 / 2017-05-22
==================
**[BREAKING CHANGE]**
Version 2.0 introduces many API changes, and it is not fully backwards-compatible with 1.x code.
`Full Changelog <https://github.com/schematics/schematics/compare/v1.1.2...v2.0.0>`_
- Add syntax highlighting to README examples
`#486 <https://github.com/schematics/schematics/pull/486>`__
(`gabisurita <https://github.com/gabisurita>`__)
- Encode Unsafe data state in Model
`#484 <https://github.com/schematics/schematics/pull/484>`__
(`lkraider <https://github.com/lkraider>`__)
- Add MACAddressType
`#482 <https://github.com/schematics/schematics/pull/482>`__
(`aleksej-paschenko <https://github.com/aleksej-paschenko>`__)
2.0.0.b1 / 2017-04-06
=====================
- Enhancing and addressing some issues around exceptions:
`#477 <https://github.com/schematics/schematics/pull/477>`__
(`toumorokoshi <https://github.com/toumorokoshi>`__)
- Allow primitive and native types to be inspected
`#431 <https://github.com/schematics/schematics/pull/431>`__
(`chadrik <https://github.com/chadrik>`__)
- Atoms iterator performance improvement
`#476 <https://github.com/schematics/schematics/pull/476>`__
(`vovanbo <https://github.com/vovanbo>`__)
- Fixes 453: Recursive import\_loop with ListType
`#475 <https://github.com/schematics/schematics/pull/475>`__
(`lkraider <https://github.com/lkraider>`__)
- Schema API
`#466 <https://github.com/schematics/schematics/pull/466>`__
(`lkraider <https://github.com/lkraider>`__)
- Tweak code example to avoid sql injection
`#462 <https://github.com/schematics/schematics/pull/462>`__
(`Ian-Foote <https://github.com/Ian-Foote>`__)
- Convert readthedocs links for their .org -> .io migration for hosted
projects `#454 <https://github.com/schematics/schematics/pull/454>`__
(`adamchainz <https://github.com/adamchainz>`__)
- Support all non-string Iterables as choices (dev branch)
`#436 <https://github.com/schematics/schematics/pull/436>`__
(`di <https://github.com/di>`__)
- When testing if a values is None or Undefined, use 'is'.
`#425 <https://github.com/schematics/schematics/pull/425>`__
(`chadrik <https://github.com/chadrik>`__)
2.0.0a1 / 2016-05-03
====================
- Restore v1 to\_native behavior; simplify converter code
`#412 <https://github.com/schematics/schematics/pull/412>`__
(`bintoro <https://github.com/bintoro>`__)
- Change conversion rules for booleans
`#407 <https://github.com/schematics/schematics/pull/407>`__
(`bintoro <https://github.com/bintoro>`__)
- Test for Model.\_\_init\_\_ context passing to types
`#399 <https://github.com/schematics/schematics/pull/399>`__
(`sheilatron <https://github.com/sheilatron>`__)
- Code normalization for Python 3 + general cleanup
`#391 <https://github.com/schematics/schematics/pull/391>`__
(`bintoro <https://github.com/bintoro>`__)
- Add support for arbitrary field metadata.
`#390 <https://github.com/schematics/schematics/pull/390>`__
(`chadrik <https://github.com/chadrik>`__)
- Introduce MixedType
`#380 <https://github.com/schematics/schematics/pull/380>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev2 / 2016-02-06
=======================
- Type maintenance
`#383 <https://github.com/schematics/schematics/pull/383>`__
(`bintoro <https://github.com/bintoro>`__)
2.0.0.dev1 / 2016-02-01
=======================
- Performance optimizations
`#378 <https://github.com/schematics/schematics/pull/378>`__
(`bintoro <https://github.com/bintoro>`__)
- Validation refactoring + exception redesign
`#374 <https://github.com/schematics/schematics/pull/374>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix typo: serilaizataion --> serialization
`#373 <https://github.com/schematics/schematics/pull/373>`__
(`jeffwidman <https://github.com/jeffwidman>`__)
- Add support for undefined values
`#372 <https://github.com/schematics/schematics/pull/372>`__
(`bintoro <https://github.com/bintoro>`__)
- Serializable improvements
`#371 <https://github.com/schematics/schematics/pull/371>`__
(`bintoro <https://github.com/bintoro>`__)
- Unify import/export interface across all types
`#368 <https://github.com/schematics/schematics/pull/368>`__
(`bintoro <https://github.com/bintoro>`__)
- Correctly decode bytestrings in Python 3
`#365 <https://github.com/schematics/schematics/pull/365>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix NumberType.to\_native()
`#364 <https://github.com/schematics/schematics/pull/364>`__
(`bintoro <https://github.com/bintoro>`__)
- Make sure field.validate() uses a native type
`#363 <https://github.com/schematics/schematics/pull/363>`__
(`bintoro <https://github.com/bintoro>`__)
- Don't validate ListType items twice
`#362 <https://github.com/schematics/schematics/pull/362>`__
(`bintoro <https://github.com/bintoro>`__)
- Collect field validators as bound methods
`#361 <https://github.com/schematics/schematics/pull/361>`__
(`bintoro <https://github.com/bintoro>`__)
- Propagate environment during recursive import/export/validation
`#359 <https://github.com/schematics/schematics/pull/359>`__
(`bintoro <https://github.com/bintoro>`__)
- DateTimeType & TimestampType major rewrite
`#358 <https://github.com/schematics/schematics/pull/358>`__
(`bintoro <https://github.com/bintoro>`__)
- Always export empty compound objects as {} / []
`#351 <https://github.com/schematics/schematics/pull/351>`__
(`bintoro <https://github.com/bintoro>`__)
- export\_loop cleanup
`#350 <https://github.com/schematics/schematics/pull/350>`__
(`bintoro <https://github.com/bintoro>`__)
- Fix FieldDescriptor.\_\_delete\_\_ to not touch model
`#349 <https://github.com/schematics/schematics/pull/349>`__
(`bintoro <https://github.com/bintoro>`__)
- Add validation method for latitude and longitude ranges in
GeoPointType
`#347 <https://github.com/schematics/schematics/pull/347>`__
(`wraziens <https://github.com/wraziens>`__)
- Fix longitude values for GeoPointType mock and add tests
`#344 <https://github.com/schematics/schematics/pull/344>`__
(`wraziens <https://github.com/wraziens>`__)
- Add support for self-referential ModelType fields
`#335 <https://github.com/schematics/schematics/pull/335>`__
(`bintoro <https://github.com/bintoro>`__)
- avoid unnecessary code path through try/except
`#327 <https://github.com/schematics/schematics/pull/327>`__
(`scavpy <https://github.com/scavpy>`__)
- Get mock object for ModelType and ListType
`#306 <https://github.com/schematics/schematics/pull/306>`__
(`kaiix <https://github.com/kaiix>`__)
1.1.3 / 2017-06-27
==================
* [Maintenance] (`#501 <https://github.com/schematics/schematics/issues/501>`_) Dynamic models: Possible memory issues because of _subclasses
1.1.2 / 2017-03-27
==================
* [Bug] (`#478 <https://github.com/schematics/schematics/pull/478>`_) Fix dangerous performance issue with ModelConversionError in nested models
1.1.1 / 2015-11-03
==================
* [Bug] (`befa202 <https://github.com/schematics/schematics/commit/befa202c3b3202aca89fb7ef985bdca06f9da37c>`_) Fix Unicode issue with DecimalType
* [Documentation] (`41157a1 <https://github.com/schematics/schematics/commit/41157a13896bd32a337c5503c04c5e9cc30ba4c7>`_) Documentation overhaul
* [Bug] (`860d717 <https://github.com/schematics/schematics/commit/860d71778421981f284c0612aec665ebf0cfcba2>`_) Fix import that was negatively affecting performance
* [Feature] (`93b554f <https://github.com/schematics/schematics/commit/93b554fd6a4e7b38133c4da5592b1843101792f0>`_) Add DataObject to datastructures.py
* [Bug] (`#236 <https://github.com/schematics/schematics/pull/236>`_) Set `None` on a field that's a compound type should honour that semantics
* [Maintenance] (`#348 <https://github.com/schematics/schematics/pull/348>`_) Update requirements
* [Maintenance] (`#346 <https://github.com/schematics/schematics/pull/346>`_) Combining Requirements
* [Maintenance] (`#342 <https://github.com/schematics/schematics/pull/342>`_) Remove to_primitive() method from compound types
* [Bug] (`#339 <https://github.com/schematics/schematics/pull/339>`_) Basic number validation
* [Bug] (`#336 <https://github.com/schematics/schematics/pull/336>`_) Don't evaluate serializable when accessed through class
* [Bug] (`#321 <https://github.com/schematics/schematics/pull/321>`_) Do not compile regex
* [Maintenance] (`#319 <https://github.com/schematics/schematics/pull/319>`_) Remove mock from install_requires
1.1.0 / 2015-07-12
==================
* [Feature] (`#303 <https://github.com/schematics/schematics/pull/303>`_) fix ListType, validate_items adds to errors list just field name without...
* [Feature] (`#304 <https://github.com/schematics/schematics/pull/304>`_) Include Partial Data when Raising ModelConversionError
* [Feature] (`#305 <https://github.com/schematics/schematics/pull/305>`_) Updated domain verifications to fit to RFC/working standards
* [Feature] (`#308 <https://github.com/schematics/schematics/pull/308>`_) Grennady ordered validation
* [Feature] (`#309 <https://github.com/schematics/schematics/pull/309>`_) improves date_time_type error message for custom formats
* [Feature] (`#310 <https://github.com/schematics/schematics/pull/310>`_) accept optional 'Z' suffix for UTC date_time_type format
* [Feature] (`#311 <https://github.com/schematics/schematics/pull/311>`_) Remove commented lines from models.py
* [Feature] (`#230 <https://github.com/schematics/schematics/pull/230>`_) Message normalization
1.0.4 / 2015-04-13
==================
* [Example] (`#286 <https://github.com/schematics/schematics/pull/286>`_) Add schematics usage with Django
* [Feature] (`#292 <https://github.com/schematics/schematics/pull/292>`_) increase domain length to 10 for .holiday, .vacations
* [Feature] (`#297 <https://github.com/schematics/schematics/pull/297>`_) Support for fields order in serialized format
* [Feature] (`#300 <https://github.com/schematics/schematics/pull/300>`_) increase domain length to 32
1.0.3 / 2015-03-07
==================
* [Feature] (`#284 <https://github.com/schematics/schematics/pull/284>`_) Add missing requirement for `six`
* [Feature] (`#283 <https://github.com/schematics/schematics/pull/283>`_) Update error msgs to print out invalid values in base.py
* [Feature] (`#281 <https://github.com/schematics/schematics/pull/281>`_) Update Model.__eq__
* [Feature] (`#267 <https://github.com/schematics/schematics/pull/267>`_) Type choices should be list or tuple
1.0.2 / 2015-02-12
==================
* [Bug] (`#280 <https://github.com/schematics/schematics/issues/280>`_) Fix the circular import issue.
1.0.1 / 2015-02-01
==================
* [Feature] (`#184 <https://github.com/schematics/schematics/issues/184>`_ / `03b2fd9 <https://github.com/schematics/schematics/commit/03b2fd97fb47c00e8d667cc8ea7254cc64d0f0a0>`_) Support for polymorphic model fields
* [Bug] (`#233 <https://github.com/schematics/schematics/pull/233>`_) Set field.owner_model recursively and honor ListType.field.serialize_when_none
* [Bug] (`#252 <https://github.com/schematics/schematics/pull/252>`_) Fixed project URL
* [Feature] (`#259 <https://github.com/schematics/schematics/pull/259>`_) Give export loop to serializable when type has one
* [Feature] (`#262 <https://github.com/schematics/schematics/pull/262>`_) Make copies of inherited meta attributes when setting up a Model
* [Documentation] (`#276 <https://github.com/schematics/schematics/pull/276>`_) Improve the documentation of get_mock_object
1.0.0 / 2014-10-16
==================
* [Documentation] (`#239 <https://github.com/schematics/schematics/issues/239>`_) Fix typo with wording suggestion
* [Documentation] (`#244 <https://github.com/schematics/schematics/issues/244>`_) fix wrong reference in docs
* [Documentation] (`#246 <https://github.com/schematics/schematics/issues/246>`_) Using the correct function name in the docstring
* [Documentation] (`#245 <https://github.com/schematics/schematics/issues/245>`_) Making the docstring match actual parameter names
* [Feature] (`#241 <https://github.com/schematics/schematics/issues/241>`_) Py3k support
0.9.5 / 2014-07-19
==================
* [Feature] (`#191 <https://github.com/schematics/schematics/pull/191>`_) Updated import_data to avoid overwriting existing data. deserialize_mapping can now support partial and nested models.
* [Documentation] (`#192 <https://github.com/schematics/schematics/pull/192>`_) Document the creation of custom types
* [Feature] (`#193 <https://github.com/schematics/schematics/pull/193>`_) Add primitive types accepting values of any simple or compound primitive JSON type.
* [Bug] (`#194 <https://github.com/schematics/schematics/pull/194>`_) Change standard coerce_key function to unicode
* [Tests] (`#196 <https://github.com/schematics/schematics/pull/196>`_) Test fixes and cleanup
* [Feature] (`#197 <https://github.com/schematics/schematics/pull/197>`_) Giving context to serialization
* [Bug] (`#198 <https://github.com/schematics/schematics/pull/198>`_) Fixed typo in variable name in DateTimeType
* [Feature] (`#200 <https://github.com/schematics/schematics/pull/200>`_) Added the option to turn off strict conversion when creating a Model from a dict
* [Feature] (`#212 <https://github.com/schematics/schematics/pull/212>`_) Support exporting ModelType fields with subclassed model instances
* [Feature] (`#214 <https://github.com/schematics/schematics/pull/214>`_) Create mock objects using a class's fields as a template
* [Bug] (`#215 <https://github.com/schematics/schematics/pull/215>`_) PEP 8 FTW
* [Feature] (`#216 <https://github.com/schematics/schematics/pull/216>`_) Datastructures cleanup
* [Feature] (`#217 <https://github.com/schematics/schematics/pull/217>`_) Models cleanup pt 1
* [Feature] (`#218 <https://github.com/schematics/schematics/pull/218>`_) Models cleanup pt 2
* [Feature] (`#219 <https://github.com/schematics/schematics/pull/219>`_) Mongo cleanup
* [Feature] (`#220 <https://github.com/schematics/schematics/pull/220>`_) Temporal cleanup
* [Feature] (`#221 <https://github.com/schematics/schematics/pull/221>`_) Base cleanup
* [Feature] (`#224 <https://github.com/schematics/schematics/pull/224>`_) Exceptions cleanup
* [Feature] (`#225 <https://github.com/schematics/schematics/pull/225>`_) Validate cleanup
* [Feature] (`#226 <https://github.com/schematics/schematics/pull/226>`_) Serializable cleanup
* [Feature] (`#227 <https://github.com/schematics/schematics/pull/227>`_) Transforms cleanup
* [Feature] (`#228 <https://github.com/schematics/schematics/pull/228>`_) Compound cleanup
* [Feature] (`#229 <https://github.com/schematics/schematics/pull/229>`_) UUID cleanup
* [Feature] (`#231 <https://github.com/schematics/schematics/pull/231>`_) Booleans as numbers
0.9.4 / 2013-12-08
==================
* [Feature] (`#178 <https://github.com/schematics/schematics/pull/178>`_) Added deserialize_from flag to BaseType for alternate field names on import
* [Bug] (`#186 <https://github.com/schematics/schematics/pull/186>`_) Compoundtype support in ListTypes
* [Bug] (`#181 <https://github.com/schematics/schematics/pull/181>`_) Removed that stupid print statement!
* [Feature] (`#182 <https://github.com/schematics/schematics/pull/182>`_) Default roles system
* [Documentation] (`#190 <https://github.com/schematics/schematics/pull/190>`_) Typos
* [Bug] (`#177 <https://github.com/schematics/schematics/pull/177>`_) Removed `__iter__` from ModelMeta
* [Documentation] (`#188 <https://github.com/schematics/schematics/pull/188>`_) Typos
0.9.3 / 2013-10-20
==================
* [Documentation] More improvements
* [Feature] (`#147 <https://github.com/schematics/schematics/pull/147>`_) Complete conversion over to py.test
* [Bug] (`#176 <https://github.com/schematics/schematics/pull/176>`_) Fixed bug preventing clean override of options class
* [Bug] (`#174 <https://github.com/schematics/schematics/pull/174>`_) Python 2.6 support
0.9.2 / 2013-09-13
==================
* [Documentation] New History file!
* [Documentation] Major improvements to documentation
* [Feature] Renamed ``check_value`` to ``validate_range``
* [Feature] Changed ``serialize`` to ``to_native``
* [Bug] (`#155 <https://github.com/schematics/schematics/pull/155>`_) NumberType number range validation bugfix
| 0.72331 | 0.706444 |
About
=====
This is a Fork from the hard work of the maintainers at
https://github.com/schematics/schematics.
Here's a summary of the changes:
+ add support for python 3.10+
+ drop support for python version 3.6, 3.7, and 3.8
+ run black and isort on the code base
+ package with flit, updating to pyproject.toml
+ add development environment setup with nix and package as a nix flake.
+ and that's it!
I don't plan on making any changes to this library aside from maintaining
support for modern Python versions, for as long as it remains a dependency
of projects that I'm involved with, which is unlikely to be forever.
I would recommend planning on porting your validation code
to another validation / serialization library that is actively maintained.
But until then I'll do my best to keep this current with new python
versions. Thank you to the original maintainers for all of their work!
**Project documentation:** https://schematics.readthedocs.io/en/latest/
Schematics is a Python library to combine types into structures, validate them,
and transform the shapes of your data based on simple descriptions.
The internals are similar to ORM type systems, but there is no database layer
in Schematics. Instead, we believe that building a database
layer is made significantly easier when Schematics handles everything but
writing the query.
Further, it can be used for a range of tasks where having a database involved
may not make sense.
Some common use cases:
+ Design and document specific `data structures <https://schematics.readthedocs.io/en/latest/usage/models.html>`_
+ `Convert structures <https://schematics.readthedocs.io/en/latest/usage/exporting.html#converting-data>`_ to and from different formats such as JSON or MsgPack
+ `Validate <https://schematics.readthedocs.io/en/latest/usage/validation.html>`_ API inputs
+ `Remove fields based on access rights <https://schematics.readthedocs.io/en/latest/usage/exporting.html>`_ of some data's recipient
+ Define message formats for communications protocols, like an RPC
+ Custom `persistence layers <https://schematics.readthedocs.io/en/latest/usage/models.html#model-configuration>`_
Example
=======
This is a simple Model.
.. code:: python
>>> from schematics.models import Model
>>> from schematics.types import StringType, URLType
>>> class Person(Model):
... name = StringType(required=True)
... website = URLType()
...
>>> person = Person({'name': u'Joe Strummer',
... 'website': 'http://soundcloud.com/joestrummer'})
>>> person.name
u'Joe Strummer'
Serializing the data to JSON.
.. code:: python
>>> import json
>>> json.dumps(person.to_primitive())
{"name": "Joe Strummer", "website": "http://soundcloud.com/joestrummer"}
Let's try validating without a name value, since it's required.
.. code:: python
>>> person = Person()
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "schematics/models.py", line 231, in validate
raise DataError(e.messages)
schematics.exceptions.DataError: {'name': ['This field is required.']}
Add the field and validation passes.
.. code:: python
>>> person = Person()
>>> person.name = 'Amon Tobin'
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
>>>
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/README.rst
|
README.rst
|
| 0.793306 | 0.597931 |
==========
Schematics
==========
.. rubric:: Python Data Structures for Humans™.
.. image:: https://travis-ci.org/schematics/schematics.svg?branch=development
:target: https://travis-ci.org/schematics/schematics
:alt: Build Status
.. image:: https://coveralls.io/repos/github/schematics/schematics/badge.svg?branch=development
:target: https://coveralls.io/github/schematics/schematics?branch=development
:alt: Coverage
.. toctree::
:hidden:
:maxdepth: 2
:caption: Basics
Overview <self>
basics/install
basics/quickstart
.. contents::
:local:
:depth: 1
**Please note that the documentation is currently somewhat out of date.**
About
=====
Schematics is a Python library to combine types into structures, validate them,
and transform the shapes of your data based on simple descriptions.
The internals are similar to ORM type systems, but there is no database layer
in Schematics. Instead, we believe that building a database
layer is made significantly easier when Schematics handles everything but
writing the query.
Further, it can be used for a range of tasks where having a database involved
may not make sense.
Some common use cases:
+ Design and document specific :ref:`data structures <models>`
+ :ref:`Convert structures <exporting_converting_data>` to and from different formats such as JSON or MsgPack
+ :ref:`Validate <validation>` API inputs
+ :ref:`Remove fields based on access rights <exporting>` of some data's recipient
+ Define message formats for communications protocols, like an RPC
+ Custom :ref:`persistence layers <model_configuration>`
Example
=======
This is a simple Model. ::
>>> from schematics.models import Model
>>> from schematics.types import StringType, URLType
>>> class Person(Model):
... name = StringType(required=True)
... website = URLType()
...
>>> person = Person({'name': u'Joe Strummer',
... 'website': 'http://soundcloud.com/joestrummer'})
>>> person.name
u'Joe Strummer'
Serializing the data to JSON. ::
>>> import json
>>> json.dumps(person.to_primitive())
{"name": "Joe Strummer", "website": "http://soundcloud.com/joestrummer"}
Let's try validating without a name value, since it's required. ::
>>> person = Person()
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "schematics/models.py", line 231, in validate
raise DataError(e.messages)
schematics.exceptions.DataError: {'name': ['This field is required.']}
Add the field and validation passes::
>>> person = Person()
>>> person.name = 'Amon Tobin'
>>> person.website = 'http://www.amontobin.com/'
>>> person.validate()
>>>
Installing
==========
Install stable releases of Schematics with pip. ::
$ pip install schematics
See the :doc:`basics/install` for more detail.
Getting Started
===============
New Schematics users should start with the :doc:`basics/quickstart`. That is the
fastest way to get a look at what Schematics does.
Documentation
=============
Schematics exists to make a few concepts easy to glue together. The types
allow us to describe units of data, models let us put them together into
structures with fields. We can then import data, check if it looks correct,
and easily serialize the results into any format we need.
The User's Guide provides the high-level concepts, but the API documentation and
the code itself provide the most accurate reference.
.. toctree::
:maxdepth: 2
:caption: User's Guide
usage/types
usage/models
usage/exporting
usage/importing
usage/validation
usage/extending
.. toctree::
:maxdepth: 1
:caption: API Reference
schematics.models <api/models>
schematics.validation <api/validation>
schematics.transforms <api/transforms>
schematics.types <api/types>
schematics.contrib <api/contrib>
Development
===========
We welcome ideas and code. We ask that you follow some of our guidelines
though.
See the :doc:`development/development` for more information.
.. toctree::
:hidden:
:caption: Development
development/development
development/community
Testing & Coverage
==================
Run ``coverage`` and check the missing statements. ::
$ coverage run --source schematics -m py.test && coverage report
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/index.rst
|
index.rst
|
| 0.884046 | 0.600716 |
.. _types:
=====
Types
=====
Types are the smallest definition of structure in Schematics. They represent
structure by offering functions to inspect or mutate the data in some way.
According to Schematics, a type is an instance of a way to do three things:
1. Coerce the data type into an appropriate representation in Python
2. Convert the Python representation into other formats suitable for
serialization
3. Offer a precise method of validating data of many forms
These properties are implemented as ``to_native``, ``to_primitive``, and
``validate``.
Coercion
========
A simple example is the ``DateTimeType``.
::
>>> from schematics.types import DateTimeType
>>> dt_t = DateTimeType()
The ``to_native`` function transforms an ISO8601 formatted date string into a
Python ``datetime.datetime``.
::
>>> dt = dt_t.to_native('2013-08-31T02:21:21.486072')
>>> dt
datetime.datetime(2013, 8, 31, 2, 21, 21, 486072)
Conversion
==========
The ``to_primitive`` function changes it back to a language agnostic form, in
this case an ISO8601 formatted string, just like we used above.
::
>>> dt_t.to_primitive(dt)
'2013-08-31T02:21:21.486072'
Validation
==========
Validation can be as simple as successfully calling ``to_native``, but
sometimes more is needed to check data or behavior during typical use, like
serialization.
Let's look at the ``StringType``. We'll set a ``max_length`` of 10.
::
>>> from schematics.types import StringType
>>> st = StringType(max_length=10)
>>> st.to_native('this is longer than 10')
u'this is longer than 10'
It converts to a string just fine. Now, let's attempt to validate it.
::
>>> st.validate('this is longer than 10')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "schematics/types/base.py", line 164, in validate
raise ValidationError(errors)
schematics.exceptions.ValidationError: [u'String value is too long.']
Custom types
============
If the types provided by the schematics library don't meet all of your needs,
you can also create new types. Do so by extending
``schematics.types.BaseType``, and decide which base methods you need to
override.
`to_native`
~~~~~~~~~~~
By default, this method on ``schematics.types.BaseType`` just returns the
primitive value it was given. Override this if you want to convert it to a
specific native value. For example, suppose we are implementing a type that
represents the net-location portion of a URL, which consists of a hostname and
optional port number::
>>> from schematics.types import BaseType
>>> class NetlocType(BaseType):
... def to_native(self, value):
... if ':' in value:
... return tuple(value.split(':', 1))
... return (value, None)
`to_primitive`
~~~~~~~~~~~~~~
By default, this method on ``schematics.types.BaseType`` just returns the
native value it was given. Override this to convert any non-primitive values to
primitive data values. The following types can pass through safely:
* int
* float
* bool
* basestring
* NoneType
* lists or dicts of any of the above or containing other similarly constrained
lists or dicts
To cover values that fall outside of these definitions, define a primitive
conversion::
>>> from schematics.types import BaseType
>>> class NetlocType(BaseType):
... def to_primitive(self, value):
... host, port = value
... if port:
... return u'{0}:{1}'.format(host, port)
... return host
`validate`
~~~~~~~~~~
The base implementation of `validate` runs individual validators defined:
* At type class definition time, as methods named in a specific way
* At instantiation time as arguments to the type's init method.
The second type is explained by ``schematics.types.BaseType``, so we'll focus
on the first option.
Declared validation methods take names of the form
`validate_constraint(self, value)`, where `constraint` is an arbitrary name you
give to the check being performed. If the check fails, then the method should
raise ``schematics.exceptions.ValidationError``::
>>> from schematics.exceptions import ValidationError
>>> from schematics.types import BaseType
>>> class NetlocType(BaseType):
... def validate_netloc(self, value):
... if ':' not in value:
... raise ValidationError('Value must be a valid net location of the form host[:port]')
However, schematics types do provide an organized way to define and manage coded
error messages. By defining a `MESSAGES` dict, you can assign error messages to
your constraint name. The message is then available as
`self.messages['my_constraint']` in validation methods. Subclasses can add
messages for new codes or replace messages for existing codes. However, they
will inherit messages for error codes defined by base classes.
So, to enhance the prior example::
>>> from schematics.exceptions import ValidationError
>>> from schematics.types import BaseType
>>> class NetlocType(BaseType):
... MESSAGES = {
... 'netloc': 'Value must be a valid net location of the form host[:port]'
... }
... def validate_netloc(self, value):
... if ':' not in value:
... raise ValidationError(self.messages['netloc'])
Parameterizing types
~~~~~~~~~~~~~~~~~~~~
There may be times when you want to override `__init__` and parameterize your
type. When you do so, just ensure two things:
* Don't redefine any of the initialization parameters defined for
``schematics.types.BaseType``.
* After defining your specific parameters, ensure that the base parameters are
given to the base init method. The simplest way to ensure this is to accept
`*args` and `**kwargs` and pass them through to the super init method, like
so::
>>> from schematics.types import BaseType
>>> class NetlocType(BaseType):
... def __init__(self, verify_location=False, *args, **kwargs):
... super().__init__(*args, **kwargs)
... self.verify_location = verify_location
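Putting the conversion pieces together, a combined ``NetlocType`` might look
like the sketch below. This is only an illustration that merges the
``to_native`` and ``to_primitive`` overrides shown above; the round-trip calls
mirror the ``DateTimeType`` example from the Coercion section. ::
    >>> from schematics.types import BaseType
    >>> class NetlocType(BaseType):
    ...     def to_native(self, value):
    ...         # coerce 'host[:port]' into a (host, port) tuple
    ...         if ':' in value:
    ...             return tuple(value.split(':', 1))
    ...         return (value, None)
    ...     def to_primitive(self, value):
    ...         # turn the (host, port) tuple back into a plain string
    ...         host, port = value
    ...         if port:
    ...             return u'{0}:{1}'.format(host, port)
    ...         return host
    ...
    >>> netloc_t = NetlocType()
    >>> pair = netloc_t.to_native('example.com:8080')
    >>> pair
    ('example.com', '8080')
    >>> netloc_t.to_primitive(pair)
    'example.com:8080'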
More Information
================
To learn more about **Types**, visit the :ref:`Types API <api_doc_types>`
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/types.rst
|
types.rst
|
| 0.947817 | 0.699889 |
=========
Importing
=========
The general mechanism for data import is to call a function on every field in
the data and coerce it into the most appropriate representation in Python. A
date string, for example, would be converted to a ``datetime.datetime``.
Perhaps we're writing a web API that receives song data. Let's model the song.
::
class Song(Model):
name = StringType()
artist = StringType()
url = URLType()
This is what a successful import of the data looks like.
::
>>> song_json = '{"url": "http://www.youtube.com/watch?v=67KGSJVkix0", "name": "Werewolf", "artist": "Fiona Apple"}'
>>> fiona_song = Song(json.loads(song_json))
>>> fiona_song.url
u'http://www.youtube.com/watch?v=67KGSJVkix0'
Compound Types
==============
We could define a simple collection of songs like this:
::
class Collection(Model):
songs = ListType(ModelType(Song))
Some JSON data for this type of a model might look like this:
::
>>> songs_json = '{"songs": [{"url": "https://www.youtube.com/watch?v=UeBFEanVsp4", "name": "When I Lost My Bet", "artist": "Dillinger Escape Plan"}, {"url": "http://www.youtube.com/watch?v=67KGSJVkix0", "name": "Werewolf", "artist": "Fiona Apple"}]}'
The collection has a list of models for songs, so when we import that list, that
data should be converted to model instances.
::
>>> song_collection = Collection(json.loads(songs_json))
>>> song_collection.songs[0]
<Song: Song object>
>>> song_collection.songs[0].artist
u'Dillinger Escape Plan'
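Since the imported data now lives in model instances, it can be exported
straight back out. Below is a minimal round-trip sketch reusing
``song_collection`` from above; the JSON output is abridged here. ::
    >>> primitive = song_collection.to_primitive()
    >>> primitive['songs'][0]['artist']
    u'Dillinger Escape Plan'
    >>> import json
    >>> json.dumps(primitive)
    '{"songs": [{"url": "https://www.youtube.com/watch?v=UeBFEanVsp4", ...}]}'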
More Information
================
To learn more about **Importing**, visit the :ref:`Transforms API <api_doc_transforms>`
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/importing.rst
|
importing.rst
|
| 0.792062 | 0.566978 |
.. _models:
======
Models
======
Schematics models are the next form of structure above types. They are a
collection of types in a class. When a `Type` is given a name inside a `Model`,
it is called a `field`.
.. _simple_model:
Simple Model
============
Let's say we want to build a social network for weather. At its core, we'll
need a way to represent some temperature information and where that temperature
was found.
::
import datetime
from schematics.models import Model
from schematics.types import StringType, DecimalType, DateTimeType
class WeatherReport(Model):
city = StringType()
temperature = DecimalType()
taken_at = DateTimeType(default=datetime.datetime.now)
That'll do. Let's try using it.
::
>>> wr = WeatherReport({'city': 'NYC', 'temperature': 80})
>>> wr.temperature
Decimal('80.0')
And remember that ``DateTimeType`` we set a default callable for?
::
>>> wr.taken_at
datetime.datetime(2013, 8, 21, 13, 6, 38, 11883)
.. _model_configuration:
Model Configuration
===================
Models offer a few configuration options. Options are attached in the form of a
class.
::
class Whatever(Model):
...
class Options:
option = value
``namespace`` is a namespace identifier that can be used with persistence
layers.
::
class Whatever(Model):
...
class Options:
namespace = "whatever_bucket"
``roles`` is a dictionary that stores whitelists and blacklists.
::
class Whatever(Model):
...
class Options:
roles = {
'public': whitelist('some', 'fields'),
'owner': blacklist('some', 'internal', 'stuff'),
}
``serialize_when_none`` can be ``True`` or ``False``. Its behavior is
explained here: :ref:`exporting_serialize_when_none`.
::
class Whatever(Model):
...
class Options:
serialize_when_none = False
.. _model_mocking:
Model Mocking
=============
Testing typically involves creating lots of fake (but plausible) objects. Good
tests use random values so that multiple tests can run in parallel without
overwriting each other. Great tests exercise many possible valid input values
to make sure the code being tested can deal with various combinations.
Schematics models can help you write great tests by automatically generating
mock objects. Starting with our ``WeatherReport`` model from earlier:
::
class WeatherReport(Model):
city = StringType()
temperature = DecimalType()
taken_at = DateTimeType(default=datetime.datetime.now)
we can ask Schematics to generate a mock object with reasonable values:
::
>>> WeatherReport.get_mock_object().to_primitive()
{'city': u'zLmeEt7OAGOWI', 'temperature': u'8', 'taken_at': '2014-05-06T17:34:56.396280'}
If you've set a constraint on a field that the mock can't satisfy - such as
putting a ``max_length`` on a URL field so that it's too small to hold a
randomly-generated URL - then ``get_mock_object`` will raise a
``MockCreationError`` exception:
::
from schematics.types import URLType
class OverlyStrict(Model):
url = URLType(max_length=11, required=True)
>>> OverlyStrict.get_mock_object()
...
schematics.exceptions.MockCreationError: url: This field is too short to hold the mock data
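Mock objects slot naturally into a test suite. Below is a minimal
pytest-style sketch; it assumes the ``WeatherReport`` fields round-trip
cleanly through ``to_primitive``, which holds for the simple types used
here. ::
    def test_weather_report_mock_roundtrip():
        # get_mock_object builds a random but valid instance
        report = WeatherReport.get_mock_object()
        report.validate()  # a generated mock should validate cleanly
        data = report.to_primitive()
        # re-importing the exported data should reproduce the same primitives
        assert WeatherReport(data).to_primitive() == data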
More Information
================
To learn more about **Models**, visit the :ref:`Models API <api_doc_models>`
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/models.rst
|
models.rst
|
| 0.935043 | 0.513729 |
.. _validation:
==========
Validation
==========
To validate data in Schematics is to have both a data model and some input
data. The data model describes what valid data looks like in different forms.
Here's a quick glance at some of the ways you can tweak validation.
::
>>> from schematics.models import Model
>>> from schematics.types import StringType
>>> class Person(Model):
... name = StringType()
... bio = StringType(required=True)
...
>>> p = Person()
>>> p.name = 'Fiona Apple'
>>> p.validate()
Traceback (most recent call last):
...
ModelValidationError: {'bio': [u'This field is required.']}
Validation Errors
=================
Validation failures throw an exception called ``ValidationError``. A
description of what failed is stored in ``messages``, which is a dictionary
keyed by the field name with a list of reasons the field failed.
::
>>> from schematics.exceptions import ValidationError
>>> try:
... p.validate()
... except ValidationError as e:
...     print(e.messages)
{'bio': [u'This field is required.']}
Extending Validation
====================
Validation for both types and models can be extended. Whatever validation
system you require is probably expressible via Schematics.
Type-level Validation
---------------------
Here is a function that checks if a string is uppercase and throws a
``ValidationError`` if it is not.
::
>>> from schematics.exceptions import ValidationError
>>> def is_uppercase(value):
... if value.upper() != value:
... raise ValidationError(u'Please speak up!')
... return value
...
And we can attach it to our StringType like this:
::
>>> class Person(Model):
... name = StringType(validators=[is_uppercase])
...
Using it is built into validation.
>>> me = Person({'name': u'Jökull'})
>>> me.validate()
Traceback (most recent call last):
...
ModelValidationError: {'name': [u'Please speak up!']}
It is also possible to define new types with custom validation by subclassing a
type, like ``BaseType``, and implementing instance methods that start with
``validate_``.
::
>>> from schematics.exceptions import ValidationError
>>> class UppercaseType(StringType):
... def validate_uppercase(self, value):
... if value.upper() != value:
... raise ValidationError("Value must be uppercase!")
...
Just like before, using it is now built in.
>>> class Person(Model):
... name = UppercaseType()
...
>>> me = Person({'name': u'Jökull'})
>>> me.validate()
Traceback (most recent call last):
...
ModelValidationError: {'name': ['Value must be uppercase!']}
Model-level Validation
----------------------
What about field validation based on other model data? The order in which
fields are declared is preserved inside the model. So if the validity of a field
depends on another field’s value, just make sure to declare it below its
dependencies:
::
>>> from schematics.models import Model
>>> from schematics.types import StringType, BooleanType
>>> from schematics.exceptions import ValidationError
>>>
>>> class Signup(Model):
... name = StringType()
... call_me = BooleanType(default=False)
... def validate_call_me(self, data, value):
... if data['name'] == u'Brad' and data['call_me'] is True:
... raise ValidationError(u'He prefers email.')
... return value
...
>>> Signup({'name': u'Brad'}).validate()
>>> Signup({'name': u'Brad', 'call_me': True}).validate()
Traceback (most recent call last):
...
ModelValidationError: {'call_me': [u'He prefers email.']}
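In application code you usually catch the failure and hand the ``messages``
dict back to the caller instead of letting the exception propagate. Below is a
minimal sketch following the ``except ValidationError`` pattern from earlier;
note that newer releases raise ``DataError`` from ``validate()`` instead, so
the import may need adjusting for your version. ::
    from schematics.exceptions import ValidationError
    def validate_signup(payload):
        """Return (model, errors); errors is None when validation passes."""
        signup = Signup(payload)
        try:
            signup.validate()
        except ValidationError as e:
            # e.messages maps field names to lists of error strings
            return None, e.messages
        return signup, None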
More Information
================
To learn more about **Validation**, visit the :ref:`Validation API <api_doc_validation>`
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/validation.rst
|
validation.rst
|
| 0.844216 | 0.563258 |
.. _exporting:
=========
Exporting
=========
To export data is to go from the Schematics representation of data to some
other form. It's also possible you want to adjust some things along the way,
such as skipping over some fields or providing empty values for missing fields.
The general mechanism for data export is to call a function on every field in
the model. The function probably converts the field's value to some other
format, but you can easily modify it.
We'll use the following model for the examples:
::
from schematics.models import Model
from schematics.types import StringType, DateTimeType
from schematics.transforms import blacklist
class Movie(Model):
name = StringType()
director = StringType()
release_date = DateTimeType()
personal_thoughts = StringType()
class Options:
roles = {'public': blacklist('personal_thoughts')}
.. _exporting_terminology:
Terminology
===========
To `serialize` data is to convert from the way it's represented in Schematics
to some other form. That might be a reduction of the ``Model`` into a
``dict``, but it might also be more complicated.
A field can be serialized if it is an instance of ``BaseType`` or if a function
is wrapped with the ``@serializable`` decorator.
A ``Model`` instance may be serialized with a particular `context`. A context
is a ``dict`` passed through the model to each of its fields. A field may use
values from the context to alter how it is serialized.
.. _exporting_converting_data:
Converting Data
===============
To export data is basically to convert from one form to another. Schematics
can convert data into simple Python types or a language agnostic format. We
refer to the native serialization as `to_native`, but we refer to the language
agnostic format as `primitive`, since it has removed all dependencies on
Python.
.. _exporting_native_types:
Native Types
------------
The fields in a model attempt to use the best Python representation of data
whenever possible. For example, the DateTimeType will use Python's
``datetime.datetime`` module.
You can reduce a model into the native Python types by calling ``to_native``.
>>> trainspotting = Movie()
>>> trainspotting.name = u'Trainspotting'
>>> trainspotting.director = u'Danny Boyle'
>>> trainspotting.release_date = datetime.datetime(1996, 7, 19, 0, 0)
>>> trainspotting.personal_thoughts = 'This movie was great!'
>>> trainspotting.to_native()
{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'release_date': datetime.datetime(1996, 7, 19, 0, 0),
'personal_thoughts': 'This movie was great!'
}
.. _exporting_primitive_types:
Primitive Types
---------------
To present data to clients we have the ``Model.to_primitive`` method. Default
behavior is to output the same data you would need to reproduce the model in its
current state.
::
>>> trainspotting.to_primitive()
{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'release_date': '1996-07-19T00:00:00.000000',
'personal_thoughts': 'This movie was great!'
}
Great. We got the primitive data back. It would be easy to convert to JSON
from here.
>>> import json
>>> json.dumps(trainspotting.to_primitive())
'{
"name": "Trainspotting",
"director": "Danny Boyle",
"release_date": "1996-07-19T00:00:00.000000",
"personal_thoughts": "This movie was great!"
}'
.. _exporting_using_contexts:
Using Contexts
--------------
Sometimes a field needs information about its environment to know how to
serialize itself. For example, the ``MultilingualStringType`` holds several
translations of a phrase:
>>> from schematics.types import MultilingualStringType
>>> class TestModel(Model):
... mls = MultilingualStringType()
...
>>> mls_test = TestModel({'mls': {
... 'en_US': 'Hello, world!',
... 'fr_FR': 'Bonjour tout le monde!',
... 'es_MX': '¡Hola, mundo!',
... }})
In this case, serializing without knowing which localized string to use
wouldn't make sense:
>>> mls_test.to_primitive()
[...]
schematics.exceptions.ConversionError: [u'No default or explicit locales were given.']
Neither does choosing the locale ahead of time, because the same
MultilingualStringType field might be serialized several times with different
locales inside the same method.
However, it could use information in a `context` to return a useful
representation:
>>> mls_test.to_primitive(context={'locale': 'en_US'})
{'mls': 'Hello, world!'}
This allows us to use the same model instance several times with different
contexts:
>>> for user, locale in [('Joe', 'en_US'), ('Sue', 'es_MX')]:
... print('%s says %s' % (user, mls_test.to_primitive(context={'locale': locale})['mls']))
...
Joe says Hello, world!
Sue says ¡Hola, mundo!
.. _exporting_compound_types:
Compound Types
==============
Let's complicate things and observe what happens with data exporting. First,
we'll define a collection which will have a list of ``Movie`` instances.
But first, let's instantiate another movie.
::
>>> total_recall = Movie()
>>> total_recall.name = u'Total Recall'
>>> total_recall.director = u'Paul Verhoeven'
>>> total_recall.release_date = datetime.datetime(1990, 6, 1, 0, 0)
>>> total_recall.personal_thoughts = 'Old classic. Still love it.'
Now, let's define a collection, which has a list of movies in it.
::
from schematics.types.compound import ListType, ModelType
class Collection(Model):
name = StringType()
movies = ListType(ModelType(Movie))
notes = StringType()
class Options:
roles = {'public': blacklist('notes')}
Let's instantiate a collection.
>>> favorites = Collection()
>>> favorites.name = 'My favorites'
>>> favorites.notes = 'These are some of my favorite movies'
>>> favorites.movies = [trainspotting, total_recall]
Here is what happens when we call ``to_primitive()`` on it.
>>> favorites.to_primitive()
{
'notes': 'These are some of my favorite movies',
'name': 'My favorites',
'movies': [{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'personal_thoughts': 'This movie was great!',
'release_date': '1996-07-19T00:00:00.000000'
}, {
'name': u'Total Recall',
'director': u'Paul Verhoeven',
'personal_thoughts': 'Old classic. Still love it.',
'release_date': '1990-06-01T00:00:00.000000'
}]
}
.. _exporting_customizing_output:
Customizing Output
==================
Schematics offers many ways to customize the behavior of serialization:
.. _exporting_roles:
Roles
-----
Roles offer a way to specify whether or not a field should be skipped during
export. There are many reasons this might be desirable, such as access
permissions or to not serialize more data than absolutely necessary.
Roles are implemented as either white lists or black lists where the members of
the list are field names.
::
>>> r = blacklist('private_field', 'another_private_field')
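The ``whitelist`` helper, importable from ``schematics.transforms`` alongside ``blacklist``, works the other way around: only the listed fields are kept and every other field is skipped. A quick sketch:
::
>>> r = whitelist('name', 'director')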
Imagine we are sending our movie instance to a random person on the Internet.
We probably don't want to share our personal thoughts. Recall earlier that we
added a role called ``public`` and gave it a blacklist with
``personal_thoughts`` listed.
::
class Movie(Model):
personal_thoughts = StringType()
...
class Options:
roles = {'public': blacklist('personal_thoughts')}
This is what it looks like to use the role, which should simply remove
``personal_thoughts`` from the export.
::
>>> trainspotting.to_primitive(role='public')
{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'release_date': '1996-07-19T00:00:00.000000'
}
This works for compound types too, such as the list of movies in our
``Collection`` model above.
::
class Collection(Model):
notes = StringType()
...
class Options:
roles = {'public': blacklist('notes')}
We expect the ``personal_thoughts`` field to be removed from the movie data and we
also expect the ``notes`` field to be removed from the collection data.
>>> favorites.to_primitive(role='public')
{
'name': 'My favorites',
'movies': [{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'release_date': '1996-07-19T00:00:00.000000'
}, {
'name': u'Total Recall',
'director': u'Paul Verhoeven',
'release_date': '1990-06-01T00:00:00.000000'
}]
}
If no role is specified, the default behavior is to export all fields. This
behavior can be overridden by specifying a ``default`` role. Renaming
the ``public`` role to ``default`` in the example above yields equivalent
results without having to specify ``role`` in the export function.
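As a sketch, the ``Collection`` options from the example above would then read (with ``Movie`` renamed the same way):
::
class Collection(Model):
    name = StringType()
    movies = ListType(ModelType(Movie))
    notes = StringType()
    class Options:
        roles = {'default': blacklist('notes')}
With that role in place, exporting without a ``role`` argument applies it automatically: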
>>> favorites.to_primitive()
{
'name': 'My favorites',
'movies': [{
'name': u'Trainspotting',
'director': u'Danny Boyle',
'release_date': '1996-07-19T00:00:00.000000'
}, {
'name': u'Total Recall',
'director': u'Paul Verhoeven',
'release_date': '1990-06-01T00:00:00.000000'
}]
}
.. _exporting_serializable:
Serializable
------------
Earlier we mentioned a ``@serializable`` decorator. You can write a function
that will produce a value used during serialization with a field name matching
the function name.
That looks like this:
::
...
from schematics.types.serializable import serializable
class Song(Model):
name = StringType()
artist = StringType()
url = URLType()
@serializable
def id(self):
return u'%s/%s' % (self.artist, self.name)
This is what it looks like to use it.
::
>>> song = Song()
>>> song.artist = 'Fiona Apple'
>>> song.name = 'Werewolf'
>>> song.url = 'http://www.youtube.com/watch?v=67KGSJVkix0'
>>> song.id
'Fiona Apple/Werewolf'
Or here:
::
>>> song.to_native()
{
'id': u'Fiona Apple/Werewolf',
'artist': u'Fiona Apple',
'name': u'Werewolf',
'url': u'http://www.youtube.com/watch?v=67KGSJVkix0',
}
.. _exporting_serialized_name:
Serialized Name
---------------
There are times when you have one name for a field in one place and another
name for it somewhere else. Schematics tries to help you by letting you
customize the field names used during serialization.
That looks like this:
::
class Person(Model):
name = StringType(serialized_name='person_name')
Notice the effect it has on serialization.
::
>>> p = Person()
>>> p.name = 'Ben Weinman'
>>> p.to_native()
{'person_name': u'Ben Weinman'}
.. _exporting_serialize_when_none:
Serialize When None
-------------------
If a value is not required and doesn't have a value, it will serialize with a
None value by default. This can be disabled.
::
>>> song = Song()
>>> song.to_native()
{'url': None, 'name': None, 'artist': None}
You can disable at the field level like this:
::
class Song(Model):
name = StringType(serialize_when_none=False)
artist = StringType()
And this produces the following:
::
>>> s = Song()
>>> s.to_native()
{'artist': None}
Or you can disable it at the class level:
::
class Song(Model):
name = StringType()
artist = StringType()
class Options:
serialize_when_none=False
Using it:
::
>>> s = Song()
>>> s.to_native()
>>>
More Information
================
To learn more about **Exporting**, visit the :ref:`Transforms API <api_doc_transforms>`
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/exporting.rst
.. _extending:
=========
Extending
=========
For most non-trivial cases, the base types may not be enough. Schematics is designed to be flexible, allowing you to extend data types in order to accommodate custom logic.
Simple Example
==============
A simple example is allowing for value transformations.
Say that there is a model that requires email validation. Since emails are case insensitive, it might be helpful to convert the input email to lower case before continuing to validate.
This can be achieved by extending the ``EmailType`` class
::
>>> from schematics.types import EmailType
>>> class LowerCaseEmailType(EmailType):
...
... # override convert method
... def convert(self, value, context=None):
... value = super().convert(value, context)
... return value.lower() # value will be converted to lowercase
Our ``LowerCaseEmailType`` can now be used as an ordinary field.
::
>>> from schematics.models import Model
>>> from schematics.types import StringType
>>> class Person(Model):
... name = StringType()
... bio = StringType(required=True)
... email = LowerCaseEmailType(required=True)
...
>>> p = Person()
>>> p.name = 'Mutoid Man'
>>> p.bio = 'some bio'
>>> p.email = '[email protected]'  # technically correct email, but should be 'cleaned'
>>> p.validate()
>>> p.to_native()
{'bio': 'some bio',
 'email': '[email protected]',  # the email was converted to lowercase
 'name': 'Mutoid Man'}
Taking it a step further
========================
It is also possible that you may have several different kinds of cleaning required.
In such cases, it may not be ideal to subclass a type every time (like the previous example).
We can use the same logic from above and define a ``Type`` that can apply a set of arbitrary
functions.
::
>>> class CleanedStringType(StringType):
... converters = []
...
... def __init__(self, **kwargs):
... """
... This takes in all the inputs as String Type, but takes in an extra
... input called converters.
...
... Converters must be a list of functions, and each of those functions
... must take in exactly 1 value , and return the transformed input
... """
... if 'converters' in kwargs:
... self.converters = kwargs['converters']
... del kwargs['converters']
... super().__init__(**kwargs)
...
... def convert(self, value, context=None):
... value = super().convert(value, context)
... for func in self.converters:
... value = func(value)
... return value # will have a value after going through all the conversions in order
Now that we have defined our new Type, we can use it.
::
>>> from schematics.models import Model
>>> from schematics.types import StringType
>>> class Person(Model):
... name = StringType()
... bio = CleanedStringType(required=True,
... converters = [lambda x: x.upper(),
... lambda x: x.split(" ")[0]]) # convert to uppercase, then split on " " and just take the first of the split
... email = CleanedStringType(required=True, converters=[lambda x: x.lower()]) # same functionality as LowerCaseEmailType
...
>>> p = Person()
>>> p.name = 'Mutoid Man'
>>> p.bio = 'good man'
>>> p.email = '[email protected]'  # technically correct email, but should be 'cleaned'
>>> p.validate()
>>> p.to_native()
{'bio': 'GOOD',  # was converted as we specified
 'email': '[email protected]',  # was converted to lowercase
 'name': 'Mutoid Man'}
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/usage/extending.rst
.. _community:
=========
Community
=========
Schematics was created in Brooklyn, NY by James Dennis. Since then, the code has
been worked on by folks from around the world. If you have ideas, we encourage
you to share them!
Special thanks to `Hacker School <http://hackerschool.com>`_, `Plain Vanilla
<http://www.plainvanilla.is/>`_, `Quantopian <http://quantopian.com>`_, `Apple
<http://apple.com>`_, `Johns Hopkins University <http://jhu.edu>`_, and
everyone who has contributed to Schematics.
Bugs & Features
===============
We track bugs, feature requests, and documentation requests with `Github Issues
<https://github.com/schematics/schematics/issues>`_.
Mailing list
============
We discuss the future of Schematics and upcoming changes in detail on
`schematics-dev <http://groups.google.com/group/schematics-dev>`_.
If you've read the documentation and still haven't found the answer you're
looking for, you should reach out to us here too.
Contributing
============
If you're interested in contributing code or documentation to Schematics,
please visit the :doc:`development` for instructions.
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/development/community.rst
.. _install:
=============
Install Guide
=============
Tagged releases are available from `PyPI <https://pypi.python.org/pypi>`_::
$ pip install schematics
The latest development version can be obtained via git::
$ pip install git+https://github.com/schematics/schematics.git#egg=schematics
Schematics currently supports Python versions 3.9, 3.10, and 3.11.
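A quick sanity check after installing (a sketch; the exact version string will differ)::
$ python -c "import schematics; print(schematics.__version__)"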
.. _install_from_github:
Installing from GitHub
======================
The `canonical repository for Schematics <https://github.com/schematics/schematics>`_ is hosted on GitHub.
Getting a local copy is simple::
$ git clone https://github.com/schematics/schematics.git
If you are planning to contribute, first create your own fork of Schematics on GitHub and clone the fork::
$ git clone https://github.com/YOUR-USERNAME/schematics.git
Then add the main Schematics repository as another remote called *upstream*::
$ git remote add upstream https://github.com/schematics/schematics.git
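Later, upstream changes can be pulled into your fork with::
$ git fetch upstream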
See also :doc:`/development/development`.
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/basics/install.rst
.. _quickstart:
================
Quickstart Guide
================
Working with Schematics begins with modeling the data, so this tutorial will
start there.
After that we will take a quick look at serialization, validation, and what it
means to save this data to a database.
Simple Model
============
Let's say we want to build a structure for storing weather data. At its core,
we'll need a way to represent some temperature information and where that
temperature was recorded.
::
import datetime
from schematics.models import Model
from schematics.types import StringType, DecimalType, DateTimeType
class WeatherReport(Model):
city = StringType()
temperature = DecimalType()
taken_at = DateTimeType(default=datetime.datetime.now)
That'll do.
Here's what it looks like to use it.
::
>>> t1 = WeatherReport({'city': 'NYC', 'temperature': 80})
>>> t2 = WeatherReport({'city': 'NYC', 'temperature': 81})
>>> t3 = WeatherReport({'city': 'NYC', 'temperature': 90})
>>> (t1.temperature + t2.temperature + t3.temperature) / 3
Decimal('83.66666666666666666666666667')
And remember that ``DateTimeType`` we set a default callable for?
::
>>> t1.taken_at
datetime.datetime(2013, 8, 21, 13, 6, 38, 11883)
Validation
==========
Validating data is fundamentally important for many systems.
This is what it looks like when validation succeeds.
::
>>> t1.validate()
>>>
And this is what it looks like when validation fails.
::
>>> t1.taken_at = 'whatever'
>>> t1.validate()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "schematics/models.py", line 229, in validate
raise ModelValidationError(e.messages)
schematics.exceptions.ModelValidationError: {'taken_at': [u'Could not parse whatever. Should be ISO8601.']}
Serialization
=============
Serialization comes in two primary forms. In both cases the data is produced
as a dictionary.
The ``to_primitive()`` function will reduce the native Python types into string
safe formats. For example, the ``DateTimeType`` from above is stored as a
Python ``datetime``, but it will serialize to an ISO8601 format string.
::
>>> t1.to_primitive()
{'city': u'NYC', 'taken_at': '2013-08-21T13:04:19.074808', 'temperature': u'80'}
Converting to JSON is then a simple task.
::
>>> import json
>>> json_str = json.dumps(t1.to_primitive())
>>> json_str
'{"city": "NYC", "taken_at": "2013-08-21T13:04:19.074808", "temperature": "80"}'
Instantiating an instance from JSON is not too different.
::
>>> t1_prime = WeatherReport(json.loads(json_str))
>>> t1_prime.taken_at
datetime.datetime(2013, 8, 21, 13, 4, 19, 74808)
Persistence
===========
In many cases, persistence can be as easy as converting the model to a
dictionary and passing that into a query.
First, to get at the values we'd pass into a SQL database, we might call
``to_native()``.
Let's get a fresh ``WeatherReport`` instance.
::
>>> wr = WeatherReport({'city': 'NYC', 'temperature': 80})
>>> wr.to_native()
{'city': u'NYC', 'taken_at': datetime.datetime(2013, 8, 27, 0, 25, 53, 185279), 'temperature': Decimal('80')}
With PostgreSQL
---------------
You'll want to create a table with this query:
.. code:: sql
CREATE TABLE weatherreports(
city varchar,
taken_at timestamp,
temperature decimal
);
Inserting
~~~~~~~~~
Then, from Python, an insert statement could look like this:
::
>>> query = "INSERT INTO weatherreports (city, taken_at, temperature) VALUES (%s, %s, %s);"
>>> params = (wr.city, wr.taken_at, wr.temperature)
Let's insert that into PostgreSQL using the ``psycopg2`` driver.
::
>>> import psycopg2
>>> db_conn = psycopg2.connect("host='localhost' dbname='mydb'")
>>> cursor = db_conn.cursor()
>>> cursor.execute(query, params)
>>> db_conn.commit()
Reading
~~~~~~~
Reading isn't much different.
::
>>> query = "SELECT city,taken_at,temperature FROM weatherreports;"
>>> cursor = db_conn.cursor()
>>> cursor.execute(query)
>>> rows = cursor.fetchall()
Now to translate that data into instances
::
>>> instances = list()
>>> for row in rows:
... (city, taken_at, temperature) = row
... instance = WeatherReport()
... instance.city = city
... instance.taken_at = taken_at
... instance.temperature = temperature
... instances.append(instance)
...
>>> instances
[<WeatherReport: WeatherReport object>]
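Once reconstructed, these instances behave like any other model, so they can be validated or exported again. A small sketch:
::
>>> instances[0].validate()
>>> instances[0].to_primitive()  # values will reflect the stored row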
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/docs/basics/quickstart.rst
# Linkify
`Linkify` is a small app similar to `reddit`/`hackernews`, without comments
and voting features. The app demonstrates how to use `Schematics` with
`Django` to create APIs.
## Installation
- Create a new virtualenv for `python3`.
- Then `pip install -r requirements`.
- `./manage.py runserver`.
- To run tests, `./test`
## Endpoints
- `/links/` -> List all links (`GET`).
- `/links/` -> Create a link (`POST`).
- `/links/<id>/` -> Read details of a link (`GET`).
- `/links/<id>/` -> Update a link (`PATCH`).
- `/links/<id>/` -> Delete a link (`DELETE`).
## Examples
```python
# Create a new link
In [96]: data = {
'title': 'Brubeck',
'url': 'https://github.com/j2labs/brubeck',
'tags':['Web Framework', 'Python']
}
In [97]: r = requests.post('http://127.0.0.1:8000/links/', json=data)
In [98]: r.status_code
Out[98]: 201
In [99]: r.json()
Out[99]:
{'id': 1,
'tags': [{'id': 1, 'title': 'Web Framework'}, {'id': 2, 'title': 'Python'}],
'title': 'Brubeck',
'url': 'https://github.com/j2labs/brubeck'}
# Read a link
In [105]: requests.get("http://localhost:8000/links/1/").json()
Out[105]:
{'id': 1,
'tags': [{'id': 1, 'title': 'Web Framework'}, {'id': 2, 'title': 'Python'}],
'title': 'Brubeck',
'url': 'https://github.com/j2labs/brubeck'}
# List all links
In [106]: requests.get("http://localhost:8000/links/").json()
Out[106]:
{'items': [{'id': 1,
'tags': [{'id': 1, 'title': 'Web Framework'}, {'id': 2, 'title': 'Python'}],
'title': 'Brubeck',
'url': 'https://github.com/j2labs/brubeck'}],
'total': 1}
# Update a link
In [107]: update_data = {'title': 'Django', 'url': 'https://github.com/django/django'}
In [110]: r = requests.patch("http://localhost:8000/links/1/", json=update_data)
In [111]: r.status_code
Out[111]: 202
In [112]: r.json()
Out[112]:
{'id': 1,
'tags': [{'id': 1, 'title': 'Web Framework'}, {'id': 2, 'title': 'Python'}],
'title': 'Django',
'url': 'https://github.com/django/django'}
# Delete a link
In [113]: r = requests.delete("http://localhost:8000/links/1/")
In [114]: r.status_code
Out[114]: 204
```
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/examples/django/README.md
import json
from django.http import Http404, JsonResponse
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt
from django.views.generic import View
from schematics.exceptions import ModelConversionError, ModelValidationError
from .models import Link, Tag
from .serializers import LinkCreateSerializer, LinkReadSerializer, LinkUpdateSerializer
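# Mixin that wraps dispatch() with Django's csrf_exempt decorator, disabling
# CSRF checks for every HTTP method handled by the view.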
class CSRFExemptMixin(View):
@method_decorator(csrf_exempt)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
class LinkListView(CSRFExemptMixin):
http_method_names = ["post", "get"]
def post(self, request):
data = json.loads(request.body.decode())
try:
link = LinkCreateSerializer(raw_data=data)
link.validate()
kwargs = link.to_native()
# Pop tags since objects will be created separately
tags = kwargs.pop("tags", None)
# Persist the data
link_obj = Link.objects.create(**kwargs)
tag_collection = [Tag.objects.get_or_create(title=tag)[0] for tag in tags]
link_obj.attach_tags(tag_collection)
# Prepare for response
return_data = LinkReadSerializer(link_obj.to_dict()).to_native()
return JsonResponse(data=return_data, status=201)
except (ModelValidationError, ModelConversionError) as e:
return JsonResponse(e.messages, status=400)
def get(self, request):
# TODO: Add pagination
links = Link.objects.all()
items = [LinkReadSerializer(link.to_dict()).to_native() for link in links]
data = {"items": items, "total": len(links)}
return JsonResponse(data=data)
class LinkDetailView(CSRFExemptMixin):
http_method_names = ["delete", "get", "patch"]
def get_or_404(self, pk):
try:
return Link.objects.get(pk=pk)
except Link.DoesNotExist:
raise Http404("Link doesn't exist")
def get(self, request, pk):
link = self.get_or_404(pk=pk)
return_data = LinkReadSerializer(link.to_dict()).to_native()
return JsonResponse(return_data)
def delete(self, request, pk):
link = self.get_or_404(pk=pk)
link.delete()
return JsonResponse(data={}, status=204)
def patch(self, request, pk):
data = json.loads(request.body.decode())
try:
link = LinkUpdateSerializer(raw_data=data)
kwargs = link.to_native()
# We need to make two db calls any way to return Response
link_obj = self.get_or_404(pk=pk)
link_obj.update(**kwargs)
# Prepare response
data = LinkReadSerializer(link_obj.to_dict()).to_native()
return JsonResponse(data=data, status=202)
except ModelConversionError as e:
# this is raised when fields are missing
msg = "PATCH not allowed."
data = {field: msg for field, value in e.messages.items()}
return JsonResponse(data=data, status=400)
except ModelValidationError as e:
return JsonResponse(e.messages, status=400)
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/examples/django/linkify/links/views.py
import functools
import inspect
from typing import List
from .datastructures import Context
from .exceptions import DataError, FieldError
from .iteration import atoms
from .transforms import import_loop, validation_converter
from .undefined import Undefined
__all__: List[str] = []
def schema_from(obj):
try:
return obj._schema
except AttributeError:
return obj
def validate(
schema,
mutable,
raw_data=None,
trusted_data=None,
partial=False,
strict=False,
convert=True,
context=None,
**kwargs
):
"""
Validate some untrusted data using a model. Trusted data can be passed in
the `trusted_data` parameter.
:param schema:
The Schema to use as source for validation.
:param mutable:
A mapping or instance that can be changed during validation by Schema
functions.
:param raw_data:
A mapping or instance containing new data to be validated.
:param partial:
Allow partial data to validate; useful for PATCH requests.
Essentially drops the ``required=True`` arguments from field
definitions. Default: False
:param strict:
Complain about unrecognized keys. Default: False
:param trusted_data:
A ``dict``-like structure that may contain already validated data.
:param convert:
Controls whether to perform import conversion before validating.
Can be turned off to skip an unnecessary conversion step if all values
are known to have the right datatypes (e.g., when validating immediately
after the initial import). Default: True
:returns: data
``dict`` containing the valid raw_data plus ``trusted_data``.
If errors are found, they are raised as a ``DataError`` with a list
of errors attached.
"""
if raw_data is None:
raw_data = mutable
context = context or get_validation_context(
partial=partial, strict=strict, convert=convert
)
errors = {}
try:
data = import_loop(
schema,
mutable,
raw_data,
trusted_data=trusted_data,
context=context,
**kwargs
)
except DataError as exc:
errors = dict(exc.errors)
data = exc.partial_data
errors.update(_validate_model(schema, mutable, data, context))
if errors:
raise DataError(errors, data)
return data
def _validate_model(schema, mutable, data, context):
"""
Validate data using model level methods.
:param schema:
The Schema to validate ``data`` against.
:param mutable:
A mapping or instance that will be passed to the validator containing
the original data and that can be mutated.
:param data:
A dict with data to validate. Invalid items are removed from it.
:returns:
Errors of the fields that did not pass validation.
"""
errors = {}
invalid_fields = []
def has_validator(atom):
return (
atom.value is not Undefined and atom.name in schema_from(schema).validators
)
for field_name, field, value in atoms(schema, data, filter=has_validator):
try:
schema_from(schema).validators[field_name](mutable, data, value, context)
except (FieldError, DataError) as exc:
serialized_field_name = field.serialized_name or field_name
errors[serialized_field_name] = exc.errors
invalid_fields.append(field_name)
for field_name in invalid_fields:
data.pop(field_name)
return errors
def get_validation_context(**options):
validation_options = {
"field_converter": validation_converter,
"partial": False,
"strict": False,
"convert": True,
"validate": True,
"new": False,
}
validation_options.update(options)
return Context(**validation_options)
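# Wrap validators that declare fewer parameters than the framework passes,
# dropping the trailing context argument when calling them.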
def prepare_validator(func, argcount):
if isinstance(func, classmethod):
func = func.__get__(object).__func__
func_args = inspect.getfullargspec(func).args
if len(func_args) < argcount:
@functools.wraps(func)
def newfunc(*args, **kwargs):
sentinel = object()
if not kwargs or kwargs.pop("context", sentinel) is sentinel:
args = args[:-1]
return func(*args, **kwargs)
return newfunc
return func
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/validate.py
from typing import (
TYPE_CHECKING,
Any,
Callable,
Iterable,
Mapping,
NamedTuple,
Optional,
Tuple,
)
from .undefined import Undefined
if TYPE_CHECKING:
from schematics.schema import Schema
class Atom(NamedTuple):
name: Optional[str] = None
field: Optional[str] = None
value: Any = None
def schema_from(obj):
try:
return obj._schema
except AttributeError:
return obj
def atoms(
schema: "Schema",
mapping: Mapping,
keys: Tuple[str, ...] = tuple(Atom._fields),
filter: Callable[[Atom], bool] = None,
) -> Iterable[Atom]:
"""
Iterator for the atomic components of a model definition and relevant
data that creates a 3-tuple of the field's name, its type instance and
its value.
:type schema: schematics.schema.Schema
:param schema:
The Schema definition.
:type mapping: Mapping
:param mapping:
The structure where fields from schema are mapped to values. The only
expectation for this structure is that it implements a ``Mapping``
interface.
:type keys: Tuple[str, str, str]
:param keys:
Tuple specifying the output of the iterator. Valid keys are:
`name`: the field name
`field`: the field descriptor object
`value`: the current value set on the field
Specifying invalid keys will raise an exception.
:type filter: Optional[Callable[[Atom], bool]]
:param filter:
Function to filter out atoms from the iteration.
:rtype: Iterable[Atom]
"""
if not set(keys).issubset(Atom._fields):
raise TypeError("invalid key specified")
has_name = "name" in keys
has_field = "field" in keys
has_value = (mapping is not None) and ("value" in keys)
for field_name, field in schema_from(schema).fields.items():
value = Undefined
if has_value:
try:
value = mapping[field_name]
except Exception:
value = Undefined
atom_tuple = Atom(
name=field_name if has_name else None,
field=field if has_field else None,
value=value,
)
if filter is None:
yield atom_tuple
elif filter(atom_tuple):
yield atom_tuple
class atom_filter:
"""Group for the default filter functions."""
@staticmethod
def has_setter(atom):
return getattr(atom.field, "fset", None) is not None
@staticmethod
def not_setter(atom):
return not atom_filter.has_setter(atom)
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/iteration.py
from collections.abc import Set
class Role(Set):
"""
A ``Role`` object can be used to filter specific fields against a sequence.
The ``Role`` contains two things: a set of names and a function.
The function describes how to filter, taking a field name as input and then
returning ``True`` or ``False`` to indicate that field should or should not
be skipped.
A ``Role`` can be operated on as a ``Set`` object representing the fields
it has an opinion on. When Roles are combined with other roles, only the
filtering behavior of the first role is used.
"""
def __init__(self, function, fields):
self.function = function
self.fields = set(fields)
def _from_iterable(self, iterable):
return Role(self.function, iterable)
def __contains__(self, value):
return value in self.fields
def __iter__(self):
return iter(self.fields)
def __len__(self):
return len(self.fields)
def __eq__(self, other):
return (
self.function.__name__ == other.function.__name__
and self.fields == other.fields
)
def __str__(self):
fields = ", ".join(f"'{f}'" for f in self.fields)
return f"{self.function.__name__}({fields})"
def __repr__(self):
return f"<Role {self}>"
# edit role fields
def __add__(self, other):
fields = self.fields.union(other)
return self._from_iterable(fields)
def __sub__(self, other):
fields = self.fields.difference(other)
return self._from_iterable(fields)
# apply role to field
def __call__(self, name, value):
return self.function(name, value, self.fields)
# static filter functions
@staticmethod
def wholelist(name, value, seq):
"""
Accepts a field name, value, and a field list. This function
implements acceptance of all fields by never requesting a field be
skipped, thus returns False for all input.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
return False
@staticmethod
def whitelist(name, value, seq):
"""
Implements the behavior of a whitelist by requesting a field be skipped
whenever its name is not in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name not in seq
return True
@staticmethod
def blacklist(name, value, seq):
"""
Implements the behavior of a blacklist by requesting a field be skipped
whenever its name is found in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name in seq
return False
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/role.py
|
role.py
|
from collections.abc import Set
class Role(Set):
"""
A ``Role`` object can be used to filter specific fields against a sequence.
The ``Role`` contains two things: a set of names and a function.
The function describes how to filter, taking a field name as input and then
returning ``True`` or ``False`` to indicate that field should or should not
be skipped.
A ``Role`` can be operated on as a ``Set`` object representing the fields
it has an opinion on. When Roles are combined with other roles, only the
filtering behavior of the first role is used.
"""
def __init__(self, function, fields):
self.function = function
self.fields = set(fields)
def _from_iterable(self, iterable):
return Role(self.function, iterable)
def __contains__(self, value):
return value in self.fields
def __iter__(self):
return iter(self.fields)
def __len__(self):
return len(self.fields)
def __eq__(self, other):
return (
self.function.__name__ == other.function.__name__
and self.fields == other.fields
)
def __str__(self):
fields = ", ".join(f"'{f}'" for f in self.fields)
return f"{self.function.__name__}({fields})"
def __repr__(self):
return f"<Role {self}>"
# edit role fields
def __add__(self, other):
fields = self.fields.union(other)
return self._from_iterable(fields)
def __sub__(self, other):
fields = self.fields.difference(other)
return self._from_iterable(fields)
# apply role to field
def __call__(self, name, value):
return self.function(name, value, self.fields)
# static filter functions
@staticmethod
def wholelist(name, value, seq):
"""
Accepts a field name, value, and a field list. This function
implements acceptance of all fields by never requesting a field be
skipped, thus returns False for all input.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
return False
@staticmethod
def whitelist(name, value, seq):
"""
Implements the behavior of a whitelist by requesting a field be skipped
whenever its name is not in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name not in seq
return True
@staticmethod
def blacklist(name, value, seq):
"""
Implements the behavior of a blacklist by requesting a field be skipped
whenever its name is found in the list of fields.
:param name:
The field name to inspect.
:param value:
The field's value.
:param seq:
The list of fields associated with the ``Role``.
"""
if seq is not None and len(seq) > 0:
return name in seq
return False
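# ---------------------------------------------------------------------------
# Illustrative usage sketch -- not part of the original module.  It builds two
# roles over a handful of field names and shows how the whitelist/blacklist
# filters decide whether a field should be skipped (``True`` means "skip").
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    public = Role(Role.whitelist, ["id", "name"])
    internal = Role(Role.blacklist, ["password"])
    assert public("id", 1) is False           # whitelisted -> keep
    assert public("password", "x") is True    # not whitelisted -> skip
    assert internal("password", "x") is True  # blacklisted -> skip
    assert internal("id", 1) is False         # not blacklisted -> keep
    # Roles support set-style editing of their field names.
    assert (public + ["email"]) == Role(Role.whitelist, ["id", "name", "email"])
    assert (public - ["name"]) == Role(Role.whitelist, ["id"])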
| 0.929696 | 0.73456 |
from collections.abc import Mapping, Sequence
from typing import List
__all__: List[str] = []
class DataObject:
"""
An object for holding data as attributes.
``DataObject`` can be instantiated like ``dict``::
>>> d = DataObject({'one': 1, 'two': 2}, three=3)
>>> d.__dict__
{'one': 1, 'two': 2, 'three': 3}
Attributes are accessible via the regular dot notation (``d.x``) as well as
the subscription syntax (``d['x']``)::
>>> d.one == d['one'] == 1
True
To convert a ``DataObject`` into a dictionary, use ``d._to_dict()``.
``DataObject`` implements the following collection-like operations:
* iteration through attributes as name-value pairs
* ``'x' in d`` for membership tests
* ``len(d)`` to get the number of attributes
    Additionally, the following methods are equivalent to their ``dict`` counterparts:
``_clear``, ``_get``, ``_keys``, ``_items``, ``_pop``, ``_setdefault``, ``_update``.
    An advantage of ``DataObject`` over ``dict`` subclasses is that every method name
in ``DataObject`` begins with an underscore, so attributes like ``"update"`` or
``"values"`` are valid.
"""
def __init__(self, *args, **kwargs):
source = args[0] if args else {}
self._update(source, **kwargs)
def __repr__(self):
return f"{self.__class__.__name__}({self.__dict__!r})"
def _copy(self):
return self.__class__(self)
__copy__ = _copy
def __eq__(self, other):
return isinstance(other, DataObject) and self.__dict__ == other.__dict__
def __iter__(self):
return iter(self.__dict__.items())
def _update(self, source=None, **kwargs):
if isinstance(source, DataObject):
source = source.__dict__
self.__dict__.update(source, **kwargs)
def _setdefaults(self, source):
if isinstance(source, dict):
source = source.items()
for name, value in source:
self._setdefault(name, value)
return self
def _to_dict(self):
d = dict(self.__dict__)
for k, v in d.items():
if isinstance(v, DataObject):
d[k] = v._to_dict()
return d
def __setitem__(self, key, value):
self.__dict__[key] = value
def __getitem__(self, key):
return self.__dict__[key]
def __delitem__(self, key):
del self.__dict__[key]
def __len__(self):
return len(self.__dict__)
def __contains__(self, key):
return key in self.__dict__
def _clear(self):
return self.__dict__.clear()
def _get(self, *args):
return self.__dict__.get(*args)
def _items(self):
return self.__dict__.items()
def _keys(self):
return self.__dict__.keys()
def _pop(self, *args):
return self.__dict__.pop(*args)
def _setdefault(self, *args):
return self.__dict__.setdefault(*args)
class Context(DataObject):
_fields = ()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self._fields:
unknowns = [name for name in self._keys() if name not in self._fields]
if unknowns:
raise ValueError(f"Unexpected field names: {unknowns!r}")
@classmethod
def _new(cls, *args, **kwargs):
if len(args) > len(cls._fields):
raise TypeError("Too many positional arguments")
return cls(zip(cls._fields, args), **kwargs)
@classmethod
def _make(cls, obj):
if obj is None:
return cls()
elif isinstance(obj, cls):
return obj
else:
return cls(obj)
def __setattr__(self, name, value):
if name in self:
raise TypeError(f"Field '{name}' already set")
super().__setattr__(name, value)
def _branch(self, **kwargs):
if not kwargs:
return self
items = dict(
((k, v) for k, v in kwargs.items() if v is not None and v != self[k])
)
if items:
return self.__class__(self, **items)
else:
return self
def _setdefaults(self, source):
if not isinstance(source, dict):
source = source.__dict__
new_values = source.copy()
new_values.update(self.__dict__)
self.__dict__.update(new_values)
return self
def __bool__(self):
return True
__nonzero__ = __bool__
class FrozenDict(Mapping):
def __init__(self, value):
self._value = dict(value)
def __getitem__(self, key):
return self._value[key]
def __iter__(self):
return iter(self._value)
def __len__(self):
return len(self._value)
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = 0
for k, v in self._value.items():
self._hash ^= hash(k)
self._hash ^= hash(v)
return self._hash
def __repr__(self):
return repr(self._value)
def __str__(self):
return str(self._value)
class FrozenList(Sequence):
def __init__(self, value):
self._list = list(value)
def __getitem__(self, index):
return self._list[index]
def __len__(self):
return len(self._list)
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = 0
for e in self._list:
self._hash ^= hash(e)
return self._hash
def __repr__(self):
return repr(self._list)
def __str__(self):
return str(self._list)
def __eq__(self, other):
if len(self) != len(other):
return False
for i in range(len(self)):
if self[i] != other[i]:
return False
return True
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/datastructures.py
|
datastructures.py
|
from collections.abc import Mapping, Sequence
from typing import List
__all__: List[str] = []
class DataObject:
"""
An object for holding data as attributes.
``DataObject`` can be instantiated like ``dict``::
>>> d = DataObject({'one': 1, 'two': 2}, three=3)
>>> d.__dict__
{'one': 1, 'two': 2, 'three': 3}
Attributes are accessible via the regular dot notation (``d.x``) as well as
the subscription syntax (``d['x']``)::
>>> d.one == d['one'] == 1
True
To convert a ``DataObject`` into a dictionary, use ``d._to_dict()``.
``DataObject`` implements the following collection-like operations:
* iteration through attributes as name-value pairs
* ``'x' in d`` for membership tests
* ``len(d)`` to get the number of attributes
    Additionally, the following methods are equivalent to their ``dict`` counterparts:
``_clear``, ``_get``, ``_keys``, ``_items``, ``_pop``, ``_setdefault``, ``_update``.
    An advantage of ``DataObject`` over ``dict`` subclasses is that every method name
in ``DataObject`` begins with an underscore, so attributes like ``"update"`` or
``"values"`` are valid.
"""
def __init__(self, *args, **kwargs):
source = args[0] if args else {}
self._update(source, **kwargs)
def __repr__(self):
return f"{self.__class__.__name__}({self.__dict__!r})"
def _copy(self):
return self.__class__(self)
__copy__ = _copy
def __eq__(self, other):
return isinstance(other, DataObject) and self.__dict__ == other.__dict__
def __iter__(self):
return iter(self.__dict__.items())
def _update(self, source=None, **kwargs):
if isinstance(source, DataObject):
source = source.__dict__
self.__dict__.update(source, **kwargs)
def _setdefaults(self, source):
if isinstance(source, dict):
source = source.items()
for name, value in source:
self._setdefault(name, value)
return self
def _to_dict(self):
d = dict(self.__dict__)
for k, v in d.items():
if isinstance(v, DataObject):
d[k] = v._to_dict()
return d
def __setitem__(self, key, value):
self.__dict__[key] = value
def __getitem__(self, key):
return self.__dict__[key]
def __delitem__(self, key):
del self.__dict__[key]
def __len__(self):
return len(self.__dict__)
def __contains__(self, key):
return key in self.__dict__
def _clear(self):
return self.__dict__.clear()
def _get(self, *args):
return self.__dict__.get(*args)
def _items(self):
return self.__dict__.items()
def _keys(self):
return self.__dict__.keys()
def _pop(self, *args):
return self.__dict__.pop(*args)
def _setdefault(self, *args):
return self.__dict__.setdefault(*args)
class Context(DataObject):
_fields = ()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self._fields:
unknowns = [name for name in self._keys() if name not in self._fields]
if unknowns:
raise ValueError(f"Unexpected field names: {unknowns!r}")
@classmethod
def _new(cls, *args, **kwargs):
if len(args) > len(cls._fields):
raise TypeError("Too many positional arguments")
return cls(zip(cls._fields, args), **kwargs)
@classmethod
def _make(cls, obj):
if obj is None:
return cls()
elif isinstance(obj, cls):
return obj
else:
return cls(obj)
def __setattr__(self, name, value):
if name in self:
raise TypeError(f"Field '{name}' already set")
super().__setattr__(name, value)
def _branch(self, **kwargs):
if not kwargs:
return self
items = dict(
((k, v) for k, v in kwargs.items() if v is not None and v != self[k])
)
if items:
return self.__class__(self, **items)
else:
return self
def _setdefaults(self, source):
if not isinstance(source, dict):
source = source.__dict__
new_values = source.copy()
new_values.update(self.__dict__)
self.__dict__.update(new_values)
return self
def __bool__(self):
return True
__nonzero__ = __bool__
class FrozenDict(Mapping):
def __init__(self, value):
self._value = dict(value)
def __getitem__(self, key):
return self._value[key]
def __iter__(self):
return iter(self._value)
def __len__(self):
return len(self._value)
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = 0
for k, v in self._value.items():
self._hash ^= hash(k)
self._hash ^= hash(v)
return self._hash
def __repr__(self):
return repr(self._value)
def __str__(self):
return str(self._value)
class FrozenList(Sequence):
def __init__(self, value):
self._list = list(value)
def __getitem__(self, index):
return self._list[index]
def __len__(self):
return len(self._list)
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = 0
for e in self._list:
self._hash ^= hash(e)
return self._hash
def __repr__(self):
return repr(self._list)
def __str__(self):
return str(self._list)
def __eq__(self, other):
if len(self) != len(other):
return False
for i in range(len(self)):
if self[i] != other[i]:
return False
return True
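# ---------------------------------------------------------------------------
# Illustrative usage sketch -- not part of the original module.  ``DataObject``
# behaves like an attribute-addressable dict, ``Context`` adds a fixed field
# list plus branching, and the frozen containers hash by content.
# ``DemoContext`` is a hypothetical subclass used only for this example.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    d = DataObject({"one": 1}, two=2)
    assert d.one == d["one"] == 1
    assert d._to_dict() == {"one": 1, "two": 2}
    class DemoContext(Context):
        _fields = ("app_data", "strict")
    ctx = DemoContext._new({"x": 1}, True)   # positional args map onto _fields
    assert ctx.app_data == {"x": 1} and ctx.strict is True
    branched = ctx._branch(strict=False)     # new context with one value overridden
    assert branched is not ctx and branched.strict is False
    # Frozen containers hash by their contents, so they can be used as dict keys.
    cache = {FrozenList([1, 2]): "a", FrozenDict({"k": 1}): "b"}
    assert cache[FrozenList([1, 2])] == "a"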
| 0.909872 | 0.480662 |
import inspect
from collections import ChainMap, OrderedDict
from copy import deepcopy
from types import FunctionType, MappingProxyType
from typing import List
from . import schema
from .datastructures import Context
from .exceptions import (
DataError,
MockCreationError,
UndefinedValueError,
UnknownFieldError,
)
from .iteration import atoms
from .transforms import convert, export_loop, to_native, to_primitive
from .types.base import BaseType
from .types.serializable import Serializable
from .undefined import Undefined
from .util import get_ident
from .validate import prepare_validator, validate
__all__: List[str] = []
class FieldDescriptor:
"""
``FieldDescriptor`` instances serve as field accessors on models.
"""
def __init__(self, name):
"""
:param name:
The field's name
"""
self.name = name
def __get__(self, instance, cls):
"""
For a model instance, returns the field's current value.
For a model class, returns the field's type object.
"""
if instance is None:
return cls._schema.fields[self.name]
value = instance._data.get(self.name, Undefined)
if value is Undefined:
raise UndefinedValueError(instance, self.name)
return value
def __set__(self, instance, value):
"""
Sets the field's value.
"""
field = instance._schema.fields[self.name]
value = field.pre_setattr(value)
instance._data.converted[self.name] = value
def __delete__(self, instance):
"""
Deletes the field's value.
"""
del instance._data[self.name]
class ModelMeta(type):
"""
Metaclass for Models.
"""
def __new__(mcs, name, bases, attrs):
"""
        This metaclass parses the declarative Model into a corresponding Schema,
        then adds it as the `_schema` attribute to the host class.
"""
# Structures used to accumulate meta info
fields = OrderedDict()
validator_functions = {} # Model level
options_members = {}
        # Accumulate meta info from parent classes
for base in reversed(bases):
if hasattr(base, "_schema"):
fields.update(deepcopy(base._schema.fields))
options_members.update(dict(base._schema.options))
validator_functions.update(base._schema.validators)
# Parse this class's attributes into schema structures
for key, value in attrs.items():
if key.startswith("validate_") and isinstance(
value, (FunctionType, classmethod)
):
validator_functions[key[9:]] = prepare_validator(value, 4)
if isinstance(value, BaseType):
fields[key] = value
elif isinstance(value, Serializable):
fields[key] = value
# Convert declared fields into descriptors for new class
fields = OrderedDict(
sorted(
(kv for kv in fields.items()),
key=lambda i: i[1]._position_hint,
)
)
for key, field in fields.items():
if isinstance(field, BaseType):
attrs[key] = FieldDescriptor(key)
elif isinstance(field, Serializable):
attrs[key] = field
klass = type.__new__(mcs, name, bases, attrs)
# Parse schema options
options = mcs._read_options(name, bases, attrs, options_members)
# Parse meta data into new schema
klass._schema = schema.Schema(
name,
model=klass,
options=options,
validators=validator_functions,
*(schema.Field(k, t) for k, t in fields.items()),
)
return klass
@classmethod
def _read_options(mcs, name, bases, attrs, options_members):
"""
Parses model `Options` class into a `SchemaOptions` instance.
"""
options_class = attrs.get("__optionsclass__", schema.SchemaOptions)
if "Options" in attrs:
for key, value in inspect.getmembers(attrs["Options"]):
if key.startswith("__"):
continue
if key.startswith("_"):
extras = options_members.get("extras", {}).copy()
extras.update({key: value})
options_members["extras"] = extras
elif key == "roles":
roles = options_members.get("roles", {}).copy()
roles.update(value)
options_members[key] = roles
else:
options_members[key] = value
return options_class(**options_members)
class ModelDict(ChainMap):
__slots__ = ["_unsafe", "_converted", "__valid", "_valid"]
def __init__(self, unsafe=None, converted=None, valid=None):
self._unsafe = unsafe if unsafe is not None else {}
self._converted = converted if converted is not None else {}
self.__valid = valid if valid is not None else {}
self._valid = MappingProxyType(self.__valid)
super().__init__(self._unsafe, self._converted, self._valid)
@property
def unsafe(self):
return self._unsafe
@unsafe.setter
def unsafe(self, value):
self._unsafe = value
self.maps[0] = self._unsafe
@property
def converted(self):
return self._converted
@converted.setter
def converted(self, value):
self._converted = value
self.maps[1] = self._converted
@property
def valid(self):
return self._valid
@valid.setter
def valid(self, value):
self._valid = MappingProxyType(value)
self.maps[2] = self._valid
def __delitem__(self, key):
did_delete = False
for data in [self.__valid, self._converted, self._unsafe]:
try:
del data[key]
did_delete = True
except KeyError:
pass
if not did_delete:
raise KeyError(key)
def __repr__(self):
return repr(dict(self))
class Model(metaclass=ModelMeta):
"""
Enclosure for fields and validation. Same pattern deployed by Django
models, SQLAlchemy declarative extension and other developer friendly
libraries.
:param Mapping raw_data:
The data to be imported into the model instance.
:param Mapping deserialize_mapping:
Can be used to provide alternative input names for fields. Values may be
strings or lists of strings, keyed by the actual field name.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: True
:param bool strict:
Complain about unrecognized keys. Default: True
"""
def __init__(
self,
raw_data=None,
trusted_data=None,
deserialize_mapping=None,
init=True,
partial=True,
strict=True,
validate=False,
app_data=None,
lazy=False,
**kwargs,
):
kwargs.setdefault("init_values", init)
kwargs.setdefault("apply_defaults", init)
if lazy:
self._data = ModelDict(unsafe=raw_data, valid=trusted_data)
return
self._data = ModelDict(valid=trusted_data)
data = self._convert(
raw_data,
trusted_data=trusted_data,
mapping=deserialize_mapping,
partial=partial,
strict=strict,
validate=validate,
new=True,
app_data=app_data,
**kwargs,
)
self._data.converted = data
if validate:
self.validate(partial=partial, app_data=app_data, **kwargs)
def validate(self, partial=False, convert=True, app_data=None, **kwargs):
"""
Validates the state of the model. If the data is invalid, raises a ``DataError``
with error messages.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: False
:param convert:
Controls whether to perform import conversion before validating.
Can be turned off to skip an unnecessary conversion step if all values
are known to have the right datatypes (e.g., when validating immediately
after the initial import). Default: True
"""
if not self._data.converted and partial:
return # no new input data to validate
try:
data = self._convert(
validate=True,
partial=partial,
convert=convert,
app_data=app_data,
**kwargs,
)
self._data.valid = data
except DataError as e:
valid = dict(self._data.valid)
valid.update(e.partial_data)
self._data.valid = valid
raise
finally:
self._data.converted = {}
def import_data(self, raw_data, recursive=False, **kwargs):
"""
Converts and imports the raw data into an existing model instance.
:param raw_data:
The data to be imported.
"""
data = self._convert(
raw_data, trusted_data=dict(self), recursive=recursive, **kwargs
)
self._data.converted.update(data)
if kwargs.get("validate"):
self.validate(convert=False)
return self
def _convert(self, raw_data=None, context=None, **kwargs):
"""
Converts the instance raw data into richer Python constructs according
to the fields on the model, validating data if requested.
:param raw_data:
New data to be imported and converted
"""
raw_data = (
{key: raw_data[key] for key in raw_data}
if raw_data
else self._data.converted
)
kwargs["trusted_data"] = kwargs.get("trusted_data") or {}
kwargs["convert"] = getattr(context, "convert", kwargs.get("convert", True))
if self._data.unsafe:
self._data.unsafe.update(raw_data)
raw_data = self._data.unsafe
self._data.unsafe = {}
kwargs["convert"] = True
should_validate = getattr(context, "validate", kwargs.get("validate", False))
func = validate if should_validate else convert
return func(
self._schema, self, raw_data=raw_data, oo=True, context=context, **kwargs
)
def export(self, field_converter=None, role=None, app_data=None, **kwargs):
return export_loop(
self._schema,
self,
field_converter=field_converter,
role=role,
app_data=app_data,
**kwargs,
)
def to_native(self, role=None, app_data=None, **kwargs):
return to_native(self._schema, self, role=role, app_data=app_data, **kwargs)
def to_primitive(self, role=None, app_data=None, **kwargs):
return to_primitive(self._schema, self, role=role, app_data=app_data, **kwargs)
def serialize(self, *args, **kwargs):
raw_data = self._data.converted
try:
self.validate(apply_defaults=True)
except DataError:
pass
data = self.to_primitive(*args, **kwargs)
self._data.converted = raw_data
return data
def atoms(self):
"""
Iterator for the atomic components of a model definition and relevant
data that creates a 3-tuple of the field's name, its type instance and
its value.
"""
return atoms(self._schema, self)
def __iter__(self):
return (
k
for k in self._schema.fields
if k in self._data and getattr(self._schema.fields[k], "fset", None) is None
)
def keys(self):
return list(iter(self))
def items(self):
return [(k, self._data[k]) for k in self]
def values(self):
return [self._data[k] for k in self]
def get(self, key, default=None):
return getattr(self, key, default)
@classmethod
def _append_field(cls, field_name, field_type):
"""
Add a new field to this class.
:type field_name: str
:param field_name:
The name of the field to add.
:type field_type: BaseType
:param field_type:
The type to use for the field.
"""
cls._schema.append_field(schema.Field(field_name, field_type))
setattr(cls, field_name, FieldDescriptor(field_name))
@classmethod
def get_mock_object(cls, context=None, overrides=None):
"""Get a mock object.
:param dict context:
:param dict overrides: overrides for the model
"""
context = Context._make(context)
context._setdefault("memo", set())
context.memo.add(cls)
values = {}
overrides = overrides or {}
for name, field in cls._schema.fields.items():
if name in overrides:
continue
if getattr(field, "model_class", None) in context.memo:
continue
try:
values[name] = field.mock(context)
except MockCreationError as exc:
raise MockCreationError(f"{name}: {exc.args[0]}") from exc
values.update(overrides)
return cls(values)
def __getitem__(self, name):
if name in self._schema.fields:
return getattr(self, name)
raise UnknownFieldError(self, name)
def __setitem__(self, name, value):
if name in self._schema.fields:
return setattr(self, name, value)
raise UnknownFieldError(self, name)
def __delitem__(self, name):
if name in self._schema.fields:
return delattr(self, name)
raise UnknownFieldError(self, name)
def __contains__(self, name):
serializables = {
k for k, t in self._schema.fields.items() if isinstance(t, Serializable)
}
return (
name in self._data and getattr(self, name, Undefined) is not Undefined
) or name in serializables
def __len__(self):
return len(self._data)
def __eq__(self, other, memo=set()):
if self is other:
return True
if type(self) is not type(other):
return NotImplemented
key = (id(self), id(other), get_ident())
if key in memo:
return True
memo.add(key)
try:
for k in self:
if self.get(k) != other.get(k):
return False
return True
finally:
memo.remove(key)
def __ne__(self, other):
return not self == other
def __repr__(self):
model = self.__class__.__name__
info = self._repr_info()
if info:
return f"<{model}: {info}>"
return f"<{model} instance>"
def _repr_info(self):
"""
Subclasses may implement this method to augment the ``__repr__()`` output for the instance::
class Person(Model):
...
def _repr_info(self):
return self.name
>>> Person({'name': 'Mr. Pink'})
<Person: Mr. Pink>
"""
return None
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/models.py
|
models.py
|
import inspect
from collections import ChainMap, OrderedDict
from copy import deepcopy
from types import FunctionType, MappingProxyType
from typing import List
from . import schema
from .datastructures import Context
from .exceptions import (
DataError,
MockCreationError,
UndefinedValueError,
UnknownFieldError,
)
from .iteration import atoms
from .transforms import convert, export_loop, to_native, to_primitive
from .types.base import BaseType
from .types.serializable import Serializable
from .undefined import Undefined
from .util import get_ident
from .validate import prepare_validator, validate
__all__: List[str] = []
class FieldDescriptor:
"""
``FieldDescriptor`` instances serve as field accessors on models.
"""
def __init__(self, name):
"""
:param name:
The field's name
"""
self.name = name
def __get__(self, instance, cls):
"""
For a model instance, returns the field's current value.
For a model class, returns the field's type object.
"""
if instance is None:
return cls._schema.fields[self.name]
value = instance._data.get(self.name, Undefined)
if value is Undefined:
raise UndefinedValueError(instance, self.name)
return value
def __set__(self, instance, value):
"""
Sets the field's value.
"""
field = instance._schema.fields[self.name]
value = field.pre_setattr(value)
instance._data.converted[self.name] = value
def __delete__(self, instance):
"""
Deletes the field's value.
"""
del instance._data[self.name]
class ModelMeta(type):
"""
Metaclass for Models.
"""
def __new__(mcs, name, bases, attrs):
"""
        This metaclass parses the declarative Model into a corresponding Schema,
        then adds it as the `_schema` attribute to the host class.
"""
# Structures used to accumulate meta info
fields = OrderedDict()
validator_functions = {} # Model level
options_members = {}
        # Accumulate meta info from parent classes
for base in reversed(bases):
if hasattr(base, "_schema"):
fields.update(deepcopy(base._schema.fields))
options_members.update(dict(base._schema.options))
validator_functions.update(base._schema.validators)
# Parse this class's attributes into schema structures
for key, value in attrs.items():
if key.startswith("validate_") and isinstance(
value, (FunctionType, classmethod)
):
validator_functions[key[9:]] = prepare_validator(value, 4)
if isinstance(value, BaseType):
fields[key] = value
elif isinstance(value, Serializable):
fields[key] = value
# Convert declared fields into descriptors for new class
fields = OrderedDict(
sorted(
(kv for kv in fields.items()),
key=lambda i: i[1]._position_hint,
)
)
for key, field in fields.items():
if isinstance(field, BaseType):
attrs[key] = FieldDescriptor(key)
elif isinstance(field, Serializable):
attrs[key] = field
klass = type.__new__(mcs, name, bases, attrs)
# Parse schema options
options = mcs._read_options(name, bases, attrs, options_members)
# Parse meta data into new schema
klass._schema = schema.Schema(
name,
model=klass,
options=options,
validators=validator_functions,
*(schema.Field(k, t) for k, t in fields.items()),
)
return klass
@classmethod
def _read_options(mcs, name, bases, attrs, options_members):
"""
Parses model `Options` class into a `SchemaOptions` instance.
"""
options_class = attrs.get("__optionsclass__", schema.SchemaOptions)
if "Options" in attrs:
for key, value in inspect.getmembers(attrs["Options"]):
if key.startswith("__"):
continue
if key.startswith("_"):
extras = options_members.get("extras", {}).copy()
extras.update({key: value})
options_members["extras"] = extras
elif key == "roles":
roles = options_members.get("roles", {}).copy()
roles.update(value)
options_members[key] = roles
else:
options_members[key] = value
return options_class(**options_members)
class ModelDict(ChainMap):
__slots__ = ["_unsafe", "_converted", "__valid", "_valid"]
def __init__(self, unsafe=None, converted=None, valid=None):
self._unsafe = unsafe if unsafe is not None else {}
self._converted = converted if converted is not None else {}
self.__valid = valid if valid is not None else {}
self._valid = MappingProxyType(self.__valid)
super().__init__(self._unsafe, self._converted, self._valid)
@property
def unsafe(self):
return self._unsafe
@unsafe.setter
def unsafe(self, value):
self._unsafe = value
self.maps[0] = self._unsafe
@property
def converted(self):
return self._converted
@converted.setter
def converted(self, value):
self._converted = value
self.maps[1] = self._converted
@property
def valid(self):
return self._valid
@valid.setter
def valid(self, value):
self._valid = MappingProxyType(value)
self.maps[2] = self._valid
def __delitem__(self, key):
did_delete = False
for data in [self.__valid, self._converted, self._unsafe]:
try:
del data[key]
did_delete = True
except KeyError:
pass
if not did_delete:
raise KeyError(key)
def __repr__(self):
return repr(dict(self))
class Model(metaclass=ModelMeta):
"""
Enclosure for fields and validation. Same pattern deployed by Django
models, SQLAlchemy declarative extension and other developer friendly
libraries.
:param Mapping raw_data:
The data to be imported into the model instance.
:param Mapping deserialize_mapping:
Can be used to provide alternative input names for fields. Values may be
strings or lists of strings, keyed by the actual field name.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: True
:param bool strict:
Complain about unrecognized keys. Default: True
"""
def __init__(
self,
raw_data=None,
trusted_data=None,
deserialize_mapping=None,
init=True,
partial=True,
strict=True,
validate=False,
app_data=None,
lazy=False,
**kwargs,
):
kwargs.setdefault("init_values", init)
kwargs.setdefault("apply_defaults", init)
if lazy:
self._data = ModelDict(unsafe=raw_data, valid=trusted_data)
return
self._data = ModelDict(valid=trusted_data)
data = self._convert(
raw_data,
trusted_data=trusted_data,
mapping=deserialize_mapping,
partial=partial,
strict=strict,
validate=validate,
new=True,
app_data=app_data,
**kwargs,
)
self._data.converted = data
if validate:
self.validate(partial=partial, app_data=app_data, **kwargs)
def validate(self, partial=False, convert=True, app_data=None, **kwargs):
"""
Validates the state of the model. If the data is invalid, raises a ``DataError``
with error messages.
:param bool partial:
Allow partial data to validate. Essentially drops the ``required=True``
settings from field definitions. Default: False
:param convert:
Controls whether to perform import conversion before validating.
Can be turned off to skip an unnecessary conversion step if all values
are known to have the right datatypes (e.g., when validating immediately
after the initial import). Default: True
"""
if not self._data.converted and partial:
return # no new input data to validate
try:
data = self._convert(
validate=True,
partial=partial,
convert=convert,
app_data=app_data,
**kwargs,
)
self._data.valid = data
except DataError as e:
valid = dict(self._data.valid)
valid.update(e.partial_data)
self._data.valid = valid
raise
finally:
self._data.converted = {}
def import_data(self, raw_data, recursive=False, **kwargs):
"""
Converts and imports the raw data into an existing model instance.
:param raw_data:
The data to be imported.
"""
data = self._convert(
raw_data, trusted_data=dict(self), recursive=recursive, **kwargs
)
self._data.converted.update(data)
if kwargs.get("validate"):
self.validate(convert=False)
return self
def _convert(self, raw_data=None, context=None, **kwargs):
"""
Converts the instance raw data into richer Python constructs according
to the fields on the model, validating data if requested.
:param raw_data:
New data to be imported and converted
"""
raw_data = (
{key: raw_data[key] for key in raw_data}
if raw_data
else self._data.converted
)
kwargs["trusted_data"] = kwargs.get("trusted_data") or {}
kwargs["convert"] = getattr(context, "convert", kwargs.get("convert", True))
if self._data.unsafe:
self._data.unsafe.update(raw_data)
raw_data = self._data.unsafe
self._data.unsafe = {}
kwargs["convert"] = True
should_validate = getattr(context, "validate", kwargs.get("validate", False))
func = validate if should_validate else convert
return func(
self._schema, self, raw_data=raw_data, oo=True, context=context, **kwargs
)
def export(self, field_converter=None, role=None, app_data=None, **kwargs):
return export_loop(
self._schema,
self,
field_converter=field_converter,
role=role,
app_data=app_data,
**kwargs,
)
def to_native(self, role=None, app_data=None, **kwargs):
return to_native(self._schema, self, role=role, app_data=app_data, **kwargs)
def to_primitive(self, role=None, app_data=None, **kwargs):
return to_primitive(self._schema, self, role=role, app_data=app_data, **kwargs)
def serialize(self, *args, **kwargs):
raw_data = self._data.converted
try:
self.validate(apply_defaults=True)
except DataError:
pass
data = self.to_primitive(*args, **kwargs)
self._data.converted = raw_data
return data
def atoms(self):
"""
Iterator for the atomic components of a model definition and relevant
data that creates a 3-tuple of the field's name, its type instance and
its value.
"""
return atoms(self._schema, self)
def __iter__(self):
return (
k
for k in self._schema.fields
if k in self._data and getattr(self._schema.fields[k], "fset", None) is None
)
def keys(self):
return list(iter(self))
def items(self):
return [(k, self._data[k]) for k in self]
def values(self):
return [self._data[k] for k in self]
def get(self, key, default=None):
return getattr(self, key, default)
@classmethod
def _append_field(cls, field_name, field_type):
"""
Add a new field to this class.
:type field_name: str
:param field_name:
The name of the field to add.
:type field_type: BaseType
:param field_type:
The type to use for the field.
"""
cls._schema.append_field(schema.Field(field_name, field_type))
setattr(cls, field_name, FieldDescriptor(field_name))
@classmethod
def get_mock_object(cls, context=None, overrides=None):
"""Get a mock object.
:param dict context:
:param dict overrides: overrides for the model
"""
context = Context._make(context)
context._setdefault("memo", set())
context.memo.add(cls)
values = {}
overrides = overrides or {}
for name, field in cls._schema.fields.items():
if name in overrides:
continue
if getattr(field, "model_class", None) in context.memo:
continue
try:
values[name] = field.mock(context)
except MockCreationError as exc:
raise MockCreationError(f"{name}: {exc.args[0]}") from exc
values.update(overrides)
return cls(values)
def __getitem__(self, name):
if name in self._schema.fields:
return getattr(self, name)
raise UnknownFieldError(self, name)
def __setitem__(self, name, value):
if name in self._schema.fields:
return setattr(self, name, value)
raise UnknownFieldError(self, name)
def __delitem__(self, name):
if name in self._schema.fields:
return delattr(self, name)
raise UnknownFieldError(self, name)
def __contains__(self, name):
serializables = {
k for k, t in self._schema.fields.items() if isinstance(t, Serializable)
}
return (
name in self._data and getattr(self, name, Undefined) is not Undefined
) or name in serializables
def __len__(self):
return len(self._data)
def __eq__(self, other, memo=set()):
if self is other:
return True
if type(self) is not type(other):
return NotImplemented
key = (id(self), id(other), get_ident())
if key in memo:
return True
memo.add(key)
try:
for k in self:
if self.get(k) != other.get(k):
return False
return True
finally:
memo.remove(key)
def __ne__(self, other):
return not self == other
def __repr__(self):
model = self.__class__.__name__
info = self._repr_info()
if info:
return f"<{model}: {info}>"
return f"<{model} instance>"
def _repr_info(self):
"""
Subclasses may implement this method to augment the ``__repr__()`` output for the instance::
class Person(Model):
...
def _repr_info(self):
return self.name
>>> Person({'name': 'Mr. Pink'})
<Person: Mr. Pink>
"""
return None
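# ---------------------------------------------------------------------------
# Illustrative usage sketch -- not part of the original module.  It assumes the
# sibling ``schematics.types`` package exposes ``StringType`` and ``IntType``
# as in upstream schematics; adjust the import if the field types live
# elsewhere in this fork.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    from schematics.types import IntType, StringType
    class Person(Model):
        name = StringType(required=True)
        age = IntType()
        def _repr_info(self):
            return self.name
    p = Person({"name": "Mr. Pink", "age": "33"})  # raw input values may be strings
    p.validate()                                   # raises DataError on invalid data
    assert p.name == "Mr. Pink" and p.age == 33    # IntType coerced "33" -> 33
    assert "age" in p and p["age"] == 33
    print(repr(p))                                 # -> <Person: Mr. Pink>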
| 0.861553 | 0.200773 |
import json
from collections.abc import Mapping, Sequence
from typing import Optional, Type
from .datastructures import FrozenDict, FrozenList
from .translator import LazyText
__all__ = [
"BaseError",
"ErrorMessage",
"FieldError",
"ConversionError",
"ValidationError",
"StopValidationError",
"CompoundError",
"DataError",
"MockCreationError",
"UndefinedValueError",
"UnknownFieldError",
]
class BaseError(Exception):
def __init__(self, errors):
"""
        The base class for all Schematics errors.
        ``errors`` is a machine-readable list or dictionary of error messages;
        human-readable summaries live on the individual messages themselves.
        The Python logging module expects exceptions to be hashable
        and therefore immutable. As a result, it is not possible to
        mutate BaseError's error list or dict after initialization.
"""
errors = self._freeze(errors)
super().__init__(errors)
@property
def errors(self):
return self.args[0]
def to_primitive(self):
"""
        Converts the errors to a primitive representation of dicts,
        lists and strings.
"""
try:
return self._primitive
except AttributeError:
self._primitive = self._to_primitive(self.errors)
return self._primitive
@staticmethod
def _freeze(obj):
"""freeze common data structures to something immutable."""
if isinstance(obj, dict):
return FrozenDict(obj)
if isinstance(obj, list):
return FrozenList(obj)
return obj
@classmethod
def _to_primitive(cls, obj):
"""recursive to_primitive for basic data types."""
if isinstance(obj, str):
return obj
if isinstance(obj, Sequence):
return [cls._to_primitive(e) for e in obj]
if isinstance(obj, Mapping):
return dict((k, cls._to_primitive(v)) for k, v in obj.items())
return str(obj)
def __str__(self):
return json.dumps(self.to_primitive())
def __repr__(self):
return f"{self.__class__.__name__}({self.errors!r})"
def __hash__(self):
return hash(self.errors)
def __eq__(self, other):
if type(self) is type(other):
return self.errors == other.errors
return self.errors == other
def __ne__(self, other):
return not (self == other)
class ErrorMessage:
def __init__(self, summary, info=None):
self.type = None
self.summary = summary
self.info = info
def __repr__(self):
return f"{self.__class__.__name__}({self.summary!r}, {self.info!r})"
def __str__(self):
if self.info:
return f"{self.summary}: {self._info_as_str()}"
return str(self.summary)
def _info_as_str(self):
if isinstance(self.info, int):
return str(self.info)
if isinstance(self.info, str):
return f'"{self.info}"'
return str(self.info)
def __eq__(self, other):
if isinstance(other, ErrorMessage):
return (
self.summary == other.summary
and self.type == other.type
and self.info == other.info
)
if isinstance(other, str):
return self.summary == other
return False
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash((self.summary, self.type, self.info))
class FieldError(BaseError, Sequence):
type: Optional[Type[Exception]] = None
def __init__(self, *args, **kwargs):
if type(self) is FieldError:
raise NotImplementedError(
"Please raise either ConversionError or ValidationError."
)
if len(args) == 0:
raise TypeError("Please provide at least one error or error message.")
if kwargs:
items = [ErrorMessage(*args, **kwargs)]
elif len(args) == 1:
arg = args[0]
if isinstance(arg, list):
items = list(arg)
else:
items = [arg]
else:
items = args
errors = []
for item in items:
if isinstance(item, (str, LazyText)):
errors.append(ErrorMessage(str(item)))
elif isinstance(item, tuple):
errors.append(ErrorMessage(*item))
elif isinstance(item, ErrorMessage):
errors.append(item)
elif isinstance(item, self.__class__):
errors.extend(item.errors)
else:
raise TypeError(
f"'{type(item).__name__}()' object is neither a {type(self).__name__} nor an error message."
)
for error in errors:
error.type = self.type or type(self)
super().__init__(errors)
def __contains__(self, value):
return value in self.errors
def __getitem__(self, index):
return self.errors[index]
def __iter__(self):
return iter(self.errors)
def __len__(self):
return len(self.errors)
class ConversionError(FieldError, TypeError):
"""Exception raised when data cannot be converted to the correct python type"""
class ValidationError(FieldError, ValueError):
"""Exception raised when invalid data is encountered."""
class StopValidationError(ValidationError):
"""Exception raised when no more validation need occur."""
type = ValidationError
class CompoundError(BaseError):
def __init__(self, errors):
if not isinstance(errors, dict):
raise TypeError("Compound errors must be reported as a dictionary.")
for key, value in errors.items():
if isinstance(value, CompoundError):
errors[key] = value.errors
else:
errors[key] = value
super().__init__(errors)
class DataError(CompoundError):
def __init__(self, errors, partial_data=None):
super().__init__(errors)
self.partial_data = partial_data
class MockCreationError(ValueError):
"""Exception raised when a mock value cannot be generated."""
class UndefinedValueError(AttributeError, KeyError):
"""Exception raised when accessing a field with an undefined value."""
def __init__(self, model, name):
msg = f"'{model.__class__.__name__}' instance has no value for field '{name}'"
super().__init__(msg)
class UnknownFieldError(KeyError):
"""Exception raised when attempting to access a nonexistent field using the subscription syntax."""
def __init__(self, model, name):
msg = f"Model '{model.__class__.__name__}' has no field named '{name}'"
super().__init__(msg)
|
schematics-py310-plus
|
/schematics-py310-plus-0.0.4.tar.gz/schematics-py310-plus-0.0.4/schematics/exceptions.py
|
exceptions.py
|
import json
from collections.abc import Mapping, Sequence
from typing import Optional, Type
from .datastructures import FrozenDict, FrozenList
from .translator import LazyText
__all__ = [
"BaseError",
"ErrorMessage",
"FieldError",
"ConversionError",
"ValidationError",
"StopValidationError",
"CompoundError",
"DataError",
"MockCreationError",
"UndefinedValueError",
"UnknownFieldError",
]
class BaseError(Exception):
def __init__(self, errors):
"""
        The base class for all Schematics errors.
        ``errors`` is a machine-readable list or dictionary of error messages;
        human-readable summaries live on the individual messages themselves.
        The Python logging module expects exceptions to be hashable
        and therefore immutable. As a result, it is not possible to
        mutate BaseError's error list or dict after initialization.
"""
errors = self._freeze(errors)
super().__init__(errors)
@property
def errors(self):
return self.args[0]
def to_primitive(self):
"""
        Converts the errors to a primitive representation of dicts,
        lists and strings.
"""
try:
return self._primitive
except AttributeError:
self._primitive = self._to_primitive(self.errors)
return self._primitive
@staticmethod
def _freeze(obj):
"""freeze common data structures to something immutable."""
if isinstance(obj, dict):
return FrozenDict(obj)
if isinstance(obj, list):
return FrozenList(obj)
return obj
@classmethod
def _to_primitive(cls, obj):
"""recursive to_primitive for basic data types."""
if isinstance(obj, str):
return obj
if isinstance(obj, Sequence):
return [cls._to_primitive(e) for e in obj]
if isinstance(obj, Mapping):
return dict((k, cls._to_primitive(v)) for k, v in obj.items())
return str(obj)
def __str__(self):
return json.dumps(self.to_primitive())
def __repr__(self):
return f"{self.__class__.__name__}({self.errors!r})"
def __hash__(self):
return hash(self.errors)
def __eq__(self, other):
if type(self) is type(other):
return self.errors == other.errors
return self.errors == other
def __ne__(self, other):
return not (self == other)
class ErrorMessage:
def __init__(self, summary, info=None):
self.type = None
self.summary = summary
self.info = info
def __repr__(self):
return f"{self.__class__.__name__}({self.summary!r}, {self.info!r})"
def __str__(self):
if self.info:
return f"{self.summary}: {self._info_as_str()}"
return str(self.summary)
def _info_as_str(self):
if isinstance(self.info, int):
return str(self.info)
if isinstance(self.info, str):
return f'"{self.info}"'
return str(self.info)
def __eq__(self, other):
if isinstance(other, ErrorMessage):
return (
self.summary == other.summary
and self.type == other.type
and self.info == other.info
)
if isinstance(other, str):
return self.summary == other
return False
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash((self.summary, self.type, self.info))
class FieldError(BaseError, Sequence):
type: Optional[Type[Exception]] = None
def __init__(self, *args, **kwargs):
if type(self) is FieldError:
raise NotImplementedError(
"Please raise either ConversionError or ValidationError."
)
if len(args) == 0:
raise TypeError("Please provide at least one error or error message.")
if kwargs:
items = [ErrorMessage(*args, **kwargs)]
elif len(args) == 1:
arg = args[0]
if isinstance(arg, list):
items = list(arg)
else:
items = [arg]
else:
items = args
errors = []
for item in items:
if isinstance(item, (str, LazyText)):
errors.append(ErrorMessage(str(item)))
elif isinstance(item, tuple):
errors.append(ErrorMessage(*item))
elif isinstance(item, ErrorMessage):
errors.append(item)
elif isinstance(item, self.__class__):
errors.extend(item.errors)
else:
raise TypeError(
f"'{type(item).__name__}()' object is neither a {type(self).__name__} nor an error message."
)
for error in errors:
error.type = self.type or type(self)
super().__init__(errors)
def __contains__(self, value):
return value in self.errors
def __getitem__(self, index):
return self.errors[index]
def __iter__(self):
return iter(self.errors)
def __len__(self):
return len(self.errors)
class ConversionError(FieldError, TypeError):
"""Exception raised when data cannot be converted to the correct python type"""
class ValidationError(FieldError, ValueError):
"""Exception raised when invalid data is encountered."""
class StopValidationError(ValidationError):
"""Exception raised when no more validation need occur."""
type = ValidationError
class CompoundError(BaseError):
def __init__(self, errors):
if not isinstance(errors, dict):
raise TypeError("Compound errors must be reported as a dictionary.")
for key, value in errors.items():
if isinstance(value, CompoundError):
errors[key] = value.errors
else:
errors[key] = value
super().__init__(errors)
class DataError(CompoundError):
def __init__(self, errors, partial_data=None):
super().__init__(errors)
self.partial_data = partial_data
class MockCreationError(ValueError):
"""Exception raised when a mock value cannot be generated."""
class UndefinedValueError(AttributeError, KeyError):
"""Exception raised when accessing a field with an undefined value."""
def __init__(self, model, name):
msg = f"'{model.__class__.__name__}' instance has no value for field '{name}'"
super().__init__(msg)
class UnknownFieldError(KeyError):
"""Exception raised when attempting to access a nonexistent field using the subscription syntax."""
def __init__(self, model, name):
msg = f"Model '{model.__class__.__name__}' has no field named '{name}'"
super().__init__(msg)
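# ---------------------------------------------------------------------------
# Illustrative usage sketch -- not part of the original module.  It shows how a
# field-level ValidationError aggregates into a DataError and how both
# serialize to plain dicts/lists/strings via ``to_primitive()``.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    err = ValidationError("Value must be positive.", info=-5)
    assert err.errors[0].type is ValidationError
    assert str(err.errors[0]) == "Value must be positive.: -5"
    compound = DataError({"age": err}, partial_data={})
    assert compound.to_primitive() == {"age": ["Value must be positive.: -5"]}
    assert compound.partial_data == {}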
| 0.873997 | 0.215268 |