| id (stringlengths 1-265) | text (stringlengths 6-5.19M) | dataset_id (stringclasses, 7 values) |
---|---|---|
/Jinja2-3.1.2-py3-none-any.whl/Jinja2-3.1.2.dist-info/LICENSE.rst | Copyright 2007 Pallets
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| PypiClean |
/M2Crypto-0.39.0.tar.gz/M2Crypto-0.39.0/doc/howto.ssl.rst | :orphan:
.. _howto-ssl:
HOWTO: Programming SSL in Python with M2Crypto
==============================================
:author: Pheng Siong Ng <[email protected]> and Heikki Toivonen ([email protected])
:copyright: © 2000, 2001 by Ng Pheng Siong,
portions © 2006 by Open Source Applications Foundation
Introduction
============
`M2Crypto <https://gitlab.com/m2crypto/m2crypto/>`__ is a
`Python <http://www.python.org>`__ interface to
`OpenSSL <http://www.openssl.org>`__. It makes available to the Python
programmer SSL functionality to implement clients and servers, S/MIME
v2, RSA, DSA, DH, symmetric ciphers, message digests and HMACs.
This document demonstrates programming HTTPS with M2Crypto.
A bit of history
================
M2Crypto was created in the days of Python 1.5, which featured a
module httplib providing client-side HTTP functionality. M2Crypto provides
an httpslib based on httplib.
Beginning with version 2.0, Python's socket module provided
(rudimentary) SSL support. Also in the same version, httplib was
enhanced with class HTTPConnection, which is more sophisticated than the
old class HTTP, and HTTPSConnection, which does HTTPS.
Subsequently, M2Crypto.httpslib grew a compatible (but not identical)
class HTTPSConnection.
The primary interface difference between the two HTTPSConnection classes
is that M2Crypto's version accepts an M2Crypto.SSL.Context instance as a
parameter, whereas Python 2.x's SSL support does not permit Pythonic
control of the SSL context.
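As an illustrative sketch only (the ``ssl_context`` keyword name is assumed
here and should be checked against the current API), passing a context to
M2Crypto's class might look like this::

    from M2Crypto import SSL, httpslib

    ctx = SSL.Context()   # configure verification, ciphers, etc. as needed
    conn = httpslib.HTTPSConnection('www.example.com', 443, ssl_context=ctx)
    conn.request('GET', '/')
    response = conn.getresponse()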
Within the implementations, Python's ``HTTPSConnection`` employs a
``FakeSocket`` object, which collects all input from the SSL connection
before returning it to the application as a ``StringIO`` buffer, whereas
M2Crypto's ``HTTPSConnection`` uses a buffering
``M2Crypto.BIO.IOBuffer`` object that works over the underlying
M2Crypto.SSL.Connection directly.
Since then M2Crypto has gained a Twisted wrapper that allows securing
Twisted SSL connections with M2Crypto.
Secure SSL
==========
It is recommended that you read the book Network Security with OpenSSL
by John Viega, Matt Messier and Pravir Chandra, ISBN 059600270X.
Using M2Crypto does not automatically make an SSL connection secure.
There are various steps that need to be made before we can make that
claim. Let's see how a simple client can establish a secure
connection::
ctx = SSL.Context()
ctx.set_verify(SSL.verify_peer | SSL.verify_fail_if_no_peer_cert, depth=9)
if ctx.load_verify_locations('ca.pem') != 1: raise Exception('No CA certs')
s = SSL.Connection(ctx)
s.connect(server_address)
# Normal protocol (for example HTTP) commands follow
The first line creates an SSL context. The defaults allow any SSL
version (except SSL version 2, which has known weaknesses) and set the
allowed ciphers to secure ones.
The second line tells M2Crypto to perform certificate validation. The
flags shown above are typical for clients, and require the server to
send a certificate. The depth parameter tells how long certificate
chains are allowed to be - 9 is a pretty common default, although probably
too long in practice.
The third line loads the allowed root (certificate authority or CA)
certificates. Most Linux distributions come with CA certificates in
suitable format. You could also download the
`certdata.txt <http://mxr.mozilla.org/seamonkey/source//security/nss/lib/ckfw/builtins/certdata.txt?raw=1>`__
file from the
`NSS <http://www.mozilla.org/projects/security/pki/nss/>`__ project and
convert it with the little M2Crypto utility script
`demo/x509/certdata2pem.py <http://svn.osafoundation.org/m2crypto/trunk/demo/x509/certdata2pem.py>`__.
The fourth line creates an SSL connection object with the secure
context.
The fifth line connects to the server. During this time we perform the
last security step: just after connection, but before exchanging any
data, we compare the commonName (or subjectAltName DNS field) field in
the certificate the server returned to the server address we tried to
connect to. This happens automatically with SSL.Connection and the
Twisted wrapper class, and anything that uses those. In all other cases
you must do the check manually. It is recommended you call the
SSL.Checker to do the actual check.
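Where the check is not automatic, a minimal sketch of a manual
post-connection check might look like the following (the ``Checker`` call
signature and ``get_peer_cert`` are assumed here; consult the API
documentation for the exact names)::

    from M2Crypto.SSL.Checker import Checker

    check = Checker()
    # Assumed call signature: (peer_certificate, expected_host);
    # raises an exception when the certificate does not match the host.
    check(s.get_peer_cert(), 'www.example.com')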
SSL servers are different in that they typically do not require the
client to send a certificate, so there is usually no certificate
checking. Also, it is typically useless to perform host name checking.
Code Samples
============
The best samples of how to use the various SSL objects are in the tests
directory, and the test\_ssl.py file specifically. There are additional
samples in the demo directory, but they are not guaranteed to be up to
date.
NOTE: The tests and demos may not be secure as is. Use the information
above on how to make them secure.
ssldump
=======
ssldump "is an SSLv3/TLS network protocol analyser. It identifies TCP
connections on the chosen network interface and attempts to interpret
them as SSLv3/TLS traffic. When it identifies SSLv3/TLS traffic, it
decodes the records and displays them in a textual form to stdout. If
provided with the appropriate keying material, it will also decrypt the
connections and display the application data traffic.
If linked with OpenSSL, ssldump can display certificates in decoded form
and decrypt traffic (provided that it has the appropriate keying
material)."
ssldump is written by Eric Rescorla.
| PypiClean |
/DendroPy-4.6.1.tar.gz/DendroPy-4.6.1/src/dendropy/dataio/fastawriter.py |
##############################################################################
## DendroPy Phylogenetic Computing Library.
##
## Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
## All rights reserved.
##
## See "LICENSE.rst" for terms and conditions of usage.
##
## If you use this work or any portion thereof in published work,
## please cite it as:
##
## Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
## for phylogenetic computing. Bioinformatics 26: 1569-1571.
##
##############################################################################
"""
Implementation of FASTA-format data writer.
"""
from dendropy.dataio import ioservice
class FastaWriter(ioservice.DataWriter):
"""
Formatter for FASTA writer
"""
def __init__(self, **kwargs):
"""
Keyword Arguments
-----------------
wrap: boolean, default: |True|
If |False|, then sequences are written out as single, unbroken lines.
Defaults to |True|: wraps sequences at 70 columns.
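        Example
        -------
        This writer is normally reached through DendroPy's standard I/O
        entry points; an illustrative sketch (schema name and keyword
        routing assumed)::

            import dendropy
            dna = dendropy.DnaCharacterMatrix.get(path="seqs.nexus", schema="nexus")
            print(dna.as_string(schema="fasta", wrap=False))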
"""
ioservice.DataWriter.__init__(self)
self.wrap = kwargs.get("wrap", True)
self.wrap_width = kwargs.get("wrap_width", 70)
def _write(self,
stream,
taxon_namespaces=None,
tree_lists=None,
char_matrices=None,
global_annotations_target=None):
for char_matrix in char_matrices:
if (self.attached_taxon_namespace is not None
and char_matrix.taxon_namespace is not self.attached_taxon_namespace):
continue
self._write_char_matrix(stream, char_matrix)
def _write_char_matrix(self, stream, char_matrix):
for taxon in char_matrix:
stream.write(">{}\n".format(taxon.label))
seq = char_matrix[taxon]
if self.wrap:
col_count = 0
for c in seq:
if col_count == self.wrap_width:
stream.write("\n")
col_count = 0
stream.write(str(c))
col_count += 1
else:
s = "".join("{}".format(c) for c in seq)
stream.write("{}\n".format(s))
stream.write("\n\n") | PypiClean |
/ConSSL-0.0.1-py3-none-any.whl/CSSL/datasets/kitti_dataset.py | import os
import numpy as np
from torch.utils.data import Dataset
from CSSL.utils import _PIL_AVAILABLE
from CSSL.utils.warnings import warn_missing_pkg
if _PIL_AVAILABLE:
from PIL import Image
else: # pragma: no cover
warn_missing_pkg('PIL', pypi_name='Pillow')
DEFAULT_VOID_LABELS = (0, 1, 2, 3, 4, 5, 6, 9, 10, 14, 15, 16, 18, 29, 30, -1)
DEFAULT_VALID_LABELS = (7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33)
class KittiDataset(Dataset):
"""
Note:
You need to have downloaded the Kitti dataset first and provide the path to where it is saved.
You can download the dataset here: http://www.cvlibs.net/datasets/kitti/eval_semseg.php?benchmark=semantics2015
There are 34 classes, however not all of them are useful for training (e.g. railings on highways). These
useless classes (the pixel values of these classes) are stored in `void_labels`. Useful classes are stored
in `valid_labels`.
The `encode_segmap` function sets all pixels with any of the `void_labels` to `ignore_index`
(250 by default). It also sets all of the valid pixels to the appropriate value between 0 and
`len(valid_labels)` (since that is the number of valid classes), so it can be used properly by
the loss function when comparing with the output.
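    Example:
        An illustrative usage sketch (the path is a placeholder)::

            dataset = KittiDataset(data_dir='/path/to/data_semantics/')
            img, mask = dataset[0]   # numpy image and encoded segmentation mask
            print(len(dataset), img.shape, mask.shape)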
"""
IMAGE_PATH = os.path.join('training', 'image_2')
MASK_PATH = os.path.join('training', 'semantic')
def __init__(
self,
data_dir: str,
img_size: tuple = (1242, 376),
void_labels: list = DEFAULT_VOID_LABELS,
valid_labels: list = DEFAULT_VALID_LABELS,
transform=None
):
"""
Args:
data_dir (str): where to load the data from path, i.e. '/path/to/folder/with/data_semantics/'
img_size: image dimensions (width, height)
void_labels: useless classes to be excluded from training
valid_labels: useful classes to include
"""
if not _PIL_AVAILABLE: # pragma: no cover
raise ModuleNotFoundError('You want to use `PIL` which is not installed yet.')
self.img_size = img_size
self.void_labels = void_labels
self.valid_labels = valid_labels
self.ignore_index = 250
self.class_map = dict(zip(self.valid_labels, range(len(self.valid_labels))))
self.transform = transform
self.data_dir = data_dir
self.img_path = os.path.join(self.data_dir, self.IMAGE_PATH)
self.mask_path = os.path.join(self.data_dir, self.MASK_PATH)
self.img_list = self.get_filenames(self.img_path)
self.mask_list = self.get_filenames(self.mask_path)
def __len__(self):
return len(self.img_list)
def __getitem__(self, idx):
img = Image.open(self.img_list[idx])
img = img.resize(self.img_size)
img = np.array(img)
mask = Image.open(self.mask_list[idx]).convert('L')
mask = mask.resize(self.img_size)
mask = np.array(mask)
mask = self.encode_segmap(mask)
if self.transform:
img = self.transform(img)
return img, mask
def encode_segmap(self, mask):
"""
Sets void classes to zero so they won't be considered for training
"""
for voidc in self.void_labels:
mask[mask == voidc] = self.ignore_index
for validc in self.valid_labels:
mask[mask == validc] = self.class_map[validc]
# remove extra idxs from updated dataset
mask[mask > 18] = self.ignore_index
return mask
def get_filenames(self, path):
"""
Returns a list of absolute paths to images inside given `path`
"""
files_list = list()
for filename in os.listdir(path):
files_list.append(os.path.join(path, filename))
return files_list | PypiClean |
/EnviroMS-4.3.0.tar.gz/EnviroMS-4.3.0/enviroMS/cli.py | from dataclasses import asdict
from pathlib import Path
import toml
import click
from enviroMS.singleMzSearch import run_molecular_formula_search
from enviroMS.diWorkflow import DiWorkflowParameters, generate_database, run_di_mpi, run_direct_infusion_workflow, run_wdl_direct_infusion_workflow
from corems.molecular_id.search.molecularFormulaSearch import SearchMolecularFormulas
from corems.encapsulation.output.parameter_to_json import dump_ms_settings_toml, dump_all_settings_toml
class Config:
def __init__(self):
self.verbose = False
pass_config = click.make_pass_decorator(Config, ensure=True)
@click.group()
@click.option('--verbose', is_flag=True, help='print out the results')
@pass_config
def cli(config, verbose):
config.verbose = verbose
@cli.command()
@click.argument('mz', required=True, type=float, )
@click.argument('corems_parameters_filepath', required=True, type=click.Path())
@click.argument('out', required=False, type=click.File('w'), default='-')
@click.option('-e', '--error', 'ppm_error', default=1.0, help='the margin of mass error (ppm)')
@click.option('-r', '--radical', 'isRadical', default=True, type=bool, help='include radical ion type')
@click.option('-p', '--protonated', 'isProtonated', default=True, type=bool, help='include (de)-protonated ion type')
@click.option('-a', '--adduct', 'isAdduct', default=False, type=bool, help='include adduct ion type')
@pass_config
def run_search_formula(config, mz, ppm_error, isRadical, isProtonated, isAdduct, out, corems_parameters_filepath):  # settings_filepath
'''Search for molecular formula candidates to a given m/z value \n
corems_parameters_filepath =' CoreMS Parameters File (JSON)'
MZ = m/z value FLOAT\n
out = filename to store results TEXT\n
'''
#if config.verbose:
click.echo('', file=out)
#dump_search_settings_yaml()
click.echo('Searching formulas for %.5f' % mz, file=out)
click.echo('', file=out)
click.echo('Loading Searching Settings from %s' % corems_parameters_filepath, file=out)
click.echo('',file=out)
run_molecular_formula_search(mz, out, corems_parameters_filepath)
@cli.command()
@click.argument('corems_parameters_file', required=True, type=str)
@click.option('--jobs','-j', default=4, help="'cpu's'")
def create_database(corems_parameters_file, jobs):
'''corems_parameters_file: Path for CoreMS TOML Parameters file\n
jobs: Number of processes to run\n
"postgresql://postgres:[email protected]:5432/",
'''
generate_database(corems_parameters_file, jobs)
@cli.command()
@click.argument('file_paths', required=True, type=str)
@click.argument('output_directory', required=True, type=str)
@click.argument('output_type', required=True, type=str)
@click.argument('corems_toml_path', required=True, type=str)
@click.argument('nmdc_metadata_path', required=True, type=str)
@click.argument('polarity', required=True, type=str)
@click.argument('raw_file_start_scan', required=True, type=int)
@click.argument('raw_file_final_scan', required=True, type=int)
@click.argument('is_centroid', required=True, type=bool)
@click.argument('calibration_ref_file_path', required=False, type=str)
@click.option('--calibrate','-c', default=True)
@click.option('--plot_mz_error', '-e', default=True)
@click.option('--plot_ms_assigned_unassigned','-a', default=True)
@click.option('--plot_c_dbe', '-cb', default=True)
@click.option('--plot_van_krevelen', '-vk', default=True)
@click.option('--plot_ms_classes', '-mc', default=True)
@click.option('--plot_mz_error_classes', '-ec', default=True)
@click.option('--jobs','-j', default=4, help="'cpu's'")
def run_di_wdl(*args, **kwargs):
'''Run the Direct Infusion Workflow using wdl'''
run_wdl_direct_infusion_workflow(*args, **kwargs)
@cli.command()
@click.argument('di_workflow_paramaters_file', required=True, type=str)
@click.option('--jobs','-j', default=4, help="'cpu's'")
@click.option('--replicas','-r', default=1, help="data replicas")
@click.option('--tasks','-t', default=4, help="mpi tasks")
@click.option('--mpi','-m', is_flag=True, help="run mpi version")
def run_di(di_workflow_paramaters_file, jobs, replicas, tasks, mpi):
'''Run the Direct Infusion Workflow\n
workflow_paramaters_file = toml file with workflow parameters\n
output_types = csv, excel, pandas, json set on the parameter file\n
corems_toml_path = toml file with corems parameters\n
--jobs = number of processes to run in parallel\n
--mpi = run on hpc, if omitted will run python's multiprocessing and will duplicate runs on nodes\n
'''
if mpi:
run_di_mpi(di_workflow_paramaters_file, tasks, replicas)
else:
run_direct_infusion_workflow(di_workflow_paramaters_file, jobs, replicas)
@cli.command()
@click.argument('lcms_workflow_paramaters_file', required=True, type=str)
@click.option('--jobs','-j', default=4, help="'cpu's'")
@pass_config
def run_lcms(config, lcms_workflow_paramaters_file, jobs):
    # TODO: implement an m/z search inside the mass spectrum, then run a search for molecular formula and the isotopologues
pass
@cli.command()
@click.argument('toml_file_name', required=True, type=click.Path())
def dump_corems_template(toml_file_name):
'''Dumps a CoreMS toml file template
to be used as the workflow parameters input
'''
path_obj = Path(toml_file_name).with_suffix('.toml')
dump_all_settings_toml(file_path=path_obj)
@cli.command()
@click.argument('toml_file_name', required=True, type=click.Path())
def dump_corems_enviroms_template(toml_file_name):
'''Dumps a CoreMS toml file template
to be used as the workflow parameters input
'''
path_obj = Path(toml_file_name).with_suffix('.toml')
dump_ms_settings_toml(file_path=path_obj)
@cli.command()
@click.argument('toml_file_name', required=True, type=click.Path())
def dump_di_template(toml_file_name):
'''Dumps a toml file template
to be used as the workflow parameters input
'''
ref_lib_path = Path(toml_file_name).with_suffix('.toml')
with open(ref_lib_path, 'w') as workflow_param:
workflow = DiWorkflowParameters()
toml.dump(asdict(workflow), workflow_param) | PypiClean |
/EnergyCapSdk-8.2304.4743.tar.gz/EnergyCapSdk-8.2304.4743/energycap/sdk/models/report_distribution_details_response_py3.py |
from msrest.serialization import Model
class ReportDistributionDetailsResponse(Model):
"""ReportDistributionDetailsResponse.
:param report_distribution_id: The id of the report distribution
:type report_distribution_id: int
:param report_distribution_name: The name of the report distribution
:type report_distribution_name: str
:param created_by_user:
:type created_by_user: ~energycap.sdk.models.UserChild
:param created_date: The date and time the report distribution was created
:type created_date: datetime
:param modified_by_user:
:type modified_by_user: ~energycap.sdk.models.UserChild
:param modified_date: The date and time of the most recent modification
:type modified_date: datetime
:param last_run_date: Last time the report distribution was run
:type last_run_date: datetime
:param next_run_date: Next time the report distribution will run
:type next_run_date: datetime
:param enabled: Indicates if the report distribution is currently enabled
:type enabled: bool
:param specific_report_id: The id of the report being distributed
:type specific_report_id: int
:param base_report:
:type base_report: ~energycap.sdk.models.ReportChild
:param email_settings:
:type email_settings:
~energycap.sdk.models.ReportDistributionEmailSettings
"""
_attribute_map = {
'report_distribution_id': {'key': 'reportDistributionId', 'type': 'int'},
'report_distribution_name': {'key': 'reportDistributionName', 'type': 'str'},
'created_by_user': {'key': 'createdByUser', 'type': 'UserChild'},
'created_date': {'key': 'createdDate', 'type': 'iso-8601'},
'modified_by_user': {'key': 'modifiedByUser', 'type': 'UserChild'},
'modified_date': {'key': 'modifiedDate', 'type': 'iso-8601'},
'last_run_date': {'key': 'lastRunDate', 'type': 'iso-8601'},
'next_run_date': {'key': 'nextRunDate', 'type': 'iso-8601'},
'enabled': {'key': 'enabled', 'type': 'bool'},
'specific_report_id': {'key': 'specificReportId', 'type': 'int'},
'base_report': {'key': 'baseReport', 'type': 'ReportChild'},
'email_settings': {'key': 'emailSettings', 'type': 'ReportDistributionEmailSettings'},
}
def __init__(self, *, report_distribution_id: int=None, report_distribution_name: str=None, created_by_user=None, created_date=None, modified_by_user=None, modified_date=None, last_run_date=None, next_run_date=None, enabled: bool=None, specific_report_id: int=None, base_report=None, email_settings=None, **kwargs) -> None:
super(ReportDistributionDetailsResponse, self).__init__(**kwargs)
self.report_distribution_id = report_distribution_id
self.report_distribution_name = report_distribution_name
self.created_by_user = created_by_user
self.created_date = created_date
self.modified_by_user = modified_by_user
self.modified_date = modified_date
self.last_run_date = last_run_date
self.next_run_date = next_run_date
self.enabled = enabled
self.specific_report_id = specific_report_id
self.base_report = base_report
self.email_settings = email_settings | PypiClean |
/CloeePy-0.0.2.tar.gz/CloeePy-0.0.2/README.md | # CloeePy
Mini Python framework for backend jobs and such. Avoids the HTTP riffraff when you're
not building a web system.
CloeePy uses YAML configuration files, which integrates better with Kubernetes'
ConfigMaps.
**This project is currently in alpha.**
## System Requirements
- Unix-based operating system
- Python 3.3+
## Installation
`pip install CloeePy`
## Configuration
Please see the [example configuration](./example-config.yml) for details of how to configure CloeePy.
A minimal configuration would be:
```
# config.yml
# CloeePy Framework and plugins configuration listed under CloeePy key
CloeePy:
Logging:
formatter: text
level: debug
Plugins: {}
# Your application specific configurations can go here
CustomVar: custom_value
```
## Usage
Using CloeePy is simple. Just import the framework, tell CloeePy where your config file
is located, and use the plugins that are attached to the application object.
With programs consisting of multiple modules, you can access the CloeePy instance
by re-instantiating it via `app = CloeePy()`. The CloeePy instance is a singleton,
so it will only ever be instantiated once per process.
The only plugin that comes packaged with CloeePy (at this point) is the logger.
```
# main.py
from cloeepy import CloeePy
if __name__ == "__main__":
# Required: set config path as environment variable
os.environ["CLOEEPY_CONFIG_PATH"] = /path/to/config.yml
# instantiate application instance
app = CloeePy()
# write a log entry to stdout
app.log.info("Hello World!")
```
## Background
This package is brought to you by the engineering team at Cloee. We build
Artificial Intelligence for DevOps and Infrastructure as a Service. Many of our
systems run as background jobs, and we weren't quite happy with existing Python
frameworks - as most are designed for building web systems (Django, Flask, Tornado, etc).
Our requirements were:
**Simple, easy-to-use framework for developing non-HTTP backend systems**
We write a lot of cron jobs and message-driven systems, so we don't need request
handling functionality, which is the focus of most existing frameworks.
**Singleton application context that can be accessed from anywhere**
We needed an application context containing configuration, database connections, other
useful stuff, that can be easily accessed from anywhere in our application.
Application context for a CloeePy app is a singleton that can be instantiated
anywhere in the application without running the risk of re-reading configuration
files or overwriting existing database connections.
**YAML driven configuration files**
Most popular python frameworks use python modules as configuration files. Although it's
convenient to do so in many situations, most of our systems run as containers on
Kubernetes. YAML has become the de-facto configuration format for many modern
applications, and Kuberenetes supports YAML-based ConfigMaps that can be added to
a container at startup time.
**Configuration object, NOT configuration dictionary**
Okay, this is a nit-picky one. But when you have deeply nested configurations,
isn't it annoying when all of your configuration data is stored as a Python dictionary?
Wouldn't dot accessors to your configuration data be a lot prettier and easy to
read/write? We think so. Therefore, any dictionaries in your configuration files
are turned into generic Python objects, so you can use the dot accessors like this:
`config.key1.key2.key3`
instead of this:
`config[key1][key2][key3]`.
Nonetheless, if you REALLY like dictionary access, you still have access to
your configuration as a dictionary.
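For illustration only (the attribute names below are assumptions, not CloeePy's
documented API), accessing the minimal configuration shown above might look like:

```
# hypothetical sketch; attribute names are assumptions
app = CloeePy()
value = app.config.CustomVar  # dot access to the custom config value
```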
**Extensible via plugins**
You can extend CloeePy by creating plugins. Plugins allow you to create
anything you want and attach it to the application context. This is particularly
useful for managing database connections or sharing common data/objects
throughout your application.
## Maintainers
Scott Crespo (@scottcrespo)
## Contributing
If you would like to contribute, please read the [Contributor's Guide](./CONTRIBUTING.md)
| PypiClean |
/ImSwitchUC2-2.1.0.tar.gz/ImSwitchUC2-2.1.0/imswitch/imcontrol/view/guitools/ViewSetupInfo.py | from dataclasses import dataclass, field
from typing import Dict, List, Optional, Union
from imswitch.imcontrol.model import SetupInfo
@dataclass(frozen=True)
class ROIInfo:
x: int
""" Starting X position of ROI, in pixels. """
y: int
""" Starting Y position of ROI, in pixels. """
w: int
""" Width of ROI, in pixels. """
h: int
""" Height of ROI, in pixels. """
@dataclass(frozen=True)
class LaserPresetInfo:
value: float
""" Laser value. """
@dataclass(frozen=True)
class PositionerPresetInfo:
    value: float
    """ Positioner value. """
@dataclass
class ViewSetupInfo(SetupInfo):
""" This is the object represented by the hardware configuration JSON file.
All fields are optional, unless explicitly otherwise specified. """
# Quotes around type hints seem to be required for proper linking in the hardware control docs
rois: Dict[str, 'ROIInfo'] = field(default_factory=dict)
""" Additional ROIs available to select in detector settings. """
laserPresets: Dict[str, Dict[str, 'LaserPresetInfo']] = field(default_factory=dict)
""" Laser presets available to select (map preset name -> laser name ->
LaserPresetInfo). """
positionerPresets: Dict[str, Dict[str, 'PositionerPresetInfo']] = field(default_factory=dict)
defaultLaserPresetForScan: Optional[str] = field(default_factory=lambda: None)
""" Default laser preset for scanning. """
availableWidgets: Union[List[str], bool] = field(default_factory=list)
""" Which widgets to load. The following values are possible to include
(case sensitive):
- ``Settings`` (detector settings widget)
- ``View`` (image controls widget)
- ``Recording`` (recording widget)
- ``Image`` (image display widget)
- ``FocusLock`` (focus lock widget; requires ``focusLock`` field to be
defined)
- ``Autofocus`` (autofocus widget; requires ``focusLock`` field to be
defined)
- ``SLM`` (SLM widget; requires ``slm`` field to be defined)
- ``SIM`` (SIM widget; requires ``sim`` field to be defined)
- ``Laser`` (laser control widget)
- ``Positioner`` (positioners widget)
- ``Scan`` (scan widget; requires ``scan`` field to be defined)
- ``BeadRec`` (bead reconstruction widget)
- ``AlignAverage`` (axial alignment tool widget)
- ``AlignXY`` (rotation alignment tool widget)
- ``AlignmentLine`` (line alignment tool widget)
- ``uLenses`` (uLenses tool widget; requires ``Image`` widget)
- ``FFT`` (FFT tool widget)
- ``Console`` (Python console widget)
- ``EtSTED`` (etSTED widget; requires ``etSTED`` field to be defined)
- ``Rotator`` (Rotator widget; requires "Rotator" field to be defined)
- ``RotationScan`` (Rotation scan widget; requires "Rotator" field to be defined)
- ``MotCorr`` (Leica motorized correction collar widget; requires "leicastand" rs232 device to be defined)
You can also set this to ``true`` to enable all widgets, or ``false`` to
disable all widgets.
This field is required.
"""
def setROI(self, name, x, y, width, height):
""" :meta private: """
self.rois[name] = ROIInfo(x=x, y=y, w=width, h=height)
def removeROI(self, name):
""" :meta private: """
try:
del self.rois[name]
except KeyError:
pass
def setLaserPreset(self, name, laserPresetInfos):
""" :meta private: """
self.laserPresets[name] = laserPresetInfos
def setPositionerPreset(self, name, positionerPresetInfos):
""" :meta private: """
        self.positionerPresets[name] = positionerPresetInfos
def removeLaserPreset(self, name):
""" :meta private: """
try:
del self.laserPresets[name]
if self.defaultLaserPresetForScan == name:
self.setDefaultLaserPresetForScan(None)
except KeyError:
pass
def setDefaultLaserPresetForScan(self, presetNameOrNone):
""" :meta private: """
self.defaultLaserPresetForScan = presetNameOrNone
def hasWidget(self, widget):
""" :meta private: """
return self.availableWidgets is True or (
isinstance(self.availableWidgets, list) and widget in self.availableWidgets
)
# Copyright (C) 2020-2021 ImSwitch developers
# This file is part of ImSwitch.
#
# ImSwitch is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ImSwitch is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. | PypiClean |
/netket-3.9.2.tar.gz/netket-3.9.2/netket/driver/vmc.py |
from typing import Optional
import jax
import jax.numpy as jnp
from textwrap import dedent
from inspect import signature
from netket.utils.types import PyTree
from netket.operator import AbstractOperator
from netket.stats import Stats
from netket.vqs import MCState
from netket.optimizer import (
identity_preconditioner,
PreconditionerT,
_DeprecatedPreconditionerSignature,
)
from .vmc_common import info
from .abstract_variational_driver import AbstractVariationalDriver
class VMC(AbstractVariationalDriver):
"""
Energy minimization using Variational Monte Carlo (VMC).
"""
def __init__(
self,
hamiltonian: AbstractOperator,
optimizer,
*args,
variational_state=None,
preconditioner: PreconditionerT = identity_preconditioner,
**kwargs,
):
"""
Initializes the driver class.
Args:
hamiltonian: The Hamiltonian of the system.
optimizer: Determines how optimization steps are performed given the
bare energy gradient.
preconditioner: Determines which preconditioner to use for the loss gradient.
This must be a tuple of `(object, solver)` as documented in the section
`preconditioners` in the documentation. The standard preconditioner
included with NetKet is Stochastic Reconfiguration. By default, no
preconditioner is used and the bare gradient is passed to the optimizer.
"""
if variational_state is None:
variational_state = MCState(*args, **kwargs)
if variational_state.hilbert != hamiltonian.hilbert:
raise TypeError(
dedent(
f"""the variational_state has hilbert space {variational_state.hilbert}
(this is normally defined by the hilbert space in the sampler), but
the hamiltonian has hilbert space {hamiltonian.hilbert}.
The two should match.
"""
)
)
super().__init__(variational_state, optimizer, minimized_quantity_name="Energy")
self._ham = hamiltonian.collect() # type: AbstractOperator
self.preconditioner = preconditioner
self._dp: PyTree = None
self._S = None
self._sr_info = None
@property
def preconditioner(self):
"""
The preconditioner used to modify the gradient.
This is a function with the following signature
.. code-block:: python
precondtioner(vstate: VariationalState,
grad: PyTree,
step: Optional[Scalar] = None)
Where the first argument is a variational state, the second argument
is the PyTree of the gradient to precondition and the last optional
argument is the step, used to change some parameters along the
optimisation.
Often, this is taken to be :func:`nk.optimizer.SR`. If it is set to
`None`, then the identity is used.
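        As an illustrative sketch (not taken from the NetKet documentation),
        a custom preconditioner that simply returns the bare gradient could
        be defined and assigned as:

        .. code-block:: python

            def my_preconditioner(vstate, grad, step=None):
                # identity: return the gradient unchanged
                return grad

            driver.preconditioner = my_preconditioner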
"""
return self._preconditioner
@preconditioner.setter
def preconditioner(self, val: Optional[PreconditionerT]):
if val is None:
val = identity_preconditioner
if len(signature(val).parameters) == 2:
val = _DeprecatedPreconditionerSignature(val)
self._preconditioner = val
def _forward_and_backward(self):
"""
        Performs a single forward and backward pass: computes the local energy
        estimator and its gradient, then applies the preconditioner to obtain
        the parameter update.
"""
self.state.reset()
# Compute the local energy estimator and average Energy
self._loss_stats, self._loss_grad = self.state.expect_and_grad(self._ham)
# if it's the identity it does
# self._dp = self._loss_grad
self._dp = self.preconditioner(self.state, self._loss_grad, self.step_count)
# If parameters are real, then take only real part of the gradient (if it's complex)
self._dp = jax.tree_map(
lambda x, target: (x if jnp.iscomplexobj(target) else x.real),
self._dp,
self.state.parameters,
)
return self._dp
@property
def energy(self) -> Stats:
"""
Return MCMC statistics for the expectation value of observables in the
current state of the driver.
"""
return self._loss_stats
def __repr__(self):
return (
"Vmc("
+ f"\n step_count = {self.step_count},"
+ f"\n state = {self.state})"
)
def info(self, depth=0):
lines = [
f"{name}: {info(obj, depth=depth + 1)}"
for name, obj in [
("Hamiltonian ", self._ham),
("Optimizer ", self._optimizer),
("Preconditioner ", self.preconditioner),
("State ", self.state),
]
]
return "\n{}".format(" " * 3 * (depth + 1)).join([str(self)] + lines) | PypiClean |
/CL_Auto_Library-1.1.5.3-py3-none-any.whl/CL_Auto_Library/CL_Auto_Keywords.py | import re
import openpyxl
import os
import psutil
import win32com.client
import unicodedata
from datetime import datetime, timedelta
import time
from random import randint
from docx import Document
from openpyxl.workbook import Workbook
from openpyxl.utils import column_index_from_string
def EmailFormatValidator(email):
"""
Validates if an email address (parameter: email) is in the correct email format and returns a
PASS (email in correct format) or FAIL (email not in correct format)
Robot Framework Usage Example:
${PASS_FAIL}= Email Format Validator [email protected]
"""
if re.match("[\.\w]{2,}[@]\w+[.]\w+",email,re.IGNORECASE):
return "PASS"
else:
return "FAIL"
def ContainsOnlyDigits(str_value):
"""
Validates if a string value (parameter: str_value) contains only digits
Returns PASS (str_value contains only digits) or FAIL (str_value does not
contain only digits)
Robot Framework Usage Example:
${PASS_FAIL}= Contains Only Digits 5920782573
"""
# Using regex()
if re.match('^[0-9]*$', str_value):
return 'PASS'
else:
return 'FAIL'
def ConvertStringToList(string):
"""
    Converts a string (parameter: string) of values into an array to be used in a list variable
Note: each value in the string parameter must be separated by a space, example: A B C D
Robot Framework Usage Example:
@{LIST_VARIABLE}= Convert String To List A B C D
"""
ConvLst = list(string.split(" "))
return ConvLst
def GetValueInString(value, str):
"""
Returns all occurrences of a value (parameter: value) contained within a string (parameter: str) or
returns FAIL if the string value (parameter: value) is not contained within the string (parameter: str).
Note: This function is not case sensitive. The matched value return is in lower case.
Robot Framework Usage Example:
${return_value}= Get Value In String is This is my string
"""
value = value.lower()
str = str.lower()
Match = re.findall(value, str)
if Match:
return Match
else:
return 'FAIL'
def StringFormatValidator(field_format, field_value):
"""
Returns PASS if the field_value (parameter: field_value) matches a specified field Regex format (parameter: field_format)
Returns FAIL if the field_value (parameter: field_value) does not match a specified field Regex format (parameter: field_format)
Robot Framework Usage Example:
${PASS_FAIL}= String Format Validator ^[0-9]{6}-[0-9]{1}$ 848567-0
Note: must be a string equal to any 6 digits (from 0 to 9) dash any one digit (from 0 to 9) :
016349-0 ; 999999-9 ; 000000-0
"""
Regex_Match = re.match(field_format, field_value)
if Regex_Match:
return 'PASS'
else:
return 'FAIL'
def GetStringPosition(str,fnd):
"""
    Returns the zero-based index (actual position - 1) of the first occurrence of a string (parameter: fnd) within the string value passed in (parameter: str)
Note: This function is not case sensitive
Robot Framework Usage Example:
${return_value}= Get String Position This is my string my
"""
str = str.lower()
fnd = fnd.lower()
    return str.find(fnd)
def GetUniqueItems(list_of_values):
"""
Returns the unique list of values from parameter list_of_values (i.e. list variable)
Note: This function is case sensitive
Robot Framework Usage Example:
@{LIST_OF_VALUES}= Convert String To List two three one five one two
@{UNIQUE_ITEMS}= Get Unique Items ${LIST_OF_VALUES}
"""
    unique_items = []
    for x in list_of_values:
        if x not in unique_items:
            unique_items.append(x)
    return unique_items
def CountOccurrenceOfValueInString(string,sub_string):
"""
Returns the count of the number of times a value (parameter: sub_string) appears in a string (parameter: string)
Note: This function is not case sensitive
Robot Framework Usage Example:
${count_occurence}= Count Occurrence Of Value In String One TWO three one five Two two
"""
string = string.lower()
sub_string = sub_string.lower()
l=len(sub_string)
count=0
for i in range(len(string)-len(sub_string)+1):
if(string[i:i+len(sub_string)] == sub_string ):
count+=1
return count
def KillProcess(process_name):
"""
Kill the process name (parameter: process_name) passed in
Robot Framework Usage Example:
Kill Process chromedriver.exe
"""
# Iterate over the all the running process
for proc in psutil.process_iter():
try:
# Check if process name contains the given name string (process_name).
if process_name.lower() in proc.name().lower():
proc.kill()
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
def RemoveSpecialCharacters(str):
"""
Removes special characters, including spaces, from a string (parameter: str)
and returns the string
Robot Framework Usage Example:
${special_char_removed}= Remove Special Characters ${sring_variable}
"""
alphanumeric = ""
for character in str:
if character.isalnum():
alphanumeric += character
return alphanumeric
def CreateNewWorkbook(filename, sheetname, headings):
"""
Creates a workbook object that will be saved at the filename path (parameter: filename) passed in
Update the default worksheet name "Sheet" to the value passed in to parameter: sheetname
Add the headings to row 1 that are passed in to parameter: headings
Robot Framework Usage Example:
Create New Workbook ${workbook_filename_path} ${sheetname} ${headings_list}
"""
wb = Workbook() # Create workbook
#Change default worksheet name to the value contained in parameter: sheetname
ws=wb.get_sheet_by_name('Sheet')
ws.title = sheetname
# save the workbook
wb.save(filename)
# Add the headings to the worksheet
wb = openpyxl.load_workbook(filename)
ws = wb.get_sheet_by_name(sheetname)
heading_array = headings.split(";")
num_headings = len(heading_array)
for x in range(0, num_headings):
ws.cell(row=1, column=x+1).value = heading_array[x]
# save the workbook
wb.save(filename)
def OpenWorkbook(filename):
"""
Opens an excel workbook (parameter: filename, which includes the filename path)
Robot Framework Usage Example:
Open Workbook ${workbook_filename_path}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
wb = openpyxl.load_workbook(filename)
for sheet in wb:
print(sheet.title)
def GetDataRowCount(filename, sheetname) :
"""
Returns the number of rows in a particular worksheet name (parameter: sheetname) of an
excel file (parameter: filename, which includes the filename path)
Robot Framework Usage Example:
${row_count}= Get Data Row Count ${workbook_filename_path} ${worksheet_name}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
workbook = openpyxl.load_workbook(filename)
worksheet = workbook.get_sheet_by_name(sheetname)
row_count = worksheet.max_row-1
return row_count
def GetDataByRowIndex(excel_row, filename, sheetname) :
"""
Returns a row of data (into a list variable) from an excel file (parameter: filename) worksheet (parameter: sheetname) for the excel row index
(parameter: excel_row, which is the excel worksheet row number)
Robot Framework Usage Example:
@{DATA_ROW}= Get Data By Row Index ${excel_row_index_variable} ${workbook_filename_path} ${worksheet_name}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
workbook = openpyxl.load_workbook(filename)
worksheet = workbook.get_sheet_by_name(sheetname)
data_row = []
excel_row = int(excel_row)
for row in worksheet.iter_rows(min_row=excel_row, max_row=excel_row):
for cell in row:
#Append column values to the data row list
data_row.append(cell.value)
return data_row # return the row of test data
def GetNextAvailableDataRow(filename, sheetname, used_col_letter):
"""
Returns the next available row of data (into a list variable) that is not marked as 'Used' in the column letter (example: column A,B,C,etc.)
that is passed in from the excel file (parameter: filename) worksheet (parameter: sheetname)
Robot Framework Usage Example:
@{DATA_ROW}= Get Next Available Data Row ${workbook_filename_path} ${worksheet_name} ${available_col_letter}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
wb = openpyxl.load_workbook(filename)
ws = wb.get_sheet_by_name(sheetname)
data_row = []
excel_col_number = column_index_from_string(used_col_letter)
i = 1
for row in ws.iter_rows(min_row=1, max_row=ws.max_row):
i = i + 1
if ws.cell(i, excel_col_number).value != "Used":
available_row = i
break # exit for loop
for row in ws.iter_rows(min_row=available_row, max_row=available_row):
for cell in row:
#Append column values to the data row list
data_row.append(cell.value)
ws.cell(row=available_row, column=excel_col_number).value = "Used" # Update 'Available Row' column cell value to 'Used'
wb.save(filename) # Save the workbook
return data_row # return the row of test data
def GetAllDataFromExcelSheet(fileName, sheetname) :
"""
Returns all of the rows of data (into a list variable) from a particular excel file sheetname
Robot Framework Usage Example:
@{WORKSHEET_DATA}= Get All Data From Excel Sheet ${workbook_filename_path} ${worksheet_name}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
workbook = openpyxl.load_workbook(fileName)
worksheet = workbook.get_sheet_by_name(sheetname)
rowEndIndex = worksheet.max_row
rowStartIndex = 2 # Start on worksheet row 2, excludes headings row
data_row = []
for row in worksheet.iter_rows(min_row=rowStartIndex, max_row=rowEndIndex):
for cell in row:
# Append column values to the data row list
data_row.append(cell.value)
return data_row
def WriteToExcelFile(filename, sheetname, data_value, row_index, col_index):
"""
Write a value into a cell in the excel file (parameter: filename) worksheet (parameter: sheetname)
row number (parameter: row_index) and column number (parameter: col_index)
Robot Framework Usage Example:
Write To Excel File ${workbook_filename_path} ${worksheet_name} ${data_value} ${row_index} ${col_index}
Note: ${workbook_filename_path} variable value example = ${EXECDIR}\\Data\\Project_Data_File.xlsx
"""
wb = openpyxl.load_workbook(filename)
ws = wb.get_sheet_by_name(sheetname)
r = int(row_index)
c = int(col_index)
ws.cell(row=r, column=c).value = ''
ws.cell(row=r, column=c).value = data_value #enter the data_value in cell row=r and column=c
wb.save(filename) # Save the workbook
def ReplaceAccents(text):
"""
Replaces French accents in a string (parameter: text) with the same characters but without the accents
and returns the string
Robot Framework Usage Example:
${string_without_accents}= Replace Accents ${text}
"""
try:
text = unicode(text, 'utf-8')
except NameError: # unicode is a default on python 3
pass
text = unicodedata.normalize('NFD', text)\
.encode('ascii', 'ignore')\
.decode("utf-8")
return str(text)
def ReadWordFile(word_file_path):
"""
    Reads the first paragraph of text from a MS Word file (parameter: word_file_path) and returns that text
Robot Framework Usage Example:
${word_file_text}= Read Word File ${word_file_path}
Note: ${word_file_path} variable value example = ${EXECDIR}\\Data\\word_file_name.docx
"""
document = Document(word_file_path)
for para in document.paragraphs:
return (para.text)
def RandomNumber(num_digits):
"""
Generates a random number having the number of digits inputted in parameter "num_digits"
Robot Framework Usage Example:
${random_number}= Random Number ${number_digits}
"""
n = int(num_digits)
range_start = 10**(n-1)
range_end = (10**n)-1
return randint(range_start, range_end)
def RunRFTestSuite(project_dir, ts_name, ts_subfolders, browser_var, lang_var, run_type_var, tc_row_index_var):
"""
Runs a Robot Framework Test Suite and moves the test result files to the Reports
folder relevant to the project main folder (parameter: project_dir) for a given
test suite (parameters: ts_name), test suite subfolders (parameter: ts_subfolders), and variables
(parameters: browser_var, lang_var, run_type_var, tc_row_index_var)
Note: Based on the Robot Framework Project Folder template structure
Robot Framework Usage Example:
Run RF Test Suite ${EXECDIR} TS_01_US01_Register_Non-CDN_Organization Test_Suites\\TS_01_P@I_Register_Your_Organization chrome en ${test_case_row_index}
"""
current_date_time = datetime.now()
timestamp = current_date_time.strftime("%Y%m%d")
a = "robot -d "
b = "Reports" + "\\" + "\\" + ts_name + "-" + timestamp + " --timestampoutputs -r "
c = "_reports.html -o "
d = "_output.xml -l "
e = "_log.html "
f = "--variable browser:" + browser_var + " --variable lang:" + lang_var + " --variable run_type:" + run_type_var + " --variable test_case_row_index:" + tc_row_index_var + " "
g = ts_subfolders + "\\" + ts_name + ".robot"
cmds = a + b + ts_name + c + ts_name + d + ts_name + e + f + g
print(cmds)
    os.chdir(project_dir)  # change the working directory; os.system("cd ...") would not persist across calls
os.system(cmds) | PypiClean |
/DrissionPage-3.2.31.tar.gz/DrissionPage-3.2.31/README.md | # ✨️ Overview
DrissionPage is a Python-based web automation tool.
It can control browsers, send and receive data packets, and even combine the two.
It offers both the convenience of browser automation and the efficiency of requests.
It is powerful, with countless user-friendly designs and convenience features built in.
Its syntax is concise and elegant, requires little code, and is beginner friendly.
---
<a href='https://gitee.com/g1879/DrissionPage/stargazers'><img src='https://gitee.com/g1879/DrissionPage/badge/star.svg?theme=dark' alt='star'></img></a> <a href='https://gitee.com/g1879/DrissionPage/members'><img src='https://gitee.com/g1879/DrissionPage/badge/fork.svg?theme=dark' alt='fork'></img></a>
Project home: [gitee](https://gitee.com/g1879/DrissionPage) | [github](https://github.com/g1879/DrissionPage)
Your star is the greatest support for me 💖
---
Supported systems: Windows, Linux, Mac
Python version: 3.6 and above
Supported browsers: Chromium-based browsers (such as Chrome and Edge), Electron apps
---
**📖 Documentation:** [click to view](http://g1879.gitee.io/drissionpagedocs)
**QQ discussion groups:** 897838127 [full], 558778073
---
# 🔥 Upcoming Version Preview
See the next development plan: [upcoming version preview](http://g1879.gitee.io/drissionpagedocs/whatsnew/3_3/)
---
# 📕 Background
When using requests for data collection on sites that require login, you have to analyse data packets and JS source code, construct complex requests, and often deal with anti-scraping measures such as CAPTCHAs, JS obfuscation and signed parameters; the entry barrier is high and development efficiency is low.
Using a browser largely sidesteps these pitfalls, but browsers do not run efficiently.
Therefore, the original intent of this library is to merge the two, achieving both "fast to write" and "fast to run". It can switch to the appropriate mode as needs change, and provides a user-friendly way of working that improves development and runtime efficiency.
Beyond merging the two, this library also encapsulates common functionality at the level of web pages, offering very simple operations and statements so users can think less about details and focus on implementing functionality. Powerful features are achieved in a simple way, making the code more elegant.
Earlier versions were implemented by re-wrapping selenium. Starting from 3.0, the author started from scratch, redeveloped the lower layers, removed the dependency on selenium, enhanced functionality and improved runtime efficiency.
---
# 💡 Philosophy
Concise! Easy to use! Convenient!
---
# ☀️ Features and Highlights
The author has accumulated experience through long-term practice and countless pitfalls, and all of it has been written into this library.
## 🎇 Powerful self-developed core
This library uses a fully self-developed core with many practical functions built in, integrating and optimising commonly used features. Compared with selenium, it has the following advantages:
- No webdriver fingerprint
- No need to download different drivers for different browser versions
- Faster execution
- Can find elements across `<iframe>`s without switching in and out
- Treats an `<iframe>` as an ordinary element; once obtained, elements can be searched inside it directly, giving clearer logic
- Can operate multiple browser tabs at the same time, even inactive ones, without switching
- Can read the browser cache directly to save images, without clicking "save as" in the GUI
- Can take screenshots of the entire page, including parts outside the viewport (supported by browser versions 90 and above)
- Can handle shadow-roots that are not in the `open` state
## 🎇 Highlight features
In addition to the advantages above, this library has countless user-friendly designs built in.
- Minimalist syntax rules. Integrates a large number of common functions, making code more elegant
- Locating elements is easier, and more powerful and stable
- Ubiquitous waiting and automatic retry. Makes unstable networks easy to handle, programs more stable, and writing less stressful
- Provides a powerful download tool. Fast, reliable downloads are available even when operating the browser
- Allows reusing an already opened browser. No need to start the browser from scratch on every run, which makes debugging extremely convenient
- Uses an ini file to store common configuration that is loaded automatically, offering convenient settings and keeping you away from complicated configuration items
- Uses lxml as the built-in parsing engine, improving parsing speed by several orders of magnitude
- Encapsulated in the POM pattern, so it can be used directly for testing and is easy to extend
- Highly integrated convenience features, reflected in every detail
- There are many more details not listed here; you are welcome to experience them in actual use :) (a short usage sketch follows this list)
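As an illustrative sketch only (the class and method names below reflect the author's understanding of the 3.x API and should be verified against the documentation), switching between browser mode and packet mode might look like this:

```python
from DrissionPage import WebPage  # assumed 3.x entry point

page = WebPage()                              # starts in browser ("d") mode
page.get('https://gitee.com/g1879/DrissionPage')
page.change_mode()                            # assumed switch to packet ("s") mode, keeping state
page.get('https://gitee.com/g1879/DrissionPage')
print(page.title)
```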
---
# 🛠 Documentation
[Click to go to the documentation](http://g1879.gitee.io/drissionpage)
---
# 🔖 Version History
[Click to view the version history](http://g1879.gitee.io/drissionpagedocs/history/3.x/)
---
# 🖐🏻 Disclaimer
Please do not use DrissionPage in any work that may violate laws or ethical constraints. Please use DrissionPage in a friendly manner, comply with the spider agreement, and do not use DrissionPage for any illegal purpose. By choosing to use DrissionPage
you agree to this agreement; the author assumes no legal risk or liability arising from your violation of it, and all consequences are borne by you.
---
# ☕ Buy me a coffee
If this project has been helpful to you, feel free to buy the author a cup of coffee :)

| PypiClean |
/CUQIpy_PyTorch-0.2.0-py3-none-any.whl/cuqipy_pytorch/distribution.py | import torch
import cuqi
from cuqi.distribution import Distribution
import numbers
import numpy as np
class _OutOfBoundsDistribution:
""" Helper class to handle out-of-bounds values """
def log_prob(self, *args, **kwargs):
return torch.tensor(-torch.inf, dtype=torch.float32)
class HalfGaussian(Distribution):
def __init__(self, scale, is_symmetric=False, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.scale = scale
@property
def scale(self):
return self._scale
@scale.setter
def scale(self, value):
self._scale = value
self._set_dist()
def _set_dist(self):
if hasattr(self, '_scale'):
scale = self.scale
# Set scale
if isinstance(scale, numbers.Number):
scale = scale*torch.ones(self.dim)
self._dist = torch.distributions.HalfNormal(scale)
def logpdf(self, value):
if not torch.is_tensor(value):
value = torch.tensor(value)
return torch.sum(self._dist.log_prob(value))
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
class Uniform(Distribution):
""" Uniform distribution """
def __init__(self, low, high, is_symmetric=True, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.low = low
self.high = high
@property
def low(self):
return self._low
@low.setter
def low(self, low):
self._low = low
self._set_dist()
@property
def high(self):
return self._high
@high.setter
def high(self, high):
self._high = high
self._set_dist()
def _set_dist(self):
if hasattr(self, '_low') and hasattr(self, '_high'):
low, high = self.low, self.high
if isinstance(low, numbers.Number):
low = low*torch.ones(self.dim)
if isinstance(high, numbers.Number):
high = high*torch.ones(self.dim)
if isinstance(low, np.ndarray):
low = torch.tensor(low)
if isinstance(high, np.ndarray):
high = torch.tensor(high)
self._dist = torch.distributions.Uniform(low, high)
def logpdf(self, value):
if not torch.is_tensor(value):
value = torch.tensor(value)
if value < torch.tensor(self.low) or value > torch.tensor(self.high):
return torch.tensor(-torch.inf, dtype=torch.float32)
# Flip interval inclusion in logpdf, i.e. (low,high] instead of [low,high)
if value == self.low: value = torch.tensor(self.high)
elif value == self.high: value = torch.tensor(self.low)
return torch.sum(self._dist.log_prob(value))
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
class LogGaussian(Distribution):
def __init__(self, mean, cov, is_symmetric=False, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.mean = mean
self.cov = cov
@property
def mean(self):
return self._mean
@mean.setter
def mean(self, value):
self._mean = value
self._set_dist()
@property
def cov(self):
return self._cov
@cov.setter
def cov(self, value):
self._cov = value
self._set_dist()
def _set_dist(self):
if hasattr(self, '_mean') and hasattr(self, '_cov'):
mean = self.mean
cov = self.cov
# Set mean and cov to tensors if numbers
if isinstance(mean, numbers.Number):
mean = mean*torch.ones(self.dim)
if isinstance(cov, numbers.Number):
cov = cov*torch.ones(self.dim)
if torch.is_tensor(mean) and torch.is_tensor(cov):
self._dist = torch.distributions.LogNormal(mean, cov)
def logpdf(self, value):
if not torch.is_tensor(value):
value = torch.tensor(value)
#if isinstance(self._dist, Normal):
return torch.sum(self._dist.log_prob(value))
#else:
#return self._dist.log_prob(value)
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
# Create a basic Gaussian distribution wrapping pytorch
class Gaussian(Distribution):
def __init__(self, mean, cov, is_symmetric=True, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.mean = mean
self.cov = cov
@property
def mean(self):
return self._mean
@mean.setter
def mean(self, value):
self._mean = value
self._set_dist(update_only_mean=True)
@property
def cov(self):
return self._cov
@cov.setter
def cov(self, value):
self._cov = value
self._set_dist()
def _set_dist(self, update_only_mean=False):
""" Set the pytorch distribution if both values are specified """
if hasattr(self, '_mean') and hasattr(self, '_cov'):
# Define mean value
if callable(self._mean) and hasattr(self._mean, '__len__'):
mean = torch.zeros(len(self._mean))
else:
mean = self._mean
# Define covariance value
cov = self._cov
# Set mean and cov to tensors if numbers
if isinstance(mean, numbers.Number):
mean = mean*torch.ones(self.dim)
if isinstance(cov, numbers.Number):
cov = cov*torch.ones(self.dim)
if isinstance(mean, np.ndarray):
mean = torch.tensor(mean)
if isinstance(cov, np.ndarray):
cov = torch.tensor(cov)
# If both are tensors we create dist
if torch.is_tensor(mean) and torch.is_tensor(cov):
#if torch.isnan(mean).any():
# raise ValueError("mean contains NaN")
#if torch.isnan(cov).any():
# raise ValueError("cov contains NaN")
# Special update for mean value to speed-up computation
if hasattr(self, '_dist') and update_only_mean:
if cov.ndim==1:
self._dist.loc = self._mean.expand(self._dist.batch_shape)
else:
self._dist.loc = self._mean.expand(self._dist.batch_shape + (-1,))
elif cov.ndim==1: #Make i.i.d. Gaussian
sqrt_cov = torch.sqrt(cov)
if torch.isnan(sqrt_cov).any():
self._dist = _OutOfBoundsDistribution()
else:
self._dist = torch.distributions.Normal(mean, sqrt_cov)
else:
self._dist = torch.distributions.MultivariateNormal(mean, cov)
def logpdf(self, value):
if isinstance(value, np.ndarray):
value = torch.tensor(value)
if isinstance(self._dist, torch.distributions.Normal):
return torch.sum(self._dist.log_prob(value))
else:
return self._dist.log_prob(value)
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
def gradient(self, v1, v2=None):
if v2 is None: #Prior case
v1.requires_grad = True
v1.grad = None
Q = self.logpdf(v1) # Forward pass
Q.backward() # Backward pass
return v1.grad
else: #Likelihood case
v2.requires_grad = True
v2.grad = None
Q = self(v2).logpdf(v1) # Forward pass
Q.backward() # Backward pass
return v2.grad
# Create a basic Gaussian distribution wrapping pytorch
class Gamma(Distribution):
def __init__(self, shape, rate, is_symmetric=False, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.shape = shape
self.rate = rate
@property
def shape(self):
return self._shape
@shape.setter
def shape(self, value):
if not torch.is_tensor(value):
value = torch.tensor([value], dtype=torch.float32)
self._shape = value
self._set_dist()
@property
def rate(self):
return self._rate
@rate.setter
def rate(self, value):
if not torch.is_tensor(value):
value = torch.tensor([value], dtype=torch.float32)
self._rate = value
self._set_dist()
def _set_dist(self):
""" Set the pytorch distribution if both values are specified """
if hasattr(self, '_shape') and hasattr(self, '_rate'):
# Define shape value
shape = self._shape
# Define rate value
rate = self._rate
# If both are tensors we create dist
if torch.is_tensor(shape) and torch.is_tensor(rate):
self._dist = torch.distributions.Gamma(shape, rate)
def logpdf(self, value):
if not torch.is_tensor(value):
value = torch.tensor([value], dtype=torch.float32)
# Check if value is negative
if value.min() <= 0:
return -float('inf')
return torch.sum(self._dist.log_prob(value))
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
class StackedJointDistribution(cuqi.distribution._StackedJointDistribution, Distribution):
def logpdf(self, x):
# Cache x
self._x_np = x
# Convert x to tensor
self._x = torch.tensor(x, requires_grad=True)
# Evaluate logpdf (and cache)
self._logpdf = self.logd(self._x)
# Return as numpy
return self._logpdf.detach().numpy()
def gradient(self, x):
if hasattr(self, '_x_np') and self._x_np is x:
self._logpdf.backward()
return self._x.grad.detach().numpy()
else:
self.logpdf(x)
return self.gradient(x)
def _sample(self, Ns):
pass
# Create a Gaussian distribution wrapping pytorch, parameterized by the square root of the covariance (scale_tril)
class Gaussian2(Distribution):
def __init__(self, mean, sqrtcov, is_symmetric=True, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.mean = mean
self.sqrtcov = sqrtcov
@property
def mean(self):
return self._mean
@mean.setter
def mean(self, value):
self._mean = value
self._set_dist(update_only_mean=True)
@property
def sqrtcov(self):
return self._sqrtcov
@sqrtcov.setter
def sqrtcov(self, value):
self._sqrtcov = value
self._set_dist()
def _set_dist(self, update_only_mean=False):
""" Set the pytorch distribution if both values are specified """
if hasattr(self, '_mean') and hasattr(self, '_sqrtcov'):
# Define mean value
if callable(self._mean) and hasattr(self._mean, '__len__'):
mean = torch.zeros(len(self._mean))
else:
mean = self._mean
# Define covariance value
sqrtcov = self._sqrtcov
# If both are tensors we create dist
if torch.is_tensor(mean) and torch.is_tensor(sqrtcov):
# Special update for mean value to speed-up computation
if hasattr(self, '_dist') and update_only_mean:
if sqrtcov.ndim==1:
self._dist.loc = self._mean.expand(self._dist.batch_shape)
else:
self._dist.loc = self._mean.expand(self._dist.batch_shape + (-1,))
elif sqrtcov.ndim==1: #Make i.i.d. Gaussian
if torch.isnan(sqrtcov).any():
self._dist = _OutOfBoundsDistribution()
else:
self._dist = torch.distributions.Normal(mean, sqrtcov)
else:
self._dist = torch.distributions.MultivariateNormal(mean, scale_tril=sqrtcov)
def logpdf(self, value):
if isinstance(self._dist, torch.distributions.Normal):
return torch.sum(self._dist.log_prob(value))
else:
return self._dist.log_prob(value)
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
def gradient(self, v1, v2=None):
if v2 is None: #Prior case
v1.requires_grad = True
v1.grad = None
Q = self.logpdf(v1) # Forward pass
Q.backward() # Backward pass
return v1.grad
else: #Likelihood case
v2.requires_grad = True
v2.grad = None
Q = self(v2).logpdf(v1) # Forward pass
Q.backward() # Backward pass
return v2.grad
class Laplace(Distribution):
"""Laplace distribution constructed via torch.distributions.Laplace.
Creates a Laplace distribution with location `loc` and scale `scale`. The pdf is given by
.. math::
f(x) = \\frac{1}{2 \\sigma} \\exp \\left( - \\frac{|x - \\mu|}{\\sigma} \\right)
where :math:`\\mu` is the location and :math:`\\sigma` is the scale.
Parameters
----------
loc : float, ndarray or torch.tensor
Location parameter
scale : float, ndarray or torch.tensor
Scale parameter
"""
def __init__(self, location, scale, is_symmetric=True, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.location = location
self.scale = scale
@property
def location(self):
return self._location
@location.setter
def location(self, value):
self._location = value
self._set_dist()
@property
def scale(self):
return self._scale
@scale.setter
def scale(self, value):
self._scale = value
self._set_dist()
def _set_dist(self):
""" Set the pytorch distribution if both values are specified """
if hasattr(self, '_location') and hasattr(self, '_scale'):
# Define location value
loc = self.location
if isinstance(loc, numbers.Number):
loc = loc*torch.ones(self.dim)
if isinstance(loc, np.ndarray):
loc = torch.tensor(loc)
# Define scale value
scale = self._scale
if isinstance(scale, numbers.Number):
scale = scale*torch.ones(self.dim)
if isinstance(scale, np.ndarray):
scale = torch.tensor(scale)
# If both are tensors we create dist
if torch.is_tensor(loc) and torch.is_tensor(scale):
self._dist = torch.distributions.Laplace(loc, scale)
def logpdf(self, value):
return torch.sum(self._dist.log_prob(value))
def _sample(self, n):
return self._dist.sample(torch.Size((n,)))
class Cauchy(Distribution):
"""Cauchy distribution constructed via torch.distributions.Cauchy.
Creates a Cauchy distribution with location `loc` and scale `scale`. The pdf is given by
.. math::
f(x) = \\frac{1}{\\pi \\sigma (1 + ((x - \\mu)/\\sigma)^2)}
where :math:`\\mu` is the location and :math:`\\sigma` is the scale.
Parameters
----------
loc : float, ndarray or torch.tensor
Location parameter
scale : float, ndarray or torch.tensor
Scale parameter
"""
def __init__(self, location, scale, is_symmetric=True, **kwargs):
super().__init__(is_symmetric=is_symmetric, **kwargs)
self.location = location
self.scale = scale
@property
def location(self):
return self._location
@location.setter
def location(self, value):
self._location = value
self._set_dist()
@property
def scale(self):
return self._scale
@scale.setter
def scale(self, value):
self._scale = value
self._set_dist()
def _set_dist(self):
""" Set the pytorch distribution if both values are specified """
if hasattr(self, '_location') and hasattr(self, '_scale'):
# Define location value
loc = self.location
if isinstance(loc, numbers.Number):
loc = loc*torch.ones(self.dim)
if isinstance(loc, np.ndarray):
loc = torch.tensor(loc)
# Define scale value
scale = self._scale
if isinstance(scale, numbers.Number):
scale = scale*torch.ones(self.dim)
if isinstance(scale, np.ndarray):
scale = torch.tensor(scale)
# If both are tensors we create dist
if torch.is_tensor(loc) and torch.is_tensor(scale):
self._dist = torch.distributions.Cauchy(loc, scale)
def logpdf(self, value):
return torch.sum(self._dist.log_prob(value))
def _sample(self, n):
return self._dist.sample(torch.Size((n,))) | PypiClean |
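# ---------------------------------------------------------------------------
# Possible usage sketch (illustrative only, not part of the original module):
# shows how the torch-backed wrappers above can be used -- construct them from
# plain numbers or tensors, evaluate logpdf, and obtain gradients through
# torch autograd. It assumes the cuqi-style Distribution base class can be
# instantiated without further required arguments; pass a geometry/name
# keyword if your setup requires one.
if __name__ == "__main__":
    import torch
    # Gamma: scalar shape/rate are converted to tensors by the setters
    g = Gamma(shape=2.0, rate=3.0)
    print(g.logpdf(1.5))                 # scalar torch tensor
    print(g.logpdf(-1.0))                # -inf, values must be positive
    # Gaussian parameterized by the square root of the covariance
    x = Gaussian2(mean=torch.zeros(3), sqrtcov=torch.ones(3))
    v = torch.zeros(3)
    print(x.logpdf(v))                   # sum of element-wise Normal log-probs
    print(x.gradient(v))                 # autograd gradient of logpdf w.r.t. v
    # Laplace with explicit tensor location/scale
    lap = Laplace(location=torch.zeros(3), scale=0.5 * torch.ones(3))
    print(lap.logpdf(torch.zeros(3)))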
/BPTK_Py-1.8.0.tar.gz/BPTK_Py-1.8.0/BPTK_Py/modelparser/meta_model_creator.py |
class ModelCreator():
"""
This class creates a meta model for the scenario. You can export this model to BPTK-compliant JSON format for further processing. It comes with its own serialization mechanism (see the usage sketch at the end of this file).
For now, it only supports ABM models. SD support is planned for the future!
"""
def __init__(self, name, type="abm", model="model", silent=False,json_dict = None):
"""
:param name: Name of the scenario manager
:param type: ABM or SD
:param model: Path to the model class, if any. Uses Python dot notation
:param silent: If True, no output will be made during parsing
"""
self.name = name
self.type = type
self.datacollector = None
self.json_dict = json_dict
# Handle the case where the user gives only a class name rather than a full package path
if len(model.split(".")) == 1:
model = "model." + model
self.model = model
self.silent = silent
self.properties = []
self.scenarios = {
}
def add_scenario(self, name, starttime, stoptime, dt,properties={},datacollector=None):
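"""
Register a scenario with its run specs and optional properties. The optional
datacollector is stored on the creator and passed to the model in create_model.
Returns self so that calls can be chained.
:param name: Name of the scenario
:param starttime: Start time of the simulation
:param stoptime: Stop time of the simulation
:param dt: Time step
:param properties: Dictionary of scenario properties
:param datacollector: Optional data collector instance for the ABM model
:return: self
"""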
self.scenarios[name] = {
"runspecs": {
"starttime": starttime,
"stoptime": stoptime,
"dt": dt
},
"agents": [],
"properties": properties
}
self.datacollector = datacollector
return self
def add_agent(self, agent, scenario):
"""
Add one serializable agent object
:param agent: Instance of Serializable Agent
:param scenario: Name of scenario to add agent to
:return:
"""
self.scenarios[scenario]["agents"] += [agent]
def create_model(self):
"""
Serialization method. Instantiates the BPTK_Py model (registering one agent factory per agent spec) and outputs the meta model and all of its components as a dictionary.
:return: Tuple of (BPTK_Py Model instance or None, dictionary keyed by the scenario manager name)
"""
if self.type == "sd" or self.type=="undefined":
return None, self.json_dict
def import_class(name):
components = name.split('.')
mod = __import__(components[0])
for comp in components[1:]:
mod = getattr(mod, comp)
return mod
from copy import deepcopy
model_to_dump = deepcopy(self)
def serialize(obj):
output = {}
try:
elems = vars(obj)
except:
elems = obj
output = elems
if type(elems) == list:
output = []
for elem in elems:
output += [serialize(elem)]
elif type(elems) == dict:
for key, value in elems.items():
output[key] = serialize(value)
return output
model = model_to_dump.model
### Create the import statements for the Model
agents = []
for key, value in model_to_dump.scenarios.items():
agents += value["agents"]
## Create agent factories
from BPTK_Py import Model
from BPTK_Py.logger import log
BPTK_Mod = None
try:
if (model!=model_to_dump.name):
import importlib
split = model.split(".")
className = split[len(split) - 1]
packageName = '.'.join(split[:-1])
mod = importlib.import_module(packageName)
class_object = getattr(mod,className)
if self.datacollector:
BPTK_Mod = class_object(data_collector=self.datacollector if self.datacollector else None)
else:
BPTK_Mod = class_object()
except Exception as e:
print(e)
print("ERROR")
log("[WARN] Could not load specific model class. Using standard Model")
if self.datacollector:
BPTK_Mod = Model(data_collector=self.datacollector)
else:
BPTK_Mod = Model()
classes_to_type = {}
for agent in agents:
class_obj = agent.classname
name = agent.name
class agent_Fac():
"""
Helper class that encapsulates the agent factory method. Needed to store agent type object
"""
def __init__(self,class_obj,agent_type):
import copy
self.className=copy.deepcopy(class_obj)
self.agent_type = copy.deepcopy(agent_type)
def factory(self, agent_id, model, properties):
"""
Actual Agent factory method
:param agent_id: int, given by Model
:param model: BPTK_py.Model
:param properties: Dictionary of Python properties for agent
:return: Agent instance
"""
from BPTK_Py.logger import log
import importlib
split = self.className.split(".")
className = split[len(split) - 1]
packageName = '.'.join(split[:-1])
try:
mod = importlib.import_module(packageName)
except ModuleNotFoundError as e:
log(
"[ERROR] File {}.py not found. Probably this is due to a faulty configuration or you forget to delete one. Skipping.".format(
packageName.replace(".", "/")))
return
try:
scenario_class = getattr(mod, className)
except AttributeError as e:
log(
"[ERROR] Could not find class {} in {}. Probably there is still a configuration that you do not use anymore. Skipping.".format(
class_obj, packageName))
return
return scenario_class(agent_id=agent_id,model=model,properties=properties,agent_type=self.agent_type)
fac = agent_Fac(class_obj,name)
BPTK_Mod.register_agent_factory(name,
fac.factory)
# We also return this serialized model as Dictionary as many internal mechanisms rely on dictionary data
return BPTK_Mod, {self.name: serialize(model_to_dump)}
##################
### AGENT SPECS ##
##################
'''
The following code wraps the agent specs for each agent type. To add your own agent, instantiate subclasses with specific predefined properties (see the usage sketch at the end of this file).
'''
class serializable_agent():
"""
This class wraps certain agent properties that will be evaluated by BPTK-Py during runtime.
"""
def __init__(self, name, count, step, properties=None, classname=None, previous=None, target=None, silent=False):
import copy
self.count = count
self.step = step
self.name = name
self.properties = {} if properties is None else copy.deepcopy(properties)
self.classname = "BPTK_Py.Agent" if not classname else classname
self.silent = silent
if previous:
self.set_previous(previous)
if target:
self.set_target(target)
def set_previous(self, name):
"""
For graphs of agents: Set previous agent group
:param name:
:return:
"""
return self.set_property(name="previous", type="String", value=name)
def set_target(self, name):
"""
For graphs of agents: Set next agent group
:param name:
:return:
"""
return self.set_property(name="target", type="String", value=name)
def set_property(self, name, type, value):
"""
Set a property
:param name: Name of property
:param type: Type of property
:param value: Value of property
:return:
"""
if not self.silent:
print("Setting {} of {} to {}".format(name, self.name, value))
self.properties[name] = {"type": type, "value": value}
return self | PypiClean |
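# ---------------------------------------------------------------------------
# Possible usage sketch (illustrative only, not part of the original module):
# builds a minimal ABM meta model, attaches one scenario and one agent spec,
# and serializes it. The paths "model.Model" and "model.Person" are
# hypothetical placeholders -- point them at real classes in your project; if
# they cannot be imported, create_model() falls back to the standard BPTK_Py
# Model as implemented above.
if __name__ == "__main__":
    # A predefined agent spec expressed as a subclass, as suggested above
    class PersonSpec(serializable_agent):
        def __init__(self, count, **kwargs):
            super().__init__(name="person", count=count, step=1,
                             classname="model.Person", **kwargs)

    creator = ModelCreator(name="smSimple", type="abm", model="model.Model")
    creator.add_scenario(name="base", starttime=1, stoptime=10, dt=1)
    creator.add_agent(PersonSpec(count=100), scenario="base")
    bptk_model, scenario_dict = creator.create_model()
    print(scenario_dict)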
/Flask-Vue-0.3.5.tar.gz/Flask-Vue-0.3.5/flask_vue/static/vue.min.js | !function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):t.Vue=e()}(this,function(){"use strict";function t(e,n,r){if(i(e,n))return void(e[n]=r);if(e._isVue)return void t(e._data,n,r);var s=e.__ob__;if(!s)return void(e[n]=r);if(s.convert(n,r),s.dep.notify(),s.vms)for(var o=s.vms.length;o--;){var a=s.vms[o];a._proxy(n),a._digest()}return r}function e(t,e){if(i(t,e)){delete t[e];var n=t.__ob__;if(!n)return void(t._isVue&&(delete t._data[e],t._digest()));if(n.dep.notify(),n.vms)for(var r=n.vms.length;r--;){var s=n.vms[r];s._unproxy(e),s._digest()}}}function i(t,e){return Mi.call(t,e)}function n(t){return Wi.test(t)}function r(t){var e=(t+"").charCodeAt(0);return 36===e||95===e}function s(t){return null==t?"":t.toString()}function o(t){if("string"!=typeof t)return t;var e=Number(t);return isNaN(e)?t:e}function a(t){return"true"===t||"false"!==t&&t}function h(t){var e=t.charCodeAt(0),i=t.charCodeAt(t.length-1);return e!==i||34!==e&&39!==e?t:t.slice(1,-1)}function l(t){return t.replace(Vi,c)}function c(t,e){return e?e.toUpperCase():""}function u(t){return t.replace(Bi,"$1-$2").replace(Bi,"$1-$2").toLowerCase()}function f(t){return t.replace(zi,c)}function p(t,e){return function(i){var n=arguments.length;return n?n>1?t.apply(e,arguments):t.call(e,i):t.call(e)}}function d(t,e){e=e||0;for(var i=t.length-e,n=new Array(i);i--;)n[i]=t[i+e];return n}function v(t,e){for(var i=Object.keys(e),n=i.length;n--;)t[i[n]]=e[i[n]];return t}function m(t){return null!==t&&"object"==typeof t}function g(t){return Ui.call(t)===Ji}function _(t,e,i,n){Object.defineProperty(t,e,{value:i,enumerable:!!n,writable:!0,configurable:!0})}function y(t,e){var i,n,r,s,o,a=function a(){var h=Date.now()-s;h<e&&h>=0?i=setTimeout(a,e-h):(i=null,o=t.apply(r,n),i||(r=n=null))};return function(){return r=this,n=arguments,s=Date.now(),i||(i=setTimeout(a,e)),o}}function b(t,e){for(var i=t.length;i--;)if(t[i]===e)return i;return-1}function w(t){var e=function e(){if(!e.cancelled)return t.apply(this,arguments)};return e.cancel=function(){e.cancelled=!0},e}function C(t,e){return t==e||!(!m(t)||!m(e))&&JSON.stringify(t)===JSON.stringify(e)}function $(t){return/native code/.test(t.toString())}function k(t){this.size=0,this.limit=t,this.head=this.tail=void 0,this._keymap=Object.create(null)}function x(){return fn.charCodeAt(vn+1)}function A(){return fn.charCodeAt(++vn)}function O(){return vn>=dn}function T(){for(;x()===Tn;)A()}function N(t){return t===kn||t===xn}function j(t){return Nn[t]}function E(t,e){return jn[t]===e}function S(){for(var t,e=A();!O();)if(t=A(),t===On)A();else if(t===e)break}function F(t){for(var e=0,i=t;!O();)if(t=x(),N(t))S();else if(i===t&&e++,E(i,t)&&e--,A(),0===e)break}function D(){for(var t=vn;!O();)if(mn=x(),N(mn))S();else if(j(mn))F(mn);else if(mn===An){if(A(),mn=x(),mn!==An){gn!==bn&&gn!==$n||(gn=wn);break}A()}else{if(mn===Tn&&(gn===Cn||gn===$n)){T();break}gn===wn&&(gn=Cn),A()}return fn.slice(t+1,vn)||null}function P(){for(var t=[];!O();)t.push(R());return t}function R(){var t,e={};return gn=wn,e.name=D().trim(),gn=$n,t=L(),t.length&&(e.args=t),e}function L(){for(var t=[];!O()&&gn!==wn;){var e=D();if(!e)break;t.push(H(e))}return t}function H(t){if(yn.test(t))return{value:o(t),dynamic:!1};var e=h(t),i=e===t;return{value:i?t:e,dynamic:i}}function I(t){var e=_n.get(t);if(e)return e;fn=t,pn={},dn=fn.length,vn=-1,mn="",gn=bn;var i;return 
fn.indexOf("|")<0?pn.expression=fn.trim():(pn.expression=D().trim(),i=P(),i.length&&(pn.filters=i)),_n.put(t,pn),pn}function M(t){return t.replace(Sn,"\\$&")}function W(){var t=M(Mn.delimiters[0]),e=M(Mn.delimiters[1]),i=M(Mn.unsafeDelimiters[0]),n=M(Mn.unsafeDelimiters[1]);Dn=new RegExp(i+"((?:.|\\n)+?)"+n+"|"+t+"((?:.|\\n)+?)"+e,"g"),Pn=new RegExp("^"+i+"((?:.|\\n)+?)"+n+"$"),Fn=new k(1e3)}function V(t){Fn||W();var e=Fn.get(t);if(e)return e;if(!Dn.test(t))return null;for(var i,n,r,s,o,a,h=[],l=Dn.lastIndex=0;i=Dn.exec(t);)n=i.index,n>l&&h.push({value:t.slice(l,n)}),r=Pn.test(i[0]),s=r?i[1]:i[2],o=s.charCodeAt(0),a=42===o,s=a?s.slice(1):s,h.push({tag:!0,value:s.trim(),html:r,oneTime:a}),l=n+i[0].length;return l<t.length&&h.push({value:t.slice(l)}),Fn.put(t,h),h}function B(t,e){return t.length>1?t.map(function(t){return z(t,e)}).join("+"):z(t[0],e,!0)}function z(t,e,i){return t.tag?t.oneTime&&e?'"'+e.$eval(t.value)+'"':U(t.value,i):'"'+t.value+'"'}function U(t,e){if(Rn.test(t)){var i=I(t);return i.filters?"this._applyFilters("+i.expression+",null,"+JSON.stringify(i.filters)+",false)":"("+t+")"}return e?t:"("+t+")"}function J(t,e,i,n){G(t,1,function(){e.appendChild(t)},i,n)}function q(t,e,i,n){G(t,1,function(){et(t,e)},i,n)}function Q(t,e,i){G(t,-1,function(){nt(t)},e,i)}function G(t,e,i,n,r){var s=t.__v_trans;if(!s||!s.hooks&&!rn||!n._isCompiled||n.$parent&&!n.$parent._isCompiled)return i(),void(r&&r());var o=e>0?"enter":"leave";s[o](i,r)}function Z(t){if("string"==typeof t){t=document.querySelector(t)}return t}function X(t){if(!t)return!1;var e=t.ownerDocument.documentElement,i=t.parentNode;return e===t||e===i||!(!i||1!==i.nodeType||!e.contains(i))}function Y(t,e){var i=t.getAttribute(e);return null!==i&&t.removeAttribute(e),i}function K(t,e){var i=Y(t,":"+e);return null===i&&(i=Y(t,"v-bind:"+e)),i}function tt(t,e){return t.hasAttribute(e)||t.hasAttribute(":"+e)||t.hasAttribute("v-bind:"+e)}function et(t,e){e.parentNode.insertBefore(t,e)}function it(t,e){e.nextSibling?et(t,e.nextSibling):e.parentNode.appendChild(t)}function nt(t){t.parentNode.removeChild(t)}function rt(t,e){e.firstChild?et(t,e.firstChild):e.appendChild(t)}function st(t,e){var i=t.parentNode;i&&i.replaceChild(e,t)}function ot(t,e,i,n){t.addEventListener(e,i,n)}function at(t,e,i){t.removeEventListener(e,i)}function ht(t){var e=t.className;return"object"==typeof e&&(e=e.baseVal||""),e}function lt(t,e){Ki&&!/svg$/.test(t.namespaceURI)?t.className=e:t.setAttribute("class",e)}function ct(t,e){if(t.classList)t.classList.add(e);else{var i=" "+ht(t)+" ";i.indexOf(" "+e+" ")<0&<(t,(i+e).trim())}}function ut(t,e){if(t.classList)t.classList.remove(e);else{for(var i=" "+ht(t)+" ",n=" "+e+" ";i.indexOf(n)>=0;)i=i.replace(n," ");lt(t,i.trim())}t.className||t.removeAttribute("class")}function ft(t,e){var i,n;if(vt(t)&&bt(t.content)&&(t=t.content),t.hasChildNodes())for(pt(t),n=e?document.createDocumentFragment():document.createElement("div");i=t.firstChild;)n.appendChild(i);return n}function pt(t){for(var e;e=t.firstChild,dt(e);)t.removeChild(e);for(;e=t.lastChild,dt(e);)t.removeChild(e)}function dt(t){return t&&(3===t.nodeType&&!t.data.trim()||8===t.nodeType)}function vt(t){return t.tagName&&"template"===t.tagName.toLowerCase()}function mt(t,e){var i=Mn.debug?document.createComment(t):document.createTextNode(e?" 
":"");return i.__v_anchor=!0,i}function gt(t){if(t.hasAttributes())for(var e=t.attributes,i=0,n=e.length;i<n;i++){var r=e[i].name;if(Bn.test(r))return l(r.replace(Bn,""))}}function _t(t,e,i){for(var n;t!==e;)n=t.nextSibling,i(t),t=n;i(e)}function yt(t,e,i,n,r){function s(){if(a++,o&&a>=h.length){for(var t=0;t<h.length;t++)n.appendChild(h[t]);r&&r()}}var o=!1,a=0,h=[];_t(t,e,function(t){t===e&&(o=!0),h.push(t),Q(t,i,s)})}function bt(t){return t&&11===t.nodeType}function wt(t){if(t.outerHTML)return t.outerHTML;var e=document.createElement("div");return e.appendChild(t.cloneNode(!0)),e.innerHTML}function Ct(t,e){var i=t.tagName.toLowerCase(),n=t.hasAttributes();if(zn.test(i)||Un.test(i)){if(n)return $t(t,e)}else{if(jt(e,"components",i))return{id:i};var r=n&&$t(t,e);if(r)return r}}function $t(t,e){var i=t.getAttribute("is");if(null!=i){if(jt(e,"components",i))return t.removeAttribute("is"),{id:i}}else if(i=K(t,"is"),null!=i)return{id:i,dynamic:!0}}function kt(e,n){var r,s,o;for(r in n)s=e[r],o=n[r],i(e,r)?m(s)&&m(o)&&kt(s,o):t(e,r,o);return e}function xt(t,e){var i=Object.create(t||null);return e?v(i,Tt(e)):i}function At(t){if(t.components)for(var e,i=t.components=Tt(t.components),n=Object.keys(i),r=0,s=n.length;r<s;r++){var o=n[r];zn.test(o)||Un.test(o)||(e=i[o],g(e)&&(i[o]=Di.extend(e)))}}function Ot(t){var e,i,n=t.props;if(qi(n))for(t.props={},e=n.length;e--;)i=n[e],"string"==typeof i?t.props[i]=null:i.name&&(t.props[i.name]=i);else if(g(n)){var r=Object.keys(n);for(e=r.length;e--;)i=n[r[e]],"function"==typeof i&&(n[r[e]]={type:i})}}function Tt(t){if(qi(t)){for(var e,i={},n=t.length;n--;){e=t[n];var r="function"==typeof e?e.options&&e.options.name||e.id:e.name||e.id;r&&(i[r]=e)}return i}return t}function Nt(t,e,n){function r(i){var r=Jn[i]||qn;o[i]=r(t[i],e[i],n,i)}At(e),Ot(e);var s,o={};if(e.extends&&(t="function"==typeof e.extends?Nt(t,e.extends.options,n):Nt(t,e.extends,n)),e.mixins)for(var a=0,h=e.mixins.length;a<h;a++){var l=e.mixins[a],c=l.prototype instanceof Di?l.options:l;t=Nt(t,c,n)}for(s in t)r(s);for(s in e)i(t,s)||r(s);return o}function jt(t,e,i,n){if("string"==typeof i){var r,s=t[e],o=s[i]||s[r=l(i)]||s[r.charAt(0).toUpperCase()+r.slice(1)];return o}}function Et(){this.id=Qn++,this.subs=[]}function St(t){Yn=!1,t(),Yn=!0}function Ft(t){if(this.value=t,this.dep=new Et,_(t,"__ob__",this),qi(t)){var e=Qi?Dt:Pt;e(t,Zn,Xn),this.observeArray(t)}else this.walk(t)}function Dt(t,e){t.__proto__=e}function Pt(t,e,i){for(var n=0,r=i.length;n<r;n++){var s=i[n];_(t,s,e[s])}}function Rt(t,e){if(t&&"object"==typeof t){var n;return i(t,"__ob__")&&t.__ob__ instanceof Ft?n=t.__ob__:Yn&&(qi(t)||g(t))&&Object.isExtensible(t)&&!t._isVue&&(n=new Ft(t)),n&&e&&n.addVm(e),n}}function Lt(t,e,i){var n=new Et,r=Object.getOwnPropertyDescriptor(t,e);if(!r||r.configurable!==!1){var s=r&&r.get,o=r&&r.set,a=Rt(i);Object.defineProperty(t,e,{enumerable:!0,configurable:!0,get:function(){var e=s?s.call(t):i;if(Et.target&&(n.depend(),a&&a.dep.depend(),qi(e)))for(var r,o=0,h=e.length;o<h;o++)r=e[o],r&&r.__ob__&&r.__ob__.dep.depend();return e},set:function(e){var r=s?s.call(t):i;e!==r&&(o?o.call(t,e):i=e,a=Rt(e),n.notify())}})}}function 
Ht(t){t.prototype._init=function(t){t=t||{},this.$el=null,this.$parent=t.parent,this.$root=this.$parent?this.$parent.$root:this,this.$children=[],this.$refs={},this.$els={},this._watchers=[],this._directives=[],this._uid=tr++,this._isVue=!0,this._events={},this._eventsCount={},this._isFragment=!1,this._fragment=this._fragmentStart=this._fragmentEnd=null,this._isCompiled=this._isDestroyed=this._isReady=this._isAttached=this._isBeingDestroyed=this._vForRemoving=!1,this._unlinkFn=null,this._context=t._context||this.$parent,this._scope=t._scope,this._frag=t._frag,this._frag&&this._frag.children.push(this),this.$parent&&this.$parent.$children.push(this),t=this.$options=Nt(this.constructor.options,t,this),this._updateRef(),this._data={},this._callHook("init"),this._initState(),this._initEvents(),this._callHook("created"),t.el&&this.$mount(t.el)}}function It(t){if(void 0===t)return"eof";var e=t.charCodeAt(0);switch(e){case 91:case 93:case 46:case 34:case 39:case 48:return t;case 95:case 36:return"ident";case 32:case 9:case 10:case 13:case 160:case 65279:case 8232:case 8233:return"ws"}return e>=97&&e<=122||e>=65&&e<=90?"ident":e>=49&&e<=57?"number":"else"}function Mt(t){var e=t.trim();return("0"!==t.charAt(0)||!isNaN(t))&&(n(e)?h(e):"*"+e)}function Wt(t){function e(){var e=t[c+1];if(u===ur&&"'"===e||u===fr&&'"'===e)return c++,n="\\"+e,p[ir](),!0}var i,n,r,s,o,a,h,l=[],c=-1,u=or,f=0,p=[];for(p[nr]=function(){void 0!==r&&(l.push(r),r=void 0)},p[ir]=function(){void 0===r?r=n:r+=n},p[rr]=function(){p[ir](),f++},p[sr]=function(){if(f>0)f--,u=cr,p[ir]();else{if(f=0,r=Mt(r),r===!1)return!1;p[nr]()}};null!=u;)if(c++,i=t[c],"\\"!==i||!e()){if(s=It(i),h=vr[u],o=h[s]||h.else||dr,o===dr)return;if(u=o[0],a=p[o[1]],a&&(n=o[2],n=void 0===n?i:n,a()===!1))return;if(u===pr)return l.raw=t,l}}function Vt(t){var e=er.get(t);return e||(e=Wt(t),e&&er.put(t,e)),e}function Bt(t,e){return Yt(e).get(t)}function zt(e,i,n){var r=e;if("string"==typeof i&&(i=Wt(i)),!i||!m(e))return!1;for(var s,o,a=0,h=i.length;a<h;a++)s=e,o=i[a],"*"===o.charAt(0)&&(o=Yt(o.slice(1)).get.call(r,r)),a<h-1?(e=e[o],m(e)||(e={},t(s,o,e))):qi(e)?e.$set(o,n):o in e?e[o]=n:t(e,o,n);return!0}function Ut(){}function Jt(t,e){var i=Nr.length;return Nr[i]=e?t.replace($r,"\\n"):t,'"'+i+'"'}function qt(t){var e=t.charAt(0),i=t.slice(1);return yr.test(i)?t:(i=i.indexOf('"')>-1?i.replace(xr,Qt):i,e+"scope."+i)}function Qt(t,e){return Nr[e]}function Gt(t){wr.test(t),Nr.length=0;var e=t.replace(kr,Jt).replace(Cr,"");return e=(" "+e).replace(Or,qt).replace(xr,Qt),Zt(e)}function Zt(t){try{return new Function("scope","return "+t+";")}catch(t){return Ut}}function Xt(t){var e=Vt(t);if(e)return function(t,i){zt(t,e,i)}}function Yt(t,e){t=t.trim();var i=gr.get(t);if(i)return e&&!i.set&&(i.set=Xt(i.exp)),i;var n={exp:t};return n.get=Kt(t)&&t.indexOf("[")<0?Zt("scope."+t):Gt(t),e&&(n.set=Xt(t)),gr.put(t,n),n}function Kt(t){return Ar.test(t)&&!Tr.test(t)&&"Math."!==t.slice(0,5)}function te(){Er.length=0,Sr.length=0,Fr={},Dr={},Pr=!1}function ee(){for(var t=!0;t;)t=!1,ie(Er),ie(Sr),Er.length?t=!0:(Zi&&Mn.devtools&&Zi.emit("flush"),te())}function ie(t){for(var e=0;e<t.length;e++){var i=t[e],n=i.id;Fr[n]=null,i.run()}t.length=0}function ne(t){var e=t.id;if(null==Fr[e]){var i=t.user?Sr:Er;Fr[e]=i.length,i.push(t),Pr||(Pr=!0,ln(ee))}}function re(t,e,i,n){n&&v(this,n);var r="function"==typeof e;if(this.vm=t,t._watchers.push(this),this.expression=e,this.cb=i,this.id=++Rr,this.active=!0,this.dirty=this.lazy,this.deps=[],this.newDeps=[],this.depIds=new cn,this.newDepIds=new 
cn,this.prevError=null,r)this.getter=e,this.setter=void 0;else{var s=Yt(e,this.twoWay);this.getter=s.get,this.setter=s.set}this.value=this.lazy?void 0:this.get(),this.queued=this.shallow=!1}function se(t,e){var i=void 0,n=void 0;e||(e=Lr,e.clear());var r=qi(t),s=m(t);if((r||s)&&Object.isExtensible(t)){if(t.__ob__){var o=t.__ob__.dep.id;if(e.has(o))return;e.add(o)}if(r)for(i=t.length;i--;)se(t[i],e);else if(s)for(n=Object.keys(t),i=n.length;i--;)se(t[n[i]],e)}}function oe(t){return vt(t)&&bt(t.content)}function ae(t,e){var i=e?t:t.trim(),n=Ir.get(i);if(n)return n;var r=document.createDocumentFragment(),s=t.match(Vr),o=Br.test(t),a=zr.test(t);if(s||o||a){var h=s&&s[1],l=Wr[h]||Wr.efault,c=l[0],u=l[1],f=l[2],p=document.createElement("div");for(p.innerHTML=u+t+f;c--;)p=p.lastChild;for(var d;d=p.firstChild;)r.appendChild(d)}else r.appendChild(document.createTextNode(t));return e||pt(r),Ir.put(i,r),r}function he(t){if(oe(t))return ae(t.innerHTML);if("SCRIPT"===t.tagName)return ae(t.textContent);for(var e,i=le(t),n=document.createDocumentFragment();e=i.firstChild;)n.appendChild(e);return pt(n),n}function le(t){if(!t.querySelectorAll)return t.cloneNode();var e,i,n,r=t.cloneNode(!0);if(Ur){var s=r;if(oe(t)&&(t=t.content,s=r.content),i=t.querySelectorAll("template"),i.length)for(n=s.querySelectorAll("template"),e=n.length;e--;)n[e].parentNode.replaceChild(le(i[e]),n[e])}if(Jr)if("TEXTAREA"===t.tagName)r.value=t.value;else if(i=t.querySelectorAll("textarea"),i.length)for(n=r.querySelectorAll("textarea"),e=n.length;e--;)n[e].value=i[e].value;return r}function ce(t,e,i){var n,r;return bt(t)?(pt(t),e?le(t):t):("string"==typeof t?i||"#"!==t.charAt(0)?r=ae(t,i):(r=Mr.get(t),r||(n=document.getElementById(t.slice(1)),n&&(r=he(n),Mr.put(t,r)))):t.nodeType&&(r=he(t)),r&&e?le(r):r)}function ue(t,e,i,n,r,s){this.children=[],this.childFrags=[],this.vm=e,this.scope=r,this.inserted=!1,this.parentFrag=s,s&&s.childFrags.push(this),this.unlink=t(e,i,n,r,this);var o=this.single=1===i.childNodes.length&&!i.childNodes[0].__v_anchor;o?(this.node=i.childNodes[0],this.before=fe,this.remove=pe):(this.node=mt("fragment-start"),this.end=mt("fragment-end"),this.frag=i,rt(this.node,i),i.appendChild(this.end),this.before=de,this.remove=ve),this.node.__v_frag=this}function fe(t,e){this.inserted=!0;var i=e!==!1?q:et;i(this.node,t,this.vm),X(this.node)&&this.callHook(me)}function pe(){this.inserted=!1;var t=X(this.node),e=this;this.beforeRemove(),Q(this.node,this.vm,function(){t&&e.callHook(ge),e.destroy()})}function de(t,e){this.inserted=!0;var i=this.vm,n=e!==!1?q:et;_t(this.node,this.end,function(e){n(e,t,i)}),X(this.node)&&this.callHook(me)}function ve(){this.inserted=!1;var t=this,e=X(this.node);this.beforeRemove(),yt(this.node,this.end,this.vm,this.frag,function(){e&&t.callHook(ge),t.destroy()})}function me(t){!t._isAttached&&X(t.$el)&&t._callHook("attached")}function ge(t){t._isAttached&&!X(t.$el)&&t._callHook("detached")}function _e(t,e){this.vm=t;var i,n="string"==typeof e;n||vt(e)&&!e.hasAttribute("v-if")?i=ce(e,!0):(i=document.createDocumentFragment(),i.appendChild(e)),this.template=i;var r,s=t.constructor.cid;if(s>0){var o=s+(n?e:wt(e));r=Gr.get(o),r||(r=qe(i,t.$options,!0),Gr.put(o,r))}else r=qe(i,t.$options,!0);this.linker=r}function ye(t,e,i){var n=t.node.previousSibling;if(n){for(t=n.__v_frag;!(t&&t.forId===i&&t.inserted||n===e);){if(n=n.previousSibling,!n)return;t=n.__v_frag}return t}}function be(t){for(var e=-1,i=new Array(Math.floor(t));++e<t;)i[e]=e;return i}function we(t,e,i,n){return 
n?"$index"===n?t:n.charAt(0).match(/\w/)?Bt(i,n):i[n]:e||i}function Ce(t){var e=t.node;if(t.end)for(;!e.__vue__&&e!==t.end&&e.nextSibling;)e=e.nextSibling;return e.__vue__}function $e(t,e,i){for(var n,r,s,o=e?[]:null,a=0,h=t.options.length;a<h;a++)if(n=t.options[a],s=i?n.hasAttribute("selected"):n.selected){if(r=n.hasOwnProperty("_value")?n._value:n.value,!e)return r;o.push(r)}return o}function ke(t,e){for(var i=t.length;i--;)if(C(t[i],e))return i;return-1}function xe(t,e){var i=e.map(function(t){var e=t.charCodeAt(0);return e>47&&e<58?parseInt(t,10):1===t.length&&(e=t.toUpperCase().charCodeAt(0),e>64&&e<91)?e:ms[t]});return i=[].concat.apply([],i),function(e){if(i.indexOf(e.keyCode)>-1)return t.call(this,e)}}function Ae(t){return function(e){return e.stopPropagation(),t.call(this,e)}}function Oe(t){return function(e){return e.preventDefault(),t.call(this,e)}}function Te(t){return function(e){if(e.target===e.currentTarget)return t.call(this,e)}}function Ne(t){if(ws[t])return ws[t];var e=je(t);return ws[t]=ws[e]=e,e}function je(t){t=u(t);var e=l(t),i=e.charAt(0).toUpperCase()+e.slice(1);Cs||(Cs=document.createElement("div"));var n,r=_s.length;if("filter"!==e&&e in Cs.style)return{kebab:t,camel:e};for(;r--;)if(n=ys[r]+i,n in Cs.style)return{kebab:_s[r]+t,camel:n}}function Ee(t){var e=[];if(qi(t))for(var i=0,n=t.length;i<n;i++){var r=t[i];if(r)if("string"==typeof r)e.push(r);else for(var s in r)r[s]&&e.push(s)}else if(m(t))for(var o in t)t[o]&&e.push(o);return e}function Se(t,e,i){if(e=e.trim(),e.indexOf(" ")===-1)return void i(t,e);for(var n=e.split(/\s+/),r=0,s=n.length;r<s;r++)i(t,n[r])}function Fe(t,e,i){function n(){++s>=r?i():t[s].call(e,n)}var r=t.length,s=0;t[0].call(e,n)}function De(t,e,i){for(var r,s,o,a,h,c,f,p=[],d=i.$options.propsData,v=Object.keys(e),m=v.length;m--;)s=v[m],r=e[s]||Hs,h=l(s),Is.test(h)&&(f={name:s,path:h,options:r,mode:Ls.ONE_WAY,raw:null},o=u(s),null===(a=K(t,o))&&(null!==(a=K(t,o+".sync"))?f.mode=Ls.TWO_WAY:null!==(a=K(t,o+".once"))&&(f.mode=Ls.ONE_TIME)),null!==a?(f.raw=a,c=I(a),a=c.expression,f.filters=c.filters,n(a)&&!c.filters?f.optimizedLiteral=!0:f.dynamic=!0,f.parentPath=a):null!==(a=Y(t,o))?f.raw=a:d&&null!==(a=d[s]||d[h])&&(f.raw=a),p.push(f));return Pe(p)}function Pe(t){return function(e,n){e._props={};for(var r,s,l,c,f,p=e.$options.propsData,d=t.length;d--;)if(r=t[d],f=r.raw,s=r.path,l=r.options,e._props[s]=r,p&&i(p,s)&&Le(e,r,p[s]),null===f)Le(e,r,void 0);else if(r.dynamic)r.mode===Ls.ONE_TIME?(c=(n||e._context||e).$get(r.parentPath),Le(e,r,c)):e._context?e._bindDir({name:"prop",def:Ws,prop:r},null,null,n):Le(e,r,e.$get(r.parentPath));else if(r.optimizedLiteral){var v=h(f);c=v===f?a(o(f)):v,Le(e,r,c)}else c=l.type===Boolean&&(""===f||f===u(r.name))||f,Le(e,r,c)}}function Re(t,e,i,n){var r=e.dynamic&&Kt(e.parentPath),s=i;void 0===s&&(s=Ie(t,e)),s=We(e,s,t);var o=s!==i;Me(e,s,t)||(s=void 0),r&&!o?St(function(){n(s)}):n(s)}function Le(t,e,i){Re(t,e,i,function(i){Lt(t,e.path,i)})}function He(t,e,i){Re(t,e,i,function(i){t[e.path]=i})}function Ie(t,e){var n=e.options;if(!i(n,"default"))return n.type!==Boolean&&void 0;var r=n.default;return m(r),"function"==typeof r&&n.type!==Function?r.call(t):r}function Me(t,e,i){if(!t.options.required&&(null===t.raw||null==e))return!0;var n=t.options,r=n.type,s=!r,o=[];if(r){qi(r)||(r=[r]);for(var a=0;a<r.length&&!s;a++){var h=Ve(e,r[a]);o.push(h.expectedType),s=h.valid}}if(!s)return!1;var l=n.validator;return!(l&&!l(e))}function We(t,e,i){var n=t.options.coerce;return n&&"function"==typeof n?n(e):e}function Ve(t,e){var 
i,n;return e===String?(n="string",i=typeof t===n):e===Number?(n="number",i=typeof t===n):e===Boolean?(n="boolean",i=typeof t===n):e===Function?(n="function",i=typeof t===n):e===Object?(n="object",i=g(t)):e===Array?(n="array",i=qi(t)):i=t instanceof e,{valid:i,expectedType:n}}function Be(t){Vs.push(t),Bs||(Bs=!0,ln(ze))}function ze(){for(var t=document.documentElement.offsetHeight,e=0;e<Vs.length;e++)Vs[e]();return Vs=[],Bs=!1,t}function Ue(t,e,i,n){this.id=e,this.el=t,this.enterClass=i&&i.enterClass||e+"-enter",this.leaveClass=i&&i.leaveClass||e+"-leave",this.hooks=i,this.vm=n,this.pendingCssEvent=this.pendingCssCb=this.cancel=this.pendingJsCb=this.op=this.cb=null,this.justEntered=!1,this.entered=this.left=!1,this.typeCache={},this.type=i&&i.type;var r=this;["enterNextTick","enterDone","leaveNextTick","leaveDone"].forEach(function(t){r[t]=p(r[t],r)})}function Je(t){if(/svg$/.test(t.namespaceURI)){var e=t.getBoundingClientRect();return!(e.width||e.height)}return!(t.offsetWidth||t.offsetHeight||t.getClientRects().length)}function qe(t,e,i){var n=i||!e._asComponent?ti(t,e):null,r=n&&n.terminal||gi(t)||!t.hasChildNodes()?null:oi(t.childNodes,e);return function(t,e,i,s,o){var a=d(e.childNodes),h=Qe(function(){n&&n(t,e,i,s,o),r&&r(t,a,i,s,o)},t);return Ze(t,h)}}function Qe(t,e){e._directives=[];var i=e._directives.length;t();var n=e._directives.slice(i);Ge(n);for(var r=0,s=n.length;r<s;r++)n[r]._bind();return n}function Ge(t){if(0!==t.length){var e,i,n,r,s={},o=0,a=[];for(e=0,i=t.length;e<i;e++){var h=t[e],l=h.descriptor.def.priority||ro,c=s[l];c||(c=s[l]=[],a.push(l)),c.push(h)}for(a.sort(function(t,e){return t>e?-1:t===e?0:1}),e=0,i=a.length;e<i;e++){var u=s[a[e]];for(n=0,r=u.length;n<r;n++)t[o++]=u[n]}}}function Ze(t,e,i,n){function r(r){Xe(t,e,r),i&&n&&Xe(i,n)}return r.dirs=e,r}function Xe(t,e,i){for(var n=e.length;n--;)e[n]._teardown()}function Ye(t,e,i,n){var r=De(e,i,t),s=Qe(function(){r(t,n)},t);return Ze(t,s)}function Ke(t,e,i){var n,r,s=e._containerAttrs,o=e._replacerAttrs;return 11!==t.nodeType&&(e._asComponent?(s&&i&&(n=pi(s,i)),o&&(r=pi(o,e))):r=pi(t.attributes,e)),e._containerAttrs=e._replacerAttrs=null,function(t,e,i){var s,o=t._context;o&&n&&(s=Qe(function(){n(o,e,null,i)},o));var a=Qe(function(){r&&r(t,e)},t);return Ze(t,a,o,s)}}function ti(t,e){var i=t.nodeType;return 1!==i||gi(t)?3===i&&t.data.trim()?ii(t,e):null:ei(t,e)}function ei(t,e){if("TEXTAREA"===t.tagName){if(null!==Y(t,"v-pre"))return ui;var i=V(t.value);i&&(t.setAttribute(":value",B(i)),t.value="")}var n,r=t.hasAttributes(),s=r&&d(t.attributes);return r&&(n=ci(t,s,e)),n||(n=hi(t,e)),n||(n=li(t,e)),!n&&r&&(n=pi(s,e)),n}function ii(t,e){if(t._skip)return ni;var i=V(t.wholeText);if(!i)return null;for(var n=t.nextSibling;n&&3===n.nodeType;)n._skip=!0,n=n.nextSibling;for(var r,s,o=document.createDocumentFragment(),a=0,h=i.length;a<h;a++)s=i[a],r=s.tag?ri(s,e):document.createTextNode(s.value),o.appendChild(r);return si(i,o,e)}function ni(t,e){nt(e)}function ri(t,e){function i(e){if(!t.descriptor){var i=I(t.value);t.descriptor={name:e,def:Ds[e],expression:i.expression,filters:i.filters}}}var n;return t.oneTime?n=document.createTextNode(t.value):t.html?(n=document.createComment("v-html"),i("html")):(n=document.createTextNode(" "),i("text")),n}function si(t,e){return function(i,n,r,o){for(var a,h,l,c=e.cloneNode(!0),u=d(c.childNodes),f=0,p=t.length;f<p;f++)a=t[f],h=a.value,a.tag&&(l=u[f],a.oneTime?(h=(o||i).$eval(h),a.html?st(l,ce(h,!0)):l.data=s(h)):i._bindDir(a.descriptor,l,r,o));st(n,c)}}function oi(t,e){for(var 
i,n,r,s=[],o=0,a=t.length;o<a;o++)r=t[o],i=ti(r,e),n=i&&i.terminal||"SCRIPT"===r.tagName||!r.hasChildNodes()?null:oi(r.childNodes,e),s.push(i,n);return s.length?ai(s):null}function ai(t){return function(e,i,n,r,s){for(var o,a,h,l=0,c=0,u=t.length;l<u;c++){o=i[c],a=t[l++],h=t[l++];var f=d(o.childNodes);a&&a(e,o,n,r,s),h&&h(e,f,n,r,s)}}}function hi(t,e){var i=t.tagName.toLowerCase();if(!zn.test(i)){var n=jt(e,"elementDirectives",i);return n?fi(t,i,"",e,n):void 0}}function li(t,e){var i=Ct(t,e);if(i){var n=gt(t),r={name:"component",ref:n,expression:i.id,def:Ys.component,modifiers:{literal:!i.dynamic}},s=function(t,e,i,s,o){n&&Lt((s||t).$refs,n,null),t._bindDir(r,e,i,s,o)};return s.terminal=!0,s}}function ci(t,e,i){if(null!==Y(t,"v-pre"))return ui;if(t.hasAttribute("v-else")){var n=t.previousElementSibling;if(n&&n.hasAttribute("v-if"))return ui}for(var r,s,o,a,h,l,c,u,f,p,d=0,v=e.length;d<v;d++)r=e[d],s=r.name.replace(io,""),(h=s.match(eo))&&(f=jt(i,"directives",h[1]),f&&f.terminal&&(!p||(f.priority||so)>p.priority)&&(p=f,c=r.name,a=di(r.name),o=r.value,l=h[1],u=h[2]));return p?fi(t,l,o,i,p,c,u,a):void 0}function ui(){}function fi(t,e,i,n,r,s,o,a){var h=I(i),l={name:e,arg:o,expression:h.expression,filters:h.filters,raw:i,attr:s,modifiers:a,def:r};"for"!==e&&"router-view"!==e||(l.ref=gt(t));var c=function(t,e,i,n,r){l.ref&&Lt((n||t).$refs,l.ref,null),t._bindDir(l,e,i,n,r)};return c.terminal=!0,c}function pi(t,e){function i(t,e,i){var n=i&&mi(i),r=!n&&I(s);v.push({name:t,attr:o,raw:a,def:e,arg:l,modifiers:c,expression:r&&r.expression,filters:r&&r.filters,interp:i,hasOneTime:n})}for(var n,r,s,o,a,h,l,c,u,f,p,d=t.length,v=[];d--;)if(n=t[d],r=o=n.name,s=a=n.value,f=V(s),l=null,c=di(r),r=r.replace(io,""),f)s=B(f),l=r,i("bind",Ds.bind,f);else if(no.test(r))c.literal=!Ks.test(r),i("transition",Ys.transition);else if(to.test(r))l=r.replace(to,""),i("on",Ds.on);else if(Ks.test(r))h=r.replace(Ks,""),"style"===h||"class"===h?i(h,Ys[h]):(l=h,i("bind",Ds.bind));else if(p=r.match(eo)){if(h=p[1],l=p[2],"else"===h)continue;u=jt(e,"directives",h,!0),u&&i(h,u)}if(v.length)return vi(v)}function di(t){var e=Object.create(null),i=t.match(io);if(i)for(var n=i.length;n--;)e[i[n].slice(1)]=!0;return e}function vi(t){return function(e,i,n,r,s){for(var o=t.length;o--;)e._bindDir(t[o],i,n,r,s)}}function mi(t){for(var e=t.length;e--;)if(t[e].oneTime)return!0}function gi(t){return"SCRIPT"===t.tagName&&(!t.hasAttribute("type")||"text/javascript"===t.getAttribute("type"))}function _i(t,e){return e&&(e._containerAttrs=bi(t)),vt(t)&&(t=ce(t)),e&&(e._asComponent&&!e.template&&(e.template="<slot></slot>"),e.template&&(e._content=ft(t),t=yi(t,e))),bt(t)&&(rt(mt("v-start",!0),t),t.appendChild(mt("v-end",!0))),t}function yi(t,e){var i=e.template,n=ce(i,!0);if(n){var r=n.firstChild;if(!r)return n;var s=r.tagName&&r.tagName.toLowerCase();return e.replace?(t===document.body,n.childNodes.length>1||1!==r.nodeType||"component"===s||jt(e,"components",s)||tt(r,"is")||jt(e,"elementDirectives",s)||r.hasAttribute("v-for")||r.hasAttribute("v-if")?n:(e._replacerAttrs=bi(r),wi(t,r),r)):(t.appendChild(n),t)}}function bi(t){if(1===t.nodeType&&t.hasAttributes())return d(t.attributes)}function wi(t,e){for(var i,n,r=t.attributes,s=r.length;s--;)i=r[s].name,n=r[s].value,e.hasAttribute(i)||oo.test(i)?"class"===i&&!V(n)&&(n=n.trim())&&n.split(/\s+/).forEach(function(t){ct(e,t)}):e.setAttribute(i,n)}function Ci(t,e){if(e){for(var 
i,n,r=t._slotContents=Object.create(null),s=0,o=e.children.length;s<o;s++)i=e.children[s],(n=i.getAttribute("slot"))&&(r[n]||(r[n]=[])).push(i);for(n in r)r[n]=$i(r[n],e);if(e.hasChildNodes()){var a=e.childNodes;if(1===a.length&&3===a[0].nodeType&&!a[0].data.trim())return;r.default=$i(e.childNodes,e)}}}function $i(t,e){var i=document.createDocumentFragment();t=d(t);for(var n=0,r=t.length;n<r;n++){var s=t[n];!vt(s)||s.hasAttribute("v-if")||s.hasAttribute("v-for")||(e.removeChild(s),s=ce(s,!0)),i.appendChild(s)}return i}function ki(t){function e(){}function n(t,e){var i=new re(e,t,null,{lazy:!0});return function(){return i.dirty&&i.evaluate(),Et.target&&i.depend(),i.value}}Object.defineProperty(t.prototype,"$data",{get:function(){return this._data},set:function(t){t!==this._data&&this._setData(t)}}),t.prototype._initState=function(){this._initProps(),this._initMeta(),this._initMethods(),this._initData(),this._initComputed()},t.prototype._initProps=function(){var t=this.$options,e=t.el,i=t.props;e=t.el=Z(e),this._propsUnlinkFn=e&&1===e.nodeType&&i?Ye(this,e,i,this._scope):null},t.prototype._initData=function(){var t=this.$options.data,e=this._data=t?t():{};g(e)||(e={});var n,r,s=this._props,o=Object.keys(e);for(n=o.length;n--;)r=o[n],s&&i(s,r)||this._proxy(r);Rt(e,this)},t.prototype._setData=function(t){t=t||{};var e=this._data;this._data=t;var n,r,s;for(n=Object.keys(e),s=n.length;s--;)r=n[s],r in t||this._unproxy(r);for(n=Object.keys(t),s=n.length;s--;)r=n[s],i(this,r)||this._proxy(r);e.__ob__.removeVm(this),Rt(t,this),this._digest()},t.prototype._proxy=function(t){if(!r(t)){var e=this;Object.defineProperty(e,t,{configurable:!0,enumerable:!0,get:function(){return e._data[t]},set:function(i){e._data[t]=i}})}},t.prototype._unproxy=function(t){r(t)||delete this[t]},t.prototype._digest=function(){for(var t=0,e=this._watchers.length;t<e;t++)this._watchers[t].update(!0)},t.prototype._initComputed=function(){var t=this.$options.computed;if(t)for(var i in t){var r=t[i],s={enumerable:!0,configurable:!0};"function"==typeof r?(s.get=n(r,this),s.set=e):(s.get=r.get?r.cache!==!1?n(r.get,this):p(r.get,this):e,s.set=r.set?p(r.set,this):e),Object.defineProperty(this,i,s)}},t.prototype._initMethods=function(){var t=this.$options.methods;if(t)for(var e in t)this[e]=p(t[e],this)},t.prototype._initMeta=function(){var t=this.$options._meta;if(t)for(var e in t)Lt(this,e,t[e])}}function xi(t){function e(t,e){for(var i,n,r,s=e.attributes,o=0,a=s.length;o<a;o++)i=s[o].name,ho.test(i)&&(i=i.replace(ho,""),n=s[o].value,Kt(n)&&(n+=".apply(this, $arguments)"),r=(t._scope||t._context).$eval(n,!0),r._fromParent=!0,t.$on(i.replace(ho),r))}function i(t,e,i){if(i){var r,s,o,a;for(s in i)if(r=i[s],qi(r))for(o=0,a=r.length;o<a;o++)n(t,e,s,r[o]);else n(t,e,s,r)}}function n(t,e,i,r,s){var o=typeof r;if("function"===o)t[e](i,r,s);else if("string"===o){var a=t.$options.methods,h=a&&a[r];h&&t[e](i,h,s)}else r&&"object"===o&&n(t,e,i,r.handler,r)}function r(){this._isAttached||(this._isAttached=!0,this.$children.forEach(s))}function s(t){!t._isAttached&&X(t.$el)&&t._callHook("attached")}function o(){this._isAttached&&(this._isAttached=!1,this.$children.forEach(a))}function a(t){t._isAttached&&!X(t.$el)&&t._callHook("detached")}t.prototype._initEvents=function(){var t=this.$options;t._asComponent&&e(this,t.el),i(this,"$on",t.events),i(this,"$watch",t.watch)},t.prototype._initDOMHooks=function(){this.$on("hook:attached",r),this.$on("hook:detached",o)},t.prototype._callHook=function(t){this.$emit("pre-hook:"+t);var 
e=this.$options[t];if(e)for(var i=0,n=e.length;i<n;i++)e[i].call(this);this.$emit("hook:"+t)}}function Ai(){}function Oi(t,e,i,n,r,s){this.vm=e,this.el=i,this.descriptor=t,this.name=t.name,this.expression=t.expression,this.arg=t.arg,this.modifiers=t.modifiers,this.filters=t.filters,this.literal=this.modifiers&&this.modifiers.literal,this._locked=!1,this._bound=!1,this._listeners=null,this._host=n,this._scope=r,this._frag=s}function Ti(t){t.prototype._updateRef=function(t){var e=this.$options._ref;if(e){var i=(this._scope||this._context).$refs;t?i[e]===this&&(i[e]=null):i[e]=this}},t.prototype._compile=function(t){var e=this.$options,i=t;if(t=_i(t,e),this._initElement(t),1!==t.nodeType||null===Y(t,"v-pre")){var n=this._context&&this._context.$options,r=Ke(t,e,n);Ci(this,e._content);var s,o=this.constructor;e._linkerCachable&&(s=o.linker,s||(s=o.linker=qe(t,e)));var a=r(this,t,this._scope),h=s?s(this,t):qe(t,e)(this,t);
this._unlinkFn=function(){a(),h(!0)},e.replace&&st(i,t),this._isCompiled=!0,this._callHook("compiled")}},t.prototype._initElement=function(t){bt(t)?(this._isFragment=!0,this.$el=this._fragmentStart=t.firstChild,this._fragmentEnd=t.lastChild,3===this._fragmentStart.nodeType&&(this._fragmentStart.data=this._fragmentEnd.data=""),this._fragment=t):this.$el=t,this.$el.__vue__=this,this._callHook("beforeCompile")},t.prototype._bindDir=function(t,e,i,n,r){this._directives.push(new Oi(t,this,e,i,n,r))},t.prototype._destroy=function(t,e){if(this._isBeingDestroyed)return void(e||this._cleanup());var i,n,r=this,s=function(){!i||n||e||r._cleanup()};t&&this.$el&&(n=!0,this.$remove(function(){n=!1,s()})),this._callHook("beforeDestroy"),this._isBeingDestroyed=!0;var o,a=this.$parent;for(a&&!a._isBeingDestroyed&&(a.$children.$remove(this),this._updateRef(!0)),o=this.$children.length;o--;)this.$children[o].$destroy();for(this._propsUnlinkFn&&this._propsUnlinkFn(),this._unlinkFn&&this._unlinkFn(),o=this._watchers.length;o--;)this._watchers[o].teardown();this.$el&&(this.$el.__vue__=null),i=!0,s()},t.prototype._cleanup=function(){this._isDestroyed||(this._frag&&this._frag.children.$remove(this),this._data&&this._data.__ob__&&this._data.__ob__.removeVm(this),this.$el=this.$parent=this.$root=this.$children=this._watchers=this._context=this._scope=this._directives=null,this._isDestroyed=!0,this._callHook("destroyed"),this.$off())}}function Ni(t){t.prototype._applyFilters=function(t,e,i,n){var r,s,o,a,h,l,c,u,f;for(l=0,c=i.length;l<c;l++)if(r=i[n?c-l-1:l],s=jt(this.$options,"filters",r.name,!0),s&&(s=n?s.write:s.read||s,"function"==typeof s)){if(o=n?[t,e]:[t],h=n?2:1,r.args)for(u=0,f=r.args.length;u<f;u++)a=r.args[u],o[u+h]=a.dynamic?this.$get(a.value):a.value;t=s.apply(this,o)}return t},t.prototype._resolveComponent=function(e,i){var n;if(n="function"==typeof e?e:jt(this.$options,"components",e,!0))if(n.options)i(n);else if(n.resolved)i(n.resolved);else if(n.requested)n.pendingCallbacks.push(i);else{n.requested=!0;var r=n.pendingCallbacks=[i];n.call(this,function(e){g(e)&&(e=t.extend(e)),n.resolved=e;for(var i=0,s=r.length;i<s;i++)r[i](e)},function(t){})}}}function ji(t){function i(t){return JSON.parse(JSON.stringify(t))}t.prototype.$get=function(t,e){var i=Yt(t);if(i){if(e){var n=this;return function(){n.$arguments=d(arguments);var t=i.get.call(n,n);return n.$arguments=null,t}}try{return i.get.call(this,this)}catch(t){}}},t.prototype.$set=function(t,e){var i=Yt(t,!0);i&&i.set&&i.set.call(this,this,e)},t.prototype.$delete=function(t){e(this._data,t)},t.prototype.$watch=function(t,e,i){var n,r=this;"string"==typeof t&&(n=I(t),t=n.expression);var s=new re(r,t,e,{deep:i&&i.deep,sync:i&&i.sync,filters:n&&n.filters,user:!i||i.user!==!1});return i&&i.immediate&&e.call(r,s.value),function(){s.teardown()}},t.prototype.$eval=function(t,e){if(lo.test(t)){var i=I(t),n=this.$get(i.expression,e);return i.filters?this._applyFilters(n,null,i.filters):n}return this.$get(t,e)},t.prototype.$interpolate=function(t){var e=V(t),i=this;return e?1===e.length?i.$eval(e[0].value)+"":e.map(function(t){return t.tag?i.$eval(t.value):t.value}).join(""):t},t.prototype.$log=function(t){var e=t?Bt(this._data,t):this._data;if(e&&(e=i(e)),!t){var n;for(n in this.$options.computed)e[n]=i(this[n]);if(this._props)for(n in this._props)e[n]=i(this[n])}console.log(e)}}function Ei(t){function e(t,e,n,r,s,o){e=i(e);var a=!X(e),h=r===!1||a?s:o,l=!a&&!t._isAttached&&!X(t.$el);return 
t._isFragment?(_t(t._fragmentStart,t._fragmentEnd,function(i){h(i,e,t)}),n&&n()):h(t.$el,e,t,n),l&&t._callHook("attached"),t}function i(t){return"string"==typeof t?document.querySelector(t):t}function n(t,e,i,n){e.appendChild(t),n&&n()}function r(t,e,i,n){et(t,e),n&&n()}function s(t,e,i){nt(t),i&&i()}t.prototype.$nextTick=function(t){ln(t,this)},t.prototype.$appendTo=function(t,i,r){return e(this,t,i,r,n,J)},t.prototype.$prependTo=function(t,e,n){return t=i(t),t.hasChildNodes()?this.$before(t.firstChild,e,n):this.$appendTo(t,e,n),this},t.prototype.$before=function(t,i,n){return e(this,t,i,n,r,q)},t.prototype.$after=function(t,e,n){return t=i(t),t.nextSibling?this.$before(t.nextSibling,e,n):this.$appendTo(t.parentNode,e,n),this},t.prototype.$remove=function(t,e){if(!this.$el.parentNode)return t&&t();var i=this._isAttached&&X(this.$el);i||(e=!1);var n=this,r=function(){i&&n._callHook("detached"),t&&t()};if(this._isFragment)yt(this._fragmentStart,this._fragmentEnd,this,this._fragment,r);else{var o=e===!1?s:Q;o(this.$el,this,r)}return this}}function Si(t){function e(t,e,n){var r=t.$parent;if(r&&n&&!i.test(e))for(;r;)r._eventsCount[e]=(r._eventsCount[e]||0)+n,r=r.$parent}t.prototype.$on=function(t,i){return(this._events[t]||(this._events[t]=[])).push(i),e(this,t,1),this},t.prototype.$once=function(t,e){function i(){n.$off(t,i),e.apply(this,arguments)}var n=this;return i.fn=e,this.$on(t,i),this},t.prototype.$off=function(t,i){var n;if(!arguments.length){if(this.$parent)for(t in this._events)n=this._events[t],n&&e(this,t,-n.length);return this._events={},this}if(n=this._events[t],!n)return this;if(1===arguments.length)return e(this,t,-n.length),this._events[t]=null,this;for(var r,s=n.length;s--;)if(r=n[s],r===i||r.fn===i){e(this,t,-1),n.splice(s,1);break}return this},t.prototype.$emit=function(t){var e="string"==typeof t;t=e?t:t.name;var i=this._events[t],n=e||!i;if(i){i=i.length>1?d(i):i;var r=e&&i.some(function(t){return t._fromParent});r&&(n=!1);for(var s=d(arguments,1),o=0,a=i.length;o<a;o++){var h=i[o],l=h.apply(this,s);l!==!0||r&&!h._fromParent||(n=!0)}}return n},t.prototype.$broadcast=function(t){var e="string"==typeof t;if(t=e?t:t.name,this._eventsCount[t]){var i=this.$children,n=d(arguments);e&&(n[0]={name:t,source:this});for(var r=0,s=i.length;r<s;r++){var o=i[r],a=o.$emit.apply(o,n);a&&o.$broadcast.apply(o,n)}return this}},t.prototype.$dispatch=function(t){var e=this.$emit.apply(this,arguments);if(e){var i=this.$parent,n=d(arguments);for(n[0]={name:t,source:this};i;)e=i.$emit.apply(i,n),i=e?i.$parent:null;return this}};var i=/^hook:/}function Fi(t){function e(){this._isAttached=!0,this._isReady=!0,this._callHook("ready")}t.prototype.$mount=function(t){if(!this._isCompiled)return t=Z(t),t||(t=document.createElement("div")),this._compile(t),this._initDOMHooks(),X(this.$el)?(this._callHook("attached"),e.call(this)):this.$once("hook:attached",e),this},t.prototype.$destroy=function(t,e){this._destroy(t,e)},t.prototype.$compile=function(t,e,i,n){return qe(t,this.$options,!0)(this,t,e,i,n)}}function Di(t){this._init(t)}function Pi(t,e,i){return i=i?parseInt(i,10):0,e=o(e),"number"==typeof e?t.slice(i,i+e):t}function Ri(t,e,i){if(t=po(t),null==e)return t;if("function"==typeof e)return t.filter(e);e=(""+e).toLowerCase();for(var n,r,s,o,a="in"===i?3:2,h=Array.prototype.concat.apply([],d(arguments,a)),l=[],c=0,u=t.length;c<u;c++)if(n=t[c],s=n&&n.$value||n,o=h.length){for(;o--;)if(r=h[o],"$key"===r&&Hi(n.$key,e)||Hi(Bt(s,r),e)){l.push(n);break}}else Hi(n,e)&&l.push(n);return l}function 
Li(t){function e(t,e,i){var r=n[i];return r&&("$key"!==r&&(m(t)&&"$value"in t&&(t=t.$value),m(e)&&"$value"in e&&(e=e.$value)),t=m(t)?Bt(t,r):t,e=m(e)?Bt(e,r):e),t===e?0:t>e?s:-s}var i=null,n=void 0;t=po(t);var r=d(arguments,1),s=r[r.length-1];"number"==typeof s?(s=s<0?-1:1,r=r.length>1?r.slice(0,-1):r):s=1;var o=r[0];return o?("function"==typeof o?i=function(t,e){return o(t,e)*s}:(n=Array.prototype.concat.apply([],r),i=function(t,r,s){return s=s||0,s>=n.length-1?e(t,r,s):e(t,r,s)||i(t,r,s+1)}),t.slice().sort(i)):t}function Hi(t,e){var i;if(g(t)){var n=Object.keys(t);for(i=n.length;i--;)if(Hi(t[n[i]],e))return!0}else if(qi(t)){for(i=t.length;i--;)if(Hi(t[i],e))return!0}else if(null!=t)return t.toString().toLowerCase().indexOf(e)>-1}function Ii(i){function n(t){return new Function("return function "+f(t)+" (options) { this._init(options) }")()}i.options={directives:Ds,elementDirectives:fo,filters:mo,transitions:{},components:{},partials:{},replace:!0},i.util=Kn,i.config=Mn,i.set=t,i.delete=e,i.nextTick=ln,i.compiler=ao,i.FragmentFactory=_e,i.internalDirectives=Ys,i.parsers={path:mr,text:Ln,template:qr,directive:En,expression:jr},i.cid=0;var r=1;i.extend=function(t){t=t||{};var e=this,i=0===e.cid;if(i&&t._Ctor)return t._Ctor;var s=t.name||e.options.name,o=n(s||"VueComponent");return o.prototype=Object.create(e.prototype),o.prototype.constructor=o,o.cid=r++,o.options=Nt(e.options,t),o.super=e,o.extend=e.extend,Mn._assetTypes.forEach(function(t){o[t]=e[t]}),s&&(o.options.components[s]=o),i&&(t._Ctor=o),o},i.use=function(t){if(!t.installed){var e=d(arguments,1);return e.unshift(this),"function"==typeof t.install?t.install.apply(t,e):t.apply(null,e),t.installed=!0,this}},i.mixin=function(t){i.options=Nt(i.options,t)},Mn._assetTypes.forEach(function(t){i[t]=function(e,n){return n?("component"===t&&g(n)&&(n.name||(n.name=e),n=i.extend(n)),this.options[t+"s"][e]=n,n):this.options[t+"s"][e]}}),v(i.transition,Vn)}var Mi=Object.prototype.hasOwnProperty,Wi=/^\s?(true|false|-?[\d\.]+|'[^']*'|"[^"]*")\s?$/,Vi=/-(\w)/g,Bi=/([^-])([A-Z])/g,zi=/(?:^|[-_\/])(\w)/g,Ui=Object.prototype.toString,Ji="[object Object]",qi=Array.isArray,Qi="__proto__"in{},Gi="undefined"!=typeof window&&"[object Object]"!==Object.prototype.toString.call(window),Zi=Gi&&window.__VUE_DEVTOOLS_GLOBAL_HOOK__,Xi=Gi&&window.navigator.userAgent.toLowerCase(),Yi=Xi&&Xi.indexOf("trident")>0,Ki=Xi&&Xi.indexOf("msie 9.0")>0,tn=Xi&&Xi.indexOf("android")>0,en=Xi&&/iphone|ipad|ipod|ios/.test(Xi),nn=void 0,rn=void 0,sn=void 0,on=void 0;if(Gi&&!Ki){var an=void 0===window.ontransitionend&&void 0!==window.onwebkittransitionend,hn=void 0===window.onanimationend&&void 0!==window.onwebkitanimationend;nn=an?"WebkitTransition":"transition",rn=an?"webkitTransitionEnd":"transitionend",sn=hn?"WebkitAnimation":"animation",on=hn?"webkitAnimationEnd":"animationend"}var ln=function(){function t(){i=!1;var t=e.slice(0);e.length=0;for(var n=0;n<t.length;n++)t[n]()}var e=[],i=!1,n=void 0;if("undefined"!=typeof Promise&&$(Promise)){var r=Promise.resolve(),s=function(){};n=function(){r.then(t),en&&setTimeout(s)}}else if("undefined"!=typeof MutationObserver){var o=1,a=new MutationObserver(t),h=document.createTextNode(String(o));a.observe(h,{characterData:!0}),n=function(){o=(o+1)%2,h.data=String(o)}}else n=setTimeout;return function(r,s){var o=s?function(){r.call(s)}:r;e.push(o),i||(i=!0,n(t,0))}}(),cn=void 0;"undefined"!=typeof Set&&$(Set)?cn=Set:(cn=function(){this.set=Object.create(null)},cn.prototype.has=function(t){return void 
0!==this.set[t]},cn.prototype.add=function(t){this.set[t]=1},cn.prototype.clear=function(){this.set=Object.create(null)});var un=k.prototype;un.put=function(t,e){var i,n=this.get(t,!0);return n||(this.size===this.limit&&(i=this.shift()),n={key:t},this._keymap[t]=n,this.tail?(this.tail.newer=n,n.older=this.tail):this.head=n,this.tail=n,this.size++),n.value=e,i},un.shift=function(){var t=this.head;return t&&(this.head=this.head.newer,this.head.older=void 0,t.newer=t.older=void 0,this._keymap[t.key]=void 0,this.size--),t},un.get=function(t,e){var i=this._keymap[t];if(void 0!==i)return i===this.tail?e?i:i.value:(i.newer&&(i===this.head&&(this.head=i.newer),i.newer.older=i.older),i.older&&(i.older.newer=i.newer),i.newer=void 0,i.older=this.tail,this.tail&&(this.tail.newer=i),this.tail=i,e?i:i.value)};var fn,pn,dn,vn,mn,gn,_n=new k(1e3),yn=/^in$|^-?\d+/,bn=0,wn=1,Cn=2,$n=3,kn=34,xn=39,An=124,On=92,Tn=32,Nn={91:1,123:1,40:1},jn={91:93,123:125,40:41},En=Object.freeze({parseDirective:I}),Sn=/[-.*+?^${}()|[\]\/\\]/g,Fn=void 0,Dn=void 0,Pn=void 0,Rn=/[^|]\|[^|]/,Ln=Object.freeze({compileRegex:W,parseText:V,tokensToExp:B}),Hn=["{{","}}"],In=["{{{","}}}"],Mn=Object.defineProperties({debug:!1,silent:!1,async:!0,warnExpressionErrors:!0,devtools:!1,_delimitersChanged:!0,_assetTypes:["component","directive","elementDirective","filter","transition","partial"],_propBindingModes:{ONE_WAY:0,TWO_WAY:1,ONE_TIME:2},_maxUpdateCount:100},{delimiters:{get:function(){return Hn},set:function(t){Hn=t,W()},configurable:!0,enumerable:!0},unsafeDelimiters:{get:function(){return In},set:function(t){In=t,W()},configurable:!0,enumerable:!0}}),Wn=void 0,Vn=Object.freeze({appendWithTransition:J,beforeWithTransition:q,removeWithTransition:Q,applyTransition:G}),Bn=/^v-ref:/,zn=/^(div|p|span|img|a|b|i|br|ul|ol|li|h1|h2|h3|h4|h5|h6|code|pre|table|th|td|tr|form|label|input|select|option|nav|article|section|header|footer)$/i,Un=/^(slot|partial|component)$/i,Jn=Mn.optionMergeStrategies=Object.create(null);Jn.data=function(t,e,i){return i?t||e?function(){var n="function"==typeof e?e.call(i):e,r="function"==typeof t?t.call(i):void 0;return n?kt(n,r):r}:void 0:e?"function"!=typeof e?t:t?function(){return kt(e.call(this),t.call(this))}:e:t},Jn.el=function(t,e,i){if(i||!e||"function"==typeof e){var n=e||t;return i&&"function"==typeof n?n.call(i):n}},Jn.init=Jn.created=Jn.ready=Jn.attached=Jn.detached=Jn.beforeCompile=Jn.compiled=Jn.beforeDestroy=Jn.destroyed=Jn.activate=function(t,e){return e?t?t.concat(e):qi(e)?e:[e]:t},Mn._assetTypes.forEach(function(t){Jn[t+"s"]=xt}),Jn.watch=Jn.events=function(t,e){if(!e)return t;if(!t)return e;var i={};v(i,t);for(var n in e){var r=i[n],s=e[n];r&&!qi(r)&&(r=[r]),i[n]=r?r.concat(s):[s]}return i},Jn.props=Jn.methods=Jn.computed=function(t,e){if(!e)return t;if(!t)return e;var i=Object.create(null);return v(i,t),v(i,e),i};var qn=function(t,e){return void 0===e?t:e},Qn=0;Et.target=null,Et.prototype.addSub=function(t){this.subs.push(t)},Et.prototype.removeSub=function(t){this.subs.$remove(t)},Et.prototype.depend=function(){Et.target.addDep(this)},Et.prototype.notify=function(){for(var t=d(this.subs),e=0,i=t.length;e<i;e++)t[e].update()};var Gn=Array.prototype,Zn=Object.create(Gn);["push","pop","shift","unshift","splice","sort","reverse"].forEach(function(t){var e=Gn[t];_(Zn,t,function(){for(var i=arguments.length,n=new Array(i);i--;)n[i]=arguments[i];var r,s=e.apply(this,n),o=this.__ob__;switch(t){case"push":r=n;break;case"unshift":r=n;break;case"splice":r=n.slice(2)}return 
r&&o.observeArray(r),o.dep.notify(),s})}),_(Gn,"$set",function(t,e){return t>=this.length&&(this.length=Number(t)+1),this.splice(t,1,e)[0]}),_(Gn,"$remove",function(t){if(this.length){var e=b(this,t);return e>-1?this.splice(e,1):void 0}});var Xn=Object.getOwnPropertyNames(Zn),Yn=!0;Ft.prototype.walk=function(t){for(var e=Object.keys(t),i=0,n=e.length;i<n;i++)this.convert(e[i],t[e[i]])},Ft.prototype.observeArray=function(t){for(var e=0,i=t.length;e<i;e++)Rt(t[e])},Ft.prototype.convert=function(t,e){Lt(this.value,t,e)},Ft.prototype.addVm=function(t){(this.vms||(this.vms=[])).push(t)},Ft.prototype.removeVm=function(t){this.vms.$remove(t)};var Kn=Object.freeze({defineReactive:Lt,set:t,del:e,hasOwn:i,isLiteral:n,isReserved:r,_toString:s,toNumber:o,toBoolean:a,stripQuotes:h,camelize:l,hyphenate:u,classify:f,bind:p,toArray:d,extend:v,isObject:m,isPlainObject:g,def:_,debounce:y,indexOf:b,cancellable:w,looseEqual:C,isArray:qi,hasProto:Qi,inBrowser:Gi,devtools:Zi,isIE:Yi,isIE9:Ki,isAndroid:tn,isIOS:en,get transitionProp(){return nn},get transitionEndEvent(){return rn},get animationProp(){return sn},get animationEndEvent(){return on},nextTick:ln,get _Set(){return cn},query:Z,inDoc:X,getAttr:Y,getBindAttr:K,hasBindAttr:tt,before:et,after:it,remove:nt,prepend:rt,replace:st,on:ot,off:at,setClass:lt,addClass:ct,removeClass:ut,extractContent:ft,trimNode:pt,isTemplate:vt,createAnchor:mt,findRef:gt,mapNodeRange:_t,removeNodeRange:yt,isFragment:bt,getOuterHTML:wt,mergeOptions:Nt,resolveAsset:jt,checkComponentAttr:Ct,commonTagRE:zn,reservedTagRE:Un,warn:Wn}),tr=0,er=new k(1e3),ir=0,nr=1,rr=2,sr=3,or=0,ar=1,hr=2,lr=3,cr=4,ur=5,fr=6,pr=7,dr=8,vr=[];vr[or]={ws:[or],ident:[lr,ir],"[":[cr],eof:[pr]},vr[ar]={ws:[ar],".":[hr],"[":[cr],eof:[pr]},vr[hr]={ws:[hr],ident:[lr,ir]},vr[lr]={ident:[lr,ir],0:[lr,ir],number:[lr,ir],ws:[ar,nr],".":[hr,nr],"[":[cr,nr],eof:[pr,nr]},vr[cr]={"'":[ur,ir],'"':[fr,ir],"[":[cr,rr],"]":[ar,sr],eof:dr,else:[cr,ir]},vr[ur]={"'":[cr,ir],eof:dr,else:[ur,ir]},vr[fr]={'"':[cr,ir],eof:dr,else:[fr,ir]};var mr=Object.freeze({parsePath:Vt,getPath:Bt,setPath:zt}),gr=new k(1e3),_r="Math,Date,this,true,false,null,undefined,Infinity,NaN,isNaN,isFinite,decodeURI,decodeURIComponent,encodeURI,encodeURIComponent,parseInt,parseFloat",yr=new RegExp("^("+_r.replace(/,/g,"\\b|")+"\\b)"),br="break,case,class,catch,const,continue,debugger,default,delete,do,else,export,extends,finally,for,function,if,import,in,instanceof,let,return,super,switch,throw,try,var,while,with,yield,enum,await,implements,package,protected,static,interface,private,public",wr=new RegExp("^("+br.replace(/,/g,"\\b|")+"\\b)"),Cr=/\s/g,$r=/\n/g,kr=/[\{,]\s*[\w\$_]+\s*:|('(?:[^'\\]|\\.)*'|"(?:[^"\\]|\\.)*"|`(?:[^`\\]|\\.)*\$\{|\}(?:[^`\\"']|\\.)*`|`(?:[^`\\]|\\.)*`)|new |typeof |void /g,xr=/"(\d+)"/g,Ar=/^[A-Za-z_$][\w$]*(?:\.[A-Za-z_$][\w$]*|\['.*?'\]|\[".*?"\]|\[\d+\]|\[[A-Za-z_$][\w$]*\])*$/,Or=/[^\w$\.](?:[A-Za-z_$][\w$]*)/g,Tr=/^(?:true|false|null|undefined|Infinity|NaN)$/,Nr=[],jr=Object.freeze({parseExpression:Yt,isSimplePath:Kt}),Er=[],Sr=[],Fr={},Dr={},Pr=!1,Rr=0;re.prototype.get=function(){this.beforeGet();var t,e=this.scope||this.vm;try{t=this.getter.call(e,e)}catch(t){}return this.deep&&se(t),this.preProcess&&(t=this.preProcess(t)),this.filters&&(t=e._applyFilters(t,null,this.filters,!1)),this.postProcess&&(t=this.postProcess(t)),this.afterGet(),t},re.prototype.set=function(t){var e=this.scope||this.vm;this.filters&&(t=e._applyFilters(t,this.value,this.filters,!0));try{this.setter.call(e,e,t)}catch(t){}var 
i=e.$forContext;if(i&&i.alias===this.expression){if(i.filters)return;i._withLock(function(){e.$key?i.rawValue[e.$key]=t:i.rawValue.$set(e.$index,t)})}},re.prototype.beforeGet=function(){Et.target=this},re.prototype.addDep=function(t){var e=t.id;this.newDepIds.has(e)||(this.newDepIds.add(e),this.newDeps.push(t),this.depIds.has(e)||t.addSub(this))},re.prototype.afterGet=function(){Et.target=null;for(var t=this.deps.length;t--;){var e=this.deps[t];this.newDepIds.has(e.id)||e.removeSub(this)}var i=this.depIds;this.depIds=this.newDepIds,this.newDepIds=i,this.newDepIds.clear(),i=this.deps,this.deps=this.newDeps,this.newDeps=i,this.newDeps.length=0},re.prototype.update=function(t){this.lazy?this.dirty=!0:this.sync||!Mn.async?this.run():(this.shallow=this.queued?!!t&&this.shallow:!!t,this.queued=!0,ne(this))},re.prototype.run=function(){if(this.active){var t=this.get();if(t!==this.value||(m(t)||this.deep)&&!this.shallow){var e=this.value;this.value=t;this.prevError;this.cb.call(this.vm,t,e)}this.queued=this.shallow=!1}},re.prototype.evaluate=function(){var t=Et.target;this.value=this.get(),this.dirty=!1,Et.target=t},re.prototype.depend=function(){for(var t=this.deps.length;t--;)this.deps[t].depend()},re.prototype.teardown=function(){if(this.active){this.vm._isBeingDestroyed||this.vm._vForRemoving||this.vm._watchers.$remove(this);for(var t=this.deps.length;t--;)this.deps[t].removeSub(this);this.active=!1,this.vm=this.cb=this.value=null}};var Lr=new cn,Hr={bind:function(){this.attr=3===this.el.nodeType?"data":"textContent"},update:function(t){this.el[this.attr]=s(t)}},Ir=new k(1e3),Mr=new k(1e3),Wr={efault:[0,"",""],legend:[1,"<fieldset>","</fieldset>"],tr:[2,"<table><tbody>","</tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"]};Wr.td=Wr.th=[3,"<table><tbody><tr>","</tr></tbody></table>"],Wr.option=Wr.optgroup=[1,'<select multiple="multiple">',"</select>"],Wr.thead=Wr.tbody=Wr.colgroup=Wr.caption=Wr.tfoot=[1,"<table>","</table>"],Wr.g=Wr.defs=Wr.symbol=Wr.use=Wr.image=Wr.text=Wr.circle=Wr.ellipse=Wr.line=Wr.path=Wr.polygon=Wr.polyline=Wr.rect=[1,'<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:ev="http://www.w3.org/2001/xml-events"version="1.1">',"</svg>"];var Vr=/<([\w:-]+)/,Br=/&#?\w+?;/,zr=/<!--/,Ur=function(){if(Gi){var t=document.createElement("div");return t.innerHTML="<template>1</template>",!t.cloneNode(!0).firstChild.innerHTML}return!1}(),Jr=function(){if(Gi){var t=document.createElement("textarea");return t.placeholder="t","t"===t.cloneNode(!0).value}return!1}(),qr=Object.freeze({cloneNode:le,parseTemplate:ce}),Qr={bind:function(){8===this.el.nodeType&&(this.nodes=[],this.anchor=mt("v-html"),st(this.el,this.anchor))},update:function(t){t=s(t),this.nodes?this.swap(t):this.el.innerHTML=t},swap:function(t){for(var e=this.nodes.length;e--;)nt(this.nodes[e]);var i=ce(t,!0,!0);this.nodes=d(i.childNodes),et(i,this.anchor)}};ue.prototype.callHook=function(t){var e,i;for(e=0,i=this.childFrags.length;e<i;e++)this.childFrags[e].callHook(t);for(e=0,i=this.children.length;e<i;e++)t(this.children[e])},ue.prototype.beforeRemove=function(){var t,e;for(t=0,e=this.childFrags.length;t<e;t++)this.childFrags[t].beforeRemove(!1);for(t=0,e=this.children.length;t<e;t++)this.children[t].$destroy(!1,!0);var i=this.unlink.dirs;for(t=0,e=i.length;t<e;t++)i[t]._watcher&&i[t]._watcher.teardown()},ue.prototype.destroy=function(){this.parentFrag&&this.parentFrag.childFrags.$remove(this),this.node.__v_frag=null,this.unlink()};var Gr=new 
k(5e3);_e.prototype.create=function(t,e,i){var n=le(this.template);return new ue(this.linker,this.vm,n,t,e,i)};var Zr=700,Xr=800,Yr=850,Kr=1100,ts=1500,es=1500,is=1750,ns=2100,rs=2200,ss=2300,os=0,as={priority:rs,terminal:!0,params:["track-by","stagger","enter-stagger","leave-stagger"],bind:function(){var t=this.expression.match(/(.*) (?:in|of) (.*)/);if(t){var e=t[1].match(/\((.*),(.*)\)/);e?(this.iterator=e[1].trim(),this.alias=e[2].trim()):this.alias=t[1].trim(),this.expression=t[2]}if(this.alias){this.id="__v-for__"+ ++os;var i=this.el.tagName;this.isOption=("OPTION"===i||"OPTGROUP"===i)&&"SELECT"===this.el.parentNode.tagName,this.start=mt("v-for-start"),this.end=mt("v-for-end"),st(this.el,this.end),et(this.start,this.end),this.cache=Object.create(null),this.factory=new _e(this.vm,this.el)}},update:function(t){this.diff(t),this.updateRef(),this.updateModel()},diff:function(t){var e,n,r,s,o,a,h=t[0],l=this.fromObject=m(h)&&i(h,"$key")&&i(h,"$value"),c=this.params.trackBy,u=this.frags,f=this.frags=new Array(t.length),p=this.alias,d=this.iterator,v=this.start,g=this.end,_=X(v),y=!u;for(e=0,n=t.length;e<n;e++)h=t[e],s=l?h.$key:null,o=l?h.$value:h,a=!m(o),r=!y&&this.getCachedFrag(o,e,s),r?(r.reused=!0,r.scope.$index=e,s&&(r.scope.$key=s),d&&(r.scope[d]=null!==s?s:e),(c||l||a)&&St(function(){r.scope[p]=o})):(r=this.create(o,p,e,s),r.fresh=!y),f[e]=r,y&&r.before(g);if(!y){var b=0,w=u.length-f.length;for(this.vm._vForRemoving=!0,e=0,n=u.length;e<n;e++)r=u[e],r.reused||(this.deleteCachedFrag(r),this.remove(r,b++,w,_));this.vm._vForRemoving=!1,b&&(this.vm._watchers=this.vm._watchers.filter(function(t){return t.active}));var C,$,k,x=0;for(e=0,n=f.length;e<n;e++)r=f[e],C=f[e-1],$=C?C.staggerCb?C.staggerAnchor:C.end||C.node:v,r.reused&&!r.staggerCb?(k=ye(r,v,this.id),k===C||k&&ye(k,v,this.id)===C||this.move(r,$)):this.insert(r,x++,$,_),r.reused=r.fresh=!1}},create:function(t,e,i,n){var r=this._host,s=this._scope||this.vm,o=Object.create(s);o.$refs=Object.create(s.$refs),o.$els=Object.create(s.$els),o.$parent=s,o.$forContext=this,St(function(){Lt(o,e,t)}),Lt(o,"$index",i),n?Lt(o,"$key",n):o.$key&&_(o,"$key",null),this.iterator&&Lt(o,this.iterator,null!==n?n:i);var a=this.factory.create(r,o,this._frag);return a.forId=this.id,this.cacheFrag(t,a,i,n),a},updateRef:function(){var t=this.descriptor.ref;if(t){var e,i=(this._scope||this.vm).$refs;this.fromObject?(e={},this.frags.forEach(function(t){e[t.scope.$key]=Ce(t)})):e=this.frags.map(Ce),i[t]=e}},updateModel:function(){if(this.isOption){var t=this.start.parentNode,e=t&&t.__v_model;e&&e.forceUpdate()}},insert:function(t,e,i,n){t.staggerCb&&(t.staggerCb.cancel(),t.staggerCb=null);var r=this.getStagger(t,e,null,"enter");if(n&&r){var s=t.staggerAnchor;s||(s=t.staggerAnchor=mt("stagger-anchor"),s.__v_frag=t),it(s,i);var o=t.staggerCb=w(function(){t.staggerCb=null,t.before(s),nt(s)});setTimeout(o,r)}else{var a=i.nextSibling;a||(it(this.end,i),a=this.end),t.before(a)}},remove:function(t,e,i,n){if(t.staggerCb)return t.staggerCb.cancel(),void(t.staggerCb=null);var r=this.getStagger(t,e,i,"leave");if(n&&r){var s=t.staggerCb=w(function(){t.staggerCb=null,t.remove()});setTimeout(s,r)}else t.remove()},move:function(t,e){e.nextSibling||this.end.parentNode.appendChild(this.end),t.before(e.nextSibling,!1)},cacheFrag:function(t,e,n,r){var s,o=this.params.trackBy,a=this.cache,h=!m(t);r||o||h?(s=we(n,r,t,o),a[s]||(a[s]=e)):(s=this.id,i(t,s)?null===t[s]&&(t[s]=e):Object.isExtensible(t)&&_(t,s,e)),e.raw=t},getCachedFrag:function(t,e,i){var 
n,r=this.params.trackBy,s=!m(t);if(i||r||s){var o=we(e,i,t,r);n=this.cache[o]}else n=t[this.id];return n&&(n.reused||n.fresh),n},deleteCachedFrag:function(t){var e=t.raw,n=this.params.trackBy,r=t.scope,s=r.$index,o=i(r,"$key")&&r.$key,a=!m(e);if(n||o||a){var h=we(s,o,e,n);this.cache[h]=null}else e[this.id]=null,t.raw=null},getStagger:function(t,e,i,n){n+="Stagger";var r=t.node.__v_trans,s=r&&r.hooks,o=s&&(s[n]||s.stagger);return o?o.call(t,e,i):e*parseInt(this.params[n]||this.params.stagger,10)},_preProcess:function(t){return this.rawValue=t,t},_postProcess:function(t){if(qi(t))return t;if(g(t)){for(var e,i=Object.keys(t),n=i.length,r=new Array(n);n--;)e=i[n],r[n]={$key:e,$value:t[e]};return r}return"number"!=typeof t||isNaN(t)||(t=be(t)),t||[]},unbind:function(){if(this.descriptor.ref&&((this._scope||this.vm).$refs[this.descriptor.ref]=null),this.frags)for(var t,e=this.frags.length;e--;)t=this.frags[e],this.deleteCachedFrag(t),t.destroy()}},hs={priority:ns,terminal:!0,bind:function(){var t=this.el;if(t.__vue__)this.invalid=!0;else{var e=t.nextElementSibling;e&&null!==Y(e,"v-else")&&(nt(e),this.elseEl=e),this.anchor=mt("v-if"),st(t,this.anchor)}},update:function(t){this.invalid||(t?this.frag||this.insert():this.remove())},insert:function(){this.elseFrag&&(this.elseFrag.remove(),this.elseFrag=null),this.factory||(this.factory=new _e(this.vm,this.el)),this.frag=this.factory.create(this._host,this._scope,this._frag),this.frag.before(this.anchor)},remove:function(){this.frag&&(this.frag.remove(),this.frag=null),this.elseEl&&!this.elseFrag&&(this.elseFactory||(this.elseFactory=new _e(this.elseEl._context||this.vm,this.elseEl)),this.elseFrag=this.elseFactory.create(this._host,this._scope,this._frag),this.elseFrag.before(this.anchor))},unbind:function(){this.frag&&this.frag.destroy(),this.elseFrag&&this.elseFrag.destroy()}},ls={bind:function(){var t=this.el.nextElementSibling;t&&null!==Y(t,"v-else")&&(this.elseEl=t)},update:function(t){this.apply(this.el,t),this.elseEl&&this.apply(this.elseEl,!t)},apply:function(t,e){function i(){t.style.display=e?"":"none"}X(t)?G(t,e?1:-1,i,this.vm):i()}},cs={bind:function(){var t=this,e=this.el,i="range"===e.type,n=this.params.lazy,r=this.params.number,s=this.params.debounce,a=!1;if(tn||i||(this.on("compositionstart",function(){a=!0}),this.on("compositionend",function(){a=!1,n||t.listener()})),this.focused=!1,i||n||(this.on("focus",function(){t.focused=!0}),this.on("blur",function(){t.focused=!1,t._frag&&!t._frag.inserted||t.rawListener()})),this.listener=this.rawListener=function(){if(!a&&t._bound){var n=r||i?o(e.value):e.value;t.set(n),ln(function(){t._bound&&!t.focused&&t.update(t._watcher.value)})}},s&&(this.listener=y(this.listener,s)),this.hasjQuery="function"==typeof jQuery,this.hasjQuery){var h=jQuery.fn.on?"on":"bind";jQuery(e)[h]("change",this.rawListener),n||jQuery(e)[h]("input",this.listener)}else this.on("change",this.rawListener),n||this.on("input",this.listener);!n&&Ki&&(this.on("cut",function(){ln(t.listener)}),this.on("keyup",function(e){46!==e.keyCode&&8!==e.keyCode||t.listener()})),(e.hasAttribute("value")||"TEXTAREA"===e.tagName&&e.value.trim())&&(this.afterBind=this.listener)},update:function(t){t=s(t),t!==this.el.value&&(this.el.value=t)},unbind:function(){var t=this.el;if(this.hasjQuery){var e=jQuery.fn.off?"off":"unbind";jQuery(t)[e]("change",this.listener),jQuery(t)[e]("input",this.listener)}}},us={bind:function(){var t=this,e=this.el;this.getValue=function(){if(e.hasOwnProperty("_value"))return e._value;var i=e.value;return 
t.params.number&&(i=o(i)),i},this.listener=function(){t.set(t.getValue())},this.on("change",this.listener),e.hasAttribute("checked")&&(this.afterBind=this.listener)},update:function(t){this.el.checked=C(t,this.getValue())}},fs={bind:function(){var t=this,e=this,i=this.el;this.forceUpdate=function(){e._watcher&&e.update(e._watcher.get())};var n=this.multiple=i.hasAttribute("multiple");this.listener=function(){var t=$e(i,n);t=e.params.number?qi(t)?t.map(o):o(t):t,e.set(t)},this.on("change",this.listener);var r=$e(i,n,!0);(n&&r.length||!n&&null!==r)&&(this.afterBind=this.listener),this.vm.$on("hook:attached",function(){ln(t.forceUpdate)}),X(i)||ln(this.forceUpdate)},update:function(t){var e=this.el;e.selectedIndex=-1;for(var i,n,r=this.multiple&&qi(t),s=e.options,o=s.length;o--;)i=s[o],n=i.hasOwnProperty("_value")?i._value:i.value,i.selected=r?ke(t,n)>-1:C(t,n)},unbind:function(){this.vm.$off("hook:attached",this.forceUpdate)}},ps={bind:function(){function t(){var t=i.checked;return t&&i.hasOwnProperty("_trueValue")?i._trueValue:!t&&i.hasOwnProperty("_falseValue")?i._falseValue:t}var e=this,i=this.el;this.getValue=function(){return i.hasOwnProperty("_value")?i._value:e.params.number?o(i.value):i.value},this.listener=function(){var n=e._watcher.get();if(qi(n)){var r=e.getValue(),s=b(n,r);i.checked?s<0&&e.set(n.concat(r)):s>-1&&e.set(n.slice(0,s).concat(n.slice(s+1)))}else e.set(t())},this.on("change",this.listener),i.hasAttribute("checked")&&(this.afterBind=this.listener)},update:function(t){var e=this.el;qi(t)?e.checked=b(t,this.getValue())>-1:e.hasOwnProperty("_trueValue")?e.checked=C(t,e._trueValue):e.checked=!!t}},ds={text:cs,radio:us,select:fs,checkbox:ps},vs={priority:Xr,twoWay:!0,handlers:ds,params:["lazy","number","debounce"],bind:function(){this.checkFilters(),this.hasRead&&!this.hasWrite;var t,e=this.el,i=e.tagName;if("INPUT"===i)t=ds[e.type]||ds.text;else if("SELECT"===i)t=ds.select;else{if("TEXTAREA"!==i)return;t=ds.text}e.__v_model=this,t.bind.call(this),this.update=t.update,this._unbind=t.unbind},checkFilters:function(){var t=this.filters;if(t)for(var e=t.length;e--;){var i=jt(this.vm.$options,"filters",t[e].name);("function"==typeof i||i.read)&&(this.hasRead=!0),i.write&&(this.hasWrite=!0)}},unbind:function(){this.el.__v_model=null,this._unbind&&this._unbind()}},ms={esc:27,tab:9,enter:13,space:32,delete:[8,46],up:38,left:37,right:39,down:40},gs={priority:Zr,acceptStatement:!0,keyCodes:ms,bind:function(){if("IFRAME"===this.el.tagName&&"load"!==this.arg){var t=this;this.iframeBind=function(){ot(t.el.contentWindow,t.arg,t.handler,t.modifiers.capture)},this.on("load",this.iframeBind)}},update:function(t){if(this.descriptor.raw||(t=function(){}),"function"==typeof t){this.modifiers.stop&&(t=Ae(t)),this.modifiers.prevent&&(t=Oe(t)),this.modifiers.self&&(t=Te(t));var e=Object.keys(this.modifiers).filter(function(t){return"stop"!==t&&"prevent"!==t&&"self"!==t&&"capture"!==t});e.length&&(t=xe(t,e)),this.reset(),this.handler=t,this.iframeBind?this.iframeBind():ot(this.el,this.arg,this.handler,this.modifiers.capture)}},reset:function(){var t=this.iframeBind?this.el.contentWindow:this.el;this.handler&&at(t,this.arg,this.handler)},unbind:function(){this.reset()}},_s=["-webkit-","-moz-","-ms-"],ys=["Webkit","Moz","ms"],bs=/!important;?$/,ws=Object.create(null),Cs=null,$s={deep:!0,update:function(t){"string"==typeof t?this.el.style.cssText=t:qi(t)?this.handleObject(t.reduce(v,{})):this.handleObject(t||{})},handleObject:function(t){var e,i,n=this.cache||(this.cache={});for(e in n)e in 
t||(this.handleSingle(e,null),delete n[e]);for(e in t)i=t[e],i!==n[e]&&(n[e]=i,this.handleSingle(e,i))},handleSingle:function(t,e){if(t=Ne(t))if(null!=e&&(e+=""),e){var i=bs.test(e)?"important":"";i?(e=e.replace(bs,"").trim(),this.el.style.setProperty(t.kebab,e,i)):this.el.style[t.camel]=e;
}else this.el.style[t.camel]=""}},ks="http://www.w3.org/1999/xlink",xs=/^xlink:/,As=/^v-|^:|^@|^(?:is|transition|transition-mode|debounce|track-by|stagger|enter-stagger|leave-stagger)$/,Os=/^(?:value|checked|selected|muted)$/,Ts=/^(?:draggable|contenteditable|spellcheck)$/,Ns={value:"_value","true-value":"_trueValue","false-value":"_falseValue"},js={priority:Yr,bind:function(){var t=this.arg,e=this.el.tagName;t||(this.deep=!0);var i=this.descriptor,n=i.interp;n&&(i.hasOneTime&&(this.expression=B(n,this._scope||this.vm)),(As.test(t)||"name"===t&&("PARTIAL"===e||"SLOT"===e))&&(this.el.removeAttribute(t),this.invalid=!0))},update:function(t){if(!this.invalid){var e=this.arg;this.arg?this.handleSingle(e,t):this.handleObject(t||{})}},handleObject:$s.handleObject,handleSingle:function(t,e){var i=this.el,n=this.descriptor.interp;if(this.modifiers.camel&&(t=l(t)),!n&&Os.test(t)&&t in i){var r="value"===t&&null==e?"":e;i[t]!==r&&(i[t]=r)}var s=Ns[t];if(!n&&s){i[s]=e;var o=i.__v_model;o&&o.listener()}return"value"===t&&"TEXTAREA"===i.tagName?void i.removeAttribute(t):void(Ts.test(t)?i.setAttribute(t,e?"true":"false"):null!=e&&e!==!1?"class"===t?(i.__v_trans&&(e+=" "+i.__v_trans.id+"-transition"),lt(i,e)):xs.test(t)?i.setAttributeNS(ks,t,e===!0?"":e):i.setAttribute(t,e===!0?"":e):i.removeAttribute(t))}},Es={priority:ts,bind:function(){if(this.arg){var t=this.id=l(this.arg),e=(this._scope||this.vm).$els;i(e,t)?e[t]=this.el:Lt(e,t,this.el)}},unbind:function(){var t=(this._scope||this.vm).$els;t[this.id]===this.el&&(t[this.id]=null)}},Ss={bind:function(){}},Fs={bind:function(){var t=this.el;this.vm.$once("pre-hook:compiled",function(){t.removeAttribute("v-cloak")})}},Ds={text:Hr,html:Qr,for:as,if:hs,show:ls,model:vs,on:gs,bind:js,el:Es,ref:Ss,cloak:Fs},Ps={deep:!0,update:function(t){t?"string"==typeof t?this.setClass(t.trim().split(/\s+/)):this.setClass(Ee(t)):this.cleanup()},setClass:function(t){this.cleanup(t);for(var e=0,i=t.length;e<i;e++){var n=t[e];n&&Se(this.el,n,ct)}this.prevKeys=t},cleanup:function(t){var e=this.prevKeys;if(e)for(var i=e.length;i--;){var n=e[i];(!t||t.indexOf(n)<0)&&Se(this.el,n,ut)}}},Rs={priority:es,params:["keep-alive","transition-mode","inline-template"],bind:function(){this.el.__vue__||(this.keepAlive=this.params.keepAlive,this.keepAlive&&(this.cache={}),this.params.inlineTemplate&&(this.inlineTemplate=ft(this.el,!0)),this.pendingComponentCb=this.Component=null,this.pendingRemovals=0,this.pendingRemovalCb=null,this.anchor=mt("v-component"),st(this.el,this.anchor),this.el.removeAttribute("is"),this.el.removeAttribute(":is"),this.descriptor.ref&&this.el.removeAttribute("v-ref:"+u(this.descriptor.ref)),this.literal&&this.setComponent(this.expression))},update:function(t){this.literal||this.setComponent(t)},setComponent:function(t,e){if(this.invalidatePending(),t){var i=this;this.resolveComponent(t,function(){i.mountComponent(e)})}else this.unbuild(!0),this.remove(this.childVM,e),this.childVM=null},resolveComponent:function(t,e){var i=this;this.pendingComponentCb=w(function(n){i.ComponentName=n.options.name||("string"==typeof t?t:null),i.Component=n,e()}),this.vm._resolveComponent(t,this.pendingComponentCb)},mountComponent:function(t){this.unbuild(!0);var 
e=this,i=this.Component.options.activate,n=this.getCached(),r=this.build();i&&!n?(this.waitingFor=r,Fe(i,r,function(){e.waitingFor===r&&(e.waitingFor=null,e.transition(r,t))})):(n&&r._updateRef(),this.transition(r,t))},invalidatePending:function(){this.pendingComponentCb&&(this.pendingComponentCb.cancel(),this.pendingComponentCb=null)},build:function(t){var e=this.getCached();if(e)return e;if(this.Component){var i={name:this.ComponentName,el:le(this.el),template:this.inlineTemplate,parent:this._host||this.vm,_linkerCachable:!this.inlineTemplate,_ref:this.descriptor.ref,_asComponent:!0,_isRouterView:this._isRouterView,_context:this.vm,_scope:this._scope,_frag:this._frag};t&&v(i,t);var n=new this.Component(i);return this.keepAlive&&(this.cache[this.Component.cid]=n),n}},getCached:function(){return this.keepAlive&&this.cache[this.Component.cid]},unbuild:function(t){this.waitingFor&&(this.keepAlive||this.waitingFor.$destroy(),this.waitingFor=null);var e=this.childVM;return!e||this.keepAlive?void(e&&(e._inactive=!0,e._updateRef(!0))):void e.$destroy(!1,t)},remove:function(t,e){var i=this.keepAlive;if(t){this.pendingRemovals++,this.pendingRemovalCb=e;var n=this;t.$remove(function(){n.pendingRemovals--,i||t._cleanup(),!n.pendingRemovals&&n.pendingRemovalCb&&(n.pendingRemovalCb(),n.pendingRemovalCb=null)})}else e&&e()},transition:function(t,e){var i=this,n=this.childVM;switch(n&&(n._inactive=!0),t._inactive=!1,this.childVM=t,i.params.transitionMode){case"in-out":t.$before(i.anchor,function(){i.remove(n,e)});break;case"out-in":i.remove(n,function(){t.$before(i.anchor,e)});break;default:i.remove(n),t.$before(i.anchor,e)}},unbind:function(){if(this.invalidatePending(),this.unbuild(),this.cache){for(var t in this.cache)this.cache[t].$destroy();this.cache=null}}},Ls=Mn._propBindingModes,Hs={},Is=/^[$_a-zA-Z]+[\w$]*$/,Ms=Mn._propBindingModes,Ws={bind:function(){var t=this.vm,e=t._context,i=this.descriptor.prop,n=i.path,r=i.parentPath,s=i.mode===Ms.TWO_WAY,o=this.parentWatcher=new re(e,r,function(e){He(t,i,e)},{twoWay:s,filters:i.filters,scope:this._scope});if(Le(t,i,o.value),s){var a=this;t.$once("pre-hook:created",function(){a.childWatcher=new re(t,n,function(t){o.set(t)},{sync:!0})})}},unbind:function(){this.parentWatcher.teardown(),this.childWatcher&&this.childWatcher.teardown()}},Vs=[],Bs=!1,zs="transition",Us="animation",Js=nn+"Duration",qs=sn+"Duration",Qs=Gi&&window.requestAnimationFrame,Gs=Qs?function(t){Qs(function(){Qs(t)})}:function(t){setTimeout(t,50)},Zs=Ue.prototype;Zs.enter=function(t,e){this.cancelPending(),this.callHook("beforeEnter"),this.cb=e,ct(this.el,this.enterClass),t(),this.entered=!1,this.callHookWithCb("enter"),this.entered||(this.cancel=this.hooks&&this.hooks.enterCancelled,Be(this.enterNextTick))},Zs.enterNextTick=function(){var t=this;this.justEntered=!0,Gs(function(){t.justEntered=!1});var 
e=this.enterDone,i=this.getCssTransitionType(this.enterClass);this.pendingJsCb?i===zs&&ut(this.el,this.enterClass):i===zs?(ut(this.el,this.enterClass),this.setupCssCb(rn,e)):i===Us?this.setupCssCb(on,e):e()},Zs.enterDone=function(){this.entered=!0,this.cancel=this.pendingJsCb=null,ut(this.el,this.enterClass),this.callHook("afterEnter"),this.cb&&this.cb()},Zs.leave=function(t,e){this.cancelPending(),this.callHook("beforeLeave"),this.op=t,this.cb=e,ct(this.el,this.leaveClass),this.left=!1,this.callHookWithCb("leave"),this.left||(this.cancel=this.hooks&&this.hooks.leaveCancelled,this.op&&!this.pendingJsCb&&(this.justEntered?this.leaveDone():Be(this.leaveNextTick)))},Zs.leaveNextTick=function(){var t=this.getCssTransitionType(this.leaveClass);if(t){var e=t===zs?rn:on;this.setupCssCb(e,this.leaveDone)}else this.leaveDone()},Zs.leaveDone=function(){this.left=!0,this.cancel=this.pendingJsCb=null,this.op(),ut(this.el,this.leaveClass),this.callHook("afterLeave"),this.cb&&this.cb(),this.op=null},Zs.cancelPending=function(){this.op=this.cb=null;var t=!1;this.pendingCssCb&&(t=!0,at(this.el,this.pendingCssEvent,this.pendingCssCb),this.pendingCssEvent=this.pendingCssCb=null),this.pendingJsCb&&(t=!0,this.pendingJsCb.cancel(),this.pendingJsCb=null),t&&(ut(this.el,this.enterClass),ut(this.el,this.leaveClass)),this.cancel&&(this.cancel.call(this.vm,this.el),this.cancel=null)},Zs.callHook=function(t){this.hooks&&this.hooks[t]&&this.hooks[t].call(this.vm,this.el)},Zs.callHookWithCb=function(t){var e=this.hooks&&this.hooks[t];e&&(e.length>1&&(this.pendingJsCb=w(this[t+"Done"])),e.call(this.vm,this.el,this.pendingJsCb))},Zs.getCssTransitionType=function(t){if(!(!rn||document.hidden||this.hooks&&this.hooks.css===!1||Je(this.el))){var e=this.type||this.typeCache[t];if(e)return e;var i=this.el.style,n=window.getComputedStyle(this.el),r=i[Js]||n[Js];if(r&&"0s"!==r)e=zs;else{var s=i[qs]||n[qs];s&&"0s"!==s&&(e=Us)}return e&&(this.typeCache[t]=e),e}},Zs.setupCssCb=function(t,e){this.pendingCssEvent=t;var i=this,n=this.el,r=this.pendingCssCb=function(s){s.target===n&&(at(n,t,r),i.pendingCssEvent=i.pendingCssCb=null,!i.pendingJsCb&&e&&e())};ot(n,t,r)};var Xs={priority:Kr,update:function(t,e){var i=this.el,n=jt(this.vm.$options,"transitions",t);t=t||"v",e=e||"v",i.__v_trans=new Ue(i,t,n,this.vm),ut(i,e+"-transition"),ct(i,t+"-transition")}},Ys={style:$s,class:Ps,component:Rs,prop:Ws,transition:Xs},Ks=/^v-bind:|^:/,to=/^v-on:|^@/,eo=/^v-([^:]+)(?:$|:(.*)$)/,io=/\.[^\.]+/g,no=/^(v-bind:|:)?transition$/,ro=1e3,so=2e3;ui.terminal=!0;var oo=/[^\w\-:\.]/,ao=Object.freeze({compile:qe,compileAndLinkProps:Ye,compileRoot:Ke,transclude:_i,resolveSlots:Ci}),ho=/^v-on:|^@/;Oi.prototype._bind=function(){var t=this.name,e=this.descriptor;if(("cloak"!==t||this.vm._isCompiled)&&this.el&&this.el.removeAttribute){var i=e.attr||"v-"+t;this.el.removeAttribute(i)}var n=e.def;if("function"==typeof n?this.update=n:v(this,n),this._setupParams(),this.bind&&this.bind(),this._bound=!0,this.literal)this.update&&this.update(e.raw);else if((this.expression||this.modifiers)&&(this.update||this.twoWay)&&!this._checkStatement()){var r=this;this.update?this._update=function(t,e){r._locked||r.update(t,e)}:this._update=Ai;var s=this._preProcess?p(this._preProcess,this):null,o=this._postProcess?p(this._postProcess,this):null,a=this._watcher=new 
re(this.vm,this.expression,this._update,{filters:this.filters,twoWay:this.twoWay,deep:this.deep,preProcess:s,postProcess:o,scope:this._scope});this.afterBind?this.afterBind():this.update&&this.update(a.value)}},Oi.prototype._setupParams=function(){if(this.params){var t=this.params;this.params=Object.create(null);for(var e,i,n,r=t.length;r--;)e=u(t[r]),n=l(e),i=K(this.el,e),null!=i?this._setupParamWatcher(n,i):(i=Y(this.el,e),null!=i&&(this.params[n]=""===i||i))}},Oi.prototype._setupParamWatcher=function(t,e){var i=this,n=!1,r=(this._scope||this.vm).$watch(e,function(e,r){if(i.params[t]=e,n){var s=i.paramWatchers&&i.paramWatchers[t];s&&s.call(i,e,r)}else n=!0},{immediate:!0,user:!1});(this._paramUnwatchFns||(this._paramUnwatchFns=[])).push(r)},Oi.prototype._checkStatement=function(){var t=this.expression;if(t&&this.acceptStatement&&!Kt(t)){var e=Yt(t).get,i=this._scope||this.vm,n=function(t){i.$event=t,e.call(i,i),i.$event=null};return this.filters&&(n=i._applyFilters(n,null,this.filters)),this.update(n),!0}},Oi.prototype.set=function(t){this.twoWay&&this._withLock(function(){this._watcher.set(t)})},Oi.prototype._withLock=function(t){var e=this;e._locked=!0,t.call(e),ln(function(){e._locked=!1})},Oi.prototype.on=function(t,e,i){ot(this.el,t,e,i),(this._listeners||(this._listeners=[])).push([t,e])},Oi.prototype._teardown=function(){if(this._bound){this._bound=!1,this.unbind&&this.unbind(),this._watcher&&this._watcher.teardown();var t,e=this._listeners;if(e)for(t=e.length;t--;)at(this.el,e[t][0],e[t][1]);var i=this._paramUnwatchFns;if(i)for(t=i.length;t--;)i[t]();this.vm=this.el=this._watcher=this._listeners=null}};var lo=/[^|]\|[^|]/;Ht(Di),ki(Di),xi(Di),Ti(Di),Ni(Di),ji(Di),Ei(Di),Si(Di),Fi(Di);var co={priority:ss,params:["name"],bind:function(){var t=this.params.name||"default",e=this.vm._slotContents&&this.vm._slotContents[t];e&&e.hasChildNodes()?this.compile(e.cloneNode(!0),this.vm._context,this.vm):this.fallback()},compile:function(t,e,i){if(t&&e){if(this.el.hasChildNodes()&&1===t.childNodes.length&&1===t.childNodes[0].nodeType&&t.childNodes[0].hasAttribute("v-if")){var n=document.createElement("template");n.setAttribute("v-else",""),n.innerHTML=this.el.innerHTML,n._context=this.vm,t.appendChild(n)}var r=i?i._scope:this._scope;this.unlink=e.$compile(t,i,r,this._frag)}t?st(this.el,t):nt(this.el)},fallback:function(){this.compile(ft(this.el,!0),this.vm)},unbind:function(){this.unlink&&this.unlink()}},uo={priority:is,params:["name"],paramWatchers:{name:function(t){hs.remove.call(this),t&&this.insert(t)}},bind:function(){this.anchor=mt("v-partial"),st(this.el,this.anchor),this.insert(this.params.name)},insert:function(t){var e=jt(this.vm.$options,"partials",t,!0);e&&(this.factory=new _e(this.vm,e),hs.insert.call(this))},unbind:function(){this.frag&&this.frag.destroy()}},fo={slot:co,partial:uo},po=as._postProcess,vo=/(\d{3})(?=\d)/g,mo={orderBy:Li,filterBy:Ri,limitBy:Pi,json:{read:function(t,e){return"string"==typeof t?t:JSON.stringify(t,null,arguments.length>1?e:2)},write:function(t){try{return JSON.parse(t)}catch(e){return t}}},capitalize:function(t){return t||0===t?(t=t.toString(),t.charAt(0).toUpperCase()+t.slice(1)):""},uppercase:function(t){return t||0===t?t.toString().toUpperCase():""},lowercase:function(t){return t||0===t?t.toString().toLowerCase():""},currency:function(t,e,i){if(t=parseFloat(t),!isFinite(t)||!t&&0!==t)return"";e=null!=e?e:"$",i=null!=i?i:2;var 
n=Math.abs(t).toFixed(i),r=i?n.slice(0,-1-i):n,s=r.length%3,o=s>0?r.slice(0,s)+(r.length>3?",":""):"",a=i?n.slice(-1-i):"",h=t<0?"-":"";return h+e+o+r.slice(s).replace(vo,"$1,")+a},pluralize:function(t){var e=d(arguments,1),i=e.length;if(i>1){var n=t%10-1;return n in e?e[n]:e[i-1]}return e[0]+(1===t?"":"s")},debounce:function(t,e){if(t)return e||(e=300),y(t,e)}};return Ii(Di),Di.version="1.0.28",setTimeout(function(){Mn.devtools&&Zi&&Zi.emit("init",Di)},0),Di});
//# sourceMappingURL=vue.min.js.map | PypiClean |
/FAaDO-0.0.4.tar.gz/FAaDO-0.0.4/fado/builder/data/shape/nlafl_shaper.py | import logging
import os
import shutil
import numpy as np
from fado.builder.data.shape.shaper import Shaper
from fado.cli.arguments.arguments import FADOArguments
from fado.constants import ALL_DATA_FOLDER
ALPHA = 1
fado_args = FADOArguments()
DATA_FOLDER = os.path.join(ALL_DATA_FOLDER, fado_args.dataset)
logger = logging.getLogger('fado')
logger = logging.LoggerAdapter(logger, {'node_id': 'builder'})
class NLAFLShaper(Shaper):
def __init__(self):
self.num_users = None
self.client_size = None
self.target_fraction = None
self.poison_count = fado_args.poison_count_multiplier * (fado_args.num_pop_clients // 3)
def shape(self):
if fado_args.dataset == 'nlafl_emnist':
trn_x, trn_y, tst_x, tst_y = load_emnist()
logger.info('Generating nlafl emnist dataset')
self.num_users = fado_args.number_clients
self.client_size = 1000
self.target_fraction = 0.5
self.shape_data(trn_x, trn_y, tst_x, tst_y)
elif fado_args.dataset == 'nlafl_fashionmnist':
trn_x, trn_y, tst_x, tst_y = load_fashionmnist()
logger.info('Generating nlafl fashionmnist dataset')
self.num_users = fado_args.number_clients
self.client_size = 400
self.target_fraction = 0.6
self.shape_data(trn_x, trn_y, tst_x, tst_y)
elif fado_args.dataset == 'nlafl_dbpedia':
trn_x, trn_y, tst_x, tst_y = load_dbpedia()
logger.info('Generating nlafl dbpedia dataset')
self.num_users = fado_args.number_clients
self.client_size = 1000
self.target_fraction = 0.6
self.shape_data(trn_x, trn_y, tst_x, tst_y)
shutil.copy(os.path.join(DATA_FOLDER, 'dbpedia_embedding_matrix.npy'), os.path.join(DATA_FOLDER, 'train'))
else:
raise Exception("NLAFL dataset not supported yet")
def shape_data(self, trn_x, trn_y, tst_x, tst_y):
partitioned_trn = partition_by_class(trn_x, trn_y)
partitioned_tst = partition_by_class(tst_x, tst_y)
trn_y, tst_y = np.eye(fado_args.num_classes)[trn_y], np.eye(fado_args.num_classes)[tst_y]
# Sample data from the original dataset according to a Dirichlet distribution.
        # Returns a dict with '{i}_x' and '{i}_y' arrays for each client
client_data = self.sample(partitioned_trn)
write_files(client_data, tst_x, tst_y, partitioned_tst)
def sample(self, partitioned):
if self.poison_count > 0:
client_data = fixed_poison(
partitioned,
self.num_users,
self.client_size,
self.poison_count,
targ_class=fado_args.target_class,
client_targ=fado_args.num_pop_clients,
targ_frac=self.target_fraction,
alpha=ALPHA,
)
else:
client_data = fixed_sample(
partitioned,
self.num_users,
self.client_size,
targ_class=fado_args.target_class,
client_targ=fado_args.num_pop_clients,
targ_frac=self.target_fraction,
alpha=ALPHA,
)
return client_data
def write_files(client_data, tst_x, tst_y, partitioned_tst):
test_target_x = partitioned_tst[fado_args.target_class]
test_target_size = len(test_target_x)
test_target_x_attacker = test_target_x[:test_target_size // 2]
test_target_x_server = test_target_x[test_target_size // 2:]
os.makedirs(os.path.join(DATA_FOLDER, 'train'), exist_ok=True)
os.makedirs(os.path.join(DATA_FOLDER, 'test'), exist_ok=True)
os.makedirs(os.path.join(DATA_FOLDER, 'target_test'), exist_ok=True)
os.makedirs(os.path.join(DATA_FOLDER, 'target_test_attacker'), exist_ok=True)
np.savez_compressed(os.path.join(DATA_FOLDER, 'train', 'all_data'), **client_data)
np.savez_compressed(os.path.join(DATA_FOLDER, 'test', 'all_data'), x=tst_x, y=tst_y)
np.savez_compressed(os.path.join(DATA_FOLDER, 'target_test', 'all_data'),
x=test_target_x_server,
y=np.eye(fado_args.num_classes)[len(test_target_x_server) * [fado_args.target_class]])
np.savez_compressed(os.path.join(DATA_FOLDER, 'target_test_attacker', 'all_data'),
x=test_target_x_attacker,
y=np.eye(fado_args.num_classes)[len(test_target_x_attacker) * [fado_args.target_class]])
def load_emnist():
""" Load the EMNIST dataet
Returns:
tuple: tuple of numpy arrays trn_x, trn_y, tst_x, tst_y
"""
trn_x = np.load(os.path.join(DATA_FOLDER, 'trn_x_emnist.npy'))
trn_y = np.load(os.path.join(DATA_FOLDER, 'trn_y_emnist.npy'))
tst_x = np.load(os.path.join(DATA_FOLDER, 'tst_x_emnist.npy'))
tst_y = np.load(os.path.join(DATA_FOLDER, 'tst_y_emnist.npy'))
return trn_x, trn_y, tst_x, tst_y
def load_fashionmnist():
""" Load the EMNIST dataet
Returns:
tuple: tuple of numpy arrays trn_x, trn_y, tst_x, tst_y
"""
trn_x = np.load(os.path.join(DATA_FOLDER, 'trn_x_fashionMnist.npy'))
trn_y = np.load(os.path.join(DATA_FOLDER, 'trn_y_fashionMnist.npy'))
tst_x = np.load(os.path.join(DATA_FOLDER, 'tst_x_fashionMnist.npy'))
tst_y = np.load(os.path.join(DATA_FOLDER, 'tst_y_fashionMnist.npy'))
return trn_x, trn_y, tst_x, tst_y
def load_dbpedia():
""" Load the EMNIST dataet
Returns:
tuple: tuple of numpy arrays trn_x, trn_y, tst_x, tst_y
"""
trn_x = np.load(os.path.join(DATA_FOLDER, 'trn_x_dbpedia.npy'))
trn_y = np.load(os.path.join(DATA_FOLDER, 'trn_y_dbpedia.npy'))
tst_x = np.load(os.path.join(DATA_FOLDER, 'tst_x_dbpedia.npy'))
tst_y = np.load(os.path.join(DATA_FOLDER, 'tst_y_dbpedia.npy'))
return trn_x, trn_y, tst_x, tst_y
def partition_by_class(x, y):
""" Given a dataset matrix and labels, return the data matrix partitioned by class.
    The classes are assumed to be the integers 0..num_classes-1, with num_classes taken from the FADO configuration.
Example output:
    [ [class 1's x ..], [class 2's x ..], ... [class 10's x ..] ]
Args:
x (numpy.ndarray): data matrix
y (numpy.ndarray): data labels
Returns:
list: Partitioned data matrix, as list of ndarray objects
"""
all_x = []
y_list = range(fado_args.num_classes)
for y_val in y_list:
all_x.append(x[np.where(y == y_val)[0]])
return all_x
def fixed_sample(
all_x,
num_clients,
client_size,
targ_class=0,
client_targ=5,
targ_frac=.2,
alpha=100
):
""" Use a Dirichlet distribution to assign target class samples to clients
    `all_x` -> [ [class 1's x ..], [class 2's x ..], ... [class 10's x ..] ]
    `client_size` is used to calculate the number of samples for each class,
    drawn with Dirichlet distribution parameter alpha
Args:
all_x (list): partitioned data matrix, as list of ndarray objects
num_clients (int): number of clients
client_size (int): desired number of samples per client
targ_class (int, optional): identifier of target class. Defaults to 0
client_targ (int, optional): number of clients having target class points. Defaults to 5
targ_frac (float, optional): fraction of target class points for clients having them. Defaults to .2
alpha (int, optional): Dirichlet parameter alpha. Defaults to 100
Returns:
dict: with keys x_i and y_i being i the client id
"""
num_classes = fado_args.num_classes
num_nontarget = num_classes - 1
# Initialize per-client data structures
clients = {}
orig_dirichlets = np.random.dirichlet([alpha] * num_nontarget, num_clients)
all_dirichlets = np.zeros((num_clients, num_classes))
# Fill up the columns of `all_dirichlets` up to the target class,
# and from the one following the target class to the end using the
# values generated in `orig_dirichlets`
all_dirichlets[:, :targ_class] = orig_dirichlets[:, :targ_class]
all_dirichlets[:, targ_class + 1:] = orig_dirichlets[:, targ_class:]
# targ_x is the numpy array of all target class samples
targ_x = all_x[targ_class]
for i in range(num_clients):
this_x, this_y = [], []
total_ct = client_size
# The first client_targ clients will have the target class samples
if i < client_targ:
# number of target class samples for client i
num_targ = int(total_ct * targ_frac)
total_ct -= num_targ
# Assign the target class samples to client i and create a label vector
this_x.append(targ_x[:num_targ])
this_y.append(np.zeros(num_targ, dtype=int) + targ_class)
# Remove the samples used for this client from targ_x
targ_x = targ_x[num_targ:]
counts = (total_ct * all_dirichlets[i]).astype(int)
assert counts[targ_class] == 0
for y in range(num_classes):
# Ignore the target class
if y == targ_class:
continue
y_ct = counts[y].astype(int)
this_x.append(all_x[y][:y_ct])
all_x[y] = all_x[y][y_ct:]
this_y.append(np.zeros(y_ct, dtype=int) + y)
this_x = np.concatenate(this_x)
this_y = np.eye(fado_args.num_classes)[np.concatenate(this_y)]
assert this_x.shape[0] == this_y.shape[0]
clients[f'{i + 1}_x'] = this_x
clients[f'{i + 1}_y'] = this_y
return clients
def fixed_poison(
all_x,
num_clients,
client_size,
poison_ct,
targ_class=0,
client_targ=5,
targ_frac=.2,
alpha=100
):
"""
Args:
all_x (list): partitioned data matrix, as list of ndarray objects
num_clients (int): number of clients
client_size (int): desired number of samples per client
poison_ct (int): number of clients participating in the poisoning attack
targ_class (int, optional): identifier of target class. Defaults to 0
client_targ (int, optional): number of clients having target class points. Defaults to 5
targ_frac (float, optional): fraction of target class points for clients having them. Defaults to .2
alpha (int, optional): Dirichlet parameter alpha. Defaults to 100
Returns:
        dict: with keys '{i}_x' and '{i}_y', where i is the client id (starting at 1)
"""
num_classes = fado_args.num_classes
num_nontarget = num_classes - 1
# Initialize per-client data structures
clients = {}
orig_dirichlets = np.random.dirichlet([alpha] * num_nontarget, num_clients)
all_dirichlets = np.zeros((num_clients, num_classes))
# Fill up the columns of `all_dirichlets` up to the target class,
# and from the one following the target class to the end using the
# values generated in `orig_dirichlets`
all_dirichlets[:, :targ_class] = orig_dirichlets[:, :targ_class]
all_dirichlets[:, targ_class + 1:] = orig_dirichlets[:, targ_class:]
# targ_x is the numpy array of all target class samples
targ_x = all_x[targ_class]
for i in range(num_clients):
this_x, this_y = [], []
total_ct = client_size
# The first client_targ clients will have the target class samples
if i < client_targ:
# number of target class samples for client i
num_targ = int(total_ct * targ_frac)
total_ct -= num_targ
# Assign the target class samples to client i and create a label vector
this_x.append(targ_x[:num_targ])
            this_y.append(np.zeros(num_targ, dtype=int) + targ_class)
# Remove the samples used for this client from targ_x
targ_x = targ_x[num_targ:]
# The successive `poison_ct` clients will have the poisoned points
elif i < client_targ + poison_ct:
num_targ = int(total_ct * targ_frac)
total_ct -= num_targ
            counts = (total_ct * all_dirichlets[i]).astype(int)
# Flip the labels for the target class samples
for y in range(num_classes):
if y == targ_class:
y_ct = num_targ
y_local = (y + 1) % num_classes
else:
                    y_ct = counts[y].astype(int)
y_local = y
# Assign the samples to this client
this_x.append(all_x[y][:y_ct])
                this_y.append(np.zeros(y_ct, dtype=int) + y_local)
# Remove the samples used for this client
all_x[y] = all_x[y][y_ct:]
this_x = np.concatenate(this_x)
this_y = np.eye(fado_args.num_classes)[np.concatenate(this_y)]
assert this_x.shape[0] == this_y.shape[0]
clients[f'{i + 1}_x'] = this_x
clients[f'{i + 1}_y'] = this_y
continue
        counts = (total_ct * all_dirichlets[i]).astype(int)
assert counts[targ_class] == 0
for y in range(num_classes):
# Ignore the target class
if y == targ_class:
continue
            y_ct = counts[y].astype(int)
this_x.append(all_x[y][:y_ct])
all_x[y] = all_x[y][y_ct:]
            this_y.append(np.zeros(y_ct, dtype=int) + y)
this_x = np.concatenate(this_x)
this_y = np.eye(fado_args.num_classes)[np.concatenate(this_y)]
assert this_x.shape[0] == this_y.shape[0]
clients[f'{i + 1}_x'] = this_x
clients[f'{i + 1}_y'] = this_y
return clients | PypiClean |
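# Minimal usage sketch (not part of the original module; assumption: a FADO
# configuration defining `dataset`, `num_classes`, `target_class`, etc. has been
# loaded and the EMNIST .npy files exist under DATA_FOLDER; the argument values
# below are illustrative only). It shows the intended call pattern of
# `partition_by_class` and `fixed_sample`.
if __name__ == "__main__":
    example_trn_x, example_trn_y, _, _ = load_emnist()
    example_partitioned = partition_by_class(example_trn_x, example_trn_y)
    example_clients = fixed_sample(
        example_partitioned,
        num_clients=10,
        client_size=1000,
        targ_class=fado_args.target_class,
        client_targ=5,
        targ_frac=0.5,
        alpha=ALPHA,
    )
    # Each client i gets arrays under the keys '{i}_x' and '{i}_y' (one-hot labels).
    print({key: value.shape for key, value in example_clients.items()})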
/Nuitka_winsvc-1.7.10-cp310-cp310-win_amd64.whl/nuitka/plugins/standard/DataFilesPlugin.py | import os
import pkgutil
from nuitka import Options
from nuitka.code_generation.ConstantCodes import addDistributionMetadataValue
from nuitka.containers.OrderedSets import OrderedSet
from nuitka.plugins.PluginBase import NuitkaPluginBase
from nuitka.PythonFlavors import isDebianPackagePython
from nuitka.utils.Distributions import getDistribution
from nuitka.utils.FileOperations import (
changeFilenameExtension,
getFileList,
resolveShellPatternToFilenames,
)
from nuitka.utils.Yaml import getYamlPackageConfiguration
class NuitkaPluginDataFileCollector(NuitkaPluginBase):
plugin_name = "data-files"
plugin_desc = "Include data files specified by package configuration files."
def __init__(self):
self.config = getYamlPackageConfiguration()
@classmethod
def isRelevant(cls):
return Options.isStandaloneMode()
@staticmethod
def isAlwaysEnabled():
return True
def _considerDataFiles(self, module, data_file_config):
# Many details and cases to deal with
# pylint: disable=too-many-branches,too-many-locals
module_name = module.getFullName()
module_folder = module.getCompileTimeDirectory()
target_dir = data_file_config.get("dest_path")
# Default to near module or inside package folder.
if target_dir is None:
if module.isCompiledPythonPackage() or module.isUncompiledPythonPackage():
target_dir = module_name.asPath()
else:
package_name = module_name.getPackageName()
if package_name is not None:
target_dir = module_name.getPackageName().asPath()
else:
target_dir = "."
patterns = data_file_config.get("patterns")
if patterns is not None:
if type(patterns) is not list or not patterns:
self.sysexit(
"Error, requiring list below 'pattern' entry for '%s' entry."
% module_name
)
# TODO: Pattern should be data file kind potentially.
for pattern in patterns:
pattern = os.path.join(module_folder, pattern)
for filename in resolveShellPatternToFilenames(pattern):
filename_base = os.path.relpath(filename, module_folder)
yield self.makeIncludedDataFile(
source_path=filename,
dest_path=os.path.normpath(
os.path.join(target_dir, filename_base)
),
reason="package data for '%s'" % module_name.asString(),
tags="config",
)
empty_dirs = data_file_config.get("empty_dirs")
if empty_dirs is not None:
if type(empty_dirs) is not list or not empty_dirs:
self.sysexit(
"Error, requiring list below 'empty_dirs' entry for '%s' entry."
% module_name
)
for empty_dir in empty_dirs:
yield self.makeIncludedEmptyDirectory(
dest_path=os.path.join(target_dir, empty_dir),
reason="empty dir needed for %r" % module_name.asString(),
tags="config",
)
empty_dir_structures = data_file_config.get("empty_dir_structures")
if empty_dir_structures is not None:
if type(empty_dir_structures) is not list or not empty_dir_structures:
self.sysexit(
"Error, requiring list below 'empty_dirs_structure' entry for '%s' entry."
% module_name
)
        # TODO: This ignores the config dest_path, which is unused, but not consistent.
for included_data_file in self._getSubDirectoryFolders(
module, sub_dirs=empty_dir_structures
):
yield included_data_file
dirs = data_file_config.get("dirs")
if dirs is not None:
if type(dirs) is not list or not dirs:
self.sysexit(
"Error, requiring list below 'empty_dirs_structure' entry for '%s' entry."
% module_name
)
for data_dir in dirs:
source_path = os.path.join(module_folder, data_dir)
if os.path.isdir(source_path):
yield self.makeIncludedDataDirectory(
source_path=source_path,
dest_path=os.path.join(target_dir, data_dir),
reason="package data directory '%s' for %r"
% (data_dir, module_name.asString()),
tags="config",
)
include_pyi_file = data_file_config.get("include-pyi-file")
if include_pyi_file == "yes":
pyi_filename = changeFilenameExtension(
path=module.getCompileTimeFilename(), extension=".pyi"
)
if os.path.exists(pyi_filename):
if (
module.isCompiledPythonPackage()
or module.isUncompiledPythonPackage()
):
module_path = module_name.asPath()
else:
module_path = os.path.dirname(module_name.asPath())
yield self.makeIncludedDataFile(
source_path=pyi_filename,
dest_path=os.path.join(module_path, os.path.basename(pyi_filename)),
reason="runtime required '.pyi' file for '%s'"
% module_name.asString(),
tags="config",
)
distribution_names = data_file_config.get("include-metadata", ())
for distribution_name in distribution_names:
distribution = getDistribution(distribution_name)
if distribution is not None:
addDistributionMetadataValue(distribution_name, distribution)
def considerDataFiles(self, module):
full_name = module.getFullName()
for entry in self.config.get(full_name, section="data-files"):
if self.evaluateCondition(
full_name=full_name, condition=entry.get("when", "True")
):
for included_data_file in self._considerDataFiles(
module=module, data_file_config=entry
):
yield included_data_file
        # TODO: Keep this until data files are a list and support similar features,
        # namely looking up data via the package data interface "pkgutil.get_data"
        # rather than scanning files.
if full_name == "lib2to3.pygram" and isDebianPackagePython():
yield self.makeIncludedGeneratedDataFile(
data=pkgutil.get_data("lib2to3", "Grammar.txt"),
dest_path="lib2to3/Grammar.txt",
reason="package data for '%s'" % full_name,
tags="config",
)
yield self.makeIncludedGeneratedDataFile(
data=pkgutil.get_data("lib2to3", "PatternGrammar.txt"),
dest_path="lib2to3/PatternGrammar.txt",
reason="package data for '%s'" % full_name,
tags="config",
)
def _getSubDirectoryFolders(self, module, sub_dirs):
"""Get dirnames in given subdirectories of the module.
Notes:
            All dirnames in folders below one of the sub_dirs are collected
            recursively and returned relative to the module directory, prefixed
            with the package path.
Args:
module: module object
sub_dirs: sub folder name(s) - tuple
        Yields:
            makeIncludedEmptyDirectory entries for the found dirnames.
"""
module_dir = module.getCompileTimeDirectory()
file_list = []
data_dirs = [os.path.join(module_dir, subdir) for subdir in sub_dirs]
# Gather the full file list, probably makes no sense to include bytecode files
file_list = sum(
(
getFileList(
data_dir, ignore_dirs=("__pycache__",), ignore_suffixes=(".pyc",)
)
for data_dir in data_dirs
),
[],
)
if not file_list:
msg = "No files or folders found for '%s' in subfolder(s) '%s' (%r)." % (
module.getFullName(),
sub_dirs,
data_dirs,
)
self.warning(msg)
is_package = (
module.isCompiledPythonPackage() or module.isUncompiledPythonPackage()
)
# We need to preserve the package target path in the dist folder.
if is_package:
package_part = module.getFullName().asPath()
else:
package = module.getFullName().getPackageName()
if package is None:
package_part = ""
else:
package_part = package.asPath()
item_set = OrderedSet()
for f in file_list:
target = os.path.join(package_part, os.path.relpath(f, module_dir))
dir_name = os.path.dirname(target)
item_set.add(dir_name)
for dest_path in item_set:
yield self.makeIncludedEmptyDirectory(
dest_path=dest_path,
reason="Subdirectories of module %s" % module.getFullName(),
tags="config",
) | PypiClean |
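# Illustrative sketch (not part of the original plugin): the shape of a single
# "data-files" configuration entry as consumed by _considerDataFiles() above.
# The keys mirror what that method reads; the paths, patterns, and condition
# shown here are hypothetical examples, not real Nuitka package configuration.
_example_data_files_entry = {
    "dest_path": "somepackage/data",  # target folder inside the dist
    "patterns": ["data/*.json"],  # shell patterns resolved against the module folder
    "empty_dirs": ["cache"],  # empty folders to create under dest_path
    "empty_dir_structures": ["plugins"],  # folder trees recreated without their files
    "dirs": ["templates"],  # whole data directories copied as-is
    "include-pyi-file": "yes",  # also ship the module's '.pyi' file
    "include-metadata": ["somepackage"],  # distributions whose metadata gets embedded
    "when": "standalone",  # condition evaluated via evaluateCondition()
}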
/FamcyDev-0.3.71-py3-none-any.whl/Famcy/bower_components/bootstrap/site/content/docs/5.0/getting-started/download.md | ---
layout: docs
title: Download
description: Download Bootstrap to get the compiled CSS and JavaScript, source code, or include it with your favorite package managers like npm, RubyGems, and more.
group: getting-started
toc: true
---
## Compiled CSS and JS
Download ready-to-use compiled code for **Bootstrap v{{< param current_version >}}** to easily drop into your project, which includes:
- Compiled and minified CSS bundles (see [CSS files comparison]({{< docsref "/getting-started/contents#css-files" >}}))
- Compiled and minified JavaScript plugins (see [JS files comparison]({{< docsref "/getting-started/contents#js-files" >}}))
This doesn't include documentation, source files, or any optional JavaScript dependencies like Popper.
<a href="{{< param "download.dist" >}}" class="btn btn-bd-primary" onclick="ga('send', 'event', 'Getting started', 'Download', 'Download Bootstrap');">Download</a>
## Source files
Compile Bootstrap with your own asset pipeline by downloading our source Sass, JavaScript, and documentation files. This option requires some additional tooling:
- [Sass compiler]({{< docsref "/getting-started/build-tools#sass" >}}) for compiling Sass source files into CSS files
- [Autoprefixer](https://github.com/postcss/autoprefixer) for CSS vendor prefixing
Should you require our full set of [build tools]({{< docsref "/getting-started/build-tools#tooling-setup" >}}), they are included for developing Bootstrap and its docs, but they're likely unsuitable for your own purposes.
<a href="{{< param "download.source" >}}" class="btn btn-bd-primary" onclick="ga('send', 'event', 'Getting started', 'Download', 'Download source');">Download source</a>
## Examples
If you want to download and examine our [examples]({{< docsref "/examples" >}}), you can grab the already built examples:
<a href="{{< param "download.dist_examples" >}}" class="btn btn-bd-primary" onclick="ga('send', 'event', 'Getting started', 'Download', 'Download Examples');">Download Examples</a>
## CDN via jsDelivr
Skip the download with [jsDelivr](https://www.jsdelivr.com/) to deliver a cached version of Bootstrap's compiled CSS and JS to your project.
```html
<link href="{{< param "cdn.css" >}}" rel="stylesheet" integrity="{{< param "cdn.css_hash" >}}" crossorigin="anonymous">
<script src="{{< param "cdn.js_bundle" >}}" integrity="{{< param "cdn.js_bundle_hash" >}}" crossorigin="anonymous"></script>
```
If you're using our compiled JavaScript and prefer to include Popper separately, add Popper before our JS, preferably via a CDN.
```html
<script src="{{< param "cdn.popper" >}}" integrity="{{< param "cdn.popper_hash" >}}" crossorigin="anonymous"></script>
<script src="{{< param "cdn.js" >}}" integrity="{{< param "cdn.js_hash" >}}" crossorigin="anonymous"></script>
```
## Package managers
Pull in Bootstrap's **source files** into nearly any project with some of the most popular package managers. No matter the package manager, Bootstrap will **require a [Sass compiler]({{< docsref "/getting-started/build-tools#sass" >}}) and [Autoprefixer](https://github.com/postcss/autoprefixer)** for a setup that matches our official compiled versions.
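Once the source files are installed through a package manager, a minimal build step might look like the sketch below. It assumes Dart Sass and the Autoprefixer PostCSS plugin are available locally; the file names are placeholders for your own entry point and output.

```sh
# Compile your Sass entry point, resolving imports from node_modules
npx sass --load-path=node_modules scss/custom.scss dist/css/custom.css

# Run Autoprefixer over the compiled CSS
npx postcss dist/css/custom.css --use autoprefixer --replace
```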
### npm
Install Bootstrap in your Node.js powered apps with [the npm package](https://www.npmjs.com/package/bootstrap):
```sh
npm install bootstrap
```
`const bootstrap = require('bootstrap')` or `import bootstrap from 'bootstrap'` will load all of Bootstrap's plugins onto a `bootstrap` object.
The `bootstrap` module itself exports all of our plugins. You can manually load Bootstrap's plugins individually by loading the `/js/dist/*.js` files under the package's top-level directory.
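If you only need a couple of plugins, a sketch of loading them individually could look like this (the plugin names below are just examples, and note that some plugins such as Tooltip also expect Popper to be available):

```js
// Load only the plugins you need from the package's js/dist folder
import Alert from 'bootstrap/js/dist/alert'
import Tooltip from 'bootstrap/js/dist/tooltip'

// Then use them like any other Bootstrap plugin
const tooltip = new Tooltip(document.querySelector('[data-bs-toggle="tooltip"]'))
```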
Bootstrap's `package.json` contains some additional metadata under the following keys:
- `sass` - path to Bootstrap's main [Sass](https://sass-lang.com/) source file
- `style` - path to Bootstrap's non-minified CSS that's been precompiled using the default settings (no customization)
{{< callout info >}}
{{< partial "callout-info-npm-starter.md" >}}
{{< /callout >}}
### yarn
Install Bootstrap in your Node.js powered apps with [the yarn package](https://yarnpkg.com/en/package/bootstrap):
```sh
yarn add bootstrap
```
### RubyGems
Install Bootstrap in your Ruby apps using [Bundler](https://bundler.io/) (**recommended**) and [RubyGems](https://rubygems.org/) by adding the following line to your [`Gemfile`](https://bundler.io/gemfile.html):
```ruby
gem 'bootstrap', '~> {{< param current_ruby_version >}}'
```
Alternatively, if you're not using Bundler, you can install the gem by running this command:
```sh
gem install bootstrap -v {{< param current_ruby_version >}}
```
[See the gem's README](https://github.com/twbs/bootstrap-rubygem/blob/master/README.md) for further details.
### Composer
You can also install and manage Bootstrap's Sass and JavaScript using [Composer](https://getcomposer.org/):
```sh
composer require twbs/bootstrap:{{< param current_version >}}
```
### NuGet
If you develop in .NET, you can also install and manage Bootstrap's [CSS](https://www.nuget.org/packages/bootstrap/) or [Sass](https://www.nuget.org/packages/bootstrap.sass/) and JavaScript using [NuGet](https://www.nuget.org/):
```powershell
Install-Package bootstrap
```
```powershell
Install-Package bootstrap.sass
```
| PypiClean |
/Misago-0.36.1.tar.gz/Misago-0.36.1/misago/threads/permissions/bestanswers.py | from django import forms
from django.core.exceptions import PermissionDenied
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from django.utils.translation import ngettext
from ...acl import algebra
from ...acl.decorators import return_boolean
from ...categories.models import Category, CategoryRole
from ...categories.permissions import get_categories_roles
from ..models import Post, Thread
__all__nope = [
"allow_mark_best_answer",
"can_mark_best_answer",
"allow_mark_as_best_answer",
"can_mark_as_best_answer",
"allow_unmark_best_answer",
"can_unmark_best_answer",
"allow_hide_best_answer",
"can_hide_best_answer",
"allow_delete_best_answer",
"can_delete_best_answer",
]
class CategoryPermissionsForm(forms.Form):
legend = _("Best answers")
can_mark_best_answers = forms.TypedChoiceField(
label=_("Can mark posts as best answers"),
coerce=int,
initial=0,
choices=[(0, _("No")), (1, _("Own threads")), (2, _("All threads"))],
)
can_change_marked_answers = forms.TypedChoiceField(
label=_("Can change marked answers"),
coerce=int,
initial=0,
choices=[(0, _("No")), (1, _("Own threads")), (2, _("All threads"))],
)
best_answer_change_time = forms.IntegerField(
label=_(
"Time limit for changing marked best answer in owned thread, in minutes"
),
help_text=_(
"Enter 0 to don't limit time for changing marked best answer in "
"owned thread."
),
initial=0,
min_value=0,
)
def change_permissions_form(role):
if isinstance(role, CategoryRole):
return CategoryPermissionsForm
def build_acl(acl, roles, key_name):
categories_roles = get_categories_roles(roles)
categories = list(Category.objects.all_categories(include_root=True))
for category in categories:
category_acl = acl["categories"].get(category.pk, {"can_browse": 0})
if category_acl["can_browse"]:
acl["categories"][category.pk] = build_category_acl(
category_acl, category, categories_roles, key_name
)
private_category = Category.objects.private_threads()
private_threads_acl = acl["categories"].get(private_category.pk)
if private_threads_acl:
private_threads_acl.update(
{
"can_mark_best_answers": 0,
"can_change_marked_answers": 0,
"best_answer_change_time": 0,
}
)
return acl
def build_category_acl(acl, category, categories_roles, key_name):
category_roles = categories_roles.get(category.pk, [])
final_acl = {
"can_mark_best_answers": 0,
"can_change_marked_answers": 0,
"best_answer_change_time": 0,
}
final_acl.update(acl)
algebra.sum_acls(
final_acl,
roles=category_roles,
key=key_name,
can_mark_best_answers=algebra.greater,
can_change_marked_answers=algebra.greater,
best_answer_change_time=algebra.greater_or_zero,
)
return final_acl
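# A worked example of how role values are meant to combine here (assuming the
# usual semantics suggested by the helper names in misago.acl.algebra): given
# two category roles
#
#     role_a = {"can_mark_best_answers": 1, "best_answer_change_time": 30}
#     role_b = {"can_mark_best_answers": 2, "best_answer_change_time": 0}
#
# algebra.greater keeps the higher value (2, "All threads"), while
# algebra.greater_or_zero presumably treats 0 as "no limit", so a role
# granting 0 wins over the 30 minute restriction.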
def add_acl_to_thread(user_acl, thread):
thread.acl.update(
{
"can_mark_best_answer": can_mark_best_answer(user_acl, thread),
"can_change_best_answer": can_change_best_answer(user_acl, thread),
"can_unmark_best_answer": can_unmark_best_answer(user_acl, thread),
}
)
def add_acl_to_post(user_acl, post):
post.acl.update(
{
"can_mark_as_best_answer": can_mark_as_best_answer(user_acl, post),
"can_hide_best_answer": can_hide_best_answer(user_acl, post),
"can_delete_best_answer": can_delete_best_answer(user_acl, post),
}
)
def register_with(registry):
registry.acl_annotator(Thread, add_acl_to_thread)
registry.acl_annotator(Post, add_acl_to_post)
def allow_mark_best_answer(user_acl, target):
if user_acl["is_anonymous"]:
raise PermissionDenied(_("You have to sign in to mark best answers."))
category_acl = user_acl["categories"].get(target.category_id, {})
if not category_acl.get("can_mark_best_answers"):
raise PermissionDenied(
_(
"You don't have permission to mark best answers in the "
'"%(category)s" category.'
)
% {"category": target.category}
)
if (
category_acl["can_mark_best_answers"] == 1
and user_acl["user_id"] != target.starter_id
):
raise PermissionDenied(
_(
"You don't have permission to mark best answer in this thread "
"because you didn't start it."
)
)
if not category_acl["can_close_threads"]:
if target.category.is_closed:
raise PermissionDenied(
_(
"You don't have permission to mark best answer in this thread "
'because its category "%(category)s" is closed.'
)
% {"category": target.category}
)
if target.is_closed:
raise PermissionDenied(
_(
"You can't mark best answer in this thread because it's closed "
"and you don't have permission to open it."
)
)
can_mark_best_answer = return_boolean(allow_mark_best_answer)
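# The return_boolean decorator imported above converts each allow_* check,
# which raises PermissionDenied on failure, into a True/False predicate. A
# rough sketch of what such a wrapper presumably looks like (illustrative
# only; the real implementation lives in misago.acl.decorators):
#
#     def return_boolean(func):
#         def wrapper(*args, **kwargs):
#             try:
#                 func(*args, **kwargs)
#                 return True
#             except PermissionDenied:
#                 return False
#         return wrapper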
def allow_change_best_answer(user_acl, target):
if not target.has_best_answer:
        return  # short-circuit permission test
category_acl = user_acl["categories"].get(target.category_id, {})
if not category_acl.get("can_change_marked_answers"):
raise PermissionDenied(
_(
"You don't have permission to change this thread's marked answer "
'because it\'s in the "%(category)s" category.'
)
% {"category": target.category}
)
if category_acl["can_change_marked_answers"] == 1:
if user_acl["user_id"] != target.starter_id:
raise PermissionDenied(
_(
"You don't have permission to change this thread's marked answer "
"because you are not a thread starter."
)
)
if not has_time_to_change_answer(user_acl, target):
# pylint: disable=line-too-long
message = ngettext(
"You don't have permission to change best answer that was marked for more than %(minutes)s minute.",
"You don't have permission to change best answer that was marked for more than %(minutes)s minutes.",
category_acl["best_answer_change_time"],
)
raise PermissionDenied(
message % {"minutes": category_acl["best_answer_change_time"]}
)
if target.best_answer_is_protected and not category_acl["can_protect_posts"]:
raise PermissionDenied(
_(
"You don't have permission to change this thread's best answer "
"because a moderator has protected it."
)
)
can_change_best_answer = return_boolean(allow_change_best_answer)
def allow_unmark_best_answer(user_acl, target):
if user_acl["is_anonymous"]:
raise PermissionDenied(_("You have to sign in to unmark best answers."))
if not target.has_best_answer:
        return  # short-circuit test
category_acl = user_acl["categories"].get(target.category_id, {})
if not category_acl.get("can_change_marked_answers"):
raise PermissionDenied(
_(
"You don't have permission to unmark threads answers in "
'the "%(category)s" category.'
)
% {"category": target.category}
)
if category_acl["can_change_marked_answers"] == 1:
if user_acl["user_id"] != target.starter_id:
raise PermissionDenied(
_(
"You don't have permission to unmark this best answer "
"because you are not a thread starter."
)
)
if not has_time_to_change_answer(user_acl, target):
# pylint: disable=line-too-long
message = ngettext(
"You don't have permission to unmark best answer that was marked for more than %(minutes)s minute.",
"You don't have permission to unmark best answer that was marked for more than %(minutes)s minutes.",
category_acl["best_answer_change_time"],
)
raise PermissionDenied(
message % {"minutes": category_acl["best_answer_change_time"]}
)
if not category_acl["can_close_threads"]:
if target.category.is_closed:
raise PermissionDenied(
_(
"You don't have permission to unmark this best answer "
'because its category "%(category)s" is closed.'
)
% {"category": target.category}
)
if target.is_closed:
raise PermissionDenied(
_(
"You can't unmark this thread's best answer "
"because it's closed and you don't have permission to open it."
)
)
if target.best_answer_is_protected and not category_acl["can_protect_posts"]:
raise PermissionDenied(
_(
"You don't have permission to unmark this thread's best answer "
"because a moderator has protected it."
)
)
can_unmark_best_answer = return_boolean(allow_unmark_best_answer)
def allow_mark_as_best_answer(user_acl, target):
if user_acl["is_anonymous"]:
raise PermissionDenied(_("You have to sign in to mark best answers."))
if target.is_event:
raise PermissionDenied(_("Events can't be marked as best answers."))
category_acl = user_acl["categories"].get(target.category_id, {})
if not category_acl.get("can_mark_best_answers"):
raise PermissionDenied(
_(
"You don't have permission to mark best answers "
'in the "%(category)s" category.'
)
% {"category": target.category}
)
if (
category_acl["can_mark_best_answers"] == 1
and user_acl["user_id"] != target.thread.starter_id
):
raise PermissionDenied(
_(
"You don't have permission to mark best answer in this thread "
"because you didn't start it."
)
)
if target.is_first_post:
raise PermissionDenied(
_("First post in a thread can't be marked as best answer.")
)
if target.is_hidden:
raise PermissionDenied(_("Hidden posts can't be marked as best answers."))
if target.is_unapproved:
raise PermissionDenied(_("Unapproved posts can't be marked as best answers."))
if target.is_protected and not category_acl["can_protect_posts"]:
raise PermissionDenied(
_(
"You don't have permission to mark this post as best answer "
"because a moderator has protected it."
)
)
can_mark_as_best_answer = return_boolean(allow_mark_as_best_answer)
def allow_hide_best_answer(user_acl, target):
if target.is_best_answer:
raise PermissionDenied(
_("You can't hide this post because its marked as best answer.")
)
can_hide_best_answer = return_boolean(allow_hide_best_answer)
def allow_delete_best_answer(user_acl, target):
if target.is_best_answer:
raise PermissionDenied(
_("You can't delete this post because its marked as best answer.")
)
can_delete_best_answer = return_boolean(allow_delete_best_answer)
def has_time_to_change_answer(user_acl, target):
category_acl = user_acl["categories"].get(target.category_id, {})
change_time = category_acl.get("best_answer_change_time", 0)
if change_time:
diff = timezone.now() - target.best_answer_marked_on
diff_minutes = int(diff.total_seconds() / 60)
return diff_minutes < change_time
return True | PypiClean |
/Editra-0.7.20.tar.gz/Editra-0.7.20/src/extern/pygments/util.py | import re
import sys
import codecs
split_path_re = re.compile(r'[/\\ ]')
doctype_lookup_re = re.compile(r'''(?smx)
(<\?.*?\?>)?\s*
<!DOCTYPE\s+(
[a-zA-Z_][a-zA-Z0-9]*\s+
[a-zA-Z_][a-zA-Z0-9]*\s+
"[^"]*")
[^>]*>
''')
tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?</.+?>(?uism)')
class ClassNotFound(ValueError):
"""
    Raised if one of the get_*_by_* functions didn't find a matching class.
"""
class OptionError(Exception):
pass
def get_choice_opt(options, optname, allowed, default=None, normcase=False):
string = options.get(optname, default)
if normcase:
string = string.lower()
if string not in allowed:
raise OptionError('Value for option %s must be one of %s' %
(optname, ', '.join(map(str, allowed))))
return string
def get_bool_opt(options, optname, default=None):
string = options.get(optname, default)
if isinstance(string, bool):
return string
elif isinstance(string, int):
return bool(string)
elif not isinstance(string, basestring):
raise OptionError('Invalid type %r for option %s; use '
'1/0, yes/no, true/false, on/off' % (
string, optname))
elif string.lower() in ('1', 'yes', 'true', 'on'):
return True
elif string.lower() in ('0', 'no', 'false', 'off'):
return False
else:
raise OptionError('Invalid value %r for option %s; use '
'1/0, yes/no, true/false, on/off' % (
string, optname))
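# For example (the option name is illustrative), the accepted spellings above
# behave as follows:
#
#     >>> get_bool_opt({'stripnl': 'yes'}, 'stripnl')
#     True
#     >>> get_bool_opt({'stripnl': 'off'}, 'stripnl')
#     False
#     >>> get_bool_opt({}, 'stripnl', default=True)
#     True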
def get_int_opt(options, optname, default=None):
string = options.get(optname, default)
try:
return int(string)
except TypeError:
raise OptionError('Invalid type %r for option %s; you '
'must give an integer value' % (
string, optname))
except ValueError:
raise OptionError('Invalid value %r for option %s; you '
'must give an integer value' % (
string, optname))
def get_list_opt(options, optname, default=None):
val = options.get(optname, default)
if isinstance(val, basestring):
return val.split()
elif isinstance(val, (list, tuple)):
return list(val)
else:
raise OptionError('Invalid type %r for option %s; you '
'must give a list value' % (
val, optname))
def docstring_headline(obj):
if not obj.__doc__:
return ''
res = []
for line in obj.__doc__.strip().splitlines():
if line.strip():
res.append(" " + line.strip())
else:
break
return ''.join(res).lstrip()
def make_analysator(f):
"""
Return a static text analysation function that
returns float values.
"""
def text_analyse(text):
try:
rv = f(text)
except Exception:
return 0.0
if not rv:
return 0.0
try:
return min(1.0, max(0.0, float(rv)))
except ValueError:
return 0.0
text_analyse.__doc__ = f.__doc__
return staticmethod(text_analyse)
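# A small usage sketch (the lexer and its heuristic are made up for
# illustration): a lexer would typically wrap its analyse_text helper so that
# whatever it returns is clamped to a float in [0.0, 1.0] and any exception
# is swallowed as 0.0.
#
#     class FooLexer(object):
#         def analyse_text(text):
#             if text.startswith('#!/usr/bin/foo'):
#                 return 0.8
#             return 0.0
#         analyse_text = make_analysator(analyse_text)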
def shebang_matches(text, regex):
"""
Check if the given regular expression matches the last part of the
shebang if one exists.
>>> from pygments.util import shebang_matches
>>> shebang_matches('#!/usr/bin/env python', r'python(2\.\d)?')
True
>>> shebang_matches('#!/usr/bin/python2.4', r'python(2\.\d)?')
True
>>> shebang_matches('#!/usr/bin/python-ruby', r'python(2\.\d)?')
False
>>> shebang_matches('#!/usr/bin/python/ruby', r'python(2\.\d)?')
False
>>> shebang_matches('#!/usr/bin/startsomethingwith python',
... r'python(2\.\d)?')
True
It also checks for common windows executable file extensions::
>>> shebang_matches('#!C:\\Python2.4\\Python.exe', r'python(2\.\d)?')
True
Parameters (``'-f'`` or ``'--foo'`` are ignored so ``'perl'`` does
the same as ``'perl -e'``)
Note that this method automatically searches the whole string (eg:
the regular expression is wrapped in ``'^$'``)
"""
index = text.find('\n')
if index >= 0:
first_line = text[:index].lower()
else:
first_line = text.lower()
if first_line.startswith('#!'):
try:
found = [x for x in split_path_re.split(first_line[2:].strip())
if x and not x.startswith('-')][-1]
except IndexError:
return False
regex = re.compile('^%s(\.(exe|cmd|bat|bin))?$' % regex, re.IGNORECASE)
if regex.search(found) is not None:
return True
return False
def doctype_matches(text, regex):
"""
Check if the doctype matches a regular expression (if present).
Note that this method only checks the first part of a DOCTYPE.
eg: 'html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"'
"""
m = doctype_lookup_re.match(text)
if m is None:
return False
doctype = m.group(2)
return re.compile(regex).match(doctype.strip()) is not None
def html_doctype_matches(text):
"""
Check if the file looks like it has a html doctype.
"""
return doctype_matches(text, r'html\s+PUBLIC\s+"-//W3C//DTD X?HTML.*')
_looks_like_xml_cache = {}
def looks_like_xml(text):
"""
Check if a doctype exists or if we have some tags.
"""
key = hash(text)
try:
return _looks_like_xml_cache[key]
except KeyError:
m = doctype_lookup_re.match(text)
if m is not None:
return True
rv = tag_re.search(text[:1000]) is not None
_looks_like_xml_cache[key] = rv
return rv
# Python 2/3 compatibility
if sys.version_info < (3,0):
b = bytes = str
u_prefix = 'u'
import StringIO, cStringIO
BytesIO = cStringIO.StringIO
StringIO = StringIO.StringIO
uni_open = codecs.open
else:
import builtins
bytes = builtins.bytes
u_prefix = ''
def b(s):
if isinstance(s, str):
return bytes(map(ord, s))
elif isinstance(s, bytes):
return s
else:
raise TypeError("Invalid argument %r for b()" % (s,))
import io
BytesIO = io.BytesIO
StringIO = io.StringIO
uni_open = builtins.open | PypiClean |
/Cowpox-6-py3-none-any.whl/cowpox/graph.py |
# This file is part of Cowpox.
#
# Cowpox is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Cowpox is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Cowpox. If not, see <http://www.gnu.org/licenses/>.
# This file incorporates work covered by the following copyright and
# permission notice:
# Copyright (c) 2010-2017 Kivy Team and other contributors
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from . import Graph, PipInstallMemo, RecipeMemo
from .make import Make
from .recipe import Recipe
from .util import findimpls
from aridity.config import Config
from diapyr import types
from importlib import import_module
from packaging.utils import canonicalize_name
from pkg_resources import parse_requirements
from pkgutil import iter_modules
from types import SimpleNamespace
import logging
log = logging.getLogger(__name__)
class RecipeInfo:
def __init__(self, impl):
self.groups = []
self.depends = {}
for depend in impl.depends:
if isinstance(depend, tuple):
self.groups.append(frozenset(map(canonicalize_name, depend)))
else:
self.depends[canonicalize_name(depend)] = depend
self.impl = impl
def dependmemotypes(self, groupmemotypes, implmemotypes):
for group in self.groups:
yield groupmemotypes[group]
for normdepend in self.depends:
yield implmemotypes.get(normdepend, PipInstallMemo)
def _namesonly(requires):
for r in parse_requirements(requires):
yield r.name
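# For instance (requirement strings are illustrative), _namesonly reduces full
# requirement specifiers to bare project names before recipe lookup, roughly:
#
#     list(_namesonly(['Kivy >= 2.0', 'pillow', 'requests[security]==2.23.0']))
#     # -> ['Kivy', 'pillow', 'requests']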
class GraphImpl(Graph):
@types(Config)
def __init__(self, config):
allimpls = {canonicalize_name(impl.name): impl
for p in config.recipe.packages
for m in iter_modules(import_module(p).__path__, f"{p}.")
for impl in findimpls(import_module(m.name), Recipe)}
groupmemotypes = {}
recipeinfos = {}
pypinames = {}
def adddepends(info):
for group in info.groups:
if group not in groupmemotypes:
groupmemotypes[group] = type(f"{'Or'.join(allimpls[normname].__name__ for normname in sorted(group))}Memo", (), {})
for normdepend, depend in info.depends.items():
if normdepend not in recipeinfos and normdepend not in pypinames:
if normdepend in allimpls:
recipeinfos[normdepend] = info = RecipeInfo(allimpls[normdepend])
adddepends(info)
else:
pypinames[normdepend] = depend # Keep an arbitrary unnormalised name.
# TODO: Minimise depends declared here.
adddepends(RecipeInfo(SimpleNamespace(depends = [
'python3', 'bdozlib', 'android', 'sdl2' if 'sdl2' == config.bootstrap.name else 'genericndkbuild', *_namesonly(config.requirements)])))
for group in groupmemotypes:
intersection = sorted(recipeinfos.keys() & group)
groupstr = ', '.join(sorted(group))
if not intersection:
raise Exception("Group not satisfied: %s" % groupstr)
log.debug("Group %s satisfied by: %s", groupstr, ', '.join(allimpls[normname].name for normname in intersection))
log.info("Recipes to build: %s", ', '.join(info.impl.name for info in recipeinfos.values()))
def memotypebases():
yield RecipeMemo
for group, groupmemotype in groupmemotypes.items():
if normname in group:
yield groupmemotype
implmemotypes = {}
for normname, info in recipeinfos.items():
implmemotypes[normname] = type(f"{info.impl.__name__}Memo", tuple(memotypebases()), {})
self.builders = [info.impl for info in recipeinfos.values()]
for normname, info in recipeinfos.items():
dependmemotypes = list(info.dependmemotypes(groupmemotypes, implmemotypes))
implmemotype = implmemotypes[normname]
@types(info.impl, Make, *dependmemotypes, this = implmemotype)
def builder(recipe, make, *memos):
return make(recipe.recipebuilddir, list(memos), recipe.mainbuild)
log.debug("%s(%s) requires: %s", implmemotype.__name__, ', '.join(b.__name__ for b in implmemotype.__bases__),
', '.join(t.__name__ for t in dependmemotypes) if dependmemotypes else ())
self.builders.append(builder)
self.pypinames = list(pypinames.values())
log.info("Requirements not found as recipes will be installed with pip: %s", ', '.join(self.pypinames)) | PypiClean |
/Nuitka_winsvc-1.7.10-cp310-cp310-win_amd64.whl/nuitka/OutputDirectories.py | import os
from nuitka import Options
from nuitka.utils.FileOperations import hasFilenameExtension, makePath
from nuitka.utils.Importing import getSharedLibrarySuffix
from nuitka.utils.Utils import isWin32OrPosixWindows, isWin32Windows
_main_module = None
def setMainModule(main_module):
"""Call this before using other methods of this module."""
# Technically required.
assert main_module.isCompiledPythonModule()
# Singleton and to avoid passing this one all the time, pylint: disable=global-statement
global _main_module
_main_module = main_module
def getSourceDirectoryPath(onefile=False):
"""Return path inside the build directory."""
    # Distinct build folders for onefile mode.
if onefile:
suffix = ".onefile-build"
else:
suffix = ".build"
result = Options.getOutputPath(
path=os.path.basename(getTreeFilenameWithSuffix(_main_module, suffix))
)
makePath(result)
return result
def _getStandaloneDistSuffix(bundle):
"""Suffix to use for standalone distribution folder."""
if bundle and Options.shallCreateAppBundle() and not Options.isOnefileMode():
return ".app"
else:
return ".dist"
def getStandaloneDirectoryPath(bundle=True):
assert Options.isStandaloneMode()
result = Options.getOutputPath(
path=os.path.basename(
getTreeFilenameWithSuffix(_main_module, _getStandaloneDistSuffix(bundle))
)
)
if bundle and Options.shallCreateAppBundle() and not Options.isOnefileMode():
result = os.path.join(result, "Contents", "MacOS")
return result
def getResultBasePath(onefile=False):
if Options.isOnefileMode() and onefile:
file_path = os.path.basename(getTreeFilenameWithSuffix(_main_module, ""))
if Options.shallCreateAppBundle():
file_path = os.path.join(file_path + ".app", "Contents", "MacOS", file_path)
return Options.getOutputPath(path=file_path)
elif Options.isStandaloneMode() and not onefile:
return os.path.join(
getStandaloneDirectoryPath(),
os.path.basename(getTreeFilenameWithSuffix(_main_module, "")),
)
else:
return Options.getOutputPath(
path=os.path.basename(getTreeFilenameWithSuffix(_main_module, ""))
)
def getResultFullpath(onefile):
"""Get the final output binary result full path."""
result = getResultBasePath(onefile=onefile)
if Options.shallMakeModule():
result += getSharedLibrarySuffix(preferred=True)
else:
output_filename = Options.getOutputFilename()
if Options.isOnefileMode() and output_filename is not None:
if onefile:
result = Options.getOutputPath(output_filename)
else:
result = os.path.join(
getStandaloneDirectoryPath(),
os.path.basename(output_filename),
)
elif Options.isStandaloneMode() and output_filename is not None:
result = os.path.join(
getStandaloneDirectoryPath(),
os.path.basename(output_filename),
)
elif output_filename is not None:
result = output_filename
elif not isWin32OrPosixWindows() and not Options.shallCreateAppBundle():
result += ".bin"
if isWin32OrPosixWindows() and not hasFilenameExtension(result, ".exe"):
result += ".exe"
return result
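# A few illustrative outcomes of the naming logic above (the module name
# "program" and the options are hypothetical):
#
#     main module    options                              result
#     program.py     --standalone, on Windows             program.dist/program.exe
#     program.py     no options, on Linux                 program.bin
#     program.py     --onefile --output-filename=app      app.exe (Windows) / app (Linux)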
def getResultRunFilename(onefile):
result = getResultFullpath(onefile=onefile)
if isWin32Windows() and Options.shallTreatUninstalledPython():
result = getResultBasePath(onefile=onefile) + ".cmd"
return result
def getTreeFilenameWithSuffix(module, suffix):
return module.getOutputFilename() + suffix
def getPgoRunExecutable():
return Options.getPgoExecutable() or getResultRunFilename(onefile=False)
def getPgoRunInputFilename():
return getPgoRunExecutable() + ".nuitka-pgo" | PypiClean |
/Grimsel-0.9.0.tar.gz/Grimsel-0.9.0/grimsel/auxiliary/build_utils.py | import grimsel.auxiliary.sqlutils.aux_sql_func as aql
import grimsel_config as config
db = 'storage2'
sqlc = aql.SqlConnector(db, user=config.PSQL_USER,
password=config.PSQL_PASSWORD,
host=config.PSQL_HOST,
port=config.PSQL_PORT)
def yr_getter(par, data_type=False, rnge=range(2015, 2050 + 1, 5)):
return [par + i if not data_type else (par + i, data_type)
for i in [''] + ['_yr' + str(ii) for ii
in rnge if not ii == 2015]]
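# For example, with a shortened range the helper expands a base column name
# into one entry per scenario year (2015, the base year, gets no suffix):
#
#     yr_getter('dmnd_sum', 'DOUBLE PRECISION', rnge=range(2015, 2026, 5))
#     # -> [('dmnd_sum', 'DOUBLE PRECISION'),
#     #     ('dmnd_sum_yr2020', 'DOUBLE PRECISION'),
#     #     ('dmnd_sum_yr2025', 'DOUBLE PRECISION')]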
def init_table(*args, **kwargs):
con_cur = sqlc.get_pg_con_cur()
print(con_cur, args, kwargs)
return aql.init_table(*args, **kwargs, con_cur=con_cur)
def init_sql_tables(sc, db):
tb_name = 'def_profile'
cols = [('pf_id', 'SMALLINT'), ('pf', 'VARCHAR'), ('primary_nd', 'VARCHAR')]
pk = ['pf_id']
unique = ['pf']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_pp_type'
cols = [('pt_id',' SMALLINT'),
('pt',' varchar(20)'),
('pp_broad_cat', 'varchar(100)'),
('color', 'VARCHAR(7)')]
pk = ['pt_id']
unique = ['pt']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_fuel'
cols = [('fl_id', 'SMALLINT'), ('fl', 'varchar(20)'),
('co2_int', 'DOUBLE PRECISION'),
('is_ca', 'SMALLINT'),
('is_constrained', 'SMALLINT'),
('color', 'VARCHAR(7)')]
pk = ['fl_id']
unique = ['fl']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_encar'
cols = [('ca_id', 'SMALLINT'),
('fl_id', 'SMALLINT', sc + '.def_fuel(fl_id)'),
('ca', 'VARCHAR(2)')]
pk = ['ca_id']
unique = ['ca']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_node'
cols = [('nd_id', 'SMALLINT'),
('nd', 'VARCHAR(10)'),
('color', 'VARCHAR(7)')] + yr_getter('price_co2', 'DOUBLE PRECISION')
pk = ['nd_id']
unique = ['nd']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'node_encar'
cols = [('nd_id', 'SMALLINT', sc + '.def_node(nd_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar(ca_id)'),
('dmnd_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
('grid_losses', 'DOUBLE PRECISION'),
('grid_losses_absolute', 'DOUBLE PRECISION'),
] + yr_getter('dmnd_sum', 'DOUBLE PRECISION')
pk = ['nd_id', 'ca_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_month'
cols = [('mt_id',' SMALLINT'),
('month_min_hoy',' SMALLINT'),
('month_weight',' SMALLINT'),
('mt',' VARCHAR(3)')]
pk = ['mt_id']
unique = ['name']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_week'
cols = [('wk_id',' SMALLINT'),
('wk',' SMALLINT'),
('week_weight', 'SMALLINT')]
pk = ['wk_id']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'def_plant'
cols = [('pp_id',' SMALLINT'), ('pp',' VARCHAR(20)'),
('nd_id',' SMALLINT', sc + '.def_node(nd_id)'),
('fl_id',' SMALLINT', sc + '.def_fuel(fl_id)'),
('pt_id',' SMALLINT', sc + '.def_pp_type(pt_id)'),
('set_def_pr',' SMALLINT'),
('set_def_cain',' SMALLINT'),
('set_def_ror',' SMALLINT'),
('set_def_pp',' SMALLINT'), ('set_def_st',' SMALLINT'),
('set_def_hyrs',' SMALLINT'),
('set_def_chp',' SMALLINT'),
('set_def_add',' SMALLINT'),
('set_def_rem',' SMALLINT'),
('set_def_sll',' SMALLINT'),
('set_def_curt',' SMALLINT'),
('set_def_lin',' SMALLINT'),
('set_def_scen',' SMALLINT'),
('set_def_winsol',' SMALLINT'),
('set_def_tr', 'SMALLINT'),
('set_def_peak', 'SMALLINT')]
pk = ['pp_id']
unique = ['pp']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'plant_month'
cols = [('mt_id',' SMALLINT', sc + '.def_month(mt_id)'),
('pp_id',' SMALLINT', sc + '.def_plant(pp_id)'),
('hyd_erg_bc','DOUBLE PRECISION')]
pk = ['mt_id', 'pp_id']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'profsupply'
cols = [('supply_pf_id',' SMALLINT', sc + '.def_profile(pf_id)'),
('hy', 'NUMERIC(6,2)'),
('value','NUMERIC(9,8)')]
pk = ['supply_pf_id', 'hy']
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'plant_encar'
cols = [('pp_id',' SMALLINT', sc + '.def_plant(pp_id)'),
('ca_id',' SMALLINT', sc + '.def_encar(ca_id)'),
('supply_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
('pp_eff','DOUBLE PRECISION'),
('erg_max','DOUBLE PRECISION'),
('discharge_duration','DOUBLE PRECISION'),
('st_lss_rt','DOUBLE PRECISION'),
('st_lss_hr','DOUBLE PRECISION'),
('factor_lin_0', 'DOUBLE PRECISION'),
('factor_lin_1','DOUBLE PRECISION'),
('cap_avlb', 'DOUBLE PRECISION'),
('vc_ramp','DOUBLE PRECISION'),
('vc_om','DOUBLE PRECISION'),
] + (yr_getter('cap_pwr_leg', 'DOUBLE PRECISION')
+ yr_getter('erg_chp', 'DOUBLE PRECISION'))
pk = ['pp_id', 'ca_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'plant_encar_scenarios'
cols = [('pp_id',' SMALLINT', sc + '.def_plant(pp_id)'),
('ca_id',' SMALLINT', sc + '.def_encar(ca_id)'),
('scenario', 'VARCHAR'),
] + (yr_getter('cap_pwr_leg', 'DOUBLE PRECISION'))
pk = ['pp_id', 'ca_id', 'scenario']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'imex_comp'
cols = [('nd_id', 'SMALLINT', sc + '.def_node(nd_id)'),
('nd_2_id', 'SMALLINT', sc + '.def_node(nd_id)'),
] + yr_getter('erg_trm', 'DOUBLE PRECISION', [2015])
pk = ['nd_id', 'nd_2_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'profdmnd'
cols = [('dmnd_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
('hy', 'NUMERIC(6,2)')] + yr_getter('value', 'NUMERIC(18,9)', [2015])
pk = ['hy', 'dmnd_pf_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'profchp'
cols = [('nd_id', 'SMALLINT', sc + '.def_node(nd_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar(ca_id)'),
('hy', 'SMALLINT'), ('value', 'DOUBLE PRECISION')]
pk = ['hy', 'nd_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'profinflow'
cols = [('pp_id', 'SMALLINT', sc + '.def_plant(pp_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar(ca_id)'),
('hy', 'SMALLINT'), ('value', 'DOUBLE PRECISION')]
pk = ['hy', 'pp_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'profprice'
cols = [('hy', 'NUMERIC(6,2)'),
('price_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
] + yr_getter('value', 'DOUBLE PRECISION', [2015])
pk = ['hy', 'price_pf_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'fuel_node_encar'
cols = ([('fl_id', 'SMALLINT', sc + '.def_fuel(fl_id)'),
('nd_id', 'SMALLINT', sc + '.def_node(nd_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar(ca_id)'),
('pricesll_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
('pricebuy_pf_id', 'SMALLINT', sc + '.def_profile(pf_id)'),
('is_chp', 'SMALLINT'),
] + yr_getter('erg_inp', 'DOUBLE PRECISION')
+ yr_getter('vc_fl', 'DOUBLE PRECISION'))
pk = ['fl_id', 'nd_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'fuel_node_encar_scenarios'
cols = ([('fl_id', 'SMALLINT', sc + '.def_fuel(fl_id)'),
('nd_id', 'SMALLINT', sc + '.def_node(nd_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar(ca_id)'),
('scenario', 'VARCHAR'),
] + yr_getter('erg_inp', 'DOUBLE PRECISION'))
pk = ['fl_id', 'nd_id', 'scenario']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
# table with monthly parameter modifiers
tb_name = 'parameter_month'
cols = ([('set_1_name', 'VARCHAR'), # from {'nd_id', 'fl_id', 'pp_id'}
('set_2_name', 'VARCHAR'), # from {'nd_id', 'fl_id', 'pp_id'}
('set_1_id', 'SMALLINT'),
('set_2_id', 'SMALLINT'),
('mt_id',' SMALLINT', sc + '.def_month(mt_id)'),
('parameter', 'VARCHAR'), # the parameter this applies to
('mt_fact', 'NUMERIC(10,9)'),
('mt_fact_others', 'NUMERIC(10,9)'),
])
pk = ['parameter', 'set_1_id', 'set_2_id', 'mt_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'node_connect'
cols = [('nd_id', 'SMALLINT', sc + '.def_node (nd_id)'),
('nd_2_id', 'SMALLINT', sc + '.def_node (nd_id)'),
('ca_id', 'SMALLINT', sc + '.def_encar (ca_id)'),
('mt_id', 'SMALLINT', sc + '.def_month(mt_id)'),
('cap_trme_leg', 'DOUBLE PRECISION'),
('cap_trmi_leg', 'DOUBLE PRECISION'),
]
pk = ['nd_id', 'nd_2_id', 'mt_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db)
tb_name = 'hydro'
cols = [('pp_id',' SMALLINT', sc + '.def_plant(pp_id)'),
('min_erg_mt_out_share', 'DOUBLE PRECISION'),
('max_erg_mt_in_share', 'DOUBLE PRECISION'),
('min_erg_share', 'DOUBLE PRECISION')]
pk = ['pp_id']
unique = []
init_table(tb_name=tb_name, cols=cols, schema=sc, ref_schema=sc,
pk=pk, unique=unique, db=db) | PypiClean |
/CmtStandardNames-0.2.2.tar.gz/CmtStandardNames-0.2.2/standard_names/io.py | from __future__ import print_function
import os
import sys
from . import (StandardName, Collection, BadNameError)
from .decorators import (format_as_wiki, format_as_yaml, format_as_plain_text)
from . import (google_doc, url, plain_text)
class Error(Exception):
"""Base exception for this module."""
pass
class BadIntentError(Error):
"""Error to indicate a bad key for intent."""
def __init__(self, key, valid_keys):
super(BadIntentError, self).__init__()
self._key = key
self._valid_keys = valid_keys
def __str__(self):
return '%s: Should be one of %s' % (self._key,
','.join(self._valid_keys))
def _list_to_string(lines, **kwds):
"""
    Concatenate a list of strings into one big string using the line separator
as a joiner.
:lines: List of strings
:keyword sorted: Sort lines before joining
:returns: Joined lines as a string
"""
sort_list = kwds.pop('sorted', False)
if sort_list:
sorted_lines = list(lines)
sorted_lines.sort()
return os.linesep.join(sorted_lines)
else:
return os.linesep.join(lines)
def _scrape_stream(stream, regex=r'\b\w+__\w+'):
"""
Scrape standard names from stream matching a regular expression.
:stream: A file-like object.
:keyword regex: A regular expression as a string
:returns: Scraped words as a Collection
"""
import re
names = Collection()
text = stream.read()
words = re.findall(regex, text)
for word in words:
try:
names.add(word)
except BadNameError as error:
print(error, file=sys.stderr)
return names
FORMATTERS = {
'plain': _list_to_string,
'wiki': format_as_wiki(_list_to_string),
'yaml': format_as_yaml(_list_to_string),
'txt': format_as_plain_text(_list_to_string),
}
#for (name, decorator) in [('wiki', format_as_wiki), ('yaml', format_as_yaml),
# ('txt', format_as_plain_text)]:
# FORMATTERS[name] = decorator(_list_to_string)
SCRAPERS = dict()
for decorator in [google_doc, url, plain_text]:
SCRAPERS[decorator.__name__] = decorator(_scrape_stream)
_VALID_INTENTS = ['input', 'output']
def _find_unique_names(models):
"""
Find unique names in a iterable of StandardNames.
:models: A dictionary of model information
:returns: A Collection of the unique names
"""
names = Collection()
for model in models:
if isinstance(model['exchange items'], dict):
new_names = []
for intent in model['exchange items']:
try:
assert(intent in _VALID_INTENTS)
except AssertionError:
raise BadIntentError(intent, _VALID_INTENTS)
new_names.extend(model['exchange items'][intent])
else:
new_names = model['exchange items']
for new_name in new_names:
names.add(StandardName(new_name))
return names
def from_model_file(stream):
"""
Get standard names from a YAML file listing standard names for particular
models and produce the corresponding Collection.
:stream: YAML stream
:returns: A Collection
"""
import yaml
models = yaml.load_all(stream)
names = _find_unique_names(models)
return names
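# A minimal sketch of the YAML layout this expects, inferred from
# _find_unique_names above (model names and standard names are illustrative):
#
#     ---
#     model: hydrotrend
#     exchange items:
#       input:
#         - channel_water__mean_of_depth
#       output:
#         - channel_water_sediment__mass_concentration
#     ---
#     model: child
#     exchange items:
#       - land_surface__elevation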
def from_list_file(stream):
    """
    Get standard names from a stream with one name per line and produce
    the corresponding Collection. Lines starting with '#' are skipped.
    :stream: text stream
    :returns: A Collection
    """
names = Collection()
for line in stream:
if not line.startswith('#'):
names.add(StandardName(line.strip()))
#names.add(line.strip())
return names
def scrape(source, **kwds):
"""
Scrape standard names for a named source.
:source: Name of the source as a string
:keyword format: The format of the source
:returns: A Collection
"""
source_format = kwds.pop('format', 'url')
return SCRAPERS[source_format](source, **kwds) | PypiClean |
/dragonflow-4.0.0.tar.gz/dragonflow-4.0.0/dragonflow/controller/apps/portqos.py |
import collections
from oslo_log import log
from dragonflow.controller import df_base_app
from dragonflow.db.models import constants as model_constants
from dragonflow.db.models import l2
from dragonflow.db.models import qos
LOG = log.getLogger(__name__)
class PortQosApp(df_base_app.DFlowApp):
def __init__(self, *args, **kwargs):
super(PortQosApp, self).__init__(*args, **kwargs)
self._local_ports = collections.defaultdict(set)
@df_base_app.register_event(l2.LogicalPort, l2.EVENT_BIND_LOCAL)
def _add_local_port(self, lport):
self._check_update_local_port_qos(lport)
@df_base_app.register_event(l2.LogicalPort, l2.EVENT_LOCAL_UPDATED)
def _update_local_port(self, lport, original_lport):
if (original_lport and
lport.qos_policy == original_lport.qos_policy):
# Do nothing, if the port's qos is the same as db store.
return
if original_lport.qos_policy:
self._local_ports[original_lport.qos_policy.id].discard(lport.id)
self._check_update_local_port_qos(lport)
def _check_update_local_port_qos(self, lport):
policy = lport.qos_policy
if not policy:
# If the there is no qos associated with lport in nb db,
# the qos in ovs db should also be checked and cleared.
# This is because the ovs db might not be consistent with
# nb db.
self.vswitch_api.clear_port_qos(lport.id)
return
self._local_ports[lport.qos_policy.id].add(lport.id)
self._update_local_port_qos(lport.id, policy)
def _update_local_port_qos(self, port_id, policy):
def _is_qos_set():
return policy.get_max_kbps() and policy.get_max_burst_kbps()
old_qos = self.vswitch_api.get_port_qos(port_id)
if old_qos is not None:
if _is_qos_set():
if (
old_qos.id != policy.id or
policy.is_newer_than(old_qos)
):
# The QoS from north is not the same as ovs db.
self.vswitch_api.update_port_qos(port_id, policy)
else:
# The QoS from north is not set, clear the QoS in ovs db.
self.vswitch_api.clear_port_qos(port_id)
else:
if _is_qos_set():
self.vswitch_api.set_port_qos(port_id, policy)
@df_base_app.register_event(l2.LogicalPort, l2.EVENT_UNBIND_LOCAL)
def _remove_local_port(self, lport):
if lport.qos_policy:
self._local_ports[lport.qos_policy.id].discard(lport.id)
# If removing lport in nb db, the qos in ovs db should also be checked
# and cleared. This is because the ovs db might not be consistent with
# nb db.
self.vswitch_api.delete_port_qos_and_queue(lport.id)
@df_base_app.register_event(qos.QosPolicy, model_constants.EVENT_UPDATED)
def update_qos_policy(self, policy, orig_policy=None):
for port_id in self._local_ports[policy.id]:
self._update_local_port_qos(port_id, policy)
@df_base_app.register_event(qos.QosPolicy, model_constants.EVENT_DELETED)
def delete_qos_policy(self, policy):
ports = self._local_ports.pop(policy.id, ())
for port_id in ports:
self.vswitch_api.clear_port_qos(port_id) | PypiClean |
/NehorayRapid-0.0.1-py3-none-any.whl/mmedit/core/export/wrappers.py | import os.path as osp
import warnings
import numpy as np
import onnxruntime as ort
import torch
from torch import nn
from mmedit.models import BaseMattor, BasicRestorer, build_model
def inference_with_session(sess, io_binding, output_names, input_tensor):
    """Run an ONNX Runtime session on ``input_tensor`` using I/O binding.
    The input is bound directly from the tensor's (CPU or CUDA) memory, the
    requested outputs are bound by name, and the results are copied back to
    CPU as numpy arrays.
    """
device_type = input_tensor.device.type
device_id = input_tensor.device.index
device_id = 0 if device_id is None else device_id
io_binding.bind_input(
name='input',
device_type=device_type,
device_id=device_id,
element_type=np.float32,
shape=input_tensor.shape,
buffer_ptr=input_tensor.data_ptr())
for name in output_names:
io_binding.bind_output(name)
sess.run_with_iobinding(io_binding)
pred = io_binding.copy_outputs_to_cpu()
return pred
class ONNXRuntimeMattor(nn.Module):
def __init__(self, sess, io_binding, output_names, base_model):
super(ONNXRuntimeMattor, self).__init__()
self.sess = sess
self.io_binding = io_binding
self.output_names = output_names
self.base_model = base_model
def forward(self,
merged,
trimap,
meta,
test_mode=False,
save_image=False,
save_path=None,
iteration=None):
input_tensor = torch.cat((merged, trimap), 1).contiguous()
pred_alpha = inference_with_session(self.sess, self.io_binding,
self.output_names, input_tensor)[0]
pred_alpha = pred_alpha.squeeze()
pred_alpha = self.base_model.restore_shape(pred_alpha, meta)
eval_result = self.base_model.evaluate(pred_alpha, meta)
if save_image:
self.base_model.save_image(pred_alpha, meta, save_path, iteration)
return {'pred_alpha': pred_alpha, 'eval_result': eval_result}
class RestorerGenerator(nn.Module):
def __init__(self, sess, io_binding, output_names):
super(RestorerGenerator, self).__init__()
self.sess = sess
self.io_binding = io_binding
self.output_names = output_names
def forward(self, x):
pred = inference_with_session(self.sess, self.io_binding,
self.output_names, x)[0]
pred = torch.from_numpy(pred)
return pred
class ONNXRuntimeRestorer(nn.Module):
def __init__(self, sess, io_binding, output_names, base_model):
super(ONNXRuntimeRestorer, self).__init__()
self.sess = sess
self.io_binding = io_binding
self.output_names = output_names
self.base_model = base_model
restorer_generator = RestorerGenerator(self.sess, self.io_binding,
self.output_names)
base_model.generator = restorer_generator
def forward(self, lq, gt=None, test_mode=False, **kwargs):
return self.base_model(lq, gt=gt, test_mode=test_mode, **kwargs)
class ONNXRuntimeEditing(nn.Module):
def __init__(self, onnx_file, cfg, device_id):
super(ONNXRuntimeEditing, self).__init__()
ort_custom_op_path = ''
try:
from mmcv.ops import get_onnxruntime_op_path
ort_custom_op_path = get_onnxruntime_op_path()
except (ImportError, ModuleNotFoundError):
warnings.warn('If input model has custom op from mmcv, \
you may have to build mmcv with ONNXRuntime from source.')
session_options = ort.SessionOptions()
# register custom op for onnxruntime
if osp.exists(ort_custom_op_path):
session_options.register_custom_ops_library(ort_custom_op_path)
sess = ort.InferenceSession(onnx_file, session_options)
providers = ['CPUExecutionProvider']
options = [{}]
is_cuda_available = ort.get_device() == 'GPU'
if is_cuda_available:
providers.insert(0, 'CUDAExecutionProvider')
options.insert(0, {'device_id': device_id})
sess.set_providers(providers, options)
self.sess = sess
self.device_id = device_id
self.io_binding = sess.io_binding()
self.output_names = [_.name for _ in sess.get_outputs()]
base_model = build_model(
cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)
if isinstance(base_model, BaseMattor):
WraperClass = ONNXRuntimeMattor
elif isinstance(base_model, BasicRestorer):
WraperClass = ONNXRuntimeRestorer
self.wraper = WraperClass(self.sess, self.io_binding,
self.output_names, base_model)
def forward(self, **kwargs):
return self.wraper(**kwargs) | PypiClean |
/Flask-ErrorsHandler-4.0.2.tar.gz/Flask-ErrorsHandler-4.0.2/flask_errors_handler/normalize.py | import traceback
from datetime import datetime
from flask import current_app as cap
from werkzeug import exceptions, http
from werkzeug.routing import RequestRedirect
from .exception import ApiProblem
class BaseNormalize(object):
def normalize(self, ex, **kwargs):
"""
Child class must return super().normalize() so as to keep the chain of Mixins
:param ex: input exception
:return:
"""
return ex
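# A custom mixin would follow the same cooperative pattern, e.g. (sketch only;
# the exception type handled and the payload are illustrative):
#
#     class TimeoutMixin(BaseNormalize):
#         def normalize(self, ex, **kwargs):
#             if isinstance(ex, exceptions.GatewayTimeout):
#                 ex.response = dict(timeout=True)
#             return super().normalize(ex)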
class RequestRedirectMixin(BaseNormalize):
def normalize(self, ex, **kwargs):
"""
:param ex:
:return:
"""
if isinstance(ex, RequestRedirect):
location = dict(location=ex.new_url)
ex.headers = location
ex.response = location
return super().normalize(ex)
class MethodNotAllowedMixin(BaseNormalize):
def normalize(self, ex, **kwargs):
"""
:param ex:
:return:
"""
if isinstance(ex, exceptions.MethodNotAllowed):
if isinstance(ex.valid_methods, (list, tuple)):
methods = ex.valid_methods
else:
methods = (ex.valid_methods,)
try:
ex.headers = dict(Allow=", ".join(methods))
ex.response = dict(allowed=methods)
except TypeError: # pragma: no cover
pass
return super().normalize(ex)
class UnauthorizedMixin(BaseNormalize):
def normalize(self, ex, **kwargs):
"""
:param ex:
:return:
"""
def to_dict(item):
item = dict(item)
item['auth_type'] = item.pop('__auth_type__', None)
return item
if isinstance(ex, exceptions.Unauthorized):
if ex.www_authenticate:
ex.headers = {"WWW-Authenticate": ", ".join([str(a) for a in ex.www_authenticate])}
ex.response = dict(authenticate=[to_dict(a) for a in ex.www_authenticate if a])
return super().normalize(ex)
class RequestedRangeNotSatisfiableMixin(BaseNormalize):
def normalize(self, ex, **kwargs):
"""
:param ex:
:return:
"""
if isinstance(ex, exceptions.RequestedRangeNotSatisfiable):
if ex.length:
unit = ex.units or 'bytes'
ex.headers = {"Content-Range": f"{unit} */{ex.length}"}
ex.response = dict(units=unit, length=ex.length)
return super().normalize(ex)
class RetryAfterMixin(BaseNormalize):
def normalize(self, ex, **kwargs):
"""
:param ex:
:return:
"""
if isinstance(ex, (exceptions.TooManyRequests, exceptions.ServiceUnavailable)):
if ex.retry_after:
retry = ex.retry_after
if isinstance(retry, datetime):
retry = http.http_date(retry)
ex.headers = {"Retry-After": str(retry)}
ex.response = dict(retry_after=ex.retry_after)
return super().normalize(ex)
class NormalizerMixin(BaseNormalize):
class DumpEx:
def __str__(self):
return traceback.format_exc()
def normalize(self, ex, exc_class=ApiProblem, **kwargs):
"""
:param ex: Exception
:param exc_class: overrides ApiProblem class
:return: new Exception instance of HTTPException
"""
ex = super().normalize(ex)
if isinstance(ex, exc_class):
return ex
tb = self.DumpEx()
if cap.config['DEBUG']:
mess = str(tb) # pragma: no cover
else:
mess = cap.config['ERROR_DEFAULT_MSG']
_ex = exc_class(mess, **kwargs)
if isinstance(ex, exceptions.HTTPException):
_ex.code = ex.code
_ex.description = ex.get_description()
try:
_ex.response = ex.response
except AttributeError:
_ex.response = None
try:
# noinspection PyUnresolvedReferences
_ex.headers.update(ex.headers)
except AttributeError:
pass
else:
cap.logger.error("%s", tb)
return _ex
class DefaultNormalizer(
NormalizerMixin,
MethodNotAllowedMixin,
RequestRedirectMixin,
UnauthorizedMixin,
RequestedRangeNotSatisfiableMixin,
RetryAfterMixin,
):
"""
Default normalizer uses all mixins
""" | PypiClean |
/BlueWhale3_Bioinformatics-4.1.32-py3-none-any.whl/orangecontrib/bioinformatics/widgets/utils/gui/__init__.py | from typing import Sequence
from numbers import Real, Integral
from collections import namedtuple
from AnyQt.QtGui import QTextDocument, QAbstractTextDocumentLayout
from AnyQt.QtCore import Qt, QSize, QSortFilterProxyModel
from AnyQt.QtWidgets import QFrame, QStyle, QApplication, QStyledItemDelegate, QStyleOptionViewItem
from .gene_sets import GeneSetsSelection
from .gene_scoring import GeneScoringWidget, gene_scoring_method
from .list_completer import TokenListCompleter
from .label_selection import (
RowGroup,
ColumnGroup,
LabelSelectionWidget,
itemselection,
group_candidates,
standarditem_from,
group_selection_mask,
)
__all__ = (
'GeneSetsSelection',
'GeneScoringWidget',
'gene_scoring_method',
'TokenListCompleter',
'RowGroup',
'ColumnGroup',
'LabelSelectionWidget',
'itemselection',
'group_candidates',
'standarditem_from',
'group_selection_mask',
)
# Creates line separator
def horizontal_line():
line = QFrame()
line.setFrameShape(QFrame.HLine)
line.setFrameShadow(QFrame.Sunken)
return line
class FilterProxyModel(QSortFilterProxyModel):
"""
A simple filter proxy model with settable filter predicates
Example
-------
>>> proxy = FilterProxyModel()
>>> proxy.set_filters([
... FilterProxyModel.Filter(0, Qt.DisplayRole, lambda value: value < 1)
... ])
"""
Filter = namedtuple("Filter", ["column", "role", "predicate"])
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__filters = []
def reset_filters(self):
self.__filters = []
self.invalidateFilter()
def set_filters(self, filters):
# type: (Sequence[FilterProxyModel.Filter]) -> None
filters = [FilterProxyModel.Filter(f.column, f.role, f.predicate) for f in filters]
self.__filters = filters
self.invalidateFilter()
def filterAcceptsRow(self, row, parent):
source = self.sourceModel()
def apply(f):
index = source.index(row, f.column, parent)
data = source.data(index, f.role)
try:
return f.predicate(data)
except (TypeError, ValueError):
return False
return all(apply(f) for f in self.__filters)
class NumericalColumnDelegate(QStyledItemDelegate):
"""
An Item delegate for displaying numerical columns
"""
def __init__(self, parent=None, precision=4, notation='f'):
super().__init__(parent)
self.precision = precision
self.notation = notation
def displayText(self, value, locale):
if isinstance(value, Integral):
return locale.toString(int(value))
elif isinstance(value, Real):
return locale.toString(float(value), self.notation, self.precision)
else:
return super().displayText(value, locale)
def initStyleOption(self, option, index):
super().initStyleOption(option, index)
align = index.data(Qt.TextAlignmentRole)
data = index.data(Qt.DisplayRole)
if align is None and isinstance(data, Real):
# Right align if the model does not specify otherwise
option.displayAlignment = Qt.AlignRight | Qt.AlignVCenter
class HTMLDelegate(QStyledItemDelegate):
"""
https://stackoverflow.com/questions/1956542/how-to-make-item-view-render-rich-html-text-in-qt
https://stackoverflow.com/questions/2375763/how-to-open-an-url-in-a-qtableview
"""
def sizeHint(self, option, index):
options = QStyleOptionViewItem(option)
gene_obj = index.data(Qt.DisplayRole)
self.initStyleOption(options, index)
doc = QTextDocument()
doc.setHtml(gene_obj.to_html())
doc.setTextWidth(options.rect.width() - 10)
return QSize(doc.idealWidth(), doc.size().height())
def paint(self, painter, option, index):
options = QStyleOptionViewItem(option)
row_obj = index.data(Qt.DisplayRole)
self.initStyleOption(options, index)
# print(option.rect.width())
style = QApplication.style() if options.widget is None else options.widget.style()
doc = QTextDocument()
doc.setHtml(row_obj.to_html())
doc.setTextWidth(option.rect.width() - 10)
# doc.setPageSize(300)
# print(doc.loadResource(3))
options.text = ""
style.drawControl(QStyle.CE_ItemViewItem, options, painter)
ctx = QAbstractTextDocumentLayout.PaintContext()
text_rect = style.subElementRect(QStyle.SE_ItemViewItemText, options)
painter.save()
painter.translate(text_rect.topLeft())
painter.setClipRect(text_rect.translated(-text_rect.topLeft()))
doc.documentLayout().draw(painter, ctx)
painter.restore() | PypiClean |
/GraphCASE-0.0.9.tar.gz/GraphCASE-0.0.9/GAE/graph_reconstructor.py | import math
import networkx as nx
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from pyvis import network as net
class GraphReconstructor:
"""
    Class for reconstructing the sampled local neighbourhood into a graph based on the inputlayer
of the encoder or outputlayer of the decoder. The reconstructed graph is a networkx graph.
"""
def __init__(self, deduplicate=True, delta=0.0001, dummy=0, fraction_sim=1.0):
self.node_dim = 0 # dimension of the node labels
self.edge_dim = 0 # dimension of the edge labels
self.support_size = [0] # number of samples per layer
self.layers = 0 # number of layers
self.deduplicate = deduplicate
self.delta = delta
self.fraction_sim = fraction_sim # fraction of node dimensions with similar values when determining if node is same.
self.dummy = [dummy]
self.node_dict = None
def reconstruct_graph(self, target, inputlayer, support_size, pos_encoding_size=0):
"""
        Reconstructs the sampled local neighbourhood into a graph based on the inputlayer
of the encoder or outputlayer of the decoder. The reconstructed graph is a networkx graph.
Args:
target: numpy array with the features of the target node.
inputLayer: 2-d numpy array containing the sampled feature and edge values.
support_size: list containing the number of samples per layer.
Returns:
graphx graph consisting of the sampled nodes
"""
self.node_dim = target.shape[-1]
self.dummy = self.dummy * (self.node_dim + pos_encoding_size)
self.edge_dim = tf.shape(inputlayer)[2].numpy() - self.node_dim - pos_encoding_size
self.support_size = support_size
self.layers = len(support_size)
self.pos_encoding_size = pos_encoding_size
self.node_dict = np.zeros((self.layers, self.__get_switch_count(self.layers)), dtype=int)
block_size = self.layers - 1 + self.support_size[-1]
blocks = tf.shape(inputlayer)[1].numpy() / block_size # number of blocks
graph = nx.DiGraph()
root_encoding = [1]+[0]*(pos_encoding_size-1)
root_features = target.numpy().flatten().tolist()
parent_feat = dict([('feat'+str(i), t) for i, t in enumerate(root_features + root_encoding)])
graph.add_node(1, **parent_feat) # node id 0 is reserved for dummy
for block_nr in range(int(blocks)):
start_row = block_nr * block_size
block = inputlayer[:, start_row : start_row + block_size, :]
self.__process_blocks(graph, 1, block, 1, block_nr/ blocks)
return graph
def __process_blocks(self, graph, layer, block, parent, block_nr_ratio):
# determine the direction by calculation the number of direction switches for that layer.
layer_switch_cnt = self.__get_switch_count(layer)
current_switch = math.floor(block_nr_ratio * layer_switch_cnt)
is_incoming = (current_switch % 2) == 0
# print(f"layer = {layer}, block_rat = {block_nr_ratio}, incoming ={is_incoming}")
if layer < self.layers:
# only the first node in the block needs to be process
# check if first node is already added
next_layer_switch_cnt = self.__get_switch_count(layer+1)
current_switch = math.floor(block_nr_ratio * next_layer_switch_cnt)
if (current_switch % 2) == 0:
# add new node
child = self.__add_node_edge(graph, parent, block[:, 0:1, :], is_incoming)
self.node_dict[layer][current_switch] = child
else:
#retrieve node id of child
child = self.node_dict[layer][current_switch-1]
if child != 0:
# only process block if the parent is not a dummy node
self.__process_blocks(graph, layer+1, block[:, 1:, :], child, block_nr_ratio)
else:
for i in range(tf.shape(block)[1].numpy()):
self.__add_node_edge(graph, parent, block[:, i:i+1, :], is_incoming)
def __get_switch_count(self, layer):
return np.prod(self.support_size[:layer - 1]) * 2 ** layer
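    # Worked example: with support_size = [3, 4] this gives
    # __get_switch_count(1) = prod([]) * 2**1 = 2 and
    # __get_switch_count(2) = prod([3]) * 2**2 = 12, i.e. the number of
    # in/out direction switches doubles per layer and scales with the
    # sample counts of the preceding layers.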
def __add_node_edge(self, graph, parent, node_edge, is_incoming=True):
pos_enc = node_edge[0, 0, 0:self.pos_encoding_size]
node = node_edge[0, 0, -self.node_dim:]
node = np.concatenate([node, pos_enc], axis=0)
edge = node_edge[0, 0, self.pos_encoding_size:self.pos_encoding_size + self.node_dim]
node_id = self.__add_node(graph, node, parent)
if node_id != 0: # node is not a dummy node
edge_feat = dict([('edge_feat'+str(i), t) for i, t in enumerate(edge.numpy())])
if is_incoming:
graph.add_edge(node_id, parent, **edge_feat)
else:
graph.add_edge(parent, node_id, **edge_feat)
return node_id
def __add_node(self, graph, node, parent):
new_id = graph.number_of_nodes() + 1
# check if node matches dummy node.
equal_count = len([i for i, j in zip(node, self.dummy) if abs(i - j) < self.delta])
if equal_count >= node.shape[0] * self.fraction_sim:
return 0
# check if node is already part of the graph.
# exclude the parent node in this check otherwise we get self loops
non_parent_nodes = [u for u in graph.nodes(data=True) if u[0]!=parent]
if self.deduplicate:
for u in non_parent_nodes:
u_feat = [v for k, v in sorted(u[1].items(), key=lambda tup: int(tup[0][4:]))]
count = len([i for i, j in zip(node, u_feat) if abs(i - j) < self.delta])
if count >= node.shape[0] * self.fraction_sim:
return u[0]
# add node to graph.
node_feat = dict([('feat'+str(i), t) for i, t in enumerate(node.flatten())])
graph.add_node(new_id, **node_feat)
return new_id
@staticmethod
def show_graph(graph, node_label=None, ax=None):
"""plots the graph in plotly
"""
if node_label is not None:
node_labels = nx.get_node_attributes(graph, node_label)
else:
node_labels = None
pos = nx.kamada_kawai_layout(graph)
edge_labels = nx.get_edge_attributes(graph, name='weight')
length = nx.single_source_dijkstra_path_length(graph.to_undirected(), 1, 2, weight=1)
color = [v / 2 for k, v in sorted(length.items(), key=lambda tup: int(tup[0]))]
options = {
'node_color': color,
'node_size': 300,
'width': 1,
'with_labels': True,
'labels': node_labels,
# 'edge_labels': nx.get_edge_attributes(graph, name='weight'),
'pos': pos,
'cmap': plt.cm.rainbow
}
if ax is None:
nx.draw(graph, **options)
nx.draw_networkx_edge_labels(graph, pos, edge_labels=edge_labels)
plt.show()
else:
nx.draw(graph, **options, ax=ax)
nx.draw_networkx_edge_labels(graph, pos, edge_labels=edge_labels, ax=ax)
def show_pyvis(self, graph, node_label=None):
""" plot graph in pyvis
"""
nt = net.Network(notebook=True, directed=True)
# nt.from_nx(graph)
nt.set_edge_smooth('straightCross')
length_dict = nx.single_source_dijkstra_path_length(graph.to_undirected(), 1, 3, weight=1)
color_dict = {0: 'red', 1: 'lightblue', 2: 'lightgreen'}
for node in graph.nodes(data=True):
if node_label is not None:
nt.add_node(node[0], str(node[1][node_label]), color=color_dict[length_dict[node[0]]],
shape='circle')
else:
nt.add_node(node[0], node[0], color=color_dict[length_dict[node[0]]],
shape='circle')
for o, i, l in graph.edges(data=True):
nt.add_edge(o, i, label=str(round(l['edge_feat0'], 2)))
return nt | PypiClean |
/FLORIS-3.4.1.tar.gz/FLORIS-3.4.1/floris/simulation/wake_turbulence/wake_induced_mixing.py |
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
from typing import Any, Dict
import numpy as np
from attrs import define, field
from floris.simulation import (
BaseModel,
Farm,
FlowField,
Grid,
Turbine,
)
from floris.utilities import cosd, sind
@define
class WakeInducedMixing(BaseModel):
"""
WakeInducedMixing is a model used to generalize wake-added turbulence
in the Empirical Gaussian wake model. It computes the contribution of each
turbine to a "wake-induced mixing" term that in turn is used in the
velocity deficit and deflection models.
Args:
parameter_dictionary (dict): Model-specific parameters.
Default values are used when a parameter is not included
in `parameter_dictionary`. Possible key-value pairs include:
- **atmospheric_ti_gain** (*float*): The contribution of ambient
turbulent intensity to the wake-induced mixing term. Currently
throws a warning if nonzero.
References:
.. bibliography:: /references.bib
:style: unsrt
:filter: docname in docnames
"""
atmospheric_ti_gain: float = field(converter=float, default=0.0)
def __attrs_post_init__(self) -> None:
if self.atmospheric_ti_gain != 0.0:
nonzero_err_msg = \
"Running wake_induced_mixing model with mixing contributions"+\
" from the atmospheric turbulence intensity has not been"+\
" vetted. To avoid this warning, set atmospheric_ti_gain=0."+\
" in the FLORIS input yaml."
self.logger.warning(nonzero_err_msg, stack_info=True)
def prepare_function(self) -> dict:
pass
def function(
self,
axial_induction_i: np.ndarray,
downstream_distance_D_i: np.ndarray,
) -> None:
"""
Calculates the contribution of turbine i to all other turbines'
mixing terms.
Args:
axial_induction_i (np.array): Axial induction factor of
the ith turbine (-).
downstream_distance_D_i (np.array): The distance downstream
from turbine i to all other turbines (specified in terms
of multiples of turbine i's rotor diameter) (D).
Returns:
np.array: Components of the wake-induced mixing term due to
the ith turbine.
"""
wake_induced_mixing = axial_induction_i[:,:,:,0,0] / downstream_distance_D_i**2
        return wake_induced_mixing
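# A minimal usage sketch (illustrative only). The array shapes below are
# assumptions inferred from the indexing in `function` above (wind-direction,
# wind-speed, turbine and grid dimensions), not taken from FLORIS documentation.
def _example_wake_induced_mixing():
    import numpy as np
    model = WakeInducedMixing()  # default atmospheric_ti_gain=0.0
    # assumed shape: (n wind directions, n wind speeds, n turbines, grid y, grid z)
    axial_induction_i = np.full((1, 1, 3, 5, 5), 0.25)
    # distance from turbine i to every turbine, in rotor diameters of turbine i
    downstream_distance_D_i = np.array([[[np.inf, 5.0, 10.0]]])
    # grid dimensions are dropped and the induction is scaled by 1 / distance^2
    return model.function(axial_induction_i, downstream_distance_D_i)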
# Source: /MDP-3.6.tar.gz/MDP-3.6/mdp/nodes/ica_nodes.py
from __future__ import print_function
from __future__ import division
from builtins import range
from past.utils import old_div
from builtins import object
__docformat__ = "restructuredtext en"
import math
import mdp
from .isfa_nodes import ISFANode
numx, numx_rand, numx_linalg = mdp.numx, mdp.numx_rand, mdp.numx_linalg
utils = mdp.utils
mult = utils.mult
class ProjectMatrixMixin(object):
"""Mixin class to be inherited by all ICA-like algorithms"""
def get_projmatrix(self, transposed=1):
"""Return the projection matrix.
:param transposed: Indicates whether transposed projection matrix is to
be returned.
:type transposed: bool
:return: The projection matrix.
:rtype: numpy.ndarray
"""
self._if_training_stop_training()
Q = self.filters.T
if not self.whitened:
W = self.white.get_projmatrix(transposed=0)
T = mult(Q, W)
else:
T = Q
if transposed:
return T.T
return T
def get_recmatrix(self, transposed=1):
"""Return the back-projection matrix (i.e. the reconstruction matrix).
.. note:: If the unknown sources are white, this is a good
approximation of the mixing matrix (up to a permutation matrix).
:param transposed: Indicates whether transposed projection matrix is to
be returned.
:type transposed: bool
:return: The back-projection matrix.
:rtype: numpy.ndarray
"""
self._if_training_stop_training()
Q = self.filters.T
if not self.whitened:
W = self.white.get_recmatrix(transposed=1)
T = mult(Q, W)
else:
T = Q
if transposed:
return T
return T.T
class ICANode(mdp.Cumulator, mdp.Node, ProjectMatrixMixin):
"""
ICANode is a general class to handle different batch-mode algorithm for
Independent Component Analysis.
.. admonition:: Reference
More information about ICA can be found among others in
Hyvarinen A., Karhunen J., Oja E. (2001). Independent Component Analysis,
Wiley.
"""
def __init__(self, limit = 0.001, telescope = False, verbose = False,
whitened = False, white_comp = None, white_parm = None,
input_dim = None, dtype = None):
"""Initializes an object of type 'ICANode'.
:param limit: Convergence threshold.
:type limit: float
:param telescope: If telescope == True, use Telescope mode: Instead of
using all input data in a single batch try larger and larger
chunks of the input data until convergence is achieved. This
should lead to significantly faster convergence for stationary
statistics. This mode has not been thoroughly tested and must
be considered beta.
:type telescope: bool
        :param verbose: Indicates whether information is to be reported about
the operation.
:type verbose: bool
:param whitened: Set whitened is True if input data are already whitened.
Otherwise the node will whiten the data itself.
:type whitened: bool
:param white_comp: If whitened is False, you can set 'white_comp' to the
number of whitened components to keep during the
calculation (i.e., the input dimensions are reduced to
white_comp by keeping the components of largest variance).
:type white_comp: int
:param white_parm: A dictionary with additional parameters for whitening.
It is passed directly to the WhiteningNode constructor. For example::
>>> white_parm = { 'svd' : True }
:type white_parm: dict
:param input_dim: The input dimensionality.
:type input_dim: int
:param dtype: The datatype.
:type dtype: numpy.dtype or str
"""
self.telescope = telescope
self.verbose = verbose
self.limit = limit
self.whitened = whitened
self.white_comp = white_comp
if white_parm is None:
self.white_parm = {}
else:
self.white_parm = white_parm
super(ICANode, self).__init__(input_dim, None, dtype)
def _set_input_dim(self, n):
self._input_dim = n
if self.whitened:
self.output_dim = n
elif self.white_comp is None:
self.output_dim = n
def _stop_training(self):
"""Whiten data if needed and call the 'core' routine to perform ICA.
Take care of telescope-mode if needed.
"""
super(ICANode, self)._stop_training()
verbose = self.verbose
core = self.core
limit = self.limit
# ?? rewrite as a 2-phases node
# whiten if needed
if not self.whitened:
self.output_dim = self.white_comp
white = mdp.nodes.WhiteningNode(output_dim = self.white_comp,
dtype=self.dtype,
**self.white_parm)
white.train(self.data)
self.data = white.execute(self.data)
self.white = white
# if output_dim not set, set it now
if self.output_dim is None:
self.output_dim = self.input_dim
data = self.data
# call 'core' in telescope mode if needed
if self.telescope:
minpow = math.frexp(self.input_dim*10)[1]
maxpow = int(old_div(numx.log(data.shape[0]),numx.log(2)))
for tel in range(minpow, maxpow+1):
index = 2**tel
if verbose:
print("--\nUsing %d inputs" % index)
convergence = core(data[:index, :])
if convergence <= limit:
break
else:
convergence = core(data)
if verbose:
print("Convergence criterium: ", convergence)
self.convergence = convergence
def core(self, data):
"""This is the core routine of the ICANode.
Each subclass must define this function to return the achieved
convergence value. This function is also responsible for setting the
ICA filters matrix self.filters.
.. note::
The matrix self.filters is applied to the right of the matrix
containing input data. This is the transposed of the matrix
defining the linear transformation.
:param data: The data you want to perform ICA on.
:type data: numpy.ndarray
:return: The achieved convergence value.
"""
pass
def _execute(self, x):
if not self.whitened:
x = self.white.execute(x)
# self.filters is applied to the right of the
# matrix containing input data. This is the transposed of the matrix
# defining the linear transformation.
return mult(x, self.filters)
def _inverse(self, y):
y = mult(y, self.filters.T)
if not self.whitened:
y = self.white.inverse(y)
return y
class CuBICANode(ICANode):
"""Perform Independent Component Analysis using the CuBICA algorithm.
Note that CuBICA is a batch-algorithm, which means that it needs
all input data before it can start and compute the ICs. The
algorithm is here given as a Node for convenience, but it actually
    accumulates all inputs it receives. Keep that in mind to avoid running
    out of memory when you have many components and many time samples.
As an alternative to this batch mode you might consider the telescope
mode (see the docs of the ``__init__`` method).
:ivar white: The whitening node used for preprocessing.
:ivar filters: The ICA filters matrix (this is the transposed of the
projection matrix after whitening).
:ivar convergence: The value of the convergence threshold.
.. admonition:: Reference
Blaschke, T. and Wiskott, L. (2003).
CuBICA: Independent Component Analysis by Simultaneous Third- and
Fourth-Order Cumulant Diagonalization.
IEEE Transactions on Signal Processing, 52(5), pp. 1250-1256.
"""
def core(self, data):
"""This is the core routine of a node inheriting from ICANode.
        As a subclass, the CuBICANode defines this function to return the achieved
convergence value. This function is also responsible for setting the
ICA filters matrix self.filters.
:param data: The data you want to perform ICA on.
:type data: numpy.ndarray
:return: The convergence value, i.e. the maximum angle of rotation.
:rtype: float
"""
# keep track of maximum angle of rotation
# angles vary in the range [-pi, +pi]
# put here -2pi < -pi < +pi
self.maxangle = [-2*numx.pi]
verbose = self.verbose
# we need to copy to avoid overwriting during rotation.
x = data.copy()
        # convergence criterion == maxangle
limit = self.limit
comp = x.shape[1]
tlen = x.shape[0]
# some constants
ct_c34 = 0.0625
ct_s34 = 0.25
ct_c44 = old_div(1.,384)
ct_s44 = old_div(1.,96)
# initial transposed rotation matrix == identity matrix
Qt = numx.identity(comp, dtype=self.dtype)
# maximum number of sweeps through all possible pairs of signals
num = int(1+round(numx.sqrt(comp)))
# start sweeping
for k in range(num):
maxangle = 0
for i in range(comp - 1):
for j in range(i+1, comp):
u1 = x[:, i]
u2 = x[:, j]
sq1 = x[:, i]*x[:, i]
sq2 = x[:, j]*x[:, j]
# calculate the cumulants of 3rd and 4th order.
C111 = old_div(mult(sq1, u1),tlen)
C112 = old_div(mult(sq1, u2),tlen)
C122 = old_div(mult(sq2, u1),tlen)
C222 = old_div(mult(sq2, u2),tlen)
C1111 = old_div(mult(sq1, sq1),tlen) - 3.
C1112 = old_div(mult(sq1*u1, u2),tlen)
C1122 = old_div(mult(sq1, sq2),tlen) - 1.
C1222 = old_div(mult(sq2*u2, u1),tlen)
C2222 = old_div(mult(sq2, sq2),tlen) - 3.
c_34 = ct_c34 * ( (C111*C111+C222*C222)-
3.*(C112*C112+C122*C122)-
2.*(C111*C122+C112*C222) )
s_34 = ct_s34 * ( C111*C112-C122*C222 )
c_44 = ct_c44 *( 7.*(C1111*C1111+C2222*C2222)-
16.*(C1112*C1112+C1222*C1222)-
12.*(C1111*C1122+C1122*C2222)-
36.*(C1122*C1122)-
32.*(C1112*C1222)-
2.*(C1111*C2222) )
s_44 = ct_s44 *( 7.*(C1111*C1112-C1222*C2222)+
6.*(C1112*C1122-C1122*C1222)+
(C1111*C1222-C1112*C2222) )
# rotation angle that maximize the contrast function
phi_max = -0.25 * numx.arctan2(s_34+s_44, c_34+c_44)
# get the new rotation matrix.
# Using the function rotate with angle 'phi' on
# a transformation matrix corresponds to the
# right-multiplication by a rotation matrix
# with angle '-phi'.
utils.rotate(Qt, phi_max, [i, j])
# rotate input data
utils.rotate(x, phi_max, [i, j])
# keep track of maximum angle of rotation
maxangle = max(maxangle, abs(float(phi_max)))
self.maxangle.append(maxangle)
if maxangle <= limit:
break
self.iter = k
if verbose:
print("\nSweeps: ", k)
self.filters = Qt
        # return the convergence criterion
return maxangle
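# A minimal usage sketch (illustrative, not part of the original module). CuBICA
# is a batch algorithm: train() only accumulates data, and stop_training()
# performs the whitening and the cumulant-based rotations; get_projmatrix() then
# returns the combined whitening + rotation from the mixin above.
def _example_cubica_usage():
    sources = numx_rand.random((2000, 2)) - 0.5   # two independent sources
    mixed = mult(sources, numx.array([[1.0, 0.5], [0.3, 1.0]]))  # linear mixture
    node = CuBICANode()
    node.train(mixed)
    node.stop_training()
    projection = node.get_projmatrix()            # input -> estimated sources
    return node.execute(mixed), projection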
class FastICANode(ICANode):
"""Perform Independent Component Analysis using the FastICA algorithm.
Note that FastICA is a batch-algorithm. This means that it needs
all input data before it can start and compute the ICs.
The algorithm is here given as a Node for convenience, but it
    actually accumulates all inputs it receives. Keep that in mind to avoid
    running out of memory when you have many components and many time samples.
    FastICA does not support the telescope mode (the convergence
    criterion is not robust in telescope mode).
History:
- 1.4.1998 created for Matlab by Jarmo Hurri, Hugo Gavert, Jaakko Sarela,
and Aapo Hyvarinen
- 7.3.2003 modified for Python by Thomas Wendler
- 3.6.2004 rewritten and adapted for scipy and MDP by MDP's authors
- 25.5.2005 now independent from scipy. Requires Numeric or numarray
- 26.6.2006 converted to numpy
- 14.9.2007 updated to Matlab version 2.5
:ivar white: The whitening node used for preprocessing.
:ivar filters: The ICA filters matrix (this is the transposed of the
projection matrix after whitening).
:ivar convergence: The value of the convergence threshold.
.. admonition:: Reference
Aapo Hyvarinen (1999).
Fast and Robust Fixed-Point Algorithms for Independent Component Analysis
IEEE Transactions on Neural Networks, 10(3):626-634.
"""
def __init__(self, approach = 'defl', g = 'pow3', guess = None,
fine_g = 'pow3', mu = 1,
sample_size = 1, fine_tanh = 1, fine_gaus = 1,
max_it = 5000, max_it_fine = 100,
failures = 5, coarse_limit=None, limit = 0.001, verbose = False,
whitened = False, white_comp = None, white_parm = None,
input_dim = None, dtype=None):
"""Initializes an object of type 'FastICANode'.
        :param approach: Approach to use. Possible values are
-'defl':deflation
-'symm': symmetric
:type approach: str
:param g: Nonlinearity to use. Possible values are
-'pow3': x^3
-'tanh': tanh(fine_tanh*x)
-'gaus': x*exp(-fine_gaus*x^2/2)
-'skew': x^2 (for skewed signals)
:type g: str
:param guess: Initial guess for the mixing matrix (ignored if None).
:param fine_g: Nonlinearity for fine tuning. Possible values are the same
as for 'g'. Set it to None to disable fine tuning.
:type fine_g: str
:param mu: Step size.
:param sample_size: Percentage of samples used in one iteration.
If sample_size < 1, samples are chosen in random order.
:type sample_size: float
:param fine_tanh: Parameter for 'tanh' nonlinearity.
:type fine_tanh: float
:param fine_gaus: Parameter for 'gaus' nonlinearity.
:type fine_gaus: float
:param max_it: Maximum number of iterations.
:type max_it: int
:param max_it_fine: Maximum number of iterations for fine tuning.
:type max_it_fine: int
:param failures: Maximum number of failures to allow in deflation mode.
:type failures: int
:param coarse_limit: Initial convergence threshold, to switch to
fine_g function (i.e. linear to non-linear) even
before reaching the limit and final tuning. Set
it to a value higher than limit to be in effect.
:type coarse_limit: float
:param limit: Convergence threshold.
:type limit: float
        :param verbose: Indicates whether information is to be reported about
the operation.
:type verbose: bool
:param whitened: Set whitened == True if input data are already whitened.
Otherwise the node will whiten the data itself.
:type whitened: bool
:param white_comp: If whitened == False, you can set 'white_comp' to the
number of whitened components to keep during the
calculation (i.e., the input dimensions are reduced to
white_comp by keeping the components of largest variance).
:type white_comp: int
:param white_parm: A dictionary with additional parameters for whitening.
It is passed directly to the WhiteningNode constructor. For example::
>>> white_parm = { 'svd' : True }
:type white_parm: dict
:param input_dim: The input dimensionality.
:type input_dim: int
:param dtype: The datatype.
:type dtype: numpy.dtype or str
"""
super(FastICANode, self).__init__(limit, False, verbose, whitened,
white_comp, white_parm, input_dim,
dtype)
if approach in ['defl', 'symm']:
self.approach = approach
else:
raise mdp.NodeException('%s approach method not known' % approach)
if g in ['pow3', 'tanh', 'gaus', 'skew']:
self.g = g
else:
raise mdp.NodeException('%s nonlinearity function not known' % g)
if fine_g in ['pow3', 'tanh', 'gaus', 'skew', None]:
self.fine_g = fine_g
else:
errmsg = '%s nonlinearity function not known' % fine_g
raise mdp.NodeException(errmsg)
if sample_size > 0 and sample_size <= 1:
self.sample_size = sample_size
else:
            raise mdp.NodeException('0 < sample_size <= 1, %f given' % sample_size)
self.mu = mu
self.stabilization = mu != 1
self.fine_tanh = fine_tanh
self.fine_gaus = fine_gaus
self.max_it = max_it
self.max_it_fine = max_it_fine
self.coarse_limit = coarse_limit
self.failures = failures
self.guess = guess
def _get_rsamples(self, X):
tlen = X.shape[1]
mask = numx.where(numx_rand.random(tlen) < self.sample_size)[0]
return X[:, mask]
def core(self, data):
"""This is the core routine of a node inheriting from ICANode.
As a subclass, the FastICANode defines this function to return the
achieved convergence value. This function is also responsible for
setting the ICA filters matrix self.filters.
:param data: The data you want to perform ICA on.
:type data: numpy.ndarray
:return: The convergence value.
:rtype: float
"""
# this is a more or less line per line translation of the original
# matlab code.
# Everything could be done better and more efficiently.
# I just had no time at the moment to do it.
# The logic behind the used_g hell is beyond my understanding :-)))
X = data.T
# casted constants
comp = X.shape[0]
tlen = X.shape[1]
dtype = self.dtype
# Default values and initial definitions
fine_tanh = self.fine_tanh
fine_gaus = self.fine_gaus
approach = self.approach
g = self.g
fine_g = self.fine_g
stabilization = self.stabilization
mu = self.mu
sample_size = self.sample_size
if self.guess is None:
# Take random orthonormal initial vectors.
guess = utils.random_rot(comp, dtype)
else:
# Use user supplied mixing matrix
guess = self._refcast(self.guess)
if not self.whitened:
guess = mult(guess, self.white.get_recmatrix(transposed=1))
limit = self.limit
coarse_limit = self.coarse_limit
max_it = self.max_it
max_it_fine = self.max_it_fine
failures = self.failures
verbose = self.verbose
# set non linearities. don't blame me for the awful logic: it comes
# from the matlab program. I didn't dare to understand it and change
# it.
if g == 'pow3':
gOrig = 10
elif g == 'tanh':
gOrig = 20
elif g == 'gaus':
gOrig = 30
else:
gOrig = 40
if sample_size != 1:
gOrig += 2
if mu != 1:
gOrig += 1
fine_tuning = True
if fine_g == 'pow3':
gFine = 11
elif fine_g == 'tanh':
gFine = 21
elif fine_g == 'gaus':
gFine = 31
elif fine_g == 'skew':
gFine = 41
else:
if mu == 1:
gFine = gOrig + 1
else:
stabilization = True
gFine = gOrig
fine_tuning = False
muK = 0.01
used_g = gOrig
stroke = 0
fine_tuned = False
coarse_limit_reached = False
lng = False
# SYMMETRIC APPROACH
if approach == 'symm':
# create list to store convergence
convergence = []
convergence_fine = []
# orthonormal initial vectors.
Q = guess
QOld = numx.zeros(Q.shape, dtype)
QOldF = numx.zeros(Q.shape, dtype)
# This is the actual fixed-point iteration loop.
for round in range(max_it + 1):
if round == max_it:
errstr = 'No convergence after %d steps\n' % max_it
raise mdp.NodeException(errstr)
# Symmetric orthogonalization. Q = Q * real(inv(Q' * Q)^(1/2));
Q = mult(Q, utils.sqrtm(utils.inv(mult(Q.T, Q))))
# Test for termination condition. Note that we consider
# opposite directions here as well.
v1 = 1.-abs((mult(Q.T, QOld)).diagonal()).min(axis=0)
convergence.append(v1)
v2 = 1.-abs((mult(Q.T, QOldF)).diagonal()).min(axis=0)
convergence_fine.append(v2)
if self.g != self.fine_g \
and coarse_limit is not None \
and convergence[round] < coarse_limit \
and not coarse_limit_reached:
if verbose:
print('Coarse convergence, switching to fine cost...')
used_g = gFine
coarse_limit_reached = True
if convergence[round] < limit:
if fine_tuning and (not fine_tuned):
if verbose:
print('Initial convergence, fine-tuning...')
fine_tuned = True
used_g = gFine
mu = muK * self.mu
QOld = numx.zeros(Q.shape, dtype)
                        QOldF = numx.zeros(Q.shape, dtype)
else:
if verbose:
print('Convergence after %d steps\n' % round)
break
if stabilization:
if (stroke == 0) and (convergence_fine[round] < limit):
if verbose:
print('Stroke!\n')
stroke = mu
mu = 0.5*mu
if used_g % 2 == 0:
used_g += 1
elif (stroke != 0):
mu = stroke
stroke = 0
if (mu == 1) and (used_g % 2 != 0):
used_g -= 1
elif (not lng) and (round > max_it//2):
if verbose:
print('Taking long (reducing step size)...')
lng = True
mu = 0.5*mu
if used_g % 2 == 0:
used_g += 1
QOldF = QOld
QOld = Q
# Show the progress...
if verbose:
msg = ('Step no. %d,'
' convergence: %.7f' % (round+1,convergence[round]))
print(msg)
# First calculate the independent components (u_i's).
# u_i = b_i' x = x' b_i. For all x:s simultaneously this is
# non linearity
if used_g == 10:
u = mult(X.T, Q)
Q = old_div(mult(X, u*u*u),tlen) - 3.*Q
elif used_g == 11:
u = mult(X.T, Q)
Gpow3 = u*u*u
Beta = (u*Gpow3).sum(axis=0)
D = numx.diag((old_div(1,(Beta - 3*tlen))))
Q = Q + mu * mult(Q, mult((mult(u.T, Gpow3) -
numx.diag(Beta)), D))
elif used_g == 12:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
Q = old_div(mult(Xsub, u*u*u),Xsub.shape[1]) - 3.*Q
elif used_g == 13:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
Gpow3 = u*u*u
Beta = (u*Gpow3).sum(axis=0)
D = numx.diag((old_div(1,(Beta - 3*Xsub.shape[1]))))
Q = Q + mu * mult(Q, mult((mult(u.T, Gpow3) -
numx.diag(Beta)), D))
elif used_g == 20:
u = mult(X.T, Q)
tang = numx.tanh(fine_tanh * u)
temp = old_div((1.-tang*tang).sum(axis=0),tlen)
Q = old_div(mult(X, tang),tlen) - temp * Q * fine_tanh
elif used_g == 21:
u = mult(X.T, Q)
tang = numx.tanh(fine_tanh * u)
Beta = (u*tang).sum(axis=0)
D = numx.diag(old_div(1,(Beta -
fine_tanh*(1.-tang*tang).sum(axis=0))))
Q = Q + mu * mult(Q,
mult((mult(u.T, tang)-
numx.diag(Beta)), D))
elif used_g == 22:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
tang = numx.tanh(fine_tanh * u)
temp = old_div((1.-tang*tang).sum(axis=0),Xsub.shape[1])
Q = old_div(mult(Xsub, tang),Xsub.shape[1]) - temp * Q * fine_tanh
elif used_g == 23:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
tang = numx.tanh(fine_tanh * u)
Beta = (u*tang).sum(axis=0)
D = numx.diag(old_div(1,(Beta -
fine_tanh*(1.-tang*tang).sum(axis=0))))
Q = Q + mu * mult(Q,
mult((mult(u.T, tang)-
numx.diag(Beta)), D))
elif used_g == 30:
u = mult(X.T, Q)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus*u2)*ex
Q = old_div((mult(X, gauss)-dgauss.sum(axis=0)*Q),tlen)
elif used_g == 31:
u = mult(X.T, Q)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gaus = u*ex
Beta = (u*gaus).sum(axis=0)
D = numx.diag(old_div(1,(Beta -
((1-fine_gaus*u2)*ex).sum(axis=0))))
Q = Q + mu * mult(Q,
mult((mult(u.T, gaus)-
numx.diag(Beta)), D))
elif used_g == 32:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus*u2)*ex
Q = old_div((mult(Xsub, gauss)-dgauss.sum(axis=0)*Q),Xsub.shape[1])
elif used_g == 33:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gaus = u*ex
Beta = (u*gaus).sum(axis=0)
D = numx.diag(old_div(1,(Beta -
((1-fine_gaus*u2)*ex).sum(axis=0))))
Q = Q + mu * mult(Q, mult((mult(u.T, gaus)-
numx.diag(Beta)), D))
elif used_g == 40:
u = mult(X.T, Q)
Q = old_div(mult(X, u*u),tlen)
elif used_g == 41:
u = mult(X.T, Q)
Gskew = u*u
Beta = (u*Gskew).sum(axis=0)
D = numx.diag(old_div(1,Beta))
Q = Q + mu * mult(Q, mult((mult(u.T, Gskew)-
numx.diag(Beta)), D))
elif used_g == 42:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
Q = old_div(mult(Xsub, u*u),Xsub.shape[1])
elif used_g == 43:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, Q)
Gskew = u*u
Beta = (u*Gskew).sum(axis=0)
D = numx.diag(old_div(1,Beta))
Q = Q + mu * mult(Q, mult((mult(u.T, Gskew)-
numx.diag(Beta)), D))
else:
errstr = 'Nonlinearity not found: %i' % used_g
raise mdp.NodeException(errstr)
self.convergence = numx.array(convergence)
self.convergence_fine = numx.array(convergence_fine)
ret = convergence[-1]
# DEFLATION APPROACH
elif approach == 'defl':
# adjust limit!
#limit = 1 - limit*limit*0.5
# create array to store convergence
convergence = []
convergence_fine = []
Q = numx.zeros((comp, comp), dtype=dtype)
round = 0
nfail = 0
while round < comp:
mu = self.mu
used_g = gOrig
stroke = 0
fine_tuned = False
lng = False
end_finetuning = 0
                # Take a random initial vector of length 1 and orthogonalize it
# with respect to the other vectors.
w = guess[:, round]
w -= mult(mult(Q, Q.T), w)
w /= utils.norm2(w)
wOld = numx.zeros(w.shape, dtype)
wOldF = numx.zeros(w.shape, dtype)
# This is the actual fixed-point iteration loop.
i = 1
gabba = 1
#for i in range(max_it + 1):
while i <= max_it + gabba:
# Project the vector into the space orthogonal to the space
# spanned by the earlier found basis vectors. Note that
# we can do the projection with matrix Q, since the zero
# entries do not contribute to the projection.
w -= mult(mult(Q, Q.T), w)
w /= utils.norm2(w)
if not fine_tuned:
if i == max_it + 1:
                            err_msg = ('Component number %d did not '
'converge in %d iterations.' % (round,
max_it))
if verbose:
print(err_msg)
if round == 0:
raise mdp.NodeException(err_msg)
nfail += 1
if nfail > failures:
err = ('Too many failures to '
'converge (%d). Giving up.' % nfail)
raise mdp.NodeException(err)
break
else:
if i >= end_finetuning:
wOld = w
# Test for termination condition. Note that the algorithm
# has converged if the direction of w and wOld is the same.
#conv = float(abs((w*wOld).sum()))
conv = min(utils.norm2(w-wOld), utils.norm2(w+wOld))
convergence.append(conv)
if conv < limit:
if fine_tuning and (not fine_tuned):
if verbose:
print('Initial convergence, fine-tuning...')
fine_tuned = True
gabba = max_it_fine
wOld = numx.zeros(w.shape, dtype)
wOldF = numx.zeros(w.shape, dtype)
used_g = gFine
mu = muK * self.mu
end_finetuning = max_it_fine + i
else:
nfail = 0
convergence[round] = conv
# Calculate ICA filter.
Q[:, round] = w.copy()
# Show the progress...
if verbose:
print('IC %d computed ( %d steps )' % (round+1,
i+1))
break
elif stabilization:
conv_fine = min(utils.norm2(w-wOldF),
utils.norm2(w+wOldF))
convergence_fine.append(conv_fine)
if (stroke == 0) and conv_fine < limit:
if verbose:
print('Stroke!')
stroke = mu
mu = 0.5*mu
if used_g % 2 == 0:
used_g += 1
elif (stroke != 0):
mu = stroke
stroke = 0
if (mu == 1) and (used_g % 2 != 0):
used_g -= 1
elif (not lng) and (i > max_it//2):
if verbose:
print('Taking long (reducing step size)...')
lng = True
mu = 0.5*mu
if used_g % 2 == 0:
used_g += 1
wOldF = wOld
wOld = w
if used_g == 10:
u = mult(X.T, w)
w = old_div(mult(X, u*u*u),tlen) - 3.*w
elif used_g == 11:
u = mult(X.T, w)
EXGpow3 = old_div(mult(X, u*u*u),tlen)
Beta = mult(w.T, EXGpow3)
w = w - mu * (EXGpow3 - Beta*w)/(3-Beta)
elif used_g == 12:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
w = old_div(mult(Xsub, u*u*u),Xsub.shape[1]) - 3.*w
elif used_g == 13:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
EXGpow3 = old_div(mult(Xsub, u*u*u),Xsub.shape[1])
Beta = mult(w.T, EXGpow3)
w = w - mu * (EXGpow3 - Beta*w)/(3-Beta)
elif used_g == 20:
u = mult(X.T, w)
tang = numx.tanh(fine_tanh * u)
temp = mult((1. - tang*tang).sum(axis=0), w)
w = old_div((mult(X, tang) - fine_tanh*temp),tlen)
elif used_g == 21:
u = mult(X.T, w)
tang = numx.tanh(fine_tanh * u)
Beta = mult(u.T, tang)
temp = (1. - tang*tang).sum(axis=0)
w = w-mu*(old_div((mult(X, tang)-Beta*w),(fine_tanh*temp-Beta)))
elif used_g == 22:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
tang = numx.tanh(fine_tanh * u)
temp = mult((1. - tang*tang).sum(axis=0), w)
w = old_div((mult(Xsub, tang) - fine_tanh*temp),Xsub.shape[1])
elif used_g == 23:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
tang = numx.tanh(fine_tanh * u)
Beta = mult(u.T, tang)
w = w - mu * (old_div((mult(Xsub, tang)-Beta*w),
(fine_tanh*(1. - tang*tang).sum(axis=0) -
Beta)))
elif used_g == 30:
u = mult(X.T, w)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus *u2)*ex
w = old_div((mult(X, gauss)-mult(dgauss.sum(axis=0), w)),tlen)
elif used_g == 31:
u = mult(X.T, w)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus *u2)*ex
Beta = mult(u.T, gauss)
w = w - mu*(old_div((mult(X, gauss)-Beta*w),
(dgauss.sum(axis=0)-Beta)))
elif used_g == 32:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus *u2)*ex
w = old_div((mult(Xsub, gauss)-
mult(dgauss.sum(axis=0), w)),Xsub.shape[1])
elif used_g == 33:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
u2 = u*u
ex = numx.exp(-fine_gaus*u2*0.5)
gauss = u*ex
dgauss = (1. - fine_gaus *u2)*ex
Beta = mult(u.T, gauss)
w = w - mu*(old_div((mult(Xsub, gauss)-Beta*w),
(dgauss.sum(axis=0)-Beta)))
elif used_g == 40:
u = mult(X.T, w)
w = old_div(mult(X, u*u),tlen)
elif used_g == 41:
u = mult(X.T, w)
EXGskew = old_div(mult(X, u*u), tlen)
Beta = mult(w.T, EXGskew)
w = w - mu * (EXGskew - mult(Beta, w))/(-Beta)
elif used_g == 42:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
w = old_div(mult(Xsub, u*u),Xsub.shape[1])
elif used_g == 43:
Xsub = self._get_rsamples(X)
u = mult(Xsub.T, w)
EXGskew = old_div(mult(Xsub, u*u), Xsub.shape[1])
Beta = mult(w.T, EXGskew)
w = w - mu * (EXGskew - Beta*w)/(-Beta)
else:
errstr = 'Nonlinearity not found: %i' % used_g
raise mdp.NodeException(errstr)
# Normalize the new w.
w /= utils.norm2(w)
i += 1
round += 1
self.convergence = numx.array(convergence)
self.convergence_fine = numx.array(convergence_fine)
ret = convergence[-1]
self.filters = Q
return ret
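# A minimal usage sketch (illustrative, not part of the original module), using
# the symmetric approach with the kurtosis-based 'pow3' nonlinearity; the
# deflation approach and the other nonlinearities from the constructor docstring
# are selected the same way.
def _example_fastica_usage():
    sources = numx_rand.random((2000, 2)) - 0.5
    mixed = mult(sources, numx.array([[1.0, 0.4], [0.2, 1.0]]))
    node = FastICANode(approach='symm', g='pow3')
    node.train(mixed)
    node.stop_training()
    return node.execute(mixed)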
class TDSEPNode(ISFANode, ProjectMatrixMixin):
"""Perform Independent Component Analysis using the TDSEP algorithm.
.. note::
        TDSEP, as implemented in this Node, is an online algorithm,
        i.e. it is suited to be trained on huge data sets, provided that the
        training is done by sending small chunks of data at a time.
:ivar white: The whitening node used for preprocessing.
:ivar filters: The ICA filters matrix (this is the transposed of the
projection matrix after whitening).
:ivar convergence: The value of the convergence threshold.
.. admonition:: Reference
Ziehe, Andreas and Muller, Klaus-Robert (1998).
TDSEP an efficient algorithm for blind separation using time structure.
in Niklasson, L, Boden, M, and Ziemke, T (Editors), Proc. 8th Int. Conf.
Artificial Neural Networks (ICANN 1998).
"""
def __init__(self, lags=1, limit = 0.00001, max_iter=10000,
verbose = False, whitened = False, white_comp = None,
white_parm = None, input_dim = None, dtype = None):
"""Initializes an object of type 'TDSEPNode'.
:param lags: List of time-lags to generate the time-delayed covariance
matrices. If lags is an integer, time-lags 1,2,...,'lags'
are used.
:type lags: list or int
.. note:: Time-lag == 0 (instantaneous correlation) is
always implicitly used.
:param limit: Convergence threshold.
:type limit: float
        :param max_iter: If the algorithm does not achieve convergence within
            max_iter iterations, an Exception is raised.
Should be larger than 100.
:type max_iter: int
        :param verbose: Indicates whether information is to be reported about
the operation.
:type verbose: bool
:param whitened: Set whitened is True if input data are already whitened.
Otherwise the node will whiten the data itself.
:type whitened: bool
:param white_comp: If whitened is False, you can set 'white_comp' to the
number of whitened components to keep during the
calculation (i.e., the input dimensions are reduced to
white_comp by keeping the components of largest variance).
:type white_comp: int
:param white_parm: A dictionary with additional parameters for whitening.
It is passed directly to the WhiteningNode constructor. For example::
>>> white_parm = { 'svd' : True }
:type white_parm: dict
:param input_dim: The input dimensionality.
:type input_dim: int
:param dtype: The datatype.
:type dtype: numpy.dtype or str
"""
super(TDSEPNode, self).__init__(lags=lags, sfa_ica_coeff=(0., 1.),
icaweights=None, sfaweights=None,
whitened=whitened,
white_comp=white_comp,
                                        white_parm=white_parm,
eps_contrast=limit,
max_iter=max_iter, RP=None,
verbose=verbose,
input_dim=input_dim,
output_dim=None,
dtype=dtype)
def _stop_training(self, covs=None):
super(TDSEPNode, self)._stop_training(covs)
# set filters
self.filters = self.RP
# set convergence
        self.convergence = self.final_contrast
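# A minimal usage sketch (illustrative, not part of the original module). Unlike
# the batch nodes above, TDSEPNode can be trained on small chunks of a large,
# time-structured data set before stop_training() is called.
def _example_tdsep_usage():
    t = numx.linspace(0, 100, 5000)
    sources = numx.array([numx.sin(2 * t), numx.cos(7 * t)]).T
    mixed = mult(sources, numx.array([[1.0, 0.6], [0.4, 1.0]]))
    node = TDSEPNode(lags=5)
    for chunk in numx.array_split(mixed, 10):
        node.train(chunk)
    node.stop_training()
    return node.execute(mixed)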
# Source: /CatLearn-0.6.2.tar.gz/CatLearn-0.6.2/catlearn/ga/predictors.py
import time
from catlearn.regression import GaussianProcess
def minimize_error(train_features, train_targets, test_features, test_targets):
"""A generic fitness function.
This fitness function will minimize the cost function.
Parameters
----------
train_features : array
The training features.
train_targets : array
The training targets.
test_features : array
        The test features.
test_targets : array
The test targets.
"""
kernel = [{'type': 'gaussian', 'width': 1., 'scaling': 1.,
'dimension': 'single'}]
gp = GaussianProcess(train_fp=train_features,
train_target=train_targets,
kernel_list=kernel,
regularization=1e-2,
optimize_hyperparameters=True,
scale_data=True)
pred = gp.predict(test_fp=test_features, test_target=test_targets,
get_validation_error=True,
get_training_error=True)
score = pred['validation_error']['rmse_average']
return [-score]
def minimize_error_descriptors(train_features, train_targets, test_features,
test_targets):
"""A generic fitness function.
This fitness function will minimize the cost function as well as the number
    of descriptors. This will provide a Pareto optimal set of solutions upon
convergence.
Parameters
----------
train_features : array
The training features.
train_targets : array
The training targets.
test_features : array
        The test features.
test_targets : array
The test targets.
"""
kernel = [{'type': 'gaussian', 'width': 1., 'scaling': 1.,
'dimension': 'single'}]
gp = GaussianProcess(train_fp=train_features,
train_target=train_targets,
kernel_list=kernel,
regularization=1e-2,
optimize_hyperparameters=True,
scale_data=True)
pred = gp.predict(test_fp=test_features, test_target=test_targets,
get_validation_error=True,
get_training_error=True)
score = pred['validation_error']['rmse_average']
dimension = train_features.shape[1]
return [-score, -dimension]
def minimize_error_time(train_features, train_targets, test_features,
test_targets):
"""A generic fitness function.
This fitness function will minimize the cost function as well as the time
    to train the model. This will provide a Pareto optimal set of solutions
upon convergence.
Parameters
----------
train_features : array
The training features.
train_targets : array
The training targets.
test_features : array
        The test features.
test_targets : array
The test targets.
"""
kernel = [{'type': 'gaussian', 'width': 1., 'scaling': 1.,
'dimension': 'single'}]
stime = time.time()
gp = GaussianProcess(train_fp=train_features,
train_target=train_targets,
kernel_list=kernel,
regularization=1e-2,
optimize_hyperparameters=True,
scale_data=True)
timing = time.time() - stime
pred = gp.predict(test_fp=test_features, test_target=test_targets,
get_validation_error=True,
get_training_error=True)
score = pred['validation_error']['rmse_average']
    return [-score, -timing]
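# A minimal usage sketch (illustrative; the arrays below are random placeholder
# data). Each fitness function above trains a Gaussian process on the training
# split, scores it on the test split and returns negated objectives, since the
# genetic algorithm maximizes fitness.
def _example_fitness_call():
    import numpy as np
    features = np.random.rand(60, 4)
    targets = np.sum(features, axis=1) + 0.01 * np.random.rand(60)
    return minimize_error(features[:40], targets[:40],
                          features[40:], targets[40:])  # e.g. [-rmse]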
# Source: /Nuitka_winsvc-1.7.10-cp310-cp310-win_amd64.whl/nuitka/code_generation/ExceptionCodes.py
from nuitka.PythonVersions import python_version
from .CodeHelpers import (
generateExpressionCode,
withObjectCodeTemporaryAssignment,
)
from .templates.CodeTemplatesExceptions import (
template_publish_exception_to_handler,
)
def getExceptionIdentifier(exception_type):
assert "PyExc" not in exception_type, exception_type
if exception_type == "NotImplemented":
return "Py_NotImplemented"
return "PyExc_%s" % exception_type
def generateExceptionRefCode(to_name, expression, emit, context):
exception_type = expression.getExceptionName()
with withObjectCodeTemporaryAssignment(
to_name, "exception_name", expression, emit, context
) as value_name:
emit("%s = %s;" % (value_name, getExceptionIdentifier(exception_type)))
def getTracebackMakingIdentifier(context, lineno_name):
frame_handle = context.getFrameHandle()
assert frame_handle is not None
return "MAKE_TRACEBACK(%s, %s)" % (frame_handle, lineno_name)
def generateExceptionCaughtTypeCode(to_name, expression, emit, context):
keeper_variables = context.getExceptionKeeperVariables()
with withObjectCodeTemporaryAssignment(
to_name, "exception_caught_type", expression, emit, context
) as value_name:
if keeper_variables[0] is None:
emit("%s = EXC_TYPE(PyThreadState_GET());" % (value_name,))
else:
emit("%s = %s;" % (value_name, keeper_variables[0]))
def generateExceptionCaughtValueCode(to_name, expression, emit, context):
keeper_variables = context.getExceptionKeeperVariables()
with withObjectCodeTemporaryAssignment(
to_name, "exception_caught_value", expression, emit, context
) as value_name:
if keeper_variables[1] is None:
emit("%s = EXC_VALUE(PyThreadState_GET());" % (value_name,))
else:
if python_version >= 0x270:
emit("%s = %s;" % (value_name, keeper_variables[1]))
else:
emit(
"%s = %s ? %s : Py_None;"
% (value_name, keeper_variables[1], keeper_variables[1])
)
def generateExceptionCaughtTracebackCode(to_name, expression, emit, context):
keeper_variables = context.getExceptionKeeperVariables()
with withObjectCodeTemporaryAssignment(
to_name, "exception_caught_tb", expression, emit, context
) as value_name:
if keeper_variables[2] is None:
if python_version < 0x3B0:
emit(
"%s = (PyObject *)EXC_TRACEBACK(PyThreadState_GET());"
% (value_name,)
)
else:
emit(
"%s = (PyObject *)GET_EXCEPTION_TRACEBACK(EXC_VALUE(PyThreadState_GET()));"
% (value_name,)
)
else:
emit(
"""\
if (%(keeper_tb)s != NULL) {
%(to_name)s = (PyObject *)%(keeper_tb)s;
Py_INCREF(%(to_name)s);
} else {
%(to_name)s = (PyObject *)%(tb_making)s;
}
"""
% {
"to_name": value_name,
"keeper_tb": keeper_variables[2],
"tb_making": getTracebackMakingIdentifier(
context=context, lineno_name=keeper_variables[3]
),
}
)
context.addCleanupTempName(value_name)
def getExceptionUnpublishedReleaseCode(emit, context):
keeper_variables = context.getExceptionKeeperVariables()
if keeper_variables[0] is not None:
emit("Py_DECREF(%s);" % keeper_variables[0])
emit("Py_XDECREF(%s);" % keeper_variables[1])
emit("Py_XDECREF(%s);" % keeper_variables[2])
def generateExceptionPublishCode(statement, emit, context):
# This statement has no attributes really, pylint: disable=unused-argument
# Current variables cannot be used anymore now.
(
keeper_type,
keeper_value,
keeper_tb,
keeper_lineno,
) = context.setExceptionKeeperVariables((None, None, None, None))
emit(
template_publish_exception_to_handler
% {
"tb_making": getTracebackMakingIdentifier(
context=context, lineno_name=keeper_lineno
),
"keeper_tb": keeper_tb,
"keeper_lineno": keeper_lineno,
"frame_identifier": context.getFrameHandle(),
}
)
# TODO: Make this one thing for performance with thread state shared, also for less code,
# then we should not make it in header anymore. Might be more scalable too.
emit(
"PUBLISH_CURRENT_EXCEPTION(&%s, &%s, &%s);"
% (keeper_type, keeper_value, keeper_tb)
)
def generateBuiltinMakeExceptionCode(to_name, expression, emit, context):
# We try to make optimal code for various cases, pylint: disable=too-many-locals
from .CallCodes import getCallCodeNoArgs, getCallCodePosArgsQuick
exception_arg_names = []
for exception_arg in expression.subnode_args:
exception_arg_name = context.allocateTempName("make_exception_arg")
generateExpressionCode(
to_name=exception_arg_name,
expression=exception_arg,
emit=emit,
context=context,
)
exception_arg_names.append(exception_arg_name)
exception_type = expression.getExceptionName()
with withObjectCodeTemporaryAssignment(
to_name, "exception_made", expression, emit, context
) as value_name:
if exception_arg_names:
getCallCodePosArgsQuick(
to_name=value_name,
called_name=getExceptionIdentifier(exception_type),
expression=expression,
arg_names=exception_arg_names,
emit=emit,
context=context,
)
else:
getCallCodeNoArgs(
to_name=value_name,
called_name=getExceptionIdentifier(exception_type),
expression=expression,
emit=emit,
context=context,
)
if expression.getExceptionName() == "ImportError" and python_version >= 0x300:
from .PythonAPICodes import getReferenceExportCode
import_error_name_expression = expression.subnode_name
if import_error_name_expression is not None:
exception_importerror_name = context.allocateTempName(
"make_exception_importerror_name"
)
generateExpressionCode(
to_name=exception_importerror_name,
expression=import_error_name_expression,
emit=emit,
context=context,
allow_none=True,
)
getReferenceExportCode(exception_importerror_name, emit, context)
if context.needsCleanup(exception_importerror_name):
context.removeCleanupTempName(exception_importerror_name)
emit(
"((PyImportErrorObject *)%s)->name = %s;"
% (to_name, exception_importerror_name)
)
import_error_path_expression = expression.subnode_path
if import_error_path_expression is not None:
exception_importerror_path = context.allocateTempName(
"make_exception_importerror_path"
)
generateExpressionCode(
to_name=exception_importerror_path,
expression=import_error_path_expression,
emit=emit,
context=context,
allow_none=True,
)
getReferenceExportCode(exception_importerror_path, emit, context)
if context.needsCleanup(exception_importerror_path):
context.removeCleanupTempName(exception_importerror_path)
emit(
"((PyImportErrorObject *)%s)->path = %s;"
% (to_name, exception_importerror_path)
            )
# Source: /Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/code_generation/VariableCodes.py
from nuitka.nodes.shapes.BuiltinTypeShapes import (
tshape_bool,
tshape_int_or_long,
)
from nuitka.PythonVersions import python_version
from .c_types.CTypeNuitkaBooleans import CTypeNuitkaBoolEnum
from .c_types.CTypePyObjectPointers import (
CTypeCellObject,
CTypePyObjectPtr,
CTypePyObjectPtrPtr,
)
from .CodeHelpers import (
decideConversionCheckNeeded,
generateExpressionCode,
withObjectCodeTemporaryAssignment2,
)
from .ErrorCodes import (
getAssertionCode,
getErrorExitCode,
getLocalVariableReferenceErrorCode,
getNameReferenceErrorCode,
)
from .VariableDeclarations import VariableDeclaration
def generateAssignmentVariableCode(statement, emit, context):
assign_source = statement.subnode_source
variable = statement.getVariable()
variable_trace = statement.getVariableTrace()
if variable.isModuleVariable():
# Use "object" for module variables.
tmp_name = context.allocateTempName("assign_source")
else:
source_shape = assign_source.getTypeShape()
variable_declaration = getLocalVariableDeclaration(
context, variable, variable_trace
)
if source_shape is tshape_bool and variable_declaration.c_type == "nuitka_bool":
tmp_name = context.allocateTempName("assign_source", "nuitka_bool")
elif (
source_shape is tshape_int_or_long
and variable_declaration.c_type == "nuitka_ilong"
):
tmp_name = context.allocateTempName("assign_source", "nuitka_ilong")
else:
tmp_name = context.allocateTempName("assign_source")
generateExpressionCode(
expression=assign_source, to_name=tmp_name, emit=emit, context=context
)
getVariableAssignmentCode(
tmp_name=tmp_name,
variable=variable,
variable_trace=variable_trace,
needs_release=statement.needsReleasePreviousValue(),
inplace=statement.isInplaceSuspect(),
emit=emit,
context=context,
)
# Ownership of that reference must have been transferred.
assert not context.needsCleanup(tmp_name)
def generateDelVariableCode(statement, emit, context):
with context.withCurrentSourceCodeReference(statement.getSourceReference()):
_getVariableDelCode(
variable=statement.getVariable(),
variable_trace=statement.variable_trace,
previous_trace=statement.previous_trace,
tolerant=statement.is_tolerant,
needs_check=statement.is_tolerant
or statement.mayRaiseException(BaseException),
emit=emit,
context=context,
)
def getVariableReferenceCode(
to_name, variable, variable_trace, needs_check, conversion_check, emit, context
):
if variable.isModuleVariable():
owner = context.getOwner()
with withObjectCodeTemporaryAssignment2(
to_name, "mvar_value", conversion_check, emit, context
) as value_name:
# TODO: Rather have this passed from a distinct node type, so inlining
# doesn't change things.
emit(
"""\
%(value_name)s = GET_STRING_DICT_VALUE(moduledict_%(module_identifier)s, (Nuitka_StringObject *)%(var_name)s);
if (unlikely(%(value_name)s == NULL)) {
%(value_name)s = %(helper_code)s(%(var_name)s);
}
"""
% {
"helper_code": "GET_MODULE_VARIABLE_VALUE_FALLBACK_IN_FUNCTION"
if python_version < 0x340
and not owner.isCompiledPythonModule()
and not owner.isExpressionClassBody()
else "GET_MODULE_VARIABLE_VALUE_FALLBACK",
"module_identifier": context.getModuleCodeName(),
"value_name": value_name,
"var_name": context.getConstantCode(constant=variable.getName()),
}
)
getErrorExitCode(
check_name=value_name,
emit=emit,
context=context,
needs_check=needs_check,
)
else:
variable_declaration = getLocalVariableDeclaration(
context, variable, variable_trace
)
value_name = variable_declaration.getCType().emitValueAccessCode(
value_name=variable_declaration, emit=emit, context=context
)
if needs_check:
condition = value_name.getCType().getInitTestConditionCode(
value_name, inverted=True
)
getLocalVariableReferenceErrorCode(
variable=variable, condition=condition, emit=emit, context=context
)
else:
value_name.getCType().emitValueAssertionCode(
value_name=value_name, emit=emit
)
to_name.getCType().emitAssignConversionCode(
to_name=to_name,
value_name=value_name,
needs_check=conversion_check,
emit=emit,
context=context,
)
def generateVariableReferenceCode(to_name, expression, emit, context):
variable = expression.getVariable()
variable_trace = expression.getVariableTrace()
needs_check = expression.mayRaiseException(BaseException)
getVariableReferenceCode(
to_name=to_name,
variable=variable,
variable_trace=variable_trace,
needs_check=needs_check,
conversion_check=decideConversionCheckNeeded(to_name, expression),
emit=emit,
context=context,
)
def _getVariableCodeName(in_context, variable):
if in_context:
# Closure case:
return "closure_" + variable.getCodeName()
elif variable.isParameterVariable():
return "par_" + variable.getCodeName()
elif variable.isTempVariable():
return "tmp_" + variable.getCodeName()
else:
return "var_" + variable.getCodeName()
def getPickedCType(variable, context):
"""Return type to use for specific context."""
user = context.getEntryPoint()
owner = variable.getEntryPoint()
if owner is user:
if variable.isSharedTechnically():
            # TODO: That need not really be an impediment, we could share pointers to
# everything.
result = CTypeCellObject
else:
shapes = variable.getTypeShapes()
if len(shapes) > 1:
# Avoiding this for now, but we will have to use our enum
# based code variants, either generated or hard coded in
# the future.
return CTypePyObjectPtr
r = shapes.pop().getCType()
return r
elif context.isForDirectCall():
if variable.isSharedTechnically():
result = CTypeCellObject
else:
result = CTypePyObjectPtrPtr
else:
result = CTypeCellObject
return result
def decideLocalVariableCodeType(context, variable):
# Now must be local or temporary variable.
# Complexity should be moved out of here, pylint: disable=too-many-branches
user = context.getOwner()
owner = variable.getOwner()
user = user.getEntryPoint()
prefix = ""
if owner.isExpressionOutlineFunctionBase():
entry_point = owner.getEntryPoint()
prefix = (
"outline_%d_"
% entry_point.getTraceCollection().getOutlineFunctions().index(owner)
)
owner = entry_point
if variable.isTempVariableBool():
c_type = CTypeNuitkaBoolEnum
else:
c_type = getPickedCType(variable, context)
if owner is user:
result = _getVariableCodeName(in_context=False, variable=variable)
result = prefix + result
elif context.isForDirectCall():
if user.isExpressionGeneratorObjectBody():
closure_index = user.getClosureVariableIndex(variable)
result = "generator->m_closure[%d]" % closure_index
elif user.isExpressionCoroutineObjectBody():
closure_index = user.getClosureVariableIndex(variable)
result = "coroutine->m_closure[%d]" % closure_index
elif user.isExpressionAsyncgenObjectBody():
closure_index = user.getClosureVariableIndex(variable)
result = "asyncgen->m_closure[%d]" % closure_index
else:
result = _getVariableCodeName(in_context=True, variable=variable)
result = prefix + result
else:
closure_index = user.getClosureVariableIndex(variable)
if user.isExpressionGeneratorObjectBody():
result = "generator->m_closure[%d]" % closure_index
elif user.isExpressionCoroutineObjectBody():
result = "coroutine->m_closure[%d]" % closure_index
elif user.isExpressionAsyncgenObjectBody():
result = "asyncgen->m_closure[%d]" % closure_index
else:
# TODO: If this were context.getContextObjectName() this would be
# a one liner.
result = "self->m_closure[%d]" % closure_index
return result, c_type
def getLocalVariableDeclaration(context, variable, variable_trace):
# TODO: Decide if we will use variable trace, pylint: disable=unused-argument
# Now must be local or temporary variable.
user = context.getOwner()
owner = variable.getOwner()
user = user.getEntryPoint()
prefix = ""
if owner.isExpressionOutlineFunctionBase():
entry_point = owner.getEntryPoint()
prefix = (
"outline_%d_"
% entry_point.getTraceCollection().getOutlineFunctions().index(owner)
)
owner = entry_point
if owner is user:
result = _getVariableCodeName(in_context=False, variable=variable)
result = prefix + result
result = context.variable_storage.getVariableDeclarationTop(result)
assert result is not None, variable
return result
else:
closure_index = user.getClosureVariableIndex(variable)
return context.variable_storage.getVariableDeclarationClosure(closure_index)
def getVariableAssignmentCode(
context, emit, variable, variable_trace, tmp_name, needs_release, inplace
):
# For transfer of ownership.
if context.needsCleanup(tmp_name):
ref_count = 1
else:
ref_count = 0
if variable.isModuleVariable():
variable_declaration = VariableDeclaration(
"module_var", variable.getName(), None, None
)
else:
variable_declaration = getLocalVariableDeclaration(
context, variable, variable_trace
)
assert variable_declaration, (variable, context)
if variable.isLocalVariable():
context.setVariableType(variable, variable_declaration)
variable_declaration.getCType().emitVariableAssignCode(
value_name=variable_declaration,
needs_release=needs_release,
tmp_name=tmp_name,
ref_count=ref_count,
inplace=inplace,
emit=emit,
context=context,
)
if ref_count:
context.removeCleanupTempName(tmp_name)
def _getVariableDelCode(
variable, variable_trace, previous_trace, tolerant, needs_check, emit, context
):
if variable.isModuleVariable():
variable_declaration_old = VariableDeclaration(
"module_var", variable.getName(), None, None
)
variable_declaration_new = variable_declaration_old
else:
variable_declaration_old = getLocalVariableDeclaration(
context, variable, previous_trace
)
variable_declaration_new = getLocalVariableDeclaration(
context, variable, variable_trace
)
# TODO: We need to split this operation in two parts. Release and init
# are not one thing, until then require this.
assert variable_declaration_old == variable_declaration_new
if variable.isLocalVariable():
context.setVariableType(variable, variable_declaration_new)
if needs_check and not tolerant:
to_name = context.getBoolResName()
else:
to_name = None
variable_declaration_old.getCType().getDeleteObjectCode(
to_name=to_name,
value_name=variable_declaration_old,
tolerant=tolerant,
needs_check=needs_check,
emit=emit,
context=context,
)
if needs_check and not tolerant:
if variable.isModuleVariable():
getNameReferenceErrorCode(
variable_name=variable.getName(),
condition="%s == false" % to_name,
emit=emit,
context=context,
)
elif variable.isLocalVariable():
getLocalVariableReferenceErrorCode(
variable=variable,
condition="%s == false" % to_name,
emit=emit,
context=context,
)
else:
getAssertionCode(check="%s != false" % to_name, emit=emit)
def generateVariableReleaseCode(statement, emit, context):
variable = statement.getVariable()
# Only for normal variables we do this.
assert not variable.isModuleVariable()
variable_trace = statement.getVariableTrace()
if variable.isSharedTechnically():
# TODO: We might start to not allocate the cell object, then a check
# would be due. But currently we always allocate it.
needs_check = False
else:
needs_check = not variable_trace.mustHaveValue()
value_name = getLocalVariableDeclaration(context, variable, variable_trace)
c_type = value_name.getCType()
if not needs_check:
c_type.emitReleaseAssertionCode(value_name=value_name, emit=emit)
c_type.getReleaseCode(value_name=value_name, needs_check=needs_check, emit=emit)
    c_type.emitReinitCode(value_name=value_name, emit=emit)
# Source: /GoogleAppEngineMapReduce-1.9.22.0.tar.gz/GoogleAppEngineMapReduce-1.9.22.0/mapreduce/kv_pb.py
from google.net.proto import ProtocolBuffer
import array
import base64
import thread
try:
from google.net.proto import _net_proto___parse__python
except ImportError:
_net_proto___parse__python = None
__pychecker__ = """maxreturns=0 maxbranches=0 no-callinit
unusednames=printElemNumber,debug_strs no-special"""
if hasattr(ProtocolBuffer, 'ExtendableProtocolMessage'):
_extension_runtime = True
_ExtendableProtocolMessage = ProtocolBuffer.ExtendableProtocolMessage
else:
_extension_runtime = False
_ExtendableProtocolMessage = ProtocolBuffer.ProtocolMessage
class KeyValue(ProtocolBuffer.ProtocolMessage):
has_key_ = 0
key_ = ""
has_value_ = 0
value_ = ""
def __init__(self, contents=None):
if contents is not None: self.MergeFromString(contents)
def key(self): return self.key_
def set_key(self, x):
self.has_key_ = 1
self.key_ = x
def clear_key(self):
if self.has_key_:
self.has_key_ = 0
self.key_ = ""
def has_key(self): return self.has_key_
def value(self): return self.value_
def set_value(self, x):
self.has_value_ = 1
self.value_ = x
def clear_value(self):
if self.has_value_:
self.has_value_ = 0
self.value_ = ""
def has_value(self): return self.has_value_
def MergeFrom(self, x):
assert x is not self
if (x.has_key()): self.set_key(x.key())
if (x.has_value()): self.set_value(x.value())
if _net_proto___parse__python is not None:
def _CMergeFromString(self, s):
_net_proto___parse__python.MergeFromString(self, 'KeyValue', s)
if _net_proto___parse__python is not None:
def _CEncode(self):
return _net_proto___parse__python.Encode(self, 'KeyValue')
if _net_proto___parse__python is not None:
def _CEncodePartial(self):
return _net_proto___parse__python.EncodePartial(self, 'KeyValue')
if _net_proto___parse__python is not None:
def _CToASCII(self, output_format):
return _net_proto___parse__python.ToASCII(self, 'KeyValue', output_format)
if _net_proto___parse__python is not None:
def ParseASCII(self, s):
_net_proto___parse__python.ParseASCII(self, 'KeyValue', s)
if _net_proto___parse__python is not None:
def ParseASCIIIgnoreUnknown(self, s):
_net_proto___parse__python.ParseASCIIIgnoreUnknown(self, 'KeyValue', s)
def Equals(self, x):
if x is self: return 1
if self.has_key_ != x.has_key_: return 0
if self.has_key_ and self.key_ != x.key_: return 0
if self.has_value_ != x.has_value_: return 0
if self.has_value_ and self.value_ != x.value_: return 0
return 1
def IsInitialized(self, debug_strs=None):
initialized = 1
if (not self.has_key_):
initialized = 0
if debug_strs is not None:
debug_strs.append('Required field: key not set.')
if (not self.has_value_):
initialized = 0
if debug_strs is not None:
debug_strs.append('Required field: value not set.')
return initialized
def ByteSize(self):
n = 0
n += self.lengthString(len(self.key_))
n += self.lengthString(len(self.value_))
return n + 2
def ByteSizePartial(self):
n = 0
if (self.has_key_):
n += 1
n += self.lengthString(len(self.key_))
if (self.has_value_):
n += 1
n += self.lengthString(len(self.value_))
return n
def Clear(self):
self.clear_key()
self.clear_value()
def OutputUnchecked(self, out):
out.putVarInt32(10)
out.putPrefixedString(self.key_)
out.putVarInt32(18)
out.putPrefixedString(self.value_)
def OutputPartial(self, out):
if (self.has_key_):
out.putVarInt32(10)
out.putPrefixedString(self.key_)
if (self.has_value_):
out.putVarInt32(18)
out.putPrefixedString(self.value_)
def TryMerge(self, d):
while d.avail() > 0:
tt = d.getVarInt32()
if tt == 10:
self.set_key(d.getPrefixedString())
continue
if tt == 18:
self.set_value(d.getPrefixedString())
continue
# tag 0 is special: it's used to indicate an error.
# so if we see it we raise an exception.
if (tt == 0): raise ProtocolBuffer.ProtocolBufferDecodeError
d.skipData(tt)
def __str__(self, prefix="", printElemNumber=0):
res=""
if self.has_key_: res+=prefix+("key: %s\n" % self.DebugFormatString(self.key_))
if self.has_value_: res+=prefix+("value: %s\n" % self.DebugFormatString(self.value_))
return res
def _BuildTagLookupTable(sparse, maxtag, default=None):
return tuple([sparse.get(i, default) for i in xrange(0, 1+maxtag)])
kkey = 1
kvalue = 2
_TEXT = _BuildTagLookupTable({
0: "ErrorCode",
1: "key",
2: "value",
}, 2)
_TYPES = _BuildTagLookupTable({
0: ProtocolBuffer.Encoder.NUMERIC,
1: ProtocolBuffer.Encoder.STRING,
2: ProtocolBuffer.Encoder.STRING,
}, 2, ProtocolBuffer.Encoder.MAX_TYPE)
# stylesheet for XML output
_STYLE = \
""""""
_STYLE_CONTENT_TYPE = \
""""""
_PROTO_DESCRIPTOR_NAME = 'KeyValue'
_SERIALIZED_DESCRIPTOR = array.array('B')
_SERIALIZED_DESCRIPTOR.fromstring(base64.decodestring("Wi90aGlyZF9wYXJ0eS9weS9hcHBlbmdpbmVfbWFwcmVkdWNlL3NyYy9rdi5wcm90bwoIS2V5VmFsdWUTGgNrZXkgASgCMAk4AqMBqgEFY3R5cGWyAQRDb3JkpAEUExoFdmFsdWUgAigCMAk4AqMBqgEFY3R5cGWyAQRDb3JkpAEUugGWAQovdGhpcmRfcGFydHkvcHkvYXBwZW5naW5lX21hcHJlZHVjZS9zcmMva3YucHJvdG8iLgoIS2V5VmFsdWUSDwoDa2V5GAEgAigMQgIIARIRCgV2YWx1ZRgCIAIoDEICCAEiLwoJS2V5VmFsdWVzEg8KA2tleRgBIAIoDEICCAESEQoFdmFsdWUYAiADKAxCAggBQgIgAQ=="))
if _net_proto___parse__python is not None:
_net_proto___parse__python.RegisterType(
_SERIALIZED_DESCRIPTOR.tostring())
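# A minimal usage sketch (illustrative, not part of the generated code): the two
# required fields are set through the generated setters, after which the message
# reports itself as initialized and can be sized and serialized.
def _example_keyvalue_usage():
  kv = KeyValue()
  kv.set_key("mapper-key")
  kv.set_value("mapper-value")
  assert kv.IsInitialized()
  return kv.ByteSize()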
class KeyValues(ProtocolBuffer.ProtocolMessage):
has_key_ = 0
key_ = ""
def __init__(self, contents=None):
self.value_ = []
if contents is not None: self.MergeFromString(contents)
def key(self): return self.key_
def set_key(self, x):
self.has_key_ = 1
self.key_ = x
def clear_key(self):
if self.has_key_:
self.has_key_ = 0
self.key_ = ""
def has_key(self): return self.has_key_
def value_size(self): return len(self.value_)
def value_list(self): return self.value_
def value(self, i):
return self.value_[i]
def set_value(self, i, x):
self.value_[i] = x
def add_value(self, x):
self.value_.append(x)
def clear_value(self):
self.value_ = []
def MergeFrom(self, x):
assert x is not self
if (x.has_key()): self.set_key(x.key())
for i in xrange(x.value_size()): self.add_value(x.value(i))
if _net_proto___parse__python is not None:
def _CMergeFromString(self, s):
_net_proto___parse__python.MergeFromString(self, 'KeyValues', s)
if _net_proto___parse__python is not None:
def _CEncode(self):
return _net_proto___parse__python.Encode(self, 'KeyValues')
if _net_proto___parse__python is not None:
def _CEncodePartial(self):
return _net_proto___parse__python.EncodePartial(self, 'KeyValues')
if _net_proto___parse__python is not None:
def _CToASCII(self, output_format):
return _net_proto___parse__python.ToASCII(self, 'KeyValues', output_format)
if _net_proto___parse__python is not None:
def ParseASCII(self, s):
_net_proto___parse__python.ParseASCII(self, 'KeyValues', s)
if _net_proto___parse__python is not None:
def ParseASCIIIgnoreUnknown(self, s):
_net_proto___parse__python.ParseASCIIIgnoreUnknown(self, 'KeyValues', s)
def Equals(self, x):
if x is self: return 1
if self.has_key_ != x.has_key_: return 0
if self.has_key_ and self.key_ != x.key_: return 0
if len(self.value_) != len(x.value_): return 0
for e1, e2 in zip(self.value_, x.value_):
if e1 != e2: return 0
return 1
def IsInitialized(self, debug_strs=None):
initialized = 1
if (not self.has_key_):
initialized = 0
if debug_strs is not None:
debug_strs.append('Required field: key not set.')
return initialized
def ByteSize(self):
n = 0
n += self.lengthString(len(self.key_))
n += 1 * len(self.value_)
for i in xrange(len(self.value_)): n += self.lengthString(len(self.value_[i]))
return n + 1
def ByteSizePartial(self):
n = 0
if (self.has_key_):
n += 1
n += self.lengthString(len(self.key_))
n += 1 * len(self.value_)
for i in xrange(len(self.value_)): n += self.lengthString(len(self.value_[i]))
return n
def Clear(self):
self.clear_key()
self.clear_value()
def OutputUnchecked(self, out):
out.putVarInt32(10)
out.putPrefixedString(self.key_)
for i in xrange(len(self.value_)):
out.putVarInt32(18)
out.putPrefixedString(self.value_[i])
def OutputPartial(self, out):
if (self.has_key_):
out.putVarInt32(10)
out.putPrefixedString(self.key_)
for i in xrange(len(self.value_)):
out.putVarInt32(18)
out.putPrefixedString(self.value_[i])
def TryMerge(self, d):
while d.avail() > 0:
tt = d.getVarInt32()
if tt == 10:
self.set_key(d.getPrefixedString())
continue
if tt == 18:
self.add_value(d.getPrefixedString())
continue
# tag 0 is special: it's used to indicate an error.
# so if we see it we raise an exception.
if (tt == 0): raise ProtocolBuffer.ProtocolBufferDecodeError
d.skipData(tt)
def __str__(self, prefix="", printElemNumber=0):
res=""
if self.has_key_: res+=prefix+("key: %s\n" % self.DebugFormatString(self.key_))
cnt=0
for e in self.value_:
elm=""
if printElemNumber: elm="(%d)" % cnt
res+=prefix+("value%s: %s\n" % (elm, self.DebugFormatString(e)))
cnt+=1
return res
def _BuildTagLookupTable(sparse, maxtag, default=None):
return tuple([sparse.get(i, default) for i in xrange(0, 1+maxtag)])
kkey = 1
kvalue = 2
_TEXT = _BuildTagLookupTable({
0: "ErrorCode",
1: "key",
2: "value",
}, 2)
_TYPES = _BuildTagLookupTable({
0: ProtocolBuffer.Encoder.NUMERIC,
1: ProtocolBuffer.Encoder.STRING,
2: ProtocolBuffer.Encoder.STRING,
}, 2, ProtocolBuffer.Encoder.MAX_TYPE)
# stylesheet for XML output
_STYLE = \
""""""
_STYLE_CONTENT_TYPE = \
""""""
_PROTO_DESCRIPTOR_NAME = 'KeyValues'
_SERIALIZED_DESCRIPTOR = array.array('B')
_SERIALIZED_DESCRIPTOR.fromstring(base64.decodestring("Wi90aGlyZF9wYXJ0eS9weS9hcHBlbmdpbmVfbWFwcmVkdWNlL3NyYy9rdi5wcm90bwoJS2V5VmFsdWVzExoDa2V5IAEoAjAJOAKjAaoBBWN0eXBlsgEEQ29yZKQBFBMaBXZhbHVlIAIoAjAJOAOjAaoBBWN0eXBlsgEEQ29yZKQBFMIBCEtleVZhbHVl"))
if _net_proto___parse__python is not None:
_net_proto___parse__python.RegisterType(
_SERIALIZED_DESCRIPTOR.tostring())
if _extension_runtime:
pass
__all__ = ['KeyValue','KeyValues'] | PypiClean |
/Electrum-CHI-3.3.8.tar.gz/Electrum-CHI-3.3.8/packages/pip/_internal/pep425tags.py | from __future__ import absolute_import
import distutils.util
import logging
import platform
import re
import sys
import sysconfig
import warnings
from collections import OrderedDict
import pip._internal.utils.glibc
from pip._internal.utils.compat import get_extension_suffixes
from pip._internal.utils.typing import MYPY_CHECK_RUNNING
if MYPY_CHECK_RUNNING:
from typing import (
Tuple, Callable, List, Optional, Union, Dict
)
Pep425Tag = Tuple[str, str, str]
logger = logging.getLogger(__name__)
_osx_arch_pat = re.compile(r'(.+)_(\d+)_(\d+)_(.+)')
def get_config_var(var):
# type: (str) -> Optional[str]
try:
return sysconfig.get_config_var(var)
except IOError as e: # Issue #1074
warnings.warn("{}".format(e), RuntimeWarning)
return None
def get_abbr_impl():
# type: () -> str
"""Return abbreviated implementation name."""
if hasattr(sys, 'pypy_version_info'):
pyimpl = 'pp'
elif sys.platform.startswith('java'):
pyimpl = 'jy'
elif sys.platform == 'cli':
pyimpl = 'ip'
else:
pyimpl = 'cp'
return pyimpl
def get_impl_ver():
# type: () -> str
"""Return implementation version."""
impl_ver = get_config_var("py_version_nodot")
if not impl_ver or get_abbr_impl() == 'pp':
impl_ver = ''.join(map(str, get_impl_version_info()))
return impl_ver
def get_impl_version_info():
# type: () -> Tuple[int, ...]
"""Return sys.version_info-like tuple for use in decrementing the minor
version."""
if get_abbr_impl() == 'pp':
# as per https://github.com/pypa/pip/issues/2882
# attrs exist only on pypy
return (sys.version_info[0],
sys.pypy_version_info.major, # type: ignore
sys.pypy_version_info.minor) # type: ignore
else:
return sys.version_info[0], sys.version_info[1]
def get_impl_tag():
# type: () -> str
"""
Returns the Tag for this specific implementation.
"""
return "{}{}".format(get_abbr_impl(), get_impl_ver())
def get_flag(var, fallback, expected=True, warn=True):
# type: (str, Callable[..., bool], Union[bool, int], bool) -> bool
"""Use a fallback method for determining SOABI flags if the needed config
var is unset or unavailable."""
val = get_config_var(var)
if val is None:
if warn:
logger.debug("Config variable '%s' is unset, Python ABI tag may "
"be incorrect", var)
return fallback()
return val == expected
def get_abi_tag():
# type: () -> Optional[str]
"""Return the ABI tag based on SOABI (if available) or emulate SOABI
(CPython 2, PyPy)."""
soabi = get_config_var('SOABI')
impl = get_abbr_impl()
if not soabi and impl in {'cp', 'pp'} and hasattr(sys, 'maxunicode'):
d = ''
m = ''
u = ''
if get_flag('Py_DEBUG',
lambda: hasattr(sys, 'gettotalrefcount'),
warn=(impl == 'cp')):
d = 'd'
if get_flag('WITH_PYMALLOC',
lambda: impl == 'cp',
warn=(impl == 'cp')):
m = 'm'
if get_flag('Py_UNICODE_SIZE',
lambda: sys.maxunicode == 0x10ffff,
expected=4,
warn=(impl == 'cp' and
sys.version_info < (3, 3))) \
and sys.version_info < (3, 3):
u = 'u'
abi = '%s%s%s%s%s' % (impl, get_impl_ver(), d, m, u)
elif soabi and soabi.startswith('cpython-'):
abi = 'cp' + soabi.split('-')[1]
elif soabi:
abi = soabi.replace('.', '_').replace('-', '_')
else:
abi = None
return abi
def _is_running_32bit():
# type: () -> bool
return sys.maxsize == 2147483647
def get_platform():
# type: () -> str
"""Return our platform name 'win32', 'linux_x86_64'"""
if sys.platform == 'darwin':
# distutils.util.get_platform() returns the release based on the value
# of MACOSX_DEPLOYMENT_TARGET on which Python was built, which may
# be significantly older than the user's current machine.
release, _, machine = platform.mac_ver()
split_ver = release.split('.')
if machine == "x86_64" and _is_running_32bit():
machine = "i386"
elif machine == "ppc64" and _is_running_32bit():
machine = "ppc"
return 'macosx_{}_{}_{}'.format(split_ver[0], split_ver[1], machine)
# XXX remove distutils dependency
result = distutils.util.get_platform().replace('.', '_').replace('-', '_')
if result == "linux_x86_64" and _is_running_32bit():
# 32 bit Python program (running on a 64 bit Linux): pip should only
# install and run 32 bit compiled extensions in that case.
result = "linux_i686"
return result
def is_manylinux1_compatible():
# type: () -> bool
# Only Linux, and only x86-64 / i686
if get_platform() not in {"linux_x86_64", "linux_i686"}:
return False
# Check for presence of _manylinux module
try:
import _manylinux
return bool(_manylinux.manylinux1_compatible)
except (ImportError, AttributeError):
# Fall through to heuristic check below
pass
# Check glibc version. CentOS 5 uses glibc 2.5.
return pip._internal.utils.glibc.have_compatible_glibc(2, 5)
def is_manylinux2010_compatible():
# type: () -> bool
# Only Linux, and only x86-64 / i686
if get_platform() not in {"linux_x86_64", "linux_i686"}:
return False
# Check for presence of _manylinux module
try:
import _manylinux
return bool(_manylinux.manylinux2010_compatible)
except (ImportError, AttributeError):
# Fall through to heuristic check below
pass
# Check glibc version. CentOS 6 uses glibc 2.12.
return pip._internal.utils.glibc.have_compatible_glibc(2, 12)
def get_darwin_arches(major, minor, machine):
# type: (int, int, str) -> List[str]
"""Return a list of supported arches (including group arches) for
    the given major, minor and machine architecture of a macOS machine.
"""
arches = []
def _supports_arch(major, minor, arch):
# type: (int, int, str) -> bool
# Looking at the application support for macOS versions in the chart
# provided by https://en.wikipedia.org/wiki/OS_X#Versions it appears
# our timeline looks roughly like:
#
# 10.0 - Introduces ppc support.
# 10.4 - Introduces ppc64, i386, and x86_64 support, however the ppc64
# and x86_64 support is CLI only, and cannot be used for GUI
# applications.
# 10.5 - Extends ppc64 and x86_64 support to cover GUI applications.
# 10.6 - Drops support for ppc64
# 10.7 - Drops support for ppc
#
# Given that we do not know if we're installing a CLI or a GUI
# application, we must be conservative and assume it might be a GUI
# application and behave as if ppc64 and x86_64 support did not occur
# until 10.5.
#
# Note: The above information is taken from the "Application support"
# column in the chart not the "Processor support" since I believe
# that we care about what instruction sets an application can use
# not which processors the OS supports.
if arch == 'ppc':
return (major, minor) <= (10, 5)
if arch == 'ppc64':
return (major, minor) == (10, 5)
if arch == 'i386':
return (major, minor) >= (10, 4)
if arch == 'x86_64':
return (major, minor) >= (10, 5)
if arch in groups:
for garch in groups[arch]:
if _supports_arch(major, minor, garch):
return True
return False
groups = OrderedDict([
("fat", ("i386", "ppc")),
("intel", ("x86_64", "i386")),
("fat64", ("x86_64", "ppc64")),
("fat32", ("x86_64", "i386", "ppc")),
]) # type: Dict[str, Tuple[str, ...]]
if _supports_arch(major, minor, machine):
arches.append(machine)
for garch in groups:
if machine in groups[garch] and _supports_arch(major, minor, garch):
arches.append(garch)
arches.append('universal')
return arches
def get_all_minor_versions_as_strings(version_info):
# type: (Tuple[int, ...]) -> List[str]
versions = []
major = version_info[:-1]
# Support all previous minor Python versions.
for minor in range(version_info[-1], -1, -1):
versions.append(''.join(map(str, major + (minor,))))
return versions
def get_supported(
versions=None, # type: Optional[List[str]]
noarch=False, # type: bool
platform=None, # type: Optional[str]
impl=None, # type: Optional[str]
abi=None # type: Optional[str]
):
# type: (...) -> List[Pep425Tag]
"""Return a list of supported tags for each version specified in
`versions`.
:param versions: a list of string versions, of the form ["33", "32"],
or None. The first version will be assumed to support our ABI.
:param platform: specify the exact platform you want valid
tags for, or None. If None, use the local system platform.
:param impl: specify the exact implementation you want valid
tags for, or None. If None, use the local interpreter impl.
:param abi: specify the exact abi you want valid
tags for, or None. If None, use the local interpreter abi.
"""
supported = []
# Versions must be given with respect to the preference
if versions is None:
version_info = get_impl_version_info()
versions = get_all_minor_versions_as_strings(version_info)
impl = impl or get_abbr_impl()
abis = [] # type: List[str]
abi = abi or get_abi_tag()
if abi:
abis[0:0] = [abi]
abi3s = set()
for suffix in get_extension_suffixes():
if suffix.startswith('.abi'):
abi3s.add(suffix.split('.', 2)[1])
abis.extend(sorted(list(abi3s)))
abis.append('none')
if not noarch:
arch = platform or get_platform()
arch_prefix, arch_sep, arch_suffix = arch.partition('_')
if arch.startswith('macosx'):
# support macosx-10.6-intel on macosx-10.9-x86_64
match = _osx_arch_pat.match(arch)
if match:
name, major, minor, actual_arch = match.groups()
tpl = '{}_{}_%i_%s'.format(name, major)
arches = []
for m in reversed(range(int(minor) + 1)):
for a in get_darwin_arches(int(major), m, actual_arch):
arches.append(tpl % (m, a))
else:
# arch pattern didn't match (?!)
arches = [arch]
elif arch_prefix == 'manylinux2010':
# manylinux1 wheels run on most manylinux2010 systems with the
# exception of wheels depending on ncurses. PEP 571 states
# manylinux1 wheels should be considered manylinux2010 wheels:
# https://www.python.org/dev/peps/pep-0571/#backwards-compatibility-with-manylinux1-wheels
arches = [arch, 'manylinux1' + arch_sep + arch_suffix]
elif platform is None:
arches = []
if is_manylinux2010_compatible():
arches.append('manylinux2010' + arch_sep + arch_suffix)
if is_manylinux1_compatible():
arches.append('manylinux1' + arch_sep + arch_suffix)
arches.append(arch)
else:
arches = [arch]
# Current version, current API (built specifically for our Python):
for abi in abis:
for arch in arches:
supported.append(('%s%s' % (impl, versions[0]), abi, arch))
# abi3 modules compatible with older version of Python
for version in versions[1:]:
# abi3 was introduced in Python 3.2
if version in {'31', '30'}:
break
for abi in abi3s: # empty set if not Python 3
for arch in arches:
supported.append(("%s%s" % (impl, version), abi, arch))
# Has binaries, does not use the Python API:
for arch in arches:
supported.append(('py%s' % (versions[0][0]), 'none', arch))
# No abi / arch, but requires our implementation:
supported.append(('%s%s' % (impl, versions[0]), 'none', 'any'))
# Tagged specifically as being cross-version compatible
# (with just the major version specified)
supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any'))
# No abi / arch, generic Python
for i, version in enumerate(versions):
supported.append(('py%s' % (version,), 'none', 'any'))
if i == 0:
supported.append(('py%s' % (version[0]), 'none', 'any'))
return supported
implementation_tag = get_impl_tag() | PypiClean |
/Nuitka_winsvc-1.7.10-cp310-cp310-win_amd64.whl/nuitka/importing/IgnoreListing.py | import sys
from nuitka.Errors import NuitkaOptimizationError
from nuitka.PythonFlavors import isNuitkaPython
def getModuleIgnoreList():
return (
"mac",
"nt",
"os2",
"posix",
"_emx_link",
"riscos",
"ce",
"riscospath",
"riscosenviron",
"Carbon.File",
"org.python.core",
"_sha",
"_sha256",
"array",
"_sha512",
"_md5",
"_subprocess",
"msvcrt",
"cPickle",
"marshal",
"imp",
"sys",
"itertools",
"cStringIO",
"time",
"zlib",
"thread",
"math",
"errno",
"operator",
"signal",
"gc",
"exceptions",
"win32process",
"unicodedata",
"__builtin__",
"fcntl",
"_socket",
"_ssl",
"pwd",
"spwd",
"_random",
"grp",
"_io",
"_string",
"select",
"__main__",
"_winreg",
"_warnings",
"_sre",
"_functools",
"_hashlib",
"_collections",
"_locale",
"_codecs",
"_weakref",
"_struct",
"_dummy_threading",
"binascii",
"datetime",
"_ast",
"xxsubtype",
"_bytesio",
"cmath",
"_fileio",
"aetypes",
"aepack",
"MacOS",
"cd",
"cl",
"gdbm",
"gl",
"GL",
"aetools",
"_bisect",
"_heapq",
"_symtable",
"syslog",
"_datetime",
"_elementtree",
"_pickle",
"_posixsubprocess",
"_thread",
"atexit",
"pyexpat",
"_imp",
"_sha1",
"faulthandler",
"_osx_support",
"sysconfig",
"copyreg",
"ipaddress",
"reprlib",
"win32event",
"win32file",
# Python-Qt4 does these if missing python3 parts:
"PyQt4.uic.port_v3.string_io",
"PyQt4.uic.port_v3.load_plugin",
"PyQt4.uic.port_v3.ascii_upper",
"PyQt4.uic.port_v3.proxy_base",
"PyQt4.uic.port_v3.as_string",
# CPython3 does these:
"builtins",
"UserDict",
"os.path",
"StringIO",
# "test_array",
"_testcapi",
# test_applesingle.py
"applesingle",
# test_buffer.py
"_testbuffer",
# test_bsddb.py
"bsddb.test",
# test_collections.py
"collections.abc",
# test_compile.py
"__package__.module",
"__mangled_mod",
"__package__",
# test_ctypes
"ctypes.test",
# test_dbm.py
"dbm.dumb",
# test_dbm_ndbm.py
"dbm.ndbm",
# test_distutils.py
"distutils.tests",
"distutils.mwerkscompiler",
# test_docxmlrpc.py
"xmlrpc",
# test_emails.py
"email.test.test_email",
"email.test.test_email_renamed",
"email.test.test_email_codecs",
# test_email_codecs.py
"email.test",
# test_enum.py
"enum",
# test_file.py
"_pyio",
# test_frozen.py
"__hello__",
"__phello__",
"__phello__.spam",
"__phello__.foo",
# test_fork1.py
"fake test module",
# test_html.py
"html",
"html.entities",
# test_http_cookiejar.py
"urllib.request",
"http",
# test_imp.py
"importlib.test.import_",
"pep3147.foo",
"pep3147",
# test_import.py
"RAnDoM",
"infinite_reload",
"test_trailing_slash",
"nonexistent_xyzzy",
"_parent_foo.bar",
"_parent_foo",
"test_unc_path",
# test_importhooks.py
"hooktestmodule",
"hooktestpackage",
"hooktestpackage.sub",
"reloadmodule",
"hooktestpackage.sub.subber",
"hooktestpackage.oldabs",
"hooktestpackage.newrel",
"hooktestpackage.sub.subber.subest",
"hooktestpackage.futrel",
"sub",
"hooktestpackage.newabs",
        # test_importlib.py
"importlib.test.__main__",
"importlib",
# test_inspect.py
"inspect_fodder3",
"test.test_import",
# test_imageop.py
"imgfile",
# test_json.py
"json.tests",
# test_lib2to3.py
"lib2to3.tests",
# test_logging.py
"win32evtlog",
"win32evtlogutil",
"pywintypes",
# test_lzma.py
"lzma",
# test_macostools.py
"macostools",
# test_msilib.py
"msilib",
# test_namespace_pkgs.py
"foo.one",
"foo.two",
"parent.child.one",
"parent.child.two",
"parent.child.three",
"bar.two",
"a_test",
"parent.child",
"parent",
"bar",
# test_new.py
"Spam",
# test_ossaudiodev.py
"ossaudiodev",
# test_pathlib.py
"pathlib",
# test_platform.py
"gestalt",
# test_pickleable.py
"email.headerregistry",
# test_pkg.py
"t1",
"t2",
"t2.sub",
"t2.sub.subsub",
"t3.sub.subsub",
"t5",
"t6",
"t7",
"t7.sub",
"t7.sub.subsub",
"t8",
"t3.sub",
"t3",
# test_pkgutil.py
"foo",
"foo.bar",
"foo.baz",
"zipimport",
"pkg",
"pkg.subpkg",
"pkg.subpkg.c",
"pkg.subpkg.d",
# test_policy.py
"email.policy",
# test_urllib.py
"urllib.parse",
# test_urllib_response.py
"urllib.response",
# test_repr.py
"""areallylongpackageandmodulenametotestreprtruncation.\
areallylongpackageandmodulenametotestreprtruncation""",
"areallylongpackageandmodulenametotestreprtruncation",
# test_robotparser.py
"urllib.error",
"urllib.robotparser",
# test_runpy.py
"test.script_helper",
# test_secrets.py
"secrets",
# test_selectors.py
"selectors",
# test_statistics.py
"statistics",
# test_shelve.py
"test.test_dbm",
# test_strftime.py
"java",
# test_strop.py
"strop",
# test_sqlite3.py
"sqlite3.test",
# test_sundry.py
"distutils.emxccompiler",
"os2emxpath",
# test_tcl.py
"tkinter",
# test_tk.py
"runtktests",
"tkinter.test",
"tkinter.test.support",
# test_tools.py
"analyze_dxp",
"test_unparse",
"importlib.machinery",
# test_traceback.py
"test_bug737473",
# test_tracemalloc
"tracemalloc",
# test_typing.py
"mock",
"typing.io",
"typing.re",
# test_unittest.py
"unittest.test",
# test_wsgiref.py
"test.test_httpservers",
# test_xml_etree.py
"xml.parsers.expat.errors",
# test_xmlrpc.py
"xmlrpc.client",
# test_zipimport_support.py
"test_zipped_doctest",
"zip_pkg",
# test/test_zipimport_support.py
"test.test_cmd_line_script",
# test_winconsoleio.py
"_testconsole",
# Python3: modules that no longer exist
"commands",
"dummy_thread",
"_dummy_thread",
"httplib",
"Queue",
"sets",
        # Python2: modules that don't yet exist
"http.client",
"queue",
"winreg",
# Very old modules with older names
"simplejson",
"sets",
# Standalone mode "site" import flexibilities
"sitecustomize",
"usercustomize",
"apport_python_hook",
"_frozen_importlib",
# Standard library stuff that is optional
"comtypes.server.inprocserver",
"_tkinter",
"_scproxy",
"EasyDialogs",
"SOCKS",
"rourl2path",
"_winapi",
"win32api",
"win32con",
"_gestalt",
"java.lang",
"vms_lib",
"ic",
"readline",
"termios",
"_sysconfigdata",
"al",
"AL",
"sunaudiodev",
"SUNAUDIODEV",
"Audio_mac",
"nis",
"test.test_MimeWriter",
"dos",
"win32pipe",
"Carbon",
"Carbon.Files",
"sgi",
"ctypes.macholib.dyld",
"bsddb3",
"_pybsddb",
"_xmlrpclib",
"netbios",
"win32wnet",
"email.Parser",
"elementree.cElementTree",
"elementree.ElementTree",
"_gbdm",
"resource",
"crypt",
"bz2",
"dbm",
"mmap",
"Mailman",
# Mercurial test
"statprof",
"email.Generator",
"email.Utils",
# setuptools does a lot of speculative stuff
"wincertstore",
"setuptools_svn",
# reportlab does use this if present only and warns about itself.
"pyfribidi2",
"macfs",
# psutils
"_psutil_windows",
# nose
"unittest2",
"IronPython",
"clr",
"compiler.consts",
"new",
# pkg_resources
"pkg_resources.extern",
"ordereddict",
# appdirs
"com",
"win32com",
# gtk
"gdk",
# six
"six.moves",
# Python3 namespace packages.
"_frozen_importlib_external",
# Garbage from PyWin32
"pywin32_bootstrap",
)
def isIgnoreListedNotExistingModule(module_name):
if module_name in sys.builtin_module_names and not isNuitkaPython():
raise NuitkaOptimizationError(
"""
Your CPython version has a built-in module '%s' that is not ignore listed,
please report this as a bug."""
% module_name,
)
return module_name.hasOneOfNamespaces(getModuleIgnoreList()) | PypiClean |
/MarkDo-0.3.0.tar.gz/MarkDo-0.3.0/markdo/static/bower/marked/README.md | # marked
> A full-featured markdown parser and compiler, written in javascript. Built
> for speed.
## Install
``` bash
npm install marked --save
```
## Usage
Minimal usage:
```js
console.log(marked('I am using __markdown__.'));
// Outputs: <p>I am using <strong>markdown</strong>.</p>
```
Example using all options:
```js
marked.setOptions({
gfm: true,
tables: true,
breaks: false,
pedantic: false,
sanitize: true,
smartLists: true,
smartypants: false,
});
// Using async version of marked
marked('I am using __markdown__.', function (err, content) {
if (err) throw err;
console.log(content);
});
```
## marked(markdownString, [options], [callback])
### markdownString
Type: `String`
String of markdown source to be compiled.
### options
Type: `Object`
Hash of options. Can also be set using the `marked.setOptions` method as seen
above.
### callback
Type: `Function`
Function called when the `markdownString` has been fully parsed (used with
async highlighting). If the `options` argument is omitted, the callback can be
passed as the second argument, as seen above.
## Options
### gfm
Type: `Boolean`
Default: `true`
Enable [GitHub flavored markdown][gfm].
### tables
Type: `Boolean`
Default: `true`
Enable GFM [tables][tables].
This option requires the `gfm` option to be true.
### breaks
Type: `Boolean`
Default: `false`
Enable GFM [line breaks][breaks].
This option requires the `gfm` option to be true.
### pedantic
Type: `Boolean`
Default: `false`
Conform to obscure parts of `markdown.pl` as much as possible. Don't fix any of
the original markdown bugs or poor behavior.
### sanitize
Type: `Boolean`
Default: `false`
Sanitize the output. Ignore any HTML that has been input.
### smartLists
Type: `Boolean`
Default: `true`
Use smarter list behavior than the original markdown. May eventually be
default with the old behavior moved into `pedantic`.
### smartypants
Type: `Boolean`
Default: `false`
Use "smart" typograhic punctuation for things like quotes and dashes.
### renderer
Type: `Renderer`
Default: `new Renderer()`
A renderer instance for rendering ast to html. Learn more on the Renderer
section.
## Renderer
Renderer is the new way of rendering tokens to html. Here is a simple
example:
```javascript
var r = new marked.Renderer()
r.blockcode = function(code, lang) {
return highlight(lang, code).value;
}
console.log(marked(text, {renderer: r}))
```
You can control anything you want.
### Block Level
- code(code, language)
- blockquote(quote)
- html(html)
- heading(text, level)
- hr()
- list(body, ordered)
- listitem(text)
- paragraph(text)
- table(header, body)
- tablerow(content)
- tablecell(content, flags)
`flags` is an object like this:
```
{
header: true,
align: 'center'
}
```
### Span Level
- strong(text)
- em(text)
- codespan(code)
- br()
- del(text)
- link(href, title, text)
- image(href, title, text)
## Access to lexer and parser
You also have direct access to the lexer and parser if you so desire.
``` js
var tokens = marked.lexer(text, options);
console.log(marked.parser(tokens));
```
``` js
var lexer = new marked.Lexer(options);
var tokens = lexer.lex(text);
console.log(tokens);
console.log(lexer.rules);
```
## CLI
``` bash
$ marked -o hello.html
hello world
^D
$ cat hello.html
<p>hello world</p>
```
## Benchmarks
node v0.4.x
``` bash
$ node test --bench
marked completed in 12071ms.
showdown (reuse converter) completed in 27387ms.
showdown (new converter) completed in 75617ms.
markdown-js completed in 70069ms.
```
node v0.6.x
``` bash
$ node test --bench
marked completed in 6448ms.
marked (gfm) completed in 7357ms.
marked (pedantic) completed in 6092ms.
discount completed in 7314ms.
showdown (reuse converter) completed in 16018ms.
showdown (new converter) completed in 18234ms.
markdown-js completed in 24270ms.
```
__Marked is now faster than Discount, which is written in C.__
For those feeling skeptical: These benchmarks run the entire markdown test suite
1000 times. The test suite tests every feature. It doesn't cater to specific
aspects.
node v0.8.x
``` bash
$ node test --bench
marked completed in 3411ms.
marked (gfm) completed in 3727ms.
marked (pedantic) completed in 3201ms.
robotskirt completed in 808ms.
showdown (reuse converter) completed in 11954ms.
showdown (new converter) completed in 17774ms.
markdown-js completed in 17191ms.
```
## Another Javascript Markdown Parser
The point of marked was to create a markdown compiler where it was possible to
frequently parse huge chunks of markdown without having to worry about
caching the compiled output somehow...or blocking for an unnecessarily long time.
marked is very concise and still implements all markdown features. It is also
now fully compatible with the client-side.
marked more or less passes the official markdown test suite in its
entirety. This is important because a surprising number of markdown compilers
cannot pass more than a few tests. It was very difficult to get marked as
compliant as it is. It could have cut corners in several areas for the sake
of performance, but did not in order to be exactly what you expect in terms
of a markdown rendering. In fact, this is why marked could be considered at a
disadvantage in the benchmarks above.
Along with implementing every markdown feature, marked also implements [GFM
features][gfmf].
### High level
You can customize the result with a customized renderer.
``` js
var renderer = new marked.Renderer()
renderer.heading = function(text, level) {
return '<div class="h-' + level + '">' + text + '</div>'
}
var parse = function(src, options) {
options = options || {};
options.renderer = renderer
return marked.parser(marked.lexer(src, options), options);
}
console.log(parse('# h1'))
```
The renderer API:
```
code: function(code, lang)
blockquote: function(text)
html: function(html)
heading: function(text, level)
paragraph: function(text)
hr: function()
list: function(contents, isOrdered)
listitem: function(text)
table: function(header, body)
tablerow: function(content)
tablecell: function(text, flags)
// flags: {header: false, align: 'center'}
```
### Pro level
You also have direct access to the lexer and parser if you so desire.
``` js
var tokens = marked.lexer(text, options);
console.log(marked.parser(tokens));
```
``` js
var lexer = new marked.Lexer(options);
var tokens = lexer.lex(text);
console.log(tokens);
console.log(lexer.rules);
```
``` bash
$ node
> require('marked').lexer('> i am using marked.')
[ { type: 'blockquote_start' },
{ type: 'paragraph',
text: 'i am using marked.' },
{ type: 'blockquote_end' },
links: {} ]
```
## Running Tests & Contributing
If you want to submit a pull request, make sure your changes pass the test
suite. If you're adding a new feature, be sure to add your own test.
The marked test suite is set up slightly strangely: `test/new` is for all tests
that are not part of the original markdown.pl test suite (this is where your
test should go if you make one). `test/original` is only for the original
markdown.pl tests. `test/tests` houses both types of tests after they have been
combined and moved/generated by running `node test --fix` or `marked --test
--fix`.
In other words, if you have a test to add, add it to `test/new/` and then
regenerate the tests with `node test --fix`. Commit the result. If your test
uses a certain feature, for example, maybe it assumes GFM is *not* enabled, you
can add `.nogfm` to the filename. So, `my-test.text` becomes
`my-test.nogfm.text`. You can do this with any marked option. Say you want
line breaks and smartypants enabled, your filename should be:
`my-test.breaks.smartypants.text`.
To run the tests:
``` bash
cd marked/
node test
```
### Contribution and License Agreement
If you contribute code to this project, you are implicitly allowing your code
to be distributed under the MIT license. You are also implicitly verifying that
all code is your original work. `</legalese>`
## License
Copyright (c) 2011-2013, Christopher Jeffrey. (MIT License)
See LICENSE for more info.
[gfm]: https://help.github.com/articles/github-flavored-markdown
[gfmf]: http://github.github.com/github-flavored-markdown/
[pygmentize]: https://github.com/rvagg/node-pygmentize-bundled
[highlight]: https://github.com/isagalaev/highlight.js
[badge]: http://badge.fury.io/js/marked
[tables]: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#wiki-tables
[breaks]: https://help.github.com/articles/github-flavored-markdown#newlines
| PypiClean |
/MergePythonSDK.ticketing-2.2.2-py3-none-any.whl/MergePythonSDK/ats/model/issue_status_enum.py | import re # noqa: F401
import sys # noqa: F401
from typing import (
Optional,
Union,
List,
Dict,
)
from MergePythonSDK.shared.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
OpenApiModel,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from MergePythonSDK.shared.exceptions import ApiAttributeError
from MergePythonSDK.shared.model_utils import import_model_by_name
from MergePythonSDK.shared.model_utils import MergeEnumType
class IssueStatusEnum(ModelNormal, MergeEnumType):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
('value',): {
'ONGOING': "ONGOING",
'RESOLVED': "RESOLVED",
},
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
defined_types = {
'value': (str,),
}
return defined_types
@cached_property
def discriminator():
return None
attribute_map = {
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, value, *args, **kwargs): # noqa: E501
"""IssueStatusEnum - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, value, *args, **kwargs): # noqa: E501
"""IssueStatusEnum - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value | PypiClean |
/Flask-Ink-3.1.10.tar.gz/Flask-Ink-3.1.10/flask_ink/static/js/ink.sticky.js | Ink.createModule('Ink.UI.Sticky', '1', ['Ink.UI.Aux_1','Ink.Dom.Event_1','Ink.Dom.Css_1','Ink.Dom.Element_1','Ink.Dom.Selector_1'], function(Aux, Event, Css, Element, Selector ) {
'use strict';
/**
     * The Sticky component takes an element and changes its behavior so that, when the user scrolls, its position
     * is set to fixed and kept until the user scrolls back to the same place.
*
* @class Ink.UI.Sticky
* @constructor
* @version 1
* @param {String|DOMElement} selector
* @param {Object} [options] Options
* @param {Number} options.offsetBottom Number of pixels of distance from the bottomElement.
* @param {Number} options.offsetTop Number of pixels of distance from the topElement.
* @param {String} options.topElement CSS Selector that specifies a top element with which the component could collide.
* @param {String} options.bottomElement CSS Selector that specifies a bottom element with which the component could collide.
* @example
* <script>
* Ink.requireModules( ['Ink.Dom.Selector_1','Ink.UI.Sticky_1'], function( Selector, Sticky ){
* var menuElement = Ink.s('#menu');
* var stickyObj = new Sticky( menuElement );
* });
* </script>
*/
var Sticky = function( selector, options ){
if( typeof selector !== 'object' && typeof selector !== 'string'){
throw '[Sticky] :: Invalid selector defined';
}
if( typeof selector === 'object' ){
this._rootElement = selector;
} else {
this._rootElement = Selector.select( selector );
if( this._rootElement.length <= 0) {
throw "[Sticky] :: Can't find any element with the specified selector";
}
this._rootElement = this._rootElement[0];
}
/**
* Setting default options and - if needed - overriding it with the data attributes
*/
this._options = Ink.extendObj({
offsetBottom: 0,
offsetTop: 0,
topElement: undefined,
bottomElement: undefined
}, Element.data( this._rootElement ) );
/**
* In case options have been defined when creating the instance, they've precedence
*/
this._options = Ink.extendObj(this._options,options || {});
if( typeof( this._options.topElement ) !== 'undefined' ){
this._options.topElement = Aux.elOrSelector( this._options.topElement, 'Top Element');
} else {
this._options.topElement = Aux.elOrSelector( 'body', 'Top Element');
}
if( typeof( this._options.bottomElement ) !== 'undefined' ){
this._options.bottomElement = Aux.elOrSelector( this._options.bottomElement, 'Bottom Element');
} else {
this._options.bottomElement = Aux.elOrSelector( 'body', 'Top Element');
}
this._computedStyle = window.getComputedStyle ? window.getComputedStyle(this._rootElement, null) : this._rootElement.currentStyle;
this._dims = {
height: this._computedStyle.height,
width: this._computedStyle.width
};
this._init();
};
Sticky.prototype = {
/**
* Init function called by the constructor
*
* @method _init
* @private
*/
_init: function(){
Event.observe( document, 'scroll', Ink.bindEvent(this._onScroll,this) );
Event.observe( window, 'resize', Ink.bindEvent(this._onResize,this) );
this._calculateOriginalSizes();
this._calculateOffsets();
},
/**
* Scroll handler.
*
* @method _onScroll
* @private
*/
_onScroll: function(){
var viewport = (document.compatMode === "CSS1Compat") ? document.documentElement : document.body;
if(
( ( (Element.elementWidth(this._rootElement)*100)/viewport.clientWidth ) > 90 ) ||
( viewport.clientWidth<=649 )
){
if( Element.hasAttribute(this._rootElement,'style') ){
this._rootElement.removeAttribute('style');
}
return;
}
if( this._scrollTimeout ){
clearTimeout(this._scrollTimeout);
}
this._scrollTimeout = setTimeout(Ink.bind(function(){
var scrollHeight = Element.scrollHeight();
if( Element.hasAttribute(this._rootElement,'style') ){
if( scrollHeight <= (this._options.originalTop-this._options.originalOffsetTop)){
this._rootElement.removeAttribute('style');
} else if( ((document.body.scrollHeight-(scrollHeight+parseInt(this._dims.height,10))) < this._options.offsetBottom) ){
this._rootElement.style.position = 'fixed';
this._rootElement.style.top = 'auto';
this._rootElement.style.left = this._options.originalLeft + 'px';
if( this._options.offsetBottom < parseInt(document.body.scrollHeight - (document.documentElement.clientHeight+scrollHeight),10) ){
this._rootElement.style.bottom = this._options.originalOffsetBottom + 'px';
} else {
this._rootElement.style.bottom = this._options.offsetBottom - parseInt(document.body.scrollHeight - (document.documentElement.clientHeight+scrollHeight),10) + 'px';
}
this._rootElement.style.width = this._options.originalWidth + 'px';
} else if( ((document.body.scrollHeight-(scrollHeight+parseInt(this._dims.height,10))) >= this._options.offsetBottom) ){
this._rootElement.style.left = this._options.originalLeft + 'px';
this._rootElement.style.position = 'fixed';
this._rootElement.style.bottom = 'auto';
this._rootElement.style.left = this._options.originalLeft + 'px';
this._rootElement.style.top = this._options.originalOffsetTop + 'px';
this._rootElement.style.width = this._options.originalWidth + 'px';
}
} else {
if( scrollHeight <= (this._options.originalTop-this._options.originalOffsetTop)){
return;
}
this._rootElement.style.left = this._options.originalLeft + 'px';
this._rootElement.style.position = 'fixed';
this._rootElement.style.bottom = 'auto';
this._rootElement.style.left = this._options.originalLeft + 'px';
this._rootElement.style.top = this._options.originalOffsetTop + 'px';
this._rootElement.style.width = this._options.originalWidth + 'px';
}
this._scrollTimeout = undefined;
},this), 0);
},
/**
* Resize handler
*
* @method _onResize
* @private
*/
_onResize: function(){
if( this._resizeTimeout ){
clearTimeout(this._resizeTimeout);
}
this._resizeTimeout = setTimeout(Ink.bind(function(){
this._rootElement.removeAttribute('style');
this._calculateOriginalSizes();
this._calculateOffsets();
}, this),0);
},
/**
* On each resizing (and in the beginning) the component recalculates the offsets, since
* the top and bottom element heights might have changed.
*
* @method _calculateOffsets
* @private
*/
_calculateOffsets: function(){
/**
* Calculating the offset top
*/
if( typeof this._options.topElement !== 'undefined' ){
if( this._options.topElement.nodeName.toLowerCase() !== 'body' ){
var
topElementHeight = Element.elementHeight( this._options.topElement ),
topElementTop = Element.elementTop( this._options.topElement )
;
this._options.offsetTop = ( parseInt(topElementHeight,10) + parseInt(topElementTop,10) ) + parseInt(this._options.originalOffsetTop,10);
} else {
this._options.offsetTop = parseInt(this._options.originalOffsetTop,10);
}
}
/**
* Calculating the offset bottom
*/
if( typeof this._options.bottomElement !== 'undefined' ){
if( this._options.bottomElement.nodeName.toLowerCase() !== 'body' ){
var
bottomElementHeight = Element.elementHeight(this._options.bottomElement)
;
this._options.offsetBottom = parseInt(bottomElementHeight,10) + parseInt(this._options.originalOffsetBottom,10);
} else {
this._options.offsetBottom = parseInt(this._options.originalOffsetBottom,10);
}
}
this._onScroll();
},
/**
* Function to calculate the 'original size' of the element.
* It's used in the begining (_init method) and when a scroll happens
*
* @method _calculateOriginalSizes
* @private
*/
_calculateOriginalSizes: function(){
if( typeof this._options.originalOffsetTop === 'undefined' ){
this._options.originalOffsetTop = parseInt(this._options.offsetTop,10);
this._options.originalOffsetBottom = parseInt(this._options.offsetBottom,10);
}
this._options.originalTop = parseInt(this._rootElement.offsetTop,10);
this._options.originalLeft = parseInt(this._rootElement.offsetLeft,10);
if(isNaN(this._options.originalWidth = parseInt(this._dims.width,10))) {
this._options.originalWidth = 0;
}
this._options.originalWidth = parseInt(this._computedStyle.width,10);
}
};
return Sticky;
}); | PypiClean |
/DomiKnowS-0.533.tar.gz/DomiKnowS-0.533/domiknows/solver/constructor/constructor.py | import logging
from itertools import product, permutations
from domiknows.utils import isbad
from ..session.solver_session import SolverSession
class Constructor():
logger = logging.getLogger(__name__)
def __init__(self, lazy_not=True, self_relation=True):
self.lazy_not = lazy_not
self.self_relation = self_relation
def get_predication(self, predicate, idx, negative=False):
raise NotImplementedError
def isskip(self, value):
return isbad(value)
def candidates(self, data, *predicates_list):
candidates = {} # concept -> [(object,...), ...]
if self.self_relation:
gen = lambda enum_data, arity: product(enum_data, repeat=arity)
else:
gen = lambda enum_data, arity: permutations(enum_data, r=arity)
for arity, predicates in enumerate(predicates_list, 1):
for concept in predicates:
#assert concept not in candidates
# last one change first (c-order)
# abc rep=3 -> aaa, aab, aac, aba, abb, abc, ...
candidates[concept] = tuple(gen(enumerate(data), arity))
return candidates
def variables(self, session, candidates, *predicates_list):
variables = {} # (concept, (object,...)) -> variable
predictions = {} # (concept, (object,...)) -> prediction
variables_not = {} # (concept, (x,...)) -> variable
predictions_not = {} # (concept, (x,...)) -> prediction
# add variables
self.logger.debug('add variables')
for predicates in predicates_list:
for concept, predicate in predicates.items():
self.logger.debug('for %s', concept.name)
self.logger.debug(predicate)
for x in candidates[concept]: # flat: C-order -> last dim first!
idx, _ = zip(*x)
prediction = self.get_predication(predicate, idx)
if self.isskip(prediction): continue
var = session.var(
session.VTYPE.BIN, 0, 1,
name='{}_{}'.format(concept.name, str(x)))
self.logger.debug(' - add %s', var)
variables[concept, x] = var
predictions[concept, x] = prediction
if self.lazy_not:
self.logger.debug('lazy negative')
# add variables
self.logger.debug('lazy negative add variables')
for predicates in predicates_list:
for concept, predicate in predicates.items():
self.logger.debug('for %s', concept.name)
for x in candidates[concept]:
idx, _ = zip(*x)
prediction_not = self.get_predication(predicate, idx, negative=True)
if self.isskip(prediction_not): continue
var = session.var(
session.VTYPE.BIN, 0, 1,
name='lazy_not_{}_{}'.format(concept.name, str(x)))
self.logger.debug(' - add %s', var)
variables_not[concept, x] = var
predictions_not[concept, x] = prediction_not
return variables, predictions, variables_not, predictions_not
def constraints(self, session, candidates, variables, variables_not, *predicates_list):
constraints = {} # (rel, (object,...)) -> constr
constraints_not = {} # (rel, (x,...)) -> constr
# add constraints
self.logger.debug('add constraints')
for predicates in predicates_list:
for concept in predicates:
self.logger.debug('for %s', concept.name)
self.logger.debug(' - is_a')
for rel in concept.is_a():
self.logger.debug(' - - %s', rel.name)
# A is_a B : A(x) <= B(x)
for x in candidates[rel.src]:
if (rel.src, x) not in variables: continue
if (rel.dst, x) not in variables: continue
constr = session.constr(
variables[rel.src, x], SolverSession.CTYPE.LE, variables[rel.dst, x],
name='{}_{}'.format(rel.name, str(x)))
self.logger.debug(' - - add %s', constr)
assert (rel, x) not in constraints
constraints[rel, x] = constr
self.logger.debug(' - not_a')
for rel in concept.not_a():
self.logger.debug(' - - %s', rel.name)
# A not_a B : A(x) + B(x) <= 1
for x in candidates[rel.src]:
if (rel.src, x) not in variables: continue
if (rel.dst, x) not in variables: continue
constr = session.constr(
variables[rel.src, x] + variables[rel.dst, x], SolverSession.CTYPE.LE, 1,
name='{}_{}'.format(rel.name, str(x)))
self.logger.debug(' - - add %s', constr)
assert (rel, x) not in constraints
constraints[rel, x] = constr
self.logger.debug(' - has_a')
for arg_id, rel in enumerate(concept.has_a()): # TODO: need to include indirect ones like sp_tr is a tr while tr has a lm
self.logger.debug(' - - %s', rel.name)
# A has_a B : A(x,y,...) <= B(x)
for xy in candidates[rel.src]:
x = xy[arg_id]
if (rel.src, xy) not in variables: continue
if (rel.dst, (x,)) not in variables: continue
constr = session.constr(
variables[rel.src, xy], SolverSession.CTYPE.LE, variables[rel.dst, (x,)],
name='{}_{}_{}'.format(rel.name, str(xy), str(x)))
self.logger.debug(' - - add %s', constr)
assert (rel, xy, (x,)) not in constraints
constraints[rel, xy, (x,)] = constr
if self.lazy_not:
self.logger.debug('lazy negative add constraints')
for predicates in predicates_list:
for concept in predicates:
self.logger.debug('for %s', concept.name)
for x in candidates[concept]:
if (concept, x) not in variables: continue
if (concept, x) not in variables_not: continue
constr = session.constr(
variables[concept, x] + variables_not[concept, x], SolverSession.CTYPE.EQ, 1,
name='lazy_not_{}_{}'.format(concept.name, str(x)))
self.logger.debug(' - add %s', constr)
constraints_not[concept, x] = constr
return constraints, constraints_not
def objective(self, candidates, variables, predictions, variables_not, predictions_not, *predicates_list):
self.logger.debug('set objective')
objective = None
for predicates in predicates_list:
for concept in predicates:
for x in candidates[concept]:
if (concept, x) not in variables: continue
objective += variables[concept, x] * predictions[concept, x]
if self.lazy_not:
for predicates in predicates_list:
for concept in predicates:
for x in candidates[concept]:
if (concept, x) not in variables_not: continue
objective += variables_not[concept, x] * predictions_not[concept, x]
return objective
class ScoreConstructor(Constructor):
def get_predication(self, predicate, idx, negative=False):
if negative:
return predicate[(*idx, 0)]
return predicate[(*idx, 1)]
class ProbConstructor(Constructor):
def __init__(self, lazy_not=False, self_relation=True):
super().__init__(lazy_not=lazy_not, self_relation=self_relation)
def get_predication(self, predicate, idx, negative=False):
if negative:
return 1 - predicate[idx]
return predicate[idx]
class BatchMaskProbConstructor(Constructor):
def get_predication(self, predicate, idx, negative=False):
value, mask = predicate
if negative:
value = 1 - value
return value[(slice(value.shape[0]),*idx)], mask[(slice(mask.shape[0]),*idx)]
def isskip(self, value):
return False | PypiClean |
/MotorDeCalidad-1.12.27.tar.gz/MotorDeCalidad-1.12.27/README.md | # Motor de Calidad de Datos
A guide to updating the data quality engine.
## Accessing the Motor de Calidad
The data quality engine is accessed through the GitHub platform via the following links:
- Motor de Calidad (Azure Cloud version): https://github.com/enzoip98/MotorDeCalidad
- Motor de Calidad (local server version): https://github.com/enzoip98/MotorDeCalidadLocal
The repositories can be cloned from there. For collaborative development it is recommended to create branches scoped to the changes being made and to point the pull requests at the project you want to work on.
For local development and customization of the engine, copying the file is all that is required.
## Exporting the Code to .whl
To export the code as a .whl package, for easy distribution of the library, run the following line in the terminal.
```python
python setup.py sdist bdist_wheel
```
This command creates a dist folder containing both a compressed archive of the library and the library as a .whl file. The latter can then be installed with the instruction
```python
pip install "path to the file"
```
With this, the exported library is installed and can be used to run the data quality engine.
## The constants module
The constants module contains all the constants with their respective values.
### Rules
The first constant is the Rules class. It contains several subclasses, one per rule, and each subclass has three attributes: the name (name), the rule property (property) and the rule code (code). To include new rules, create a new subclass inside the Rules class and fill in its respective attributes.
```python
class NullRule:
name = "Completitud"
property = "Completitud de Registro"
code = "101"
```
### Json Parts
The JsonParts class contains the names of the attributes expected to be extracted from the JSON files that are read. To add new attributes, simply add the variable with the value exactly as it will appear in the JSON.
```python
class JsonParts:
Input = "INPUT"
```
### Fields
The Field class is initialized with the field name and additionally provides the value method, which allows a value to be assigned to the column. The fields used throughout the library are defined as shown below.
```python
CountryId = Field("CODIGO_DE_PAIS")
```
### Other constants
All the remaining constant values used in the code are defined next.
## The rules module
The rules module defines all the validation functions used in the code.
New rules may take as input whatever parameters are considered necessary; however, as a condition for correct operation, the function must return a list and the dataframe containing the records with errors.
The list must contain the following elements:
- Number of records
- Rule code
- Rule name
- Rule property
- Unique rule code (concatenation of code/entity/field)
- Threshold
- Data requirement
- Field
- Rule success ratio
- Number of failed records
```python
def validateNull(object:DataFrame,field: str,registersAmount: int,entity: str,threshold):
dataRequirement = f"El atributo {entity}.{field} debe ser obligatorio (NOT NULL)."
errorDf = object.filter(col(field).isNull())
nullCount = object.select(field).filter(col(field).isNull()).count()
notNullCount = registersAmount - nullCount
ratio = (notNullCount/ registersAmount) * OneHundred
return [registersAmount,Rules.NullRule.code,Rules.NullRule.name,Rules.NullRule.property,Rules.NullRule.code + "/" + entity + "/" + field,threshold,dataRequirement,field,ratio,nullCount], errorDf
```
When adding a new rule, the JsonParts section of the constants module must be updated with the new parameters being added, and the Start Validation function must likewise be updated to include the new rule, as described later in this document.
## The utilities module
## The functions module
The functions module is the library's main module and contains all the additional functionality needed to execute the rules.
### Start Validation
The startValidation function is the library's main function, from which the quality rule validation process starts. This function must invoke the JSON reading process to obtain the required parameters and then execute the validation. Finally, it must write the result and return the dataframe.
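A minimal sketch of that flow is shown below. The helper functions correspond to the sections that follow, but their exact signatures are assumptions made for illustration; only the overall sequence (read the JSON configuration, build the observed-data dataframe, run the rules, write and return the result) comes from the description above.
```python
from pyspark.sql import DataFrame, SparkSession

def startValidation(spark: SparkSession, configPath: str) -> DataFrame:
    # extractParamsFromJson, createErrorData, validateRules and writeDf are
    # sketched in the following sections; their signatures are assumed here.
    inputDf, rulesConfig, outputConfig = extractParamsFromJson(spark, configPath)
    errorData = createErrorData(spark, inputDf)
    validationDf = validateRules(inputDf, rulesConfig, errorData)
    writeDf(validationDf, outputConfig)
    return validationDf
```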
### Extract Params From Json
This function obtains the parameters from the configuration file in JSON format. It also reads the input on which the validation will be performed. Finally, it must ensure that every variable used later in the code is returned.
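A possible shape for this function, assuming the configuration is loaded with Python's json module; apart from JsonParts.Input, the configuration keys shown here are hypothetical:
```python
import json

def extractParamsFromJson(configPath: str):
    # Load the JSON configuration file
    with open(configPath) as f:
        config = json.load(f)

    # Read the input dataset the rules will run against
    inputConfig = config[JsonParts.Input]
    inputDf = readDf(inputConfig)

    # Hypothetical keys; return everything that is needed downstream
    rulesConfig = config["RULES"]
    entity = config["ENTITY"]
    output = config["OUTPUT"]
    return inputDf, rulesConfig, entity, output
```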
### Read Df
The readDf function reads inputs. It is used both to read the input on which the rules are executed and to read any datasets the rules themselves need (e.g. the referential integrity rule checks one dataset against another). Several read methods are defined in this function; to add a new one, create a new elif whose entry condition is the read type provided in the JSON through the TYPE attribute of the INPUT section. The read must be performed with Spark methods and must produce a DataFrame.
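A sketch of that elif structure, assuming an active SparkSession and a PATH attribute in the INPUT section; the formats shown (csv, parquet) are only examples of possible read types:
```python
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.getOrCreate()

def readDf(inputConfig: dict) -> DataFrame:
    readType = inputConfig["TYPE"]
    path = inputConfig["PATH"]  # hypothetical attribute

    if readType == "csv":
        return spark.read.option("header", True).csv(path)
    elif readType == "parquet":
        return spark.read.parquet(path)
    # A new read method goes in another elif keyed on the TYPE attribute of INPUT
    else:
        raise ValueError(f"Unsupported read type: {readType}")
```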
### Write Df
The writeDf function writes the results. It is used to write the final results of the quality validation and to write the observed data. As in the previous method, new write formats can be added; each one goes inside a new elif conditioned on the TYPE attribute of the OUTPUT section. Spark methods must be used for writing.
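A sketch mirroring readDf, with the same caveats (the PATH attribute and the formats shown are assumptions):
```python
from pyspark.sql import DataFrame

def writeDf(df: DataFrame, outputConfig: dict) -> None:
    writeType = outputConfig["TYPE"]
    path = outputConfig["PATH"]  # hypothetical attribute

    if writeType == "csv":
        df.write.mode("append").option("header", True).csv(path)
    elif writeType == "parquet":
        df.write.mode("append").parquet(path)
    # A new output format goes in another elif keyed on the TYPE attribute of OUTPUT
    else:
        raise ValueError(f"Unsupported write type: {writeType}")
```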
### Create Error Data
The createErrorData function creates the Observed Data dataframe, to which the information of the records found to be incorrect during validation is appended. It obtains the list of column names and data types from the input dataframe, adds the columns that are appended to the observed data, and finally creates an empty dataframe with the resulting schema.
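A sketch of that construction; the extra column names (error, run_time) are taken from the Validate Rules snippet further below, and their types are assumptions:
```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

def createErrorData(inputDf: DataFrame) -> DataFrame:
    # Start from the input schema and append the observed-data columns
    fields = list(inputDf.schema.fields)
    fields.append(StructField("error", StringType(), True))
    fields.append(StructField("run_time", StringType(), True))

    # Empty dataframe with the extended schema
    return spark.createDataFrame([], StructType(fields))
```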
### Validate Rules
This is the function that executes the rule validation. A for loop iterates over the different rules defined in the JSON configuration file. The entry condition for executing a rule must be the first 3 characters of the rule code, because a rule may appear more than once in the JSON.
Every rule starts by recording its start time, which is later subtracted from the finish time to obtain the rule's execution duration. The rule is then invoked with the data it needs to run correctly. Next, if the amount of observed data is greater than 0, the observed-data columns are appended and the result is unioned into the initial observed-data dataframe. Finally, the result is written to the data dataframe.
```python
t = time.time()
data, errorDf = validateNull(object,field,registerAmount,entity,threshold)
errorDesc = "Nulos - " + str(field)
if data[-One] > Zero :
errorTotal = errorDf.withColumn("error", lit(errorDesc))\
.withColumn("run_time", lit(runTime))
if write != False :
errorData = errorData.union(errorTotal)
rulesData.append(data)
print("regla de nulos: %s segundos" % (time.time() - t))
``` | PypiClean |
/DragonPyEmulator-0.9.0-py3-none-any.whl/PyDC/PyDC/CassetteObjects.py | import itertools
import logging
import os
import sys
# own modules
from PyDC.PyDC.utils import (
LOG_FORMATTER,
LOG_LEVEL_DICT,
codepoints2string,
get_word,
iter_steps,
pformat_codepoints,
string2codepoint,
)
from .basic_tokens import bytes2codeline
from .bitstream_handler import BitstreamHandler, BytestreamHandler, CasStream
from .wave2bitstream import Bitstream2Wave, Wave2Bitstream
log = logging.getLogger("PyDC")
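# A single BASIC program line: line pointer, line number and the line's code text.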
class CodeLine:
def __init__(self, line_pointer, line_no, code):
assert isinstance(line_no, int), f"Line number not integer, it's: {repr(line_no)}"
self.line_pointer = line_pointer
self.line_no = line_no
self.code = code
def get_ascii_codeline(self):
return f"{self.line_no:d} {self.code}"
def get_as_codepoints(self):
return tuple(string2codepoint(self.get_ascii_codeline()))
def __repr__(self):
return f"<CodeLine pointer: {repr(self.line_pointer)} line no: {repr(self.line_no)} code: {repr(self.code)}>"
class FileContent:
"""
Content (all data blocks) of a cassette file.
"""
def __init__(self, cfg):
self.cfg = cfg
self.code_lines = []
def create_from_bas(self, file_content):
for line in file_content.splitlines():
if not line:
                # Skip empty lines (e.g. XRoar needs an empty line at the end)
continue
try:
line_number, code = line.split(" ", 1)
except ValueError:
etype, evalue, etb = sys.exc_info()
evalue = etype(
f"Error split line: {evalue} (line: {repr(line)})"
)
raise etype(evalue).with_traceback(etb)
line_number = int(line_number)
if self.cfg.case_convert:
code = code.upper()
self.code_lines.append(
CodeLine(None, line_number, code)
)
def add_block_data(self, block_length, data):
"""
add a block of tokenized BASIC source code lines.
>> cfg = Dragon32Config
>> fc = FileContent(cfg)
>> block = [
... 0x1e,0x12,0x0,0xa,0x80,0x20,0x49,0x20,0xcb,0x20,0x31,0x20,0xbc,0x20,0x31,0x30,0x0,
... 0x0,0x0]
>> len(block)
19
>> fc.add_block_data(19,iter(block))
19 Bytes parsed
>> fc.print_code_lines()
10 FOR I = 1 TO 10
>> block = iter([
... 0x1e,0x29,0x0,0x14,0x87,0x20,0x49,0x3b,0x22,0x48,0x45,0x4c,0x4c,0x4f,0x20,0x57,0x4f,0x52,0x4c,0x44,0x21,0x22,0x0,
... 0x0,0x0])
>> fc.add_block_data(999,block)
25 Bytes parsed
ERROR: Block length value 999 is not equal to parsed bytes!
>> fc.print_code_lines()
10 FOR I = 1 TO 10
20 PRINT I;"HELLO WORLD!"
>> block = iter([
... 0x1e,0x31,0x0,0x1e,0x8b,0x20,0x49,0x0,
... 0x0,0x0])
>> fc.add_block_data(10,block)
10 Bytes parsed
>> fc.print_code_lines()
10 FOR I = 1 TO 10
20 PRINT I;"HELLO WORLD!"
30 NEXT I
Test function tokens in code
>> fc = FileContent(cfg)
>> data = iter([
... 0x1e,0x4a,0x0,0x1e,0x58,0xcb,0x58,0xc3,0x4c,0xc5,0xff,0x88,0x28,0x52,0x29,0x3a,0x59,0xcb,0x59,0xc3,0x4c,0xc5,0xff,0x89,0x28,0x52,0x29,0x0,
... 0x0,0x0
... ])
>> fc.add_block_data(30, data)
30 Bytes parsed
>> fc.print_code_lines()
30 X=X+L*SIN(R):Y=Y+L*COS(R)
Test high line numbers
>> fc = FileContent(cfg)
>> data = [
... 0x1e,0x1a,0x0,0x1,0x87,0x20,0x22,0x4c,0x49,0x4e,0x45,0x20,0x4e,0x55,0x4d,0x42,0x45,0x52,0x20,0x54,0x45,0x53,0x54,0x22,0x0,
... 0x1e,0x23,0x0,0xa,0x87,0x20,0x31,0x30,0x0,
... 0x1e,0x2d,0x0,0x64,0x87,0x20,0x31,0x30,0x30,0x0,
... 0x1e,0x38,0x3,0xe8,0x87,0x20,0x31,0x30,0x30,0x30,0x0,
... 0x1e,0x44,0x27,0x10,0x87,0x20,0x31,0x30,0x30,0x30,0x30,0x0,
... 0x1e,0x50,0x80,0x0,0x87,0x20,0x33,0x32,0x37,0x36,0x38,0x0,
... 0x1e,0x62,0xf9,0xff,0x87,0x20,0x22,0x45,0x4e,0x44,0x22,0x3b,0x36,0x33,0x39,0x39,0x39,0x0,0x0,0x0
... ]
>> len(data)
99
>> fc.add_block_data(99, iter(data))
99 Bytes parsed
>> fc.print_code_lines()
1 PRINT "LINE NUMBER TEST"
10 PRINT 10
100 PRINT 100
1000 PRINT 1000
10000 PRINT 10000
32768 PRINT 32768
63999 PRINT "END";63999
"""
# data = list(data)
# # print repr(data)
# print_as_hex_list(data)
# print_codepoint_stream(data)
# sys.exit()
# create from codepoint list a iterator
data = iter(data)
byte_count = 0
while True:
try:
line_pointer = get_word(data)
except (StopIteration, IndexError) as err:
log.error(f"No line pointer information in code line data. ({err})")
break
# print "line_pointer:", repr(line_pointer)
byte_count += 2
if not line_pointer:
# arrived [0x00, 0x00] -> end of block
break
try:
line_number = get_word(data)
except (StopIteration, IndexError) as err:
log.error(f"No line number information in code line data. ({err})")
break
# print "line_number:", repr(line_number)
byte_count += 2
# data = list(data)
# print_as_hex_list(data)
# print_codepoint_stream(data)
# data = iter(data)
# get the code line:
            # new iterator to get all characters until 0x00 arrives
code = iter(data.__next__, 0x00)
code = list(code) # for len()
byte_count += len(code) + 1 # from 0x00 consumed in iter()
# print_as_hex_list(code)
# print_codepoint_stream(code)
# convert to a plain ASCII string
code = bytes2codeline(code)
self.code_lines.append(
CodeLine(line_pointer, line_number, code)
)
print(f"{byte_count:d} Bytes parsed")
if block_length != byte_count:
print(f"ERROR: Block length value {block_length:d} is not equal to parsed bytes!")
def add_ascii_block(self, block_length, data):
"""
add a block of ASCII BASIC source code lines.
>> data = [
... 0xd,
... 0x31,0x30,0x20,0x50,0x52,0x49,0x4e,0x54,0x20,0x22,0x54,0x45,0x53,0x54,0x22,
... 0xd,
... 0x32,0x30,0x20,0x50,0x52,0x49,0x4e,0x54,0x20,0x22,0x48,0x45,0x4c,0x4c,0x4f,0x20,0x57,0x4f,0x52,0x4c,0x44,0x21,0x22,
... 0xd
... ]
>> len(data)
41
>> fc = FileContent(Dragon32Config)
>> fc.add_ascii_block(41, iter(data))
41 Bytes parsed
>> fc.print_code_lines()
10 PRINT "TEST"
20 PRINT "HELLO WORLD!"
"""
data = iter(data)
next(data) # Skip first \r
byte_count = 1 # incl. first \r
while True:
code = iter(data.__next__, 0xd) # until \r
code = "".join([chr(c) for c in code])
if not code:
log.warning("code ended.")
break
byte_count += len(code) + 1 # and \r consumed in iter()
try:
line_number, code = code.split(" ", 1)
except ValueError as err:
print(f"\nERROR: Splitting linenumber in {repr(code)}: {err}")
break
try:
line_number = int(line_number)
except ValueError as err:
print(f"\nERROR: Part {line_number!r} is not a line number! ({err})")
continue
self.code_lines.append(
CodeLine(None, line_number, code)
)
print(f"{byte_count:d} Bytes parsed")
if block_length != byte_count:
log.error(
f"Block length value {block_length:d} is not equal to parsed bytes!"
)
def get_as_codepoints(self):
result = []
delim = list(string2codepoint("\r"))[0]
for code_line in self.code_lines:
result.append(delim)
result += list(code_line.get_as_codepoints())
result.append(delim)
# log.debug("-"*79)
# for line in pformat_codepoints(result):
# log.debug(repr(line))
# log.debug("-"*79)
return result
def get_ascii_codeline(self):
for code_line in self.code_lines:
yield code_line.get_ascii_codeline()
def print_code_lines(self):
for code_line in self.code_lines:
print(f"{code_line.line_no:d} {code_line.code}")
def print_debug_info(self):
print("\tcode lines:")
print("-" * 79)
self.print_code_lines()
print("-" * 79)
class CassetteFile:
def __init__(self, cfg):
self.cfg = cfg
self.is_tokenized = False
self.ascii_flag = None
self.gap_flag = None # one byte gap flag (0x00=no gaps, 0xFF=gaps)
def create_from_bas(self, filename, file_content):
filename2 = os.path.split(filename)[1]
filename2 = filename2.upper()
filename2 = filename2.rstrip()
filename2 = filename2.replace(" ", "_")
# TODO: remove non ASCII!
filename2 = filename2[:8]
log.debug(f"filename '{filename2}' from: {filename}")
self.filename = filename2
self.file_type = self.cfg.FTYPE_BASIC # BASIC programm (0x00)
# http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4231&p=9723#p9723
self.ascii_flag = self.cfg.BASIC_ASCII
self.gap_flag = self.cfg.GAPS # ASCII File is GAP, tokenized is no gaps
self.file_content = FileContent(self.cfg)
self.file_content.create_from_bas(file_content)
def create_from_wave(self, codepoints):
log.debug(f"filename data: {pformat_codepoints(codepoints)}")
raw_filename = codepoints[:8]
self.filename = codepoints2string(raw_filename).rstrip()
print(f"\nFilename: {repr(self.filename)}")
self.file_type = codepoints[8]
if self.file_type not in self.cfg.FILETYPE_DICT:
raise NotImplementedError(
f"Unknown file type {hex(self.file_type)} is not supported, yet."
)
log.info(f"file type: {self.cfg.FILETYPE_DICT[self.file_type]}")
if self.file_type == self.cfg.FTYPE_DATA:
raise NotImplementedError("Data files are not supported, yet.")
elif self.file_type == self.cfg.FTYPE_BIN:
raise NotImplementedError("Binary files are not supported, yet.")
self.ascii_flag = codepoints[9]
log.info(f"Raw ASCII flag is: {repr(self.ascii_flag)}")
if self.ascii_flag == self.cfg.BASIC_TOKENIZED:
self.is_tokenized = True
elif self.ascii_flag == self.cfg.BASIC_ASCII:
self.is_tokenized = False
else:
raise NotImplementedError(f"Unknown BASIC type: '{hex(self.ascii_flag)}'")
log.info(f"ASCII flag: {self.cfg.BASIC_TYPE_DICT[self.ascii_flag]}")
self.gap_flag = codepoints[10]
log.info(f"gap flag is {hex(self.gap_flag)} (0x00=no gaps, 0xff=gaps)")
# machine code starting/loading address
if self.file_type != self.cfg.FTYPE_BASIC: # BASIC programm (0x00)
codepoints = iter(codepoints)
self.start_address = get_word(codepoints)
log.info(f"machine code starting address: {hex(self.start_address)}")
self.load_address = get_word(codepoints)
log.info(f"machine code loading address: {hex(self.load_address)}")
else:
# not needed in BASIC files
# http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4341&p=9109#p9109
pass
self.file_content = FileContent(self.cfg)
def add_block_data(self, block_length, codepoints):
if self.is_tokenized:
self.file_content.add_block_data(block_length, codepoints)
else:
self.file_content.add_ascii_block(block_length, codepoints)
print("*" * 79)
self.file_content.print_code_lines()
print("*" * 79)
def get_filename_block_as_codepoints(self):
"""
TODO: Support tokenized BASIC. Now we only create ASCII BASIC.
"""
codepoints = []
codepoints += list(string2codepoint(self.filename.ljust(8, " ")))
codepoints.append(self.cfg.FTYPE_BASIC) # one byte file type
codepoints.append(self.cfg.BASIC_ASCII) # one byte ASCII flag
# one byte gap flag (0x00=no gaps, 0xFF=gaps)
# http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4231&p=9110#p9110
codepoints.append(self.gap_flag)
# machine code starting/loading address
if self.file_type != self.cfg.FTYPE_BASIC: # BASIC programm (0x00)
codepoints = iter(codepoints)
self.start_address = get_word(codepoints)
log.info(f"machine code starting address: {hex(self.start_address)}")
self.load_address = get_word(codepoints)
log.info(f"machine code loading address: {hex(self.load_address)}")
else:
# not needed in BASIC files
# http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4341&p=9109#p9109
pass
log.debug(f"filename block: {pformat_codepoints(codepoints)}")
return codepoints
def get_code_block_as_codepoints(self):
result = self.file_content.get_as_codepoints()
# XXX: Is a code block end terminator needed?
# e.g.:
# if self.is_tokenized:
# result += [0x00, 0x00]
# else:
# result.append(0x0d) # 0x0d == \r
return result
def print_debug_info(self):
print(f"\tFilename: '{self.filename}'")
print(f"\tfile type: {self.cfg.FILETYPE_DICT[self.file_type]}")
print("\tis tokenized:", self.is_tokenized)
self.file_content.print_debug_info()
def __repr__(self):
return f"<BlockFile '{self.filename}'>"
class Cassette:
"""
Pseudo DocTest:
>> d32cfg = Dragon32Config()
>> c = Cassette(d32cfg)
>> c.add_from_bas("../test_files/HelloWorld1.bas")
>> c.print_debug_info()
There exists 1 files:
Filename: 'HELLOWOR'
file type: BASIC programm (0x00)
is tokenized: False
code lines:
-------------------------------------------------------------------------------
10 FOR I = 1 TO 10
20 PRINT I;"HELLO WORLD!"
30 NEXT I
-------------------------------------------------------------------------------
>> c.pprint_codepoint_stream()
255 x LEAD_BYTE_CODEPOINT
0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55 0x55
1x SYNC_BYTE_CODEPOINT
0x3c
block type filename block (0x00)
0x0
block length: 0xa
0xa
yield block data
0x48 0x45 0x4c 0x4c 0x4f 0x57 0x4f 0x52 0x0 0xff
block type data block (0x01)
0x1
block length: 0x36
0x36
yield block data
0x31 0x30 0x20 0x46 0x4f 0x52 0x20 0x49 0x20 0x3d 0x20 0x31 0x20 0x54 0x4f 0x20 0x31 0x30 0x32 0x30 0x20 0x50 0x52 0x49 0x4e 0x54 0x20 0x49 0x3b 0x22 0x48 0x45 0x4c 0x4c 0x4f 0x20 0x57 0x4f 0x52 0x4c 0x44 0x21 0x22 0x33 0x30 0x20 0x4e 0x45 0x58 0x54 0x20 0x49 0x0 0x0
block type end-of-file block (0xff)
0xff
block length: 0x0
0x0
"""
def __init__(self, cfg):
self.cfg = cfg
self.files = []
self.current_file = None
self.wav = None # Bitstream2Wave instance only if write_wave() used!
# temp storage for code block
self.buffer = []
self.buffered_block_length = 0
def add_from_wav(self, source_file):
bitstream = iter(Wave2Bitstream(source_file, self.cfg))
# store bitstream into python objects
bh = BitstreamHandler(self, self.cfg)
bh.feed(bitstream)
def add_from_cas(self, source_file):
cas_stream = CasStream(source_file)
bh = BytestreamHandler(self, self.cfg)
bh.feed(cas_stream)
def add_from_bas(self, filename):
with open(filename) as f:
file_content = f.read()
self.current_file = CassetteFile(self.cfg)
self.current_file.create_from_bas(filename, file_content)
self.files.append(self.current_file)
def buffer2file(self):
"""
add the code buffer content to CassetteFile() instance
"""
if self.current_file is not None and self.buffer:
self.current_file.add_block_data(self.buffered_block_length, self.buffer)
self.buffer = []
self.buffered_block_length = 0
def buffer_block(self, block_type, block_length, block_codepoints):
block = tuple(itertools.islice(block_codepoints, block_length))
log.debug(f"pprint block: {pformat_codepoints(block)}")
if block_type == self.cfg.EOF_BLOCK:
self.buffer2file()
return
elif block_type == self.cfg.FILENAME_BLOCK:
self.buffer2file()
self.current_file = CassetteFile(self.cfg)
self.current_file.create_from_wave(block)
log.info(f"Add file {repr(self.current_file)}")
self.files.append(self.current_file)
elif block_type == self.cfg.DATA_BLOCK:
# store code until end marker
self.buffer += block
self.buffered_block_length += block_length
else:
raise TypeError("Block type %s unkown!" & hex(block_type))
def print_debug_info(self):
print(f"There exists {len(self.files)} files:")
for file_obj in self.files:
file_obj.print_debug_info()
def block2codepoint_stream(self, file_obj, block_type, block_codepoints):
if file_obj.gap_flag == self.cfg.GAPS:
# file has gaps (e.g. ASCII BASIC)
log.debug("File has GAP flag set:")
log.debug("yield %sx bit-sync bytes %s",
self.cfg.LEAD_BYTE_LEN, hex(self.cfg.LEAD_BYTE_CODEPOINT)
)
leadin = [self.cfg.LEAD_BYTE_CODEPOINT for _ in range(self.cfg.LEAD_BYTE_LEN)]
yield leadin
log.debug("yield 1x leader byte %s", hex(self.cfg.LEAD_BYTE_CODEPOINT))
yield self.cfg.LEAD_BYTE_CODEPOINT
log.debug(f"yield sync byte {hex(self.cfg.SYNC_BYTE_CODEPOINT)}")
if self.wav:
log.debug(f"wave pos: {self.wav.pformat_pos()}")
yield self.cfg.SYNC_BYTE_CODEPOINT
log.debug(f"yield block type '{self.cfg.BLOCK_TYPE_DICT[block_type]}'")
yield block_type
codepoints = tuple(block_codepoints)
block_length = len(codepoints)
assert block_length <= 255
log.debug(f"yield block length {hex(block_length)} ({block_length}Bytes)")
yield block_length
if not codepoints:
# EOF block
# FIXME checksum
checksum = block_type
checksum += block_length
checksum = checksum & 0xFF
log.debug(f"yield calculated checksum {hex(checksum)}")
yield checksum
else:
log.debug(f"content of '{self.cfg.BLOCK_TYPE_DICT[block_type]}':")
log.debug("-" * 79)
log.debug(repr("".join([chr(i) for i in codepoints])))
log.debug("-" * 79)
yield codepoints
checksum = sum(codepoint for codepoint in codepoints)
checksum += block_type
checksum += block_length
checksum = checksum & 0xFF
log.debug(f"yield calculated checksum {hex(checksum)}")
yield checksum
log.debug("yield 1x tailer byte %s", hex(self.cfg.LEAD_BYTE_CODEPOINT))
yield self.cfg.LEAD_BYTE_CODEPOINT
def codepoint_stream(self):
if self.wav:
self.wav.write_silence(sec=0.1)
for file_obj in self.files:
# yield filename
yield from self.block2codepoint_stream(file_obj,
block_type=self.cfg.FILENAME_BLOCK,
block_codepoints=file_obj.get_filename_block_as_codepoints()
)
if self.wav:
self.wav.write_silence(sec=0.1)
# yield file content
codepoints = file_obj.get_code_block_as_codepoints()
for raw_codepoints in iter_steps(codepoints, 255):
# log.debug("-"*79)
# log.debug("".join([chr(i) for i in raw_codepoints]))
# log.debug("-"*79)
# Add meta information
codepoint_stream = self.block2codepoint_stream(
file_obj, block_type=self.cfg.DATA_BLOCK, block_codepoints=raw_codepoints)
yield from codepoint_stream
if self.wav:
self.wav.write_silence(sec=0.1)
# yield EOF
yield from self.block2codepoint_stream(file_obj,
block_type=self.cfg.EOF_BLOCK,
block_codepoints=[]
)
if self.wav:
self.wav.write_silence(sec=0.1)
def write_wave(self, destination_file):
wav = Bitstream2Wave(destination_file, self.cfg)
for codepoint in self.codepoint_stream():
if isinstance(codepoint, (tuple, list)):
for item in codepoint:
assert isinstance(item, int), f"Codepoint {repr(codepoint)} is not int/hex"
else:
assert isinstance(codepoint, int), f"Codepoint {repr(codepoint)} is not int/hex"
wav.write_codepoint(codepoint)
wav.close()
def write_cas(self, destination_file):
log.info(f"Create {repr(destination_file)}...")
def _write(f, codepoint):
try:
f.write(chr(codepoint))
except ValueError as err:
log.error(f"Value error with {repr(codepoint)}: {err}")
raise
with open(destination_file, "wb") as f:
for codepoint in self.codepoint_stream():
if isinstance(codepoint, (tuple, list)):
for item in codepoint:
_write(f, item)
else:
_write(f, codepoint)
print(f"\nFile {repr(destination_file)} saved.")
def write_bas(self, destination_file):
dest_filename = os.path.splitext(destination_file)[0]
for file_obj in self.files:
bas_filename = file_obj.filename # Filename from CSAVE argument
out_filename = f"{dest_filename}_{bas_filename}.bas"
log.info(f"Create {repr(out_filename)}...")
with open(out_filename, "w") as f:
for line in file_obj.file_content.get_ascii_codeline():
if self.cfg.case_convert:
line = line.lower()
f.write(f"{line}\n")
print(f"\nFile {repr(out_filename)} saved.")
def pprint_codepoint_stream(self):
log_level = LOG_LEVEL_DICT[3]
log.setLevel(log_level)
handler = logging.StreamHandler(stream=sys.stdout)
handler.setFormatter(LOG_FORMATTER)
log.addHandler(handler)
for codepoint in self.codepoint_stream():
try:
print(hex(codepoint), end=' ')
except TypeError as err:
raise TypeError(
f"\n\nERROR with '{repr(codepoint)}': {err}"
)
if __name__ == "__main__":
# import doctest
# print doctest.testmod(
# verbose=False
# # verbose=True
# )
# sys.exit()
import subprocess
# bas -> wav
subprocess.Popen([sys.executable, "../PyDC_cli.py",
# "--verbosity=10",
"--verbosity=5",
# "--logfile=5",
# "--log_format=%(module)s %(lineno)d: %(message)s",
# "../test_files/HelloWorld1.bas", "--dst=../test.wav"
"../test_files/HelloWorld1.bas", "--dst=../test.cas"
]).wait()
# print "\n"*3
# print "="*79
# print "\n"*3
#
# # # wav -> bas
# subprocess.Popen([sys.executable, "../PyDC_cli.py",
# # "--verbosity=10",
# "--verbosity=7",
# # "../test.wav", "--dst=../test.bas",
# # "../test.cas", "--dst=../test.bas",
# # "../test_files/HelloWorld1 origin.wav", "--dst=../test_files/HelloWorld1.bas",
# "../test_files/LineNumber Test 02.wav", "--dst=../test.bas",
# ]).wait()
#
# print "-- END --" | PypiClean |
/HBT_IP_Test-1.0.1-py3-none-any.whl/HBT_IP_Test/libs/isom/python/IsomAssets_pb2.py |
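# Generated protocol buffer code (protoc output for IsomAssets.proto); normally not edited by hand.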
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
import IsomStdDef_pb2 as IsomStdDef__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='IsomAssets.proto',
package='Honeywell.Security.ISOM.Assets',
syntax='proto2',
serialized_options=None,
serialized_pb=_b('\n\x10IsomAssets.proto\x12\x1eHoneywell.Security.ISOM.Assets\x1a\x10IsomStdDef.proto\"Y\n\x0f\x41ssetOperations\x12<\n\tresources\x18\x0b \x03(\x0e\x32).Honeywell.Security.ISOM.Assets.Resources*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"a\n\x17\x41ssetSupportedRelations\x12<\n\trelations\x18\x0b \x03(\x0e\x32).Honeywell.Security.ISOM.Assets.Relations*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"O\n\x0b\x41ssetEvents\x12\x36\n\x06\x65vents\x18\x0b \x03(\x0e\x32&.Honeywell.Security.ISOM.Assets.Events*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"t\n\nAssetState\x12\n\n\x02ID\x18\x0b \x01(\t\x12P\n\x0einventoryState\x18\x0c \x01(\x0e\x32\x38.Honeywell.Security.ISOM.Assets.AssetInventoryStateTypes*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"U\n\x0e\x41ssetStateList\x12\x39\n\x05state\x18\x0b \x03(\x0b\x32*.Honeywell.Security.ISOM.Assets.AssetState*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"p\n\rAssetRelation\x12\n\n\x02id\x18\x0b \x01(\t\x12\x37\n\x04name\x18\x0c \x01(\x0e\x32).Honeywell.Security.ISOM.Assets.Relations\x12\x10\n\x08\x65ntityID\x18\r \x01(\t*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"^\n\x11\x41ssetRelationList\x12?\n\x08relation\x18\x0b \x03(\x0b\x32-.Honeywell.Security.ISOM.Assets.AssetRelation*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"f\n\x10\x41ssetIdentifiers\x12\n\n\x02id\x18\x0b \x01(\t\x12\x0c\n\x04guid\x18\x0c \x01(\t\x12\x0c\n\x04name\x18\r \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x0e \x01(\t\x12\x0b\n\x03tag\x18\x0f \x01(\t*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\xfb\x02\n\x0b\x41ssetConfig\x12\x45\n\x0bidentifiers\x18\x0b \x01(\x0b\x32\x30.Honeywell.Security.ISOM.Assets.AssetIdentifiers\x12?\n\x08relation\x18\x0c \x03(\x0b\x32-.Honeywell.Security.ISOM.Assets.AssetRelation\x12\x0c\n\x04type\x18\r \x01(\t\x12\x43\n\x0eissuedDateTime\x18\x0e \x01(\x0b\x32%.Honeywell.Security.ISOM.IsomDateTimeB\x04\x90\xb5\x18\r\x12@\n\x0b\x64ueDateTime\x18\x0f \x01(\x0b\x32%.Honeywell.Security.ISOM.IsomDateTimeB\x04\x90\xb5\x18\r\x12\x45\n\x10returnedDateTime\x18\x10 \x01(\x0b\x32%.Honeywell.Security.ISOM.IsomDateTimeB\x04\x90\xb5\x18\r*\x08\x08\xa0\xf7\x36\x10\xe0\x91\x43\"X\n\x0f\x41ssetConfigList\x12;\n\x06\x63onfig\x18\x0b \x03(\x0b\x32+.Honeywell.Security.ISOM.Assets.AssetConfig*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\x8f\x01\n\x0b\x41ssetEntity\x12;\n\x06\x63onfig\x18\x15 \x01(\x0b\x32+.Honeywell.Security.ISOM.Assets.AssetConfig\x12\x39\n\x05state\x18\x1f \x01(\x0b\x32*.Honeywell.Security.ISOM.Assets.AssetState*\x08\x08\xa0\xf7\x36\x10\xe0\x91\x43\"X\n\x0f\x41ssetEntityList\x12;\n\x06\x65ntity\x18\x0b \x03(\x0b\x32+.Honeywell.Security.ISOM.Assets.AssetEntity*\x08\x08\xc0\x84=\x10\xe0\x91\x43*\xd3\x01\n\tResources\x12\x18\n\x13supportedOperations\x10\xf2\x07\x12\x17\n\x12supportedRelations\x10\xf3\x07\x12\x14\n\x0fsupportedEvents\x10\xf4\x07\x12\x1a\n\x15supportedCapabilities\x10\xf5\x07\x12\x0f\n\nfullEntity\x10\xc2N\x12\x0b\n\x06\x63onfig\x10\xd7N\x12\x10\n\x0bidentifiers\x10\xebN\x12\x0e\n\trelations\x10\xffN\x12\n\n\x05state\x10\xd8O\x12\x15\n\rMax_Resources\x10\x80\x80\x80\x80\x04*\x82\x01\n\tRelations\x12 
\n\x1c\x41ssetOwnedByCredentialHolder\x10\x0b\x12#\n\x1f\x41ssetAssignedToCredentialHolder\x10\x0c\x12\x17\n\x13\x41ssetOwnedByAccount\x10\r\x12\x15\n\rMax_Relations\x10\x80\x80\x80\x80\x04*\x9e\x01\n\x06\x45vents\x12\x13\n\x0e\x63onfig_p_added\x10\x9aN\x12\x16\n\x11\x63onfig_p_modified\x10\x9bN\x12\x15\n\x10\x63onfig_p_deleted\x10\x9cN\x12$\n\x1f\x61ssetState_p_overdue_p_detected\x10\xb9u\x12\x16\n\x11\x61ssetState_p_lost\x10\xbau\x12\x12\n\nMax_Events\x10\x80\x80\x80\x80\x04*\x82\x01\n\x18\x41ssetInventoryStateTypes\x12\r\n\tAvailable\x10\x0b\x12\x0c\n\x08\x41ssigned\x10\x0c\x12\x08\n\x04Lost\x10\r\x12\x0b\n\x07OverDue\x10\x0e\x12\x0c\n\x08\x41rchived\x10\x0f\x12$\n\x1cMax_AssetInventoryStateTypes\x10\x80\x80\x80\x80\x04')
,
dependencies=[IsomStdDef__pb2.DESCRIPTOR,])
_RESOURCES = _descriptor.EnumDescriptor(
name='Resources',
full_name='Honeywell.Security.ISOM.Assets.Resources',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='supportedOperations', index=0, number=1010,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedRelations', index=1, number=1011,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedEvents', index=2, number=1012,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedCapabilities', index=3, number=1013,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fullEntity', index=4, number=10050,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config', index=5, number=10071,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='identifiers', index=6, number=10091,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='relations', index=7, number=10111,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='state', index=8, number=10200,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Resources', index=9, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=1569,
serialized_end=1780,
)
_sym_db.RegisterEnumDescriptor(_RESOURCES)
Resources = enum_type_wrapper.EnumTypeWrapper(_RESOURCES)
_RELATIONS = _descriptor.EnumDescriptor(
name='Relations',
full_name='Honeywell.Security.ISOM.Assets.Relations',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='AssetOwnedByCredentialHolder', index=0, number=11,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AssetAssignedToCredentialHolder', index=1, number=12,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='AssetOwnedByAccount', index=2, number=13,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Relations', index=3, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=1783,
serialized_end=1913,
)
_sym_db.RegisterEnumDescriptor(_RELATIONS)
Relations = enum_type_wrapper.EnumTypeWrapper(_RELATIONS)
_EVENTS = _descriptor.EnumDescriptor(
name='Events',
full_name='Honeywell.Security.ISOM.Assets.Events',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='config_p_added', index=0, number=10010,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config_p_modified', index=1, number=10011,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config_p_deleted', index=2, number=10012,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='assetState_p_overdue_p_detected', index=3, number=15033,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='assetState_p_lost', index=4, number=15034,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Events', index=5, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=1916,
serialized_end=2074,
)
_sym_db.RegisterEnumDescriptor(_EVENTS)
Events = enum_type_wrapper.EnumTypeWrapper(_EVENTS)
_ASSETINVENTORYSTATETYPES = _descriptor.EnumDescriptor(
name='AssetInventoryStateTypes',
full_name='Honeywell.Security.ISOM.Assets.AssetInventoryStateTypes',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='Available', index=0, number=11,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Assigned', index=1, number=12,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Lost', index=2, number=13,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='OverDue', index=3, number=14,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Archived', index=4, number=15,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_AssetInventoryStateTypes', index=5, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=2077,
serialized_end=2207,
)
_sym_db.RegisterEnumDescriptor(_ASSETINVENTORYSTATETYPES)
AssetInventoryStateTypes = enum_type_wrapper.EnumTypeWrapper(_ASSETINVENTORYSTATETYPES)
supportedOperations = 1010
supportedRelations = 1011
supportedEvents = 1012
supportedCapabilities = 1013
fullEntity = 10050
config = 10071
identifiers = 10091
relations = 10111
state = 10200
Max_Resources = 1073741824
AssetOwnedByCredentialHolder = 11
AssetAssignedToCredentialHolder = 12
AssetOwnedByAccount = 13
Max_Relations = 1073741824
config_p_added = 10010
config_p_modified = 10011
config_p_deleted = 10012
assetState_p_overdue_p_detected = 15033
assetState_p_lost = 15034
Max_Events = 1073741824
Available = 11
Assigned = 12
Lost = 13
OverDue = 14
Archived = 15
Max_AssetInventoryStateTypes = 1073741824
_ASSETOPERATIONS = _descriptor.Descriptor(
name='AssetOperations',
full_name='Honeywell.Security.ISOM.Assets.AssetOperations',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resources', full_name='Honeywell.Security.ISOM.Assets.AssetOperations.resources', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=70,
serialized_end=159,
)
_ASSETSUPPORTEDRELATIONS = _descriptor.Descriptor(
name='AssetSupportedRelations',
full_name='Honeywell.Security.ISOM.Assets.AssetSupportedRelations',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='relations', full_name='Honeywell.Security.ISOM.Assets.AssetSupportedRelations.relations', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=161,
serialized_end=258,
)
_ASSETEVENTS = _descriptor.Descriptor(
name='AssetEvents',
full_name='Honeywell.Security.ISOM.Assets.AssetEvents',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='events', full_name='Honeywell.Security.ISOM.Assets.AssetEvents.events', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=260,
serialized_end=339,
)
_ASSETSTATE = _descriptor.Descriptor(
name='AssetState',
full_name='Honeywell.Security.ISOM.Assets.AssetState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='ID', full_name='Honeywell.Security.ISOM.Assets.AssetState.ID', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='inventoryState', full_name='Honeywell.Security.ISOM.Assets.AssetState.inventoryState', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=341,
serialized_end=457,
)
_ASSETSTATELIST = _descriptor.Descriptor(
name='AssetStateList',
full_name='Honeywell.Security.ISOM.Assets.AssetStateList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='state', full_name='Honeywell.Security.ISOM.Assets.AssetStateList.state', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=459,
serialized_end=544,
)
_ASSETRELATION = _descriptor.Descriptor(
name='AssetRelation',
full_name='Honeywell.Security.ISOM.Assets.AssetRelation',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Assets.AssetRelation.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='Honeywell.Security.ISOM.Assets.AssetRelation.name', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='entityID', full_name='Honeywell.Security.ISOM.Assets.AssetRelation.entityID', index=2,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=546,
serialized_end=658,
)
_ASSETRELATIONLIST = _descriptor.Descriptor(
name='AssetRelationList',
full_name='Honeywell.Security.ISOM.Assets.AssetRelationList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='relation', full_name='Honeywell.Security.ISOM.Assets.AssetRelationList.relation', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=660,
serialized_end=754,
)
_ASSETIDENTIFIERS = _descriptor.Descriptor(
name='AssetIdentifiers',
full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='guid', full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers.guid', index=1,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers.name', index=2,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='description', full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers.description', index=3,
number=14, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tag', full_name='Honeywell.Security.ISOM.Assets.AssetIdentifiers.tag', index=4,
number=15, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=756,
serialized_end=858,
)
_ASSETCONFIG = _descriptor.Descriptor(
name='AssetConfig',
full_name='Honeywell.Security.ISOM.Assets.AssetConfig',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='identifiers', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.identifiers', index=0,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='relation', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.relation', index=1,
number=12, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.type', index=2,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='issuedDateTime', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.issuedDateTime', index=3,
number=14, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\r'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='dueDateTime', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.dueDateTime', index=4,
number=15, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\r'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='returnedDateTime', full_name='Honeywell.Security.ISOM.Assets.AssetConfig.returnedDateTime', index=5,
number=16, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\r'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(900000, 1100000), ],
oneofs=[
],
serialized_start=861,
serialized_end=1240,
)
_ASSETCONFIGLIST = _descriptor.Descriptor(
name='AssetConfigList',
full_name='Honeywell.Security.ISOM.Assets.AssetConfigList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='config', full_name='Honeywell.Security.ISOM.Assets.AssetConfigList.config', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1242,
serialized_end=1330,
)
_ASSETENTITY = _descriptor.Descriptor(
name='AssetEntity',
full_name='Honeywell.Security.ISOM.Assets.AssetEntity',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='config', full_name='Honeywell.Security.ISOM.Assets.AssetEntity.config', index=0,
number=21, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='state', full_name='Honeywell.Security.ISOM.Assets.AssetEntity.state', index=1,
number=31, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(900000, 1100000), ],
oneofs=[
],
serialized_start=1333,
serialized_end=1476,
)
_ASSETENTITYLIST = _descriptor.Descriptor(
name='AssetEntityList',
full_name='Honeywell.Security.ISOM.Assets.AssetEntityList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='entity', full_name='Honeywell.Security.ISOM.Assets.AssetEntityList.entity', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1478,
serialized_end=1566,
)
_ASSETOPERATIONS.fields_by_name['resources'].enum_type = _RESOURCES
_ASSETSUPPORTEDRELATIONS.fields_by_name['relations'].enum_type = _RELATIONS
_ASSETEVENTS.fields_by_name['events'].enum_type = _EVENTS
_ASSETSTATE.fields_by_name['inventoryState'].enum_type = _ASSETINVENTORYSTATETYPES
_ASSETSTATELIST.fields_by_name['state'].message_type = _ASSETSTATE
_ASSETRELATION.fields_by_name['name'].enum_type = _RELATIONS
_ASSETRELATIONLIST.fields_by_name['relation'].message_type = _ASSETRELATION
_ASSETCONFIG.fields_by_name['identifiers'].message_type = _ASSETIDENTIFIERS
_ASSETCONFIG.fields_by_name['relation'].message_type = _ASSETRELATION
_ASSETCONFIG.fields_by_name['issuedDateTime'].message_type = IsomStdDef__pb2._ISOMDATETIME
_ASSETCONFIG.fields_by_name['dueDateTime'].message_type = IsomStdDef__pb2._ISOMDATETIME
_ASSETCONFIG.fields_by_name['returnedDateTime'].message_type = IsomStdDef__pb2._ISOMDATETIME
_ASSETCONFIGLIST.fields_by_name['config'].message_type = _ASSETCONFIG
_ASSETENTITY.fields_by_name['config'].message_type = _ASSETCONFIG
_ASSETENTITY.fields_by_name['state'].message_type = _ASSETSTATE
_ASSETENTITYLIST.fields_by_name['entity'].message_type = _ASSETENTITY
DESCRIPTOR.message_types_by_name['AssetOperations'] = _ASSETOPERATIONS
DESCRIPTOR.message_types_by_name['AssetSupportedRelations'] = _ASSETSUPPORTEDRELATIONS
DESCRIPTOR.message_types_by_name['AssetEvents'] = _ASSETEVENTS
DESCRIPTOR.message_types_by_name['AssetState'] = _ASSETSTATE
DESCRIPTOR.message_types_by_name['AssetStateList'] = _ASSETSTATELIST
DESCRIPTOR.message_types_by_name['AssetRelation'] = _ASSETRELATION
DESCRIPTOR.message_types_by_name['AssetRelationList'] = _ASSETRELATIONLIST
DESCRIPTOR.message_types_by_name['AssetIdentifiers'] = _ASSETIDENTIFIERS
DESCRIPTOR.message_types_by_name['AssetConfig'] = _ASSETCONFIG
DESCRIPTOR.message_types_by_name['AssetConfigList'] = _ASSETCONFIGLIST
DESCRIPTOR.message_types_by_name['AssetEntity'] = _ASSETENTITY
DESCRIPTOR.message_types_by_name['AssetEntityList'] = _ASSETENTITYLIST
DESCRIPTOR.enum_types_by_name['Resources'] = _RESOURCES
DESCRIPTOR.enum_types_by_name['Relations'] = _RELATIONS
DESCRIPTOR.enum_types_by_name['Events'] = _EVENTS
DESCRIPTOR.enum_types_by_name['AssetInventoryStateTypes'] = _ASSETINVENTORYSTATETYPES
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
AssetOperations = _reflection.GeneratedProtocolMessageType('AssetOperations', (_message.Message,), {
'DESCRIPTOR' : _ASSETOPERATIONS,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetOperations)
})
_sym_db.RegisterMessage(AssetOperations)
AssetSupportedRelations = _reflection.GeneratedProtocolMessageType('AssetSupportedRelations', (_message.Message,), {
'DESCRIPTOR' : _ASSETSUPPORTEDRELATIONS,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetSupportedRelations)
})
_sym_db.RegisterMessage(AssetSupportedRelations)
AssetEvents = _reflection.GeneratedProtocolMessageType('AssetEvents', (_message.Message,), {
'DESCRIPTOR' : _ASSETEVENTS,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetEvents)
})
_sym_db.RegisterMessage(AssetEvents)
AssetState = _reflection.GeneratedProtocolMessageType('AssetState', (_message.Message,), {
'DESCRIPTOR' : _ASSETSTATE,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetState)
})
_sym_db.RegisterMessage(AssetState)
AssetStateList = _reflection.GeneratedProtocolMessageType('AssetStateList', (_message.Message,), {
'DESCRIPTOR' : _ASSETSTATELIST,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetStateList)
})
_sym_db.RegisterMessage(AssetStateList)
AssetRelation = _reflection.GeneratedProtocolMessageType('AssetRelation', (_message.Message,), {
'DESCRIPTOR' : _ASSETRELATION,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetRelation)
})
_sym_db.RegisterMessage(AssetRelation)
AssetRelationList = _reflection.GeneratedProtocolMessageType('AssetRelationList', (_message.Message,), {
'DESCRIPTOR' : _ASSETRELATIONLIST,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetRelationList)
})
_sym_db.RegisterMessage(AssetRelationList)
AssetIdentifiers = _reflection.GeneratedProtocolMessageType('AssetIdentifiers', (_message.Message,), {
'DESCRIPTOR' : _ASSETIDENTIFIERS,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetIdentifiers)
})
_sym_db.RegisterMessage(AssetIdentifiers)
AssetConfig = _reflection.GeneratedProtocolMessageType('AssetConfig', (_message.Message,), {
'DESCRIPTOR' : _ASSETCONFIG,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetConfig)
})
_sym_db.RegisterMessage(AssetConfig)
AssetConfigList = _reflection.GeneratedProtocolMessageType('AssetConfigList', (_message.Message,), {
'DESCRIPTOR' : _ASSETCONFIGLIST,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetConfigList)
})
_sym_db.RegisterMessage(AssetConfigList)
AssetEntity = _reflection.GeneratedProtocolMessageType('AssetEntity', (_message.Message,), {
'DESCRIPTOR' : _ASSETENTITY,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetEntity)
})
_sym_db.RegisterMessage(AssetEntity)
AssetEntityList = _reflection.GeneratedProtocolMessageType('AssetEntityList', (_message.Message,), {
'DESCRIPTOR' : _ASSETENTITYLIST,
'__module__' : 'IsomAssets_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Assets.AssetEntityList)
})
_sym_db.RegisterMessage(AssetEntityList)
_ASSETCONFIG.fields_by_name['issuedDateTime']._options = None
_ASSETCONFIG.fields_by_name['dueDateTime']._options = None
_ASSETCONFIG.fields_by_name['returnedDateTime']._options = None
# @@protoc_insertion_point(module_scope) | PypiClean |
/BanterBot-0.0.5.tar.gz/BanterBot-0.0.5/banterbot/utils/text_to_speech_output.py | import datetime
from typing import Iterator, List
from banterbot.data.azure_neural_voices import AzureNeuralVoice
from banterbot.utils.word import Word
class TextToSpeechOutput:
"""
The TextToSpeechOutput class encapsulates the output of a text-to-speech conversion, providing a convenient
interface for working with and manipulating the converted data. This class is designed to store the input text, the
timestamp, the voice and style used for conversion, and the list of Word objects representing the individual words
in the output.
The primary use case for this class is to store the output of a text-to-speech conversion and provide an easy way to
access the words in the output. This can be useful for applications that require further processing or analysis of
the converted text, such as natural language processing or speech synthesis.
"""
def __init__(self, input_string: str, timestamp: datetime.datetime, voice: AzureNeuralVoice, style: str) -> None:
"""
Initializes a new TextToSpeechOutput instance, setting up the input string and preparing the words list for the
converted words.
Args:
input_string (str): The input string that is to be converted into speech.
timestamp (datetime.datetime): The time at which the speech began.
voice (AzureNeuralVoice): The voice to be used for the text-to-speech conversion. This should be an instance
of the AzureNeuralVoice class, which represents a specific voice available in the Azure Cognitive Services
Text-to-Speech API.
style (str): The speaking style to be applied to the text-to-speech conversion. This should be a string
representing one of the available speaking styles in the Azure Cognitive Services Text-to-Speech API, such
as "cheerful", "sad", or "angry".
"""
self.input_string = input_string
self.timestamp = timestamp
self.voice = voice
self.style = style
self.words: List[Word] = []
def __getitem__(self, idx: int) -> Word:
"""
Allows for indexing into the TextToSpeechOutput object to retrieve words at specific positions.
Args:
idx (int): The index of the word to retrieve.
Returns:
Word: The word at the specified index.
"""
return self.words[idx]
def __iter__(self) -> Iterator[Word]:
"""
Provides an iterator to iterate over the Word objects in the output.
Yields:
Word: The next Word object in the output.
"""
for word in self.words:
yield word
def __len__(self) -> int:
"""
Allows for the use of len() on a TextToSpeechOutput instance, returning the number of words in the output.
Returns:
int: The number of words in the output.
"""
return len(self.words)
def __str__(self) -> str:
"""
Converts the TextToSpeechOutput instance into a string, concatenating all the words in the output.
Returns:
str: The string representation of the text-to-speech output. This will be a concatenation of all the words
in the output, in the order they appear in the words list.
"""
return "".join(word.word for word in self.words)
def append(self, entry: Word) -> None:
"""
Appends a Word object to the words list in the output.
Args:
entry (Word): The word to be appended to the output. This should be an instance of the Word class, which
represents a single word in the text-to-speech output along with its associated metadata.
"""
self.words.append(entry) | PypiClean |
/KAVICA-1.3.4.tar.gz/KAVICA-1.3.4/kavica/imputation/mice.py | import numpy as np
import pandas as pd
import warnings
from terminaltables import DoubleTable
from scipy.stats.mstats import gmean, hmean
from time import sleep
import itertools
from sklearn import linear_model, discriminant_analysis
import json
import argparse
import sys
import time
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
from scipy import stats
# warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 365)
pd.set_option('display.width', 700)
__all__ = ['___config',
'arguments_parser',
'compatible_data_structure',
'missing_pattern_plot',
'MissingValuePreProcessing',
'Mice',
'scale_into_range',
'dict_inner_joint',
]
# Fixme: pandas ix replace with loc/iloc
def scale_into_range(variable, r_min, r_max, t_min, t_max):
""" Scales variable into a range [t_min,t_max].
Args:
variable (float): ∈ [r_min,r_max] denote your measurement to be scaled
r_min (float): denote the minimum of the range of your measurement
r_max (float): denote the maximum of the range of your measurement
t_min (float): denote the minimum of the range of your desired target scaling
t_max (float): denote the maximum of the range of your desired target scaling
Returns:
A float number indicates the scaled value.
Note:
See https://stats.stackexchange.com/questions/281162/scale-a-number-between-a-range.
"""
return ((variable - r_min) * (t_max - t_min) / (r_max - r_min)) + t_min
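# Worked example (added for illustration): a reading of 5 measured on the
# range [0, 10] rescaled onto the target range [0, 1]:
# >>> scale_into_range(5.0, 0.0, 10.0, 0.0, 1.0)
# 0.5
# i.e. ((5 - 0) * (1 - 0)) / (10 - 0) + 0 = 0.5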
def ___config(config_path, data_path):
""" Read the configuration file (.json)
In order to impute a file, we need to indicate the main features, pass_through and the complementaries.
Args:
config_path (str): indicates the path of the configuration file.
data_path (str): indicates the path of the data file (.csv)
Return:
df (pandas data frame): Represents the data set whose feature subset is {complimentary, hardware counters}
pass_through_features (list): includes all features that are not used in MICE; they will be stacked onto the output.
columns_order (list): The original order of the features (columns) in the data set.
hardware_counters (list): Includes the list of all features that are imputed.
"""
with open(config_path, 'r') as config_path:
config_dict = json.load(config_path)
df = pd.read_csv(data_path) # Read the data file
columns_order = list(df.columns.values)
active_features = list(set(list(config_dict['hardware_counters'].values())
+ list(config_dict['complimentary'].values())))
pass_through_features = list(set(list(config_dict['pass_through'].values())
+ list(config_dict['complimentary'].values())))
df = df[active_features] # sub set of features
return (df,
pass_through_features,
columns_order,
list(config_dict['hardware_counters'].values()),
config_dict['hardware_counters'])
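# ---------------------------------------------------------------------------
# Illustrative config.json sketch (hypothetical column names). Only the three
# keys read above are required -- 'hardware_counters', 'complimentary' and
# 'pass_through' -- each mapping arbitrary labels to column names of the csv:
#
# {
#     "hardware_counters": {"1": "PAPI_L1_DCM", "2": "PAPI_L2_DCM"},
#     "complimentary": {"1": "Duration"},
#     "pass_through": {"1": "Object_id", "2": "Timestamp"}
# }
# ---------------------------------------------------------------------------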
def arguments_parser():
""" Parse the arguments
Return:
A dict includes {"configPath", "csvPath", "predict_method", "imputedPath", "iteration"}
"""
# set/receive the arguments
if len(sys.argv) == 1:
# It is used for testing and developing time.
arguments = ['config.json',
'source2.csv',
'-m',
'norm',
'-o',
'imputed.csv'
]
sys.argv.extend(arguments)
else:
pass
# parse the arguments
parser = argparse.ArgumentParser(description='The files that are needed for imputing the missing values.')
parser.add_argument('config', help='A .json configuration file that includes the '
'thread numbers, hardware counters, etc.')
parser.add_argument('csvfile', help='A .csv dataset file')
# MICE prediction method
parser.add_argument('-m',
dest='m',
default='norm',
choices=['norm', 'norm.nob', 'lda', 'qda', 'polyreg', 'logreg'],
action='store',
type=str.lower,
help="The imputation method that is either norm, norm.nob, lda, qda, polyreg or logreg.")
parser.add_argument('-i',
dest='i',
default=10,
action='store',
type=int,
help="It significances the number of the MICE algorithm iteration.")
parser.add_argument('-o',
dest='o',
default='imputed.csv',
action='store',
type=str,
help="path to custom root results directory")
args = parser.parse_args()
return ({"configPath": args.config,
"csvPath": args.csvfile,
"predict_method": args.m,
"imputedPath": args.o,
"iteration": args.i})
def compatible_data_structure(data=None, header=True, index=True):
""" Reconstruct/ uniformize the input data as pandas data frame
Args:
data (a Numpy array or pandas data frame): is the data set
header (boolean): if True, the first row of the data includes the header.
index (boolean): if True, the first column of the data includes the index.
Return:
A pandas data frame.
"""
if data is None:
raise ValueError("The data set is empty")
# Convert to dataframe
def __numpy2panda(_data, _header, _index):
# not empty data set
def __data_shape(__data):
if len(__data.shape) != 2: # Check the shape
raise ValueError("Expected 2d matrix, got %s array" % (__data.shape,))
elif __data.empty:
raise ValueError("Not expected empty data set.")
else:
print("2d matrix is gotten %s array" % (__data.shape,))
if type(_data) is not pd.core.frame.DataFrame:
if _header:
if _index:
dataFrame = pd.DataFrame(data=_data[1:, 1:], # values
index=_data[1:, 0], # 1st column as index
columns=_data[0, 1:])
else:
dataFrame = pd.DataFrame(data=_data[1:, 0:], # values
columns=_data[0, 0:])
elif _index:
dataFrame = pd.DataFrame(data=_data[0:, 1:], # values
index=_data[0:, 0]) # 1st column as index)
else:
dataFrame = pd.DataFrame(data=_data)
else:
dataFrame = _data
__data_shape(dataFrame)
return dataFrame.apply(pd.to_numeric)
return __numpy2panda(data, header, index)
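# Small illustrative example (added, not part of the original module): a 2d
# numpy array whose first row is the header and first column is the index is
# turned into a numeric DataFrame.
# >>> raw = np.array([["ind", "F1", "F2"],
# ...                 ["1", "2", "3"],
# ...                 ["2", "4", "5"]])
# >>> compatible_data_structure(raw, header=True, index=True)
#    F1  F2
# 1   2   3
# 2   4   5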
def missing_pattern_plot(data, method='matrix', plot_name=None):
""" Visualizing the patterns of missing value occurrence.
Args:
data (pandas): Includes the dataset.
method (str): Indicates the plot format ("matrix", "mosaic", "bar", "dendrogram", or "heatmap")
plot_name (str): Identify the plot output file name
Return:
A jpeg image of the missing value patterns
"""
# TODO: visualisation with the other plots such as Strictplot, bwplot, and densityplot
if method.lower() == 'matrix':
msno.matrix(data)
elif method.lower() == 'mosaic':
sns.heatmap(data.isnull(), cbar=False)
elif method.lower() == 'bar':
msno.bar(data)
elif method.lower() == 'dendrogram':
msno.dendrogram(data)
elif method.lower() == 'heatmap':
msno.heatmap(data)
plt.subplots_adjust(top=0.7)
plt.savefig('{}.jpg'.format(plot_name))
plt.show()
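# Typical call (the same form used later in this module): draw the nullity
# matrix of a DataFrame and save it as 'initial_missing_pattern.jpg'.
# >>> missing_pattern_plot(df, method='matrix', plot_name='initial_missing_pattern')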
def dict_inner_joint(dict_left, dict_right):
"""Update the key of the left dictionary with the value of the right dictionary when left_value=right_key.
Args:
dict_left (dict): Includes the left dictionary
dict_right (dict): Includes the right dictionary
Returns:
A dictionary.
"""
new_dict = {}
for item_key, item_value in dict_left.items():
new_dict[dict_right.get(item_key)] = item_value
return new_dict
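# Worked example (added for illustration): every key of the left dict is
# replaced by the value the right dict associates with that same key.
# >>> dict_inner_joint({'PAPI_L1_DCM': 'l1_misses'}, {'PAPI_L1_DCM': 3})
# {3: 'l1_misses'}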
class MissingValuePreProcessing(object):
""" Class to preprocess the missing values
Attributes:
original_data (pandas): includes the original data before imputation.
data (pandas): includes a copy of the original data that is gradually changed during the imputation process.
missed_values_map (tuple): includes two arrays, one with the row indices and one with the column indices of the MVs.
impute_method (str): indicates the initial imputation method (default: Mean).
impute_mask (np.array): indicates the initial imputed value for each predicted feature in every epoch.
imputed_data (pandas): includes the final output (imputed data)
missing_value_number (int): indicates the number of the missing values in a data set
drop_column_threshold (float): defines a threshold that raises a warning about the high proportion of the missing
value in a feature.
drop_column (boolean): if True, features whose missing-value ratio exceeds the threshold are dropped from the output.
inplace (boolean): If True, the original csv file will be replaced by the imputed data set.
not_drop_column_map (dict): represents the name and column number of the features that are kept.
feature_list (list): includes all feature names in the data frame
Methods:
__csv2hdf5: converts a csv file to a hdf5 format.
__log_transformer: Applies a log/exponential transform on the whole data set.
__zero2nan: Replaces the zeros in the indicated features with NaN
_extract_missing_pattern: Analyses the missing data patterns
write_csv: Writes the output (completed data set) into a csv file.
_compatible_data_structure: Initializes and uniforms the input data.
_missing_value_map: Computes the missing value map (pattern)
drop_null_row: Drops the rows that contain NaN values.
drop_column: Drops the columns that are fully NaN for all features.
Properties:
"""
def __init__(self, data=None, missed_values_map=None, impute_method=None, drop_column=False,
not_drop_column_map=dict(), drop_column_threshold=0.60, inplace=False, feature_list=None):
"""
Args:
data:
missed_values_map:
impute_method:
drop_column:
not_drop_column_map:
drop_column_threshold:
inplace:
feature_list:
"""
self.original_data = data
self.original_data_dtypes = None
self.data = data
self.missed_values_map = missed_values_map
self.impute_method = impute_method
self.impute_mask = np.array([])
self.imputed_data = None
self.missing_value_number = None
self.drop_column_threshold = drop_column_threshold
self.drop_column = drop_column
self.inplace = inplace
self.not_drop_column_map = not_drop_column_map # it is a binary array
self.feature_list = feature_list
def __call__(self):
self._compatible_data_structure()
self.__zero2nan(feature_list=self.feature_list)
missing_pattern_plot(data=self.data, plot_name='initial_missing_pattern')
self.__log_transformer()
self._missing_value_map()
self.write_csv()
def __log_transformer(self, inverse=False):
"""Do log/exponential transform.
We use the log and exponential transforms to force the prediction results to be positive.
Args:
inverse (boolean): If True, the exponential function is applied to self.imputed_data; otherwise the log
function is applied to self.data.
Return:
The data frame
"""
if inverse:
self.imputed_data = self.imputed_data.apply(np.exp)
else:
self.data = self.data.apply(np.log)
return self.data
def __zero2nan(self, feature_list=None):
"""Replace the zero in indicated features with NaN
Args:
feature_list (list): indicates features that we would like to do the replacement on them.
Return:
self in order to apply chain action.
"""
if not feature_list:
self.data.replace(0, np.nan, inplace=True)
else:
self.data[feature_list] = self.data[feature_list].replace(0, np.nan)
return self
def _extract_missing_pattern(self):
# TODO: (develop mode) do the imputation based on the complete record availability.
print(self.data.columns)
missing_value_groups = self.data.isnull().groupby(list(self.data.columns)).groups
missing_value_patterns = pd.DataFrame(list(missing_value_groups.keys()), columns=self.data.columns)
print(missing_value_patterns[['PAPI_L2_DCM', 'PAPI_L1_DCM', 'PAPI_BR_INS', 'PAPI_L3_TCM', 'PAPI_BR_MSP']])
print(missing_value_groups)
def __reset_dtypes(self):
""" Reset the data type of the imputed data (output).
It uses the original data types to reconstruct the output. In the case of integer columns, it rounds the data
first and then casts the type.
Returns:
self
"""
for feature, data_type in self.original_data_dtypes.items():
if 'int' in str(data_type):
# Convert the decimal values to the integer
self.imputed_data[feature] = self.imputed_data[feature].round().astype(str(data_type))
else:
self.imputed_data[feature] = self.imputed_data[feature].astype(str(data_type))
return self
def write_csv(self, append_to=None, csv_path=None, order=None, output_path='imputed.csv',
manipulating_list_path='manipulating_list.csv', manipulating=True, feature_dic=None):
""" Write the output as CSV dataset
Args:
append_to (list): the pass_through features that are taken from the original dataset (data) and
appended to the final output.
csv_path (str): Includes the original dataset path.
order (list): Includes the columns order of the final output.
output_path (str): Indicates the output path.
Return:
A string includes the csv_path
"""
if isinstance(self.imputed_data, pd.core.frame.DataFrame):
self.__log_transformer(inverse=True)
self.__reset_dtypes()
appending_columns = pd.read_csv(csv_path, usecols=append_to)
sin_complimentary = list(set(self.imputed_data.columns) - set(appending_columns))
# Todo: (it needs more test) Compute the manipulating list.
if manipulating:
manipulating_list = pd.DataFrame({'row': self.missed_values_map[0],
'Hardware_Counter': self.missed_values_map[1]})
manipulating_list['Values'] = manipulating_list.apply(
lambda item: self.imputed_data.iloc[item.row, item.Hardware_Counter], axis=1)
for column_item in ['Object_id', 'Timestamp']:
manipulating_list[column_item] = manipulating_list.apply(
lambda item: appending_columns.loc[item.row, column_item], axis=1)
manipulating_list = manipulating_list.drop(['row'], axis=1)
mapping_list = dict_inner_joint(dict(map(reversed, feature_dic.items())),
dict(map(reversed, enumerate(self.imputed_data.columns))))
manipulating_list['Hardware_Counter'].replace(mapping_list, inplace=True)
manipulating_order = ['Object_id', 'Timestamp', 'Hardware_Counter', 'Values']
manipulating_list = manipulating_list[manipulating_order].sort_values(by=['Timestamp', 'Object_id'])
manipulating_list.to_csv(manipulating_list_path, index=False)
# Compute the imputed output csv file.
self.imputed_data = pd.concat([appending_columns, self.imputed_data[sin_complimentary]], axis=1)
self.imputed_data = self.imputed_data[order] # reordering the data before writing csv
self.imputed_data.to_csv(output_path, index=False)
del appending_columns # release the memory
else:
warnings.warn('The imputed data has not initiated yet.', UserWarning)
return csv_path
def _compatible_data_structure(self, data=None, header=True, index=True):
""" Initialize and reappear the dataset (Internal)
Args:
data (a Numpy array or pandas data frame): is the data set
header (boolean): if True, the first row of the data includes the header.
index (boolean): if True, the first column of the data includes the index.
Return:
self
"""
def __init(df):
""" Test the empty data set
data (pandas): The input data set
Return:
A pandas data frame
"""
if df is None:
if self.data is None:
raise ValueError("The data set is empty")
else:
pass
else:
self.data = df
def __numpy2panda(headers, indexes):
""" Convert 2D numpy array to pandas data frame.
Args:
headers (Boolean): If true, the first row includes header
indexes (Boolean): If true, the first columns includes indexes
Return:
A pandas data frame
"""
if type(self.data) is not pd.core.frame.DataFrame:
if headers:
if indexes:
self.data = pd.DataFrame(data=self.data[1:, 1:], # values
index=self.data[1:, 0], # 1st column as index
columns=self.data[0, 1:])
else:
self.data = pd.DataFrame(data=self.data[1:, 0:], # values
columns=self.data[0, 0:])
elif indexes:
self.data = pd.DataFrame(data=self.data[0:, 1:], # values
index=self.data[0:, 0]) # 1st column as index)
else:
self.data = pd.DataFrame(data=self.data)
else:
pass
return self.data
def __data_shape():
""" Test the shape of the data frame
The input data has to be 2D (Data Frame)
Return:
"""
if len(self.data.shape) != 2: # Check the shape
raise ValueError("Expected 2d matrix, got %s array" % (self.data.shape,))
elif self.data.empty:
raise ValueError("Not expected empty data set.")
else:
print("The data frame is fitted with shape {}".format(self.data.shape))
return self
__init(data)
__numpy2panda(header, index)
__data_shape()
self.original_data_dtypes = self.data.dtypes
return self
def _missing_value_map(self):
"""Computes the missing value map (pattern)
Return:
"""
def __sort_column_wise(column_wise=True):
""" Sorts the missed Values Map.
Args:
column_wise (Boolean): If True, the missed Values Map is sorted vertically.
"""
if column_wise is None:
# TODO: do the row_wise sort
pass
else:
missing_rows_index = np.array(self.missed_values_map[0])
columns = np.array(self.missed_values_map[1])
if column_wise:
ind = columns.argsort()
missing_rows_index = missing_rows_index[ind]
columns.sort()
else:
ind = missing_rows_index.argsort()
columns = columns[ind]
missing_rows_index.sort()
self.missed_values_map = (missing_rows_index, columns)
rows = self.data.shape[0]
_is_nulls = self.data.isnull()
if not _is_nulls.sum().sum():
raise ValueError('There is not any missing value in data frame.')
elif _is_nulls.all().any():
warnings.warn('All values are missed, therefore imputation is not possible.', UserWarning)
else:
tableData = [['', 'Missed\nValues']]
featureList = self.data.columns.values.tolist()
missedValueList = _is_nulls.sum().tolist()
print(featureList)
for [featureItem, missingValues] in zip(featureList, missedValueList):
missingValues = missingValues / rows
if missingValues < self.drop_column_threshold:
self.not_drop_column_map.update({featureItem: featureList.index(featureItem)})
elif self.drop_column:
self.data = self.data.drop([featureItem], axis=1)
print('\n {} is deleted.'.format(featureItem))
else:
warnings.warn('\n The feature {} has {}% missing values; it should be dropped, or a new data set requested.'.
format(featureItem, missingValues * 100))
sleep(0.01)
decision = input('\n\033[1m\033[95mD\033[0mrop the feature and continue' +
'\n\033[1m\033[95mC\033[0montinue without dropping' +
'\n\033[1m\033[95mE\033[0mxit' +
'\n\033[6mInsert the code(D|C|E):\033[0').upper()
while True:
if decision == 'D':
# fixme: Dropping a column will cause problems when reordering the columns to write the csv
print(self.original_data_dtypes)
self.data = self.data.drop([featureItem], axis=1)
# self.original_data_dtypes = self.original_data_dtypes.drop(featureItem)
print('\n {} is deleted.'.format(featureItem))
break
elif decision == 'C':
self.not_drop_column_map.update({featureItem: featureList.index(featureItem)})
break
elif decision == 'E':
raise ValueError('The data set has massive amount of missing values.')
else:
decision = input('\n\033[6mInsert the code(D|C|E):\033[0')
tableData.append([featureItem,
'{:3.1f}%'.format(missingValues * 100)])
table = DoubleTable(tableData)
table.justify_columns[1] = 'center'
print(table.table)
# Reindexing the self.property based on the feature that are dropped
_is_nulls = self.data.isnull()
# initiate the imputation mask and missed value map
self.missed_values_map = np.asarray(_is_nulls).nonzero()
self.impute_mask = np.zeros(len(self.missed_values_map[0]))
self.missing_value_number = _is_nulls.sum().sum()
__sort_column_wise()
def drop_null_row(self):
""" Drop the full NaN rows.
Return:
An int that indicates the number of the draped rows
"""
if self.inplace:
df_length_before = self.data.shape[0]
self.data = self.data.dropna(how='any', axis=0)
number_of_dropped_rows = df_length_before - self.data.shape[0]
else:
df_length_before = self.imputed_data.shape[0]
self.imputed_data = self.imputed_data.dropna(how='any', axis=0)
number_of_dropped_rows = df_length_before - self.imputed_data.shape[0]
print("{} number of rows have been draped as fully NaN rows.".format(number_of_dropped_rows))
return number_of_dropped_rows
def drop_column(self):
""" Drop the full NaN columns.
Return:
An int that indicates the number of the dropped columns
"""
if self.inplace:
df_length_before = self.data.shape[1]
self.data = self.data.dropna(how='all', axis=1)
number_of_dropped_columns = df_length_before - self.data.shape[1]
else:
df_length_before = self.imputed_data.shape[1]
self.imputed_data = self.imputed_data.dropna(how='all', axis=1)
number_of_dropped_columns = df_length_before - self.imputed_data.shape[1]
print("{} number of rows have been draped as fully NaN columns.".format(number_of_dropped_columns))
return number_of_dropped_columns
def simple_imputation(self, impute_method='imputeMean', inplace=False):
""" Initially impute the missing values.
Args:
impute_method (str): indicates the initial imputation method for the NaNs
'imputeZero',
'imputeMedian',
'imputeMax',
'imputeMin',
'imputeMean',
'imputeGeometricMean',
'imputeHarmonicMean',
inplace (boolean): if True, the imputation will be done on the original dataset.
Returns:
self in order to apply the chain of the functions.
"""
impute_methods = [
'imputeZero',
'imputeMedian',
'imputeMax',
'imputeMin',
'imputeMean',
'imputeGeometricMean',
'imputeHarmonicMean',
None
]
assert impute_method in impute_methods
def __geo_mean(df):
"""Compute the geometric mean of any feature
Args:
df (Pandas): Includes the data
Returns:
A list of the geometric mean for all feature (column)
"""
__geo_means = []
for column_item in df:
no_zero_nan_column_item = list(df[column_item].replace(0, np.nan).dropna(axis=0, how='any'))
__geo_means.append(gmean(no_zero_nan_column_item))
return __geo_means
def __harmo_mean(df):
"""Compute the harmonic mean of any feature
Args:
df (Pandas): Includes the data
Returns:
A list of the harmonic mean for all feature (column)
"""
_harmo_means = []
for column_item in df:
no_zero_nan_column_item = list(df[column_item].replace(0, np.nan).dropna(axis=0, how='any'))
_harmo_means.append(hmean(no_zero_nan_column_item))
return _harmo_means
def __generat_missed_values_map():
""" Generator the missing values map
Yields:
A list of a pair [index, column] of a missing value position in the data.
"""
not_droped_feature_index = self.not_drop_column_map.values()
for [index_item, header_item] in zip(self.missed_values_map[0], self.missed_values_map[1]):
if header_item in not_droped_feature_index:
real_header_index = list(self.not_drop_column_map.values()).index(header_item)
yield [index_item, real_header_index]
def _impute():
"""Applies the initial imputation
Returns:
self
"""
if inplace:
for [index_item, header_item] in zip(self.missed_values_map[0], self.missed_values_map[1]):
self.data.iat[index_item, header_item] = self.impute_mask[header_item]
else:
self.imputed_data = self.data.copy(deep=True)
for [index_item, header_item] in zip(self.missed_values_map[0], self.missed_values_map[1]):
self.imputed_data.iat[index_item, header_item] = self.impute_mask[header_item]
return self
if impute_method == 'imputeZero':
self.impute_mask.fill(0)
elif impute_method == 'imputeMedian':
self.impute_mask = np.array(self.data.median(axis=0, skipna=True))
elif impute_method == 'imputeMax':
self.impute_mask = np.array(self.data.max(axis=0, skipna=True))
elif impute_method == 'imputeMin':
self.impute_mask = np.array(self.data.min(axis=0, skipna=True))
elif impute_method == 'imputeMean':
self.impute_mask = np.array(self.data.mean(axis=0, skipna=True))
elif impute_method == 'imputeGeometricMean':
self.impute_mask = np.array(__geo_mean(self.data))
elif impute_method == 'imputeHarmonicMean':
self.impute_mask = np.array(__harmo_mean(self.data))
else:
raise ValueError('\n Initial impute method is selected \n ')
_impute()
return self
class Mice(MissingValuePreProcessing):
"""Multiple imputation by chained equation.
Attributes:
impute_method (str): Indicate the initiate imputation method.
train_subset_x (pandas): Includes a slice of independent variables for training the predictive equation.
test_subset_x (pandas): Includes a slice of independent variables for testing the predictive equation.
train_subset_y (pandas): Includes a slice of dependent variables for training the predictive equation.
test_subset_y (pandas): Includes a slice of dependent variables for testing the predictive equation.
iteration (int): Indicates the number of imputation epochs.
iteration_log (list): Includes the per-iteration lists of the imputed values.
predict_Method (str): Indicates the predictive model.
Methods:
predictive_model:
__place_holder:
__impute:
imputer:
Output:
The standard outputs of the mice are:
- imputed.csv: includes the completed csv file
- manipulating_list.csv: includes the list of the changes/imputation which was applied
"""
def __init__(self, data=None, impute_method=None, predict_method='norm', iteration=10, feature_list=None):
"""Initiate the MICE class object.
Args:
data (pandas): Includes the data set
impute_method (str): indicates the initiate imputation method of the NaNs
predict_method (str): indicates the mice predictive method.
iteration (int): Indicates the number of imputation epochs.
feature_list (list): Includes the feature list
"""
super(Mice, self).__init__(data=data, feature_list=feature_list)
self.impute_method = impute_method
self.train_subset_x = None
self.test_subset_x = None
self.train_subset_y = None
self.test_subset_y = None
self.iteration = iteration
self.iteration_log = np.zeros(shape=(0, 0))
self.predict_Method = predict_method
def __call__(self):
super(Mice, self).__call__()
# After running the supper __call__, we need to reshape the iteration log.
self.iteration_log = np.zeros(shape=(self.iteration, self.missing_value_number))
self.imputer()
missing_pattern_plot(self.imputed_data, method='matrix', plot_name='imputed_missing_pattern')
def predictive_model(self):
"""Setup and predict the missing values.
Returns:
A numpy array includes the predicted value.
Note:
- QDA is sensitive about the number of the instances in a class (>1).
"""
# TODO: complete the function list
# TODO: Write the customised functions and define the functions (Tensor, Fuzzy, Fourier model, ...)
# fixme: test the function's quality
methods = [
'pmm', # Predictive mean matching (numeric) fixme
'norm', # Bayesian liner regression (numeric)
'norm.nob', # Linear regression, non-Bayesian (numeric)
'mean.boot', # Linear regression with bootstrap (numeric) fixme
'mean', # Unconditional mean imputation (numeric) fixme
'2l.norm', # Two-level linear model (numeric) fixme
'logreg', # Logistic regression (factor, level2)
'logreg.bot', # Logistic regression with bootstrap (factor, level2) fixme
'polyreg', # Multinomial logit model (factor > level2)
'lda', # Linear discriminant analysis (factor)
'qda', # QuadraticDiscriminantAnalysis (factor),
'SRS', # Simple random sampling fixme
'fuzzy', # fixme
'KNN', # fixme
None
]
assert self.predict_Method in methods
def modeler(method_to_run, scale_in_range=True): # Receives the function as parameter
"""Fit the predictive model
Receives the function as parameter
Args:
method_to_run (str): Indicate the name of the predictive model.
scale_in_range (boolean): If True, the predicted values are scaled into the observed range.
Returns:
A numpy array includes the predicted value.
"""
# Fitting the training y, it is needed when we are using 'sklearn' package.
flat_train_y = np.array(self.train_subset_y.iloc[:, 0].values.tolist())
# Create linear regression object
predictor = method_to_run
# Train the model using the training sets
predictor.fit(self.train_subset_x, flat_train_y)
# Make predictions using the testing set
predictedY = predictor.predict(self.test_subset_x)
# The predicted values -> print(predictedY)
# The coefficients -> print('Coefficients: \n', predictor.coef_)
# scale the predicted values in a range of the real values.
if scale_in_range:
v_func = np.vectorize(scale_into_range)
predictedY = v_func(predictedY, min(predictedY), max(predictedY), min(flat_train_y), max(flat_train_y))
# standardise the output format 2D np.array
if not any(isinstance(e, np.ndarray) for e in predictedY):
predictedY = np.array([np.array([element]) for element in predictedY])
itemSize = set([element.size for element in predictedY])
if bool(itemSize.difference({1})):
raise ValueError(
'\n MICE Predication Error: The prediction method {} output is not standardised.'.format(
self.predict_Method))
return predictedY
# MICE prediction method switch-case
if self.predict_Method == 'norm.nob':
method = linear_model.LinearRegression(fit_intercept=False)
elif self.predict_Method == 'norm':
method = linear_model.BayesianRidge(compute_score=True)
elif self.predict_Method == 'lda':
method = discriminant_analysis.LinearDiscriminantAnalysis()
elif self.predict_Method == 'qda':
method = discriminant_analysis.QuadraticDiscriminantAnalysis()
elif self.predict_Method == 'polyreg':
method = linear_model.LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial')
elif self.predict_Method == 'logreg':
method = linear_model.LogisticRegression(random_state=0, solver='sag', multi_class='ovr')
return modeler(method)
def __place_holder(self, feature_item):
""" Hold the missing value place.
In each imputation epoch, we need to create the train and test data. So, after each iteration we need to
retrieve the original places of the missing data in order to create the test data set.
Args:
feature_item (str): Indicate the dependent feature that will be predicted.
Returns:
A list of the row index that indicates the missing values places in feature item.
"""
feature_name = self.data.columns.values.tolist()[feature_item]
place_holder_column_index = list(map(lambda x: 1 if x == feature_item else 0, self.missed_values_map[1]))
place_holder_rows = list(itertools.compress(self.missed_values_map[0], place_holder_column_index))
# Converting the rows coordinate to the data frame Index before imputing the None
place_holder_row_index = [self.data.index.tolist()[x] for x in place_holder_rows]
if self.inplace:
self.data.loc[place_holder_row_index, feature_name] = None
train_subset = self.data[self.data[feature_name].notnull()]
test_subset = self.data[self.data[feature_name].isnull()]
else:
self.imputed_data.loc[place_holder_row_index, feature_name] = None
train_subset = self.imputed_data[self.imputed_data[feature_name].notnull()]
test_subset = self.imputed_data[self.imputed_data[feature_name].isnull()]
self.train_subset_x = train_subset.drop(feature_name, axis=1).copy()
self.train_subset_y = train_subset[[feature_name]].copy()
self.test_subset_x = test_subset.drop(feature_name, axis=1).copy()
self.test_subset_y = test_subset[[feature_name]].copy()
return place_holder_rows
def __impute(self, row_indexes=None, predicted_values=None, column_index=None):
"""Update the data set with the imputed values.
In each epoch, the original/imputed data set is updated with the predicted values.
Args:
row_indexes (list): Includes the missing value row index.
predicted_values (list): Includes the predicted values that is ordered by the index.
column_index (str): Indicates the predicted (dependent) feature name.
Returns:
self
"""
if self.inplace:
for [rowIndex, predicted_value] in zip(row_indexes, predicted_values):
self.data.iat[rowIndex, column_index] = predicted_value[0]
else:
for [rowIndex, predicted_value] in zip(row_indexes, predicted_values):
self.imputed_data.iat[rowIndex, column_index] = predicted_value[0]
return self
def imputer(self):
"""Imputation engine
It does the imputation feature by feature, and repeats the imputation for the given number of iterations.
Returns:
self
"""
def __plot_conversion(missing_value_index=0):
"""Plot the conversion curve
We use this function in order to test_cp or visualising the progressive conversion of a missing value.
Args:
missing_value_index (int): Indicates the value that we would like to see it's conversion plot.
"""
plt.plot(list(range(0, self.iteration)),
self.iteration_log[:, missing_value_index],
'bo',
list(range(0, self.iteration)),
self.iteration_log[:, missing_value_index],
'k')
plt.axis([0, self.iteration,
np.min(self.iteration_log[:, missing_value_index]) - 1,
np.max(self.iteration_log[:, missing_value_index]) + 1])
plt.ylabel('Iteration')
plt.show()
def __mean_squared_displacement_plot(root=True):
"""Plot Mean Squared Displacement.
We use this function to inspect or visualise the progressive Mean Squared Displacement.
Args:
root (boolean): If True, the root mean squared displacement will be computed
"""
iteration_log_df = pd.concat([pd.DataFrame(self.impute_mask).T,
pd.DataFrame(self.iteration_log)])
msd = iteration_log_df.diff().apply(np.square).mean(axis=1)[1:]
msd.index = msd.index + 1
y_lab = "Mean Squared Displacement (MSD)"
if root:
msd = iteration_log_df.diff().apply(np.square).mean(axis=1).apply(np.sqrt)
y_lab = "Root Mean Squared Displacement (RMSD)"
plt.plot(msd.values, marker='o')
plt.xlabel('Iteration')
plt.ylabel(y_lab)
plt.title('Stabilization Curve of Multiple Imputation ({})'.format(y_lab))
plt.legend([y_lab])
plt.grid()
plt.xticks(range(1, self.iteration + 1))
plt.savefig('Root_Mean_Squared_Displacement.jpg')
plt.show()
__feature_with_none = set(self.missed_values_map[1])
self.simple_imputation(impute_method='imputeMean') # Step1: Mice
iterations = iter(range(0, self.iteration))
__done_loop = False
while not __done_loop:
try:
iteration = next(iterations)
print('-' * 100, '\n', 'The iteration {} is started:'.format(iteration + 1), '\n', '-' * 100)
impute_values_ordered_by_col = []
for feature_item in __feature_with_none:
row_indexes = self.__place_holder(feature_item=feature_item) # Step2: Mice
predicted_values = self.predictive_model() # methodName='norm'
self.__impute(row_indexes, predicted_values, feature_item)
print(predicted_values.ravel().tolist())
impute_values_ordered_by_col.append(list(predicted_values.flatten()))
except StopIteration:
__done_loop = True
else:
# Flatten the list of lists and add to the iteration log
self.iteration_log[iteration] = np.exp(list(itertools.chain(*impute_values_ordered_by_col)))
table = DoubleTable(self.iteration_log.tolist())
table.inner_heading_row_border = False
table.justify_columns[1] = 'center'
__mean_squared_displacement_plot()
__plot_conversion()
return self
def test_likelihood(self, _feature=None, _plot=True, _log=True):
""" Measure Probability Distribution Similarity
Performs the one-sample Kolmogorov-Smirnov test for goodness of fit.
The one-sample test compares the underlying distribution F(x) of a sample against a given distribution G(x).
The two-sample test compares the underlying distributions of two independent samples. Both tests are valid only
for continuous distributions.
The original data (self.original_data) is compared against the imputed data (self.imputed_data).
Args:
_feature (str): indicates the feature name to test. If None, it computes the average test value over all features.
_plot (boolean): plot the histogram. Note that this is only applicable when a single feature is given.
_log (boolean): compute the logarithmic value of the features
Returns:
statistic (float): KS test statistic, either D, D+ or D-.
pvalue (float): One-tailed p-value.
Note:
See https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html
"""
x = self.original_data.copy(deep=True)
y = self.imputed_data.copy(deep=True)
if _feature is None:
_kstest_dict = pd.DataFrame(columns=['Feature', 'Statistic', 'Pvalue'])
for _feature_item in self.feature_list:
_kstest_temp = stats.kstest(x[_feature_item], y[_feature_item])
_kstest_dict = _kstest_dict.append({'Feature': _feature_item,
'Statistic': _kstest_temp[0],
'Pvalue': _kstest_temp[1]}, ignore_index=True)
_kstest_dict = _kstest_dict.append(_kstest_dict.agg('mean', numeric_only=True), ignore_index=True)
_kstest_dict.at[_kstest_dict.index[-1], 'Feature'] = 'mean'
else:
_kstest_dict = {_feature: stats.kstest(x[_feature], y[_feature])}
if _plot and _feature:
if _log:
x[_feature], y[_feature] = np.log(x[_feature]), np.log(y[_feature])
x, y = x.replace([np.inf, -np.inf], np.nan), y.replace([np.inf, -np.inf], np.nan)
x, y = x.dropna(), y.dropna()
sns.distplot(y[_feature], label='Imputed')
sns.distplot(x[_feature], label='Original')
plt.title('Distribution Likelihood of {}'.format(_feature))
plt.legend()
plt.show()
print(_kstest_dict)
return _kstest_dict
# TODO: Post-possessing
"""
- Post-possessing ( Non-negative,
Integer ,
In the boundary)
"""
# TODO: Define the constraints
"""
- Define the constraints (Fully conditional specification-FCS,
Monotone data imputation,
Joint modeling)
"""
def __mice():
"""Handling the command line
Example:
$ python3 mice.py config/config_IM_gromacs_64p.json ../parser/source.csv -i 10
Returns:
An object of class MICE.
"""
start = time.time()
try:
args = arguments_parser()
df, features_appending_list, columns_order, feature_list, feature_dic = ___config(args['configPath'],
args['csvPath'])
_mice = Mice(df, predict_method=args['predict_method'], iteration=args['iteration'], feature_list=feature_list)
_mice()
_mice.write_csv(output_path=args['imputedPath'],
append_to=features_appending_list,
csv_path=args['csvPath'],
order=columns_order,
manipulating=True,
feature_dic=feature_dic)
print(_mice.test_likelihood())
print("\033[32mThe missing value imputation process is successfully completed by MICE method.")
return _mice
except AssertionError as error:
print(error)
print("\033[31mThe missing value imputation proses is failed.")
finally:
duration = time.time() - start
print('\033[0mTotal duration is: %.3f' % duration)
# ---------------------------------------------------------------------------
def __test_me():
data = np.array([("ind", "F1", "F2", "F3", "F4", "F5", "F6"),
(1, 2, 0, 13, None, 12, None),
(2, 2, 45, 23, 24, 13, 16),
(3, 4, 45, 23, 24, 19, 16),
(4, 2, 44, 23, 22, 13, 11),
(5, 4, 7, 50, 5, 20, 89),
(6, None, None, 34, 7, None, 67)])
obj = Mice(data)
print(obj.original_data)
obj()
print(obj.imputed_data)
print(obj.missed_values_map)
def __test_me_iris():
from sklearn import datasets
from sklearn.metrics import r2_score
import random
data = datasets.load_iris().data[:, :4]
data = pd.DataFrame(data, columns=['F1', 'F2', 'F3', 'F4'])
data1 = data.copy()
x = []
y = []
old_value = []
mean_ind = data1.mean(axis=0).values
mean_list = []
for i in range(16):
xv = random.randint(0, 145)
yv = random.randint(0, 3)
old_value.append(data1.iloc[xv, yv])
mean_list.append(mean_ind[yv])
data.iloc[xv, yv] = np.NaN
x.append(xv)
y.append(yv)
obj = Mice(data, iteration=100)
obj()
pred = []
for i, j, v in zip(x, y, old_value):
print(i, j, '--', v, '-', obj.imputed_data.iloc[i, j])
pred.append(obj.imputed_data.iloc[i, j])
print(r2_score(old_value, pred, multioutput='variance_weighted'))
print(1 - (1 - r2_score(old_value, pred, multioutput='variance_weighted')) * (data1.shape[0] - 1) / (
data1.shape[0] - data1.shape[1] - 1))
print('-+' * 30)
print(r2_score(old_value, mean_list, multioutput='variance_weighted'))
print(1 - (1 - r2_score(old_value, mean_list, multioutput='variance_weighted')) * (data1.shape[0] - 1) / (
data1.shape[0] - data1.shape[1] - 1))
if __name__ == '__main__':
object1 = __mice() | PypiClean |
/BuildStream-external-0.30.0.tar.gz/BuildStream-external-0.30.0/bst_external/elements/flatpak_repo.py | from buildstream import ScriptElement, Scope, ElementError
class FlatpakRepoElement(ScriptElement):
BST_ARTIFACT_VERSION = 1
def configure(self, node):
self.node_validate(node, ['environment', 'copy-refs', 'repo-mode', 'arch', 'branch'])
self._env = self.node_get_member(node, list, 'environment')
self._copy_refs = []
for subnode in self.node_get_member(node, list, 'copy-refs'):
self.node_validate(subnode, ['src', 'dest'])
self._copy_refs.append((self.node_subst_member(subnode, 'src'),
self.node_subst_member(subnode, 'dest')))
self._arch = self.node_subst_member(node, 'arch')
self._branch = self.node_subst_member(node, 'branch')
self.set_work_dir()
self.set_root_read_only(True)
self._repo_mode = self.node_subst_member(node, 'repo-mode')
self.set_install_root('/buildstream/repo')
self.add_commands('init repository',
['ostree init --repo=/buildstream/repo --mode={}'.format(self._repo_mode)])
def _layout_flatpaks(self, elements):
def staging_dir(elt):
return '/buildstream/input/{}'.format(elt.name)
def export_command(elt):
return 'flatpak build-export --files=files --arch={} /buildstream/repo {} {}'\
.format(self._arch, staging_dir(elt), self._branch)
for elt in elements:
if elt.get_kind() == 'flatpak_image':
self.layout_add(elt.name, staging_dir(elt))
self.add_commands('export {}'.format(elt.name), [export_command(elt)])
elif elt.get_kind() == 'stack':
self._layout_flatpaks(elt.dependencies(Scope.RUN, recurse=False))
else:
raise ElementError('Dependency {} is not of kind flatpak_image'.format(elt.name))
def _construct_env(self):
for name in self._env:
element = self.search(Scope.BUILD, name)
if element is None:
raise ElementError("No element in dependencies matching {name}".format(name=name))
yield element
def stage(self, sandbox):
env = list(self._construct_env())
flatpaks = [elt for elt in self.dependencies(Scope.BUILD, recurse=False) if elt not in env]
for elt in env:
self.layout_add(elt.name, '/')
self._layout_flatpaks(flatpaks)
for src, dest in self._copy_refs:
self.add_commands('copy ref {} -> {}'.format(src, dest),
['flatpak build-commit-from --src-ref={} /buildstream/repo {}'.format(src, dest)])
super(FlatpakRepoElement, self).stage(sandbox)
def get_unique_key(self):
return {
'environment': self._env,
'copy-refs': self._copy_refs,
'repo-mode': self._repo_mode,
'arch': self._arch,
'branch': self._branch
}
# Plugin entry point
def setup():
return FlatpakRepoElement | PypiClean |
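# ---------------------------------------------------------------------------
# Illustrative element configuration (hypothetical element and ref names;
# sketch only). The keys mirror exactly what configure() validates above:
# 'environment', 'copy-refs', 'repo-mode', 'arch' and 'branch'.
#
# kind: flatpak_repo
# config:
#   environment:
#   - flatpak-tools.bst          # element providing the flatpak/ostree tools
#   repo-mode: archive-z2
#   arch: x86_64
#   branch: stable
#   copy-refs:
#   - src: app/org.example.App/x86_64/master
#     dest: app/org.example.App/x86_64/stable
# ---------------------------------------------------------------------------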
/LDB_Algebra-0.3.2.tar.gz/LDB_Algebra-0.3.2/ldb/algebra/expression.py |
from __future__ import absolute_import
import operator
import ast
import ldb.algebra.manager
def freezedict(dict_):
return tuple(sorted(dict_.items()))
class MethodCache(object):
def __init__(self):
self.order = []
self.cache = {}
self.max_len = 5
def __contains__(self, key):
return key in self.order
def get_result(self, key):
# TODO: Compare performance of check-then-get vs. get-and-handle-exception
if key in self.order:
return self.cache[key]
def store_result(self, key, val):
try:
hash(key)
except TypeError:
# No point in any of this if we can't store in dictionary
return
if key in self.order:
self.order.remove(key)
elif len(self.order) >= self.max_len:
to_kill = self.order[0]
del self.cache[to_kill]
del self.order[0]
self.cache[key] = val
self.order.append(key)
def make_function(expression, order):
return_expression = expression.ast()
imports = expression.ast_imports()
return_statement = ast.Return(return_expression, lineno=1)
args = [ast.Name(id=var_name, ctx=ast.Param()) for var_name in order]
func = ast.FunctionDef(name='f', args=ast.arguments(args, None, None, []),
body=[return_statement], decorator_list=[], lineno=1)
import_aliases = [ast.alias(name=import_name, asname=None)
for import_name in imports]
module = ast.Module(body=[ast.ImportFrom("__future__",
[ast.alias(name="division",
asname=None)], 0)] +
[ast.Import(import_aliases) for _ in [1]
if len(import_aliases) > 0] +
[func])
ast.fix_missing_locations(module)
module_compiled = compile(module, filename='<ast>', mode='exec')
my_globals = {}
exec module_compiled in my_globals, None
return my_globals['f']
def ast_(expression):
try:
return expression.ast()
except:
return ast.Num(expression)
def bind_kw(expression, **binding):
return bind(expression, binding)
def bind(expression, binding):
if hasattr(expression, 'bind'):
return expression.bind(binding)
else:
return expression
def differentiate(expression, variable):
if hasattr(expression, 'differentiate'):
return expression.differentiate(variable)
else:
return 0
class Expression(object):
_manager = ldb.algebra.manager.ExpressionManager()
def __init__(self):
pass
def ast_imports(self):
return set(())
def __mul__(self, other):
if other == 0:
return 0
elif other == 1:
return self
return Product(self, other)
def __rmul__(self, other):
if other == 0:
return 0
elif other == 1:
return self
return Product(other, self)
def __add__(self, other):
if other == 0:
return self
return Sum(self, other)
def __radd__(self, other):
if other == 0:
return self
return Sum(other, self)
def __sub__(self, other):
if self == other:
return 0
if other == 0:
return self
return Sum(self, -1*other)
def __rsub__(self, other):
if self == other:
return 0
if other == 0:
return -self
return Sum(other, -1*self)
def __truediv__(self, other):
if other == 1:
return self
return Quotient(self, other)
def __rtruediv__(self, other):
if other == 0:
return 0
return Quotient(other, self)
def __div__(self, other):
if other == 1:
return self
return Quotient(self, other)
def __rdiv__(self, other):
if other == 0:
return 0
return Quotient(other, self)
def __neg__(self):
return -1*self
def __pow__(self, other):
return Power(self, other)
class Variable(Expression):
def __init__(self, var_name):
self.var_name = var_name
self.support = set((self,))
def ast(self):
return ast.Name(self.var_name, ast.Load())
def bind(self, binding):
if self in binding:
return binding[self]
elif self.var_name in binding:
return binding[self.var_name]
else:
return self
def differentiate(self, variable):
if self == variable or self.var_name == variable:
return 1
else:
return 0
def __hash__(self):
return hash(self.var_name)
def __eq__(self, other):
try:
return self.var_name == other.var_name
except:
return False
def __repr__(self):
return self.var_name
class VectorVariableIndexed(Expression):
def __init__(self, vector_variable, indexer):
self.vector_variable = vector_variable
self.indexer = indexer
self.support = set((self,))
def bind(self, binding):
if self in binding:
return binding[self][self.indexer]
vector_variable_name = self.vector_variable.vec_var_name
if vector_variable_name in binding:
return binding[vector_variable_name][self.indexer]
else:
return self
# TODO: Implement differentiation
class VectorVariable(object):
def __init__(self, vec_var_name, length):
self.vec_var_name = vec_var_name
self.length = length
def __getitem__(self, index):
return VectorVariableIndexed(self, index)
def __iter__(self):
for i in xrange(self.length):
yield self[i]
def __len__(self):
return self.length
class CachedExpression(Expression):
def __init__(self, children, comparable):
self.children = children
self.comparable = comparable
self.support = set()
for child in self.children:
self.support |= getattr(child, 'support', set())
self._binding_cache = MethodCache()
self._differentiate_cache = MethodCache()
def ast_imports(self):
imports = self.ast_extra_imports()
for child in self.children:
try:
imports |= child.ast_imports()
except:
pass
return imports
def ast_extra_imports(self):
return set(())
def bind(self, binding):
hash_binding = freezedict(binding)
if hash_binding in self._binding_cache:
return self._binding_cache.get_result(hash_binding)
else:
val = self._bind(binding)
self._binding_cache.store_result(hash_binding, val)
return val
def differentiate(self, variable):
if variable in self._differentiate_cache:
return self._differentiate_cache.get_result(variable)
else:
val = self._differentiate(variable)
self._differentiate_cache.store_result(variable, val)
return val
def __hash__(self):
return hash(self.comparable)
def __eq__(self, other):
try:
return self.comparable == other.comparable
except:
return False
class Product(CachedExpression):
def __init__(self, a, b):
super(Product, self).__init__((a,b), ('Product', frozenset((a, b))))
def ast(self):
return ast.BinOp(ast_(self.children[0]), ast.Mult(),
ast_(self.children[1]))
def _bind(self, binding):
return bind(self.children[0], binding) * bind(self.children[1], binding)
def _differentiate(self, variable):
a = self.children[0]
b = self.children[1]
da = differentiate(a, variable)
db = differentiate(b, variable)
return b * da + a * db
def __repr__(self):
return "(%s * %s)"%(repr(self.children[0]), repr(self.children[1]))
class Quotient(CachedExpression):
def __init__(self, a, b):
super(Quotient, self).__init__((a,b), ('Quotient', (a, b)))
def ast(self):
return ast.BinOp(ast_(self.children[0]), ast.Div(),
ast_(self.children[1]))
def _bind(self, binding):
return operator.truediv(bind(self.children[0], binding),
bind(self.children[1], binding))
def _differentiate(self, variable):
a = self.children[0]
b = self.children[1]
da = differentiate(a, variable)
db = differentiate(b, variable)
return (b * da - a * db) / (b*b)
def __repr__(self):
return "(%s / %s)"%(repr(self.children[0]), repr(self.children[1]))
class Sum(CachedExpression):
def __init__(self, a, b):
super(Sum, self).__init__((a,b), ('Sum', frozenset((a, b))))
def ast(self):
return ast.BinOp(ast_(self.children[0]), ast.Add(),
ast_(self.children[1]))
def _bind(self, binding):
return bind(self.children[0], binding) + bind(self.children[1], binding)
def _differentiate(self, variable):
a = self.children[0]
b = self.children[1]
da = differentiate(a, variable)
db = differentiate(b, variable)
return da + db
def __repr__(self):
return "(%s + %s)"%(repr(self.children[0]), repr(self.children[1]))
class Power(CachedExpression):
def __init__(self, a, b):
super(Power, self).__init__((a,b), ('Power', (a, b)))
def ast(self):
return ast.BinOp(ast_(self.children[0]), ast.Pow(),
ast_(self.children[1]))
def _bind(self, binding):
return (bind(self.children[0], binding) **
bind(self.children[1], binding))
def _differentiate(self, variable):
import ldb.algebra.math
a = self.children[0]
b = self.children[1]
da = differentiate(a, variable)
db = differentiate(b, variable)
return (da * b * (a ** (b - 1)) +
db * ldb.algebra.math.log(a) * (a ** b))
def __repr__(self):
return "(%s ** %s)"%(repr(self.children[0]), repr(self.children[1])) | PypiClean |
/Downpour-0.2.tar.gz/Downpour-0.2/ez_setup.py | import sys
DEFAULT_VERSION = "0.6c11"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c10-py2.3.egg': 'ce1e2ab5d3a0256456d9fc13800a7090',
'setuptools-0.6c10-py2.4.egg': '57d6d9d6e9b80772c59a53a8433a5dd4',
'setuptools-0.6c10-py2.5.egg': 'de46ac8b1c97c895572e5e8596aeb8c7',
'setuptools-0.6c10-py2.6.egg': '58ea40aef06da02ce641495523a0b7f5',
'setuptools-0.6c11-py2.3.egg': '2baeac6e13d414a9d28e7ba5b5a596de',
'setuptools-0.6c11-py2.4.egg': 'bd639f9b0eac4c42497034dec2ec0c2b',
'setuptools-0.6c11-py2.5.egg': '64c94f3bf7a72a13ec83e0b24f2749b2',
'setuptools-0.6c11-py2.6.egg': 'bfa92100bd772d5a213eedd356d64086',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5
def _validate_md5(egg_name, data):
if egg_name in md5_data:
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
def do_download():
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first, using 'easy_install -U setuptools'."
"\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
del pkg_resources, sys.modules['pkg_resources'] # reload ok
return do_download()
except pkg_resources.DistributionNotFound:
return do_download()
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:]) | PypiClean |
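# Typical usage (for reference): a project's setup.py bootstraps setuptools by
# importing this module before calling setup(), e.g.
#
#     from ez_setup import use_setuptools
#     use_setuptools()
#     from setuptools import setup
#     setup(...)
#
# Running `python ez_setup.py` directly installs or upgrades setuptools itself.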
/BRAILS-3.0.1.tar.gz/BRAILS-3.0.1/brails/modules/FoundationClassifier/csail_segmentation_tool/csail_seg/lib/nn/modules/comm.py |
import queue
import collections
import threading
__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']
class FutureResult(object):
"""A thread-safe future implementation. Used only as one-to-one pipe."""
def __init__(self):
self._result = None
self._lock = threading.Lock()
self._cond = threading.Condition(self._lock)
def put(self, result):
with self._lock:
            assert self._result is None, 'Previous result hasn\'t been fetched.'
self._result = result
self._cond.notify()
def get(self):
with self._lock:
if self._result is None:
self._cond.wait()
res = self._result
self._result = None
return res
_MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])
_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])
class SlavePipe(_SlavePipeBase):
"""Pipe for master-slave communication."""
def run_slave(self, msg):
self.queue.put((self.identifier, msg))
ret = self.result.get()
self.queue.put(True)
return ret
class SyncMaster(object):
"""An abstract `SyncMaster` object.
    - During replication, as the data-parallel wrapper triggers a callback for each module, all slave
      devices should call `register_slave(id)` and obtain a `SlavePipe` to communicate with the master.
    - During the forward pass, the master device invokes `run_master`; all messages from the slave
      devices are collected and passed to the registered callback.
    - After receiving the messages, the master device gathers the information and determines the
      message to be passed back to each slave device.
"""
def __init__(self, master_callback):
"""
Args:
master_callback: a callback to be invoked after having collected messages from slave devices.
"""
self._master_callback = master_callback
self._queue = queue.Queue()
self._registry = collections.OrderedDict()
self._activated = False
def register_slave(self, identifier):
"""
        Register a slave device.
Args:
identifier: an identifier, usually is the device id.
Returns: a `SlavePipe` object which can be used to communicate with the master device.
"""
if self._activated:
assert self._queue.empty(), 'Queue is not clean before next initialization.'
self._activated = False
self._registry.clear()
future = FutureResult()
self._registry[identifier] = _MasterRegistry(future)
return SlavePipe(identifier, self._queue, future)
def run_master(self, master_msg):
"""
Main entry for the master device in each forward pass.
        The messages are first collected from each device (including the master device), and then
        a callback is invoked to compute the message to be sent back to each device
        (including the master device).
        Args:
            master_msg: the message that the master wants to send to itself. This will be placed as the first
message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.
Returns: the message to be sent back to the master device.
"""
self._activated = True
intermediates = [(0, master_msg)]
for i in range(self.nr_slaves):
intermediates.append(self._queue.get())
results = self._master_callback(intermediates)
        assert results[0][0] == 0, 'The first result should belong to the master.'
for i, res in results:
if i == 0:
continue
self._registry[i].result.put(res)
for i in range(self.nr_slaves):
assert self._queue.get() is True
return results[0][1]
@property
def nr_slaves(self):
        return len(self._registry)
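

# --- Illustrative usage sketch (not part of the original module) ---
# The names below (``_demo_callback``, ``_demo_sync_master``) and the plain
# threading setup are assumptions for illustration; in practice the master and
# slave calls are issued from replicated data-parallel modules.
def _demo_sync_master():
    import threading

    def _demo_callback(intermediates):
        # ``intermediates`` is a list of (identifier, message) pairs with the
        # master (identifier 0) first; broadcast the summed value back.
        total = sum(msg for _, msg in intermediates)
        return [(identifier, total) for identifier, _ in intermediates]

    master = SyncMaster(_demo_callback)
    pipes = [master.register_slave(i) for i in (1, 2)]

    received = []

    def _slave(pipe, value):
        received.append(pipe.run_slave(value))

    threads = [threading.Thread(target=_slave, args=(pipe, value))
               for pipe, value in zip(pipes, (1, 2))]
    for thread in threads:
        thread.start()
    master_result = master.run_master(0)  # the master contributes 0
    for thread in threads:
        thread.join()
    # master_result == 3 and each slave also receives 3
    return master_result, received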

# File: microfire_sht3x-0.9.0/src/Microfire_SHT3x/Microfire_SHT3x.py
import smbus  # pylint: disable=E0401
import time, math
SHT3x_I2C_ADDRESS = 0x44
def exception_catch(func):
def func_wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
print(e)
return None
return func_wrapper
class Microfire_SHT3x():
tempC = 0
tempF = 0
vpd_kPa = 0
dew_pointC = 0
dew_pointF = 0
RH = 0
status = 0
_address = 0
_i2cPort = 0
status_string = ["no error", "not connected", "crc error"]
@exception_catch
def begin(self, i2c_bus=1):
self._address = SHT3x_I2C_ADDRESS
self._i2cPort = smbus.SMBus(i2c_bus)
@exception_catch
def connected(self):
try:
self._i2cPort.write_quick(self._address)
return True
except IOError:
return False
@exception_catch
def measure(self):
        if not self.connected():
self.status = 1
return
self._i2cPort.write_i2c_block_data(SHT3x_I2C_ADDRESS, 0x30, [0xA2]) # reset
time.sleep(1 / 1000)
self._i2cPort.write_i2c_block_data(SHT3x_I2C_ADDRESS, 0x24, [0x00]) # high accuracy, no clock stretching
time.sleep(15 / 1000.0)
data = self._i2cPort.read_i2c_block_data(SHT3x_I2C_ADDRESS, 0, 6)
if (data[2] == self._crc(bytearray([data[0], data[1]]))):
self.tempC = ((((data[0] * 256.0) + data[1]) * 175) / 65535.0) - 45
self.tempF = (self.tempC * 1.8) + 32
self.status = 0
else:
self.tempC = 0
self.tempF = 0
self.vpd_kPa = 0
self.dew_pointC = 0
self.dew_pointF = 0
self.RH = 0
            self.status = 2  # index of "crc error" in status_string
return
if (data[5] == self._crc(bytearray([data[3], data[4]]))):
self.RH = 100 * (data[3] * 256 + data[4]) / 65535.0
# vpd
es = 0.61078 * math.exp(17.2694 * self.tempC / (self.tempC + 238.3))
ae = self.RH / 100 * es
self.vpd_kPa = es - ae
#dp
tem = -1.0 * self.tempC
esdp = 6.112 * math.exp(-1.0 * 17.67 * tem / (243.5 - tem))
ed = self.RH / 100.0 * esdp
eln = math.log(ed / 6.112)
self.dew_pointC = -243.5 * eln / (eln - 17.67)
self.dew_pointF = (self.dew_pointC * 1.8) + 32
self.status = 0
else:
self.tempC = 0
self.tempF = 0
self.vpd_kPa = 0
self.dew_pointC = 0
self.dew_pointF = 0
self.RH = 0
            self.status = 2  # index of "crc error" in status_string
return
@exception_catch
def _crc(self, data):
crc = 0xff
for byte in data:
crc ^= byte
for _ in range(8):
if crc & 0x80:
crc <<= 1
crc ^= 0x131
else:
crc <<= 1
        return crc
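

# --- Illustrative usage sketch (not part of the original module) ---
# Assumes a sensor wired to I2C bus 1; the bus number and the printed layout
# are examples only.
if __name__ == "__main__":
    sht = Microfire_SHT3x()
    sht.begin(i2c_bus=1)
    sht.measure()
    if sht.status == 0:
        print("%.2f C / %.2f F, RH %.1f %%, VPD %.2f kPa, dew point %.2f C"
              % (sht.tempC, sht.tempF, sht.RH, sht.vpd_kPa, sht.dew_pointC))
    else:
        print("measurement failed, status %d" % sht.status)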

.. original file: grave-0.0.3/README.rst

Grave—dead simple graph visualization
=====================================
.. image:: https://travis-ci.org/networkx/grave.svg?branch=master
:target: https://travis-ci.org/networkx/grave
:alt: Automated test status (Linux and MacOS)
.. image:: https://ci.appveyor.com/api/projects/status/github/networkx/grave?branch=master&svg=true
:target: https://ci.appveyor.com/project/networkx/grave
:alt: Automated test status (Windows)
.. image:: https://codecov.io/gh/networkx/grave/branch/master/graph/badge.svg
:target: https://codecov.io/gh/networkx/grave
:alt: Test coverage
.. GH breaks rendering of SVG from the repo, so we redirect through rawgit.com.
GH ignores the width and align directives for PNGs.
.. image:: https://rawgit.com/networkx/grave/master/doc/_static/default.svg
:width: 250px
:align: right
:alt: Logo
Grave is a graph visualization package combining ideas from Matplotlib,
NetworkX, and seaborn. Its goal is to provide a network drawing API that
covers the most use cases with sensible defaults and simple style
configuration. Currently, it supports drawing graphs from NetworkX.
- **Website (including documentation):** https://networkx.github.io/grave/
- **Mailing list:** https://groups.google.com/forum/#!forum/networkx-discuss
- **Source:** https://github.com/networkx/grave
- **Bug reports:** https://github.com/networkx/grave/issues
Example
-------
Here, we create a graph and color the nodes in its minimum weighted
dominating set:
.. code:: python
import matplotlib.pyplot as plt
import networkx as nx
from networkx.algorithms.approximation.dominating_set import min_weighted_dominating_set
from grave import plot_network
network = nx.powerlaw_cluster_graph(50, 1, .2)
dom_set = min_weighted_dominating_set(network)
for node, node_attrs in network.nodes(data=True):
node_attrs['is_dominator'] = True if node in dom_set else False
def color_dominators(node_attrs):
if node_attrs.get('is_dominator', False):
return {'color': 'red'}
else:
return {'color': 'black'}
fig, ax = plt.subplots()
plot_network(network, node_style=color_dominators)
plt.show()
The result:
.. image:: https://rawgit.com/networkx/grave/master/doc/_static/dominators.svg
:width: 700
:align: center
:alt: Coloring the minimum weighted dominating set of a graph
License
-------
Released under the 3-Clause BSD license (see `LICENSE`).

# File: Apiamtic_python-1.6.9/verizon5gmecvnspapi/controllers/csp_profiles_controller.py
from verizon5gmecvnspapi.api_helper import APIHelper
from verizon5gmecvnspapi.configuration import Server
from verizon5gmecvnspapi.controllers.base_controller import BaseController
from apimatic_core.request_builder import RequestBuilder
from apimatic_core.response_handler import ResponseHandler
from apimatic_core.types.parameter import Parameter
from verizon5gmecvnspapi.http.http_method_enum import HttpMethodEnum
from apimatic_core.authentication.multiple.single_auth import Single
from apimatic_core.authentication.multiple.and_auth_group import And
from apimatic_core.authentication.multiple.or_auth_group import Or
from verizon5gmecvnspapi.models.edge_service_onboarding_delete_result import EdgeServiceOnboardingDeleteResult
from verizon5gmecvnspapi.models.csp_profile import CSPProfile
from verizon5gmecvnspapi.models.csp_profile_data import CSPProfileData
from verizon5gmecvnspapi.exceptions.edge_service_onboarding_result_error_exception import EdgeServiceOnboardingResultErrorException
class CSPProfilesController(BaseController):
"""A Controller to access Endpoints in the verizon5gmecvnspapi API."""
def __init__(self, config):
super(CSPProfilesController, self).__init__(config)
def remove_cloud_credential(self,
account_name,
id,
correlation_id=None):
"""Does a DELETE request to /v1/cspProfiles/{id}.
Remove a cloud credential from user's organization.
Args:
account_name (string): User account name.
id (string): CSP Profile Id.
correlation_id (string, optional): TODO: type description here.
Returns:
EdgeServiceOnboardingDeleteResult: Response from the API. OK.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
return super().new_api_call_builder.request(
RequestBuilder().server(Server.SERVICES)
.path('/v1/cspProfiles/{id}')
.http_method(HttpMethodEnum.DELETE)
.header_param(Parameter()
.key('AccountName')
.value(account_name))
.template_param(Parameter()
.key('id')
.value(id)
.should_encode(True))
.header_param(Parameter()
.key('correlationId')
.value(correlation_id))
.header_param(Parameter()
.key('accept')
.value('application/json'))
.auth(Single('global'))
).response(
ResponseHandler()
.deserializer(APIHelper.json_deserialize)
.deserialize_into(EdgeServiceOnboardingDeleteResult.from_dictionary)
.local_error('401', 'Unauthorized.', EdgeServiceOnboardingResultErrorException)
.local_error('404', 'Not Found.', EdgeServiceOnboardingResultErrorException)
.local_error('500', 'Internal Server Error.', EdgeServiceOnboardingResultErrorException)
).execute()
def create_cloud_credential(self,
account_name,
body,
correlation_id=None):
"""Does a POST request to /v1/cspProfiles/.
Create a new cloud credential within user's organization.
Args:
account_name (string): User account name.
body (CSPProfile): TODO: type description here.
correlation_id (string, optional): TODO: type description here.
Returns:
CSPProfile: Response from the API. Created.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
return super().new_api_call_builder.request(
RequestBuilder().server(Server.SERVICES)
.path('/v1/cspProfiles/')
.http_method(HttpMethodEnum.POST)
.header_param(Parameter()
.key('AccountName')
.value(account_name))
.header_param(Parameter()
.key('Content-Type')
.value('application/json'))
.body_param(Parameter()
.value(body))
.header_param(Parameter()
.key('correlationId')
.value(correlation_id))
.header_param(Parameter()
.key('accept')
.value('application/json'))
.body_serializer(APIHelper.json_serialize)
.auth(Single('global'))
).response(
ResponseHandler()
.deserializer(APIHelper.json_deserialize)
.deserialize_into(CSPProfile.from_dictionary)
.local_error('400', 'Bad Request.', EdgeServiceOnboardingResultErrorException)
.local_error('401', 'Unauthorized.', EdgeServiceOnboardingResultErrorException)
.local_error('403', 'Forbidden.', EdgeServiceOnboardingResultErrorException)
.local_error('429', 'Too many requests.', EdgeServiceOnboardingResultErrorException)
.local_error('500', 'Internal Server Error.', EdgeServiceOnboardingResultErrorException)
.local_error('default', 'Forbidden.', EdgeServiceOnboardingResultErrorException)
).execute()
def fetch_cloud_credential_details(self,
account_name,
correlation_id=None,
q=None,
limit=None,
off_set=None):
"""Does a GET request to /v1/cspProfiles/.
Fetch available cloud credentials within user's organization.
Args:
account_name (string): User account name.
correlation_id (string, optional): TODO: type description here.
            q (string, optional): Use the colon (:) character to separate
                multiple query params, e.g.
                type=AWS:awsCspProfile.credType=ACCESS_KEY,ROLE_ARN:state=UNVERIFIED,VERIFIED.
limit (long|int, optional): Number of items to return.
            off_set (long|int, optional): Id of the last response value in the
previous list.
Returns:
CSPProfileData: Response from the API. OK.
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
return super().new_api_call_builder.request(
RequestBuilder().server(Server.SERVICES)
.path('/v1/cspProfiles/')
.http_method(HttpMethodEnum.GET)
.header_param(Parameter()
.key('AccountName')
.value(account_name))
.header_param(Parameter()
.key('correlationId')
.value(correlation_id))
.query_param(Parameter()
.key('q')
.value(q))
.query_param(Parameter()
.key('limit')
.value(limit))
.query_param(Parameter()
.key('offSet')
.value(off_set))
.header_param(Parameter()
.key('accept')
.value('application/json'))
.auth(Single('global'))
).response(
ResponseHandler()
.deserializer(APIHelper.json_deserialize)
.deserialize_into(CSPProfileData.from_dictionary)
.local_error('401', 'Unauthorized.', EdgeServiceOnboardingResultErrorException)
.local_error('403', 'Forbidden.', EdgeServiceOnboardingResultErrorException)
.local_error('404', 'Not found.', EdgeServiceOnboardingResultErrorException)
.local_error('429', 'Too many requests.', EdgeServiceOnboardingResultErrorException)
.local_error('500', 'Internal Server Error.', EdgeServiceOnboardingResultErrorException)
.local_error('default', 'Forbidden.', EdgeServiceOnboardingResultErrorException)
        ).execute()
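
# --- Illustrative usage sketch (not part of the generated controller) ---
# The client construction below is an assumption for illustration; consult the
# SDK's client/configuration classes for the actual entry point. Account name
# and query values are examples only.
#
#   controller = client.csp_profiles
#   result = controller.fetch_cloud_credential_details(
#       account_name='0000123456-00001',
#       q='type=AWS:state=UNVERIFIED,VERIFIED',
#       limit=10,
#       off_set=0)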

# File: ESMValTool-2.9.0/esmvaltool/diag_scripts/shared/_supermeans.py
import os.path
import cf_units
import iris
import iris.coord_categorisation
from iris.coord_categorisation import _pt_date
import numpy as np
class NoBoundsError(ValueError):
"""Return error and pass."""
class InvalidPeriod(ValueError):
"""Return error and pass."""
def get_supermean(name, season, data_dir, obs_flag=None):
"""Calculated supermeans from retrieved data, which are pickled Iris cubes.
:param name: Cube name. Should be CF-standard name. If no CF-standard name
exists the STASH code in msi format (for example m01s30i403)
is used as name.
:param season: Supermean for a season (including annual).
['ann', 'djf', 'mam', 'jja', 'son']
:param data_dir: Directory containing cubes of model output data for
supermeans.
:returns: Supermeaned cube.
:rtype Cube:
The monthly and seasonal supermeans are periodic averages, for example
the seasonal supermean consists of the averaged season, where each
season is averaged over several years.
The annual supermean is a continuous mean over multiple years.
Supermeans are only applied to full clima years (Starting Dec 1st).
"""
name_constraint = iris.Constraint(name=name)
if not obs_flag:
cubes_path = os.path.join(data_dir, 'cubeList.nc')
else:
cubes_path = os.path.join(data_dir, obs_flag + '_cubeList.nc')
cubes = iris.load(cubes_path)
# use STASH if no standard name
for cube in cubes:
if cube.name() == 'unknown':
cube.rename(str(cube.attributes['STASH']))
cube = cubes.extract_cube(name_constraint)
if season in ['djf', 'mam', 'jja', 'son']:
supermeans_cube = periodic_mean(cube, period='season')
return supermeans_cube.extract(iris.Constraint(season=season))
elif season == 'ann':
return periodic_mean(cube)
else:
raise ValueError(
"Argument 'season' must be one of "
"['ann', 'djf', 'mam', 'jja', 'son']. "
"It is: " + str(season))
def contains_full_climate_years(cube):
"""Test whether cube covers full climate year(s).
    A climate year begins at YYYY-12-01 00:00:00
    and ends at the following year's YYYY-12-01 00:00:00.
In case of diurnal data, which is sampled at certain hours of the day, the
climate year is shifted by up to 23 hours. The climate year boundaries of
data sampled at 18:00 would be YYYY-12-01 18:00:00.
:param Cube: Cube.
:returns: True if first and last time bound
in cube are at YYYY-12-01 00:00:00.
:rtype: boolean
"""
origin = cube.coord('time').units.origin
calendar = cube.coord('time').units.calendar
format_ = 'YYYY-%m-%d %H:%M:%S'
if not cube.coord('time').has_bounds():
raise NoBoundsError()
def _num2date(num):
return cf_units.num2date(num, origin, calendar)
if is_24h_sampled(cube):
# find out number of sampling intervals (difference < 24 h)
intervals = []
for i in range(len(cube.coord('time').points) - 1):
diff = cube.coord('time').points[i] - cube.coord('time').points[0]
if diff < 24:
intervals.append(round(diff))
intervals = len(intervals)
year_boundaries = [
'YYYY-12-01 {:02d}:00:00'.format(hour) for hour in range(24)
]
bounding_datetimes = []
time_bounds = cube.coord('time').bounds
for i in range(intervals):
start = _num2date(time_bounds[i][0]).strftime(format_)
end = _num2date(time_bounds[i - intervals][1]).strftime(format_)
bounding_datetimes.append((start, end))
return all(start == end and start in year_boundaries and
end in year_boundaries
for start, end in bounding_datetimes)
else:
start = _num2date(cube.coord('time').bounds[0][0]).strftime(format_)
end = _num2date(cube.coord('time').bounds[-1][1]).strftime(format_)
year_boundary = 'YYYY-12-01 00:00:00'
return start == year_boundary and end == year_boundary
def is_24h_sampled(cube):
"""Check if cube data was sample once per day."""
meaning_periods = []
for c_m in cube.cell_methods:
if c_m.method == 'mean' and 'time' in c_m.coord_names:
meaning_periods.extend(c_m.intervals)
return '24 hour' in meaning_periods
def periodic_mean(cube, period=None):
"""Return cube in which all identical periods are averaged into one.
In case of months this would be averages over all Januaries, Februaries,
etc. In case of season this would averages over all Winters, Springs,
Summers and Autumns.
If no period is specified the average of all data in `cube` is calculated.
Averaging works with data sampled multiple times per day (diurnal data).
The averaging takes the different lengths of periods in the Gregorian
calendar into account.
Requires cube with data for full Climate Years. Climate years start at the
1st of December.
:param cube: Cube with data for each calendar month.
:param period: 'month', 'season'
:returns: Cube with periodic monthly averages.
:rtype: Cube
Note: In the returned cube, the bounds for each
period are the start boundary
of the first period that is averaged over,
and the end boundary of the last period that is averaged over.
"""
if period not in [None, 'month', 'season']:
raise InvalidPeriod('Invalid period: ' + str(period))
_cube = cube.copy()
if _cube.coord('time').has_bounds():
add_start_hour(_cube, 'time', name='start_hour')
else:
iris.coord_categorisation.add_hour(_cube, 'time', name='start_hour')
if period == 'month':
iris.coord_categorisation.add_month(_cube, 'time', name='month')
elif period == 'season':
iris.coord_categorisation.add_season(_cube, 'time')
elif period is None:
pass
else:
raise InvalidPeriod('Invalid period: ' + str(period))
time_points_per_day = len(set(_cube.coord('start_hour').points))
if period is None: # multi-annual mean
if time_points_per_day > 1:
_cube = time_average_by(_cube, 'start_hour')
else:
_cube.remove_coord('start_hour')
_cube = time_average_by(_cube)
else:
if time_points_per_day > 1:
_cube = time_average_by(_cube, [period, 'start_hour'])
else:
_cube.remove_coord('start_hour')
_cube = time_average_by(_cube, period)
return _cube
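
# Illustrative calls (assuming ``cube`` holds data covering full climate years,
# as required above):
#   seasonal_means = periodic_mean(cube, period='season')  # DJF/MAM/JJA/SON
#   annual_mean = periodic_mean(cube)                       # multi-annual mean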
def add_start_hour(cube, coord, name='diurnal_sampling_hour'):
"""Add AuxCoord for diurnal data. Diurnal data is sampled every 24 hours.
The hour value is taken from the first time bound, or the time point if no
bounds exist.
"""
_add_categorised_coord(cube, name, coord, start_hour_from_bounds)
def start_hour_from_bounds(coord, _, bounds):
"""Add hour from bounds."""
return np.array([_pt_date(coord, _bounds[0]).hour for _bounds in bounds])
def _add_categorised_coord(cube,
name,
from_coord,
category_function,
units='1'):
"""
Add categorized coordinate.
This function creates a category from coordinate bounds. To derive the
category from the points use:
`iris.coord_categorisation.add_categorised_coord`
This function has the same interface as
`iris.coord_categorisation.add_categorised_coord`
######################################################################
Add a new coordinate to a cube, by categorising an existing one.
Make a new :class:`iris.coords.AuxCoord` from mapped values, and add it to
the cube.
Args:
* cube (:class:`iris.cube.Cube`):
the cube containing 'from_coord'. The new coord will be added into it.
* name (string):
name of the created coordinate
* from_coord (:class:`iris.coords.Coord` or string):
coordinate in 'cube', or the name of one
* category_function (callable):
function(coordinate, value), returning a category value for a
coordinate point-value
Kwargs:
* units:
units of the category value, typically 'no_unit' or '1'.
"""
# Interpret coord, if given as a name
if isinstance(from_coord, str):
from_coord = cube.coord(from_coord)
if cube.coords(name):
msg = 'A coordinate "%s" already exists in the cube.' % name
raise ValueError(msg)
new_coord = iris.coords.AuxCoord(
category_function(from_coord, from_coord.points, from_coord.bounds),
units=units,
attributes=from_coord.attributes.copy())
new_coord.rename(name)
# Add into the cube
cube.add_aux_coord(new_coord, cube.coord_dims(from_coord))
def time_average_by(cube, periods='time'):
"""Average cube over time or over periods.
i. e. time-based categorical
coordinates, with calendar dependent weighting.
"""
if isinstance(periods, str):
periods = [periods]
# create new cube with time coord and orig duration as data
durations_cube = iris.cube.Cube(
# durations normalised to 1
durations(cube.coord('time')) / np.max(durations(cube.coord('time'))),
long_name='duration',
units='1',
attributes=None,
dim_coords_and_dims=[(cube.coord('time').copy(), 0)])
# there must be an AuxCoord for each period
for period in periods:
if period != 'time':
durations_cube.add_aux_coord(cube.coord(period), 0)
# calculate weighted sum
orig_cell_methods = cube.cell_methods
# multiply each time slice by its duration
idx_obj = [None] * cube.data.ndim
idx_obj[cube.coord_dims('time')[0]] = slice(
None) # [None, slice(None), None] == [np.newaxis, :, np.newaxis]
cube.data *= durations_cube.data[tuple(idx_obj)]
if periods == ['time']: # duration weighted averaging
cube = cube.collapsed(periods, iris.analysis.SUM)
durations_cube = durations_cube.collapsed(periods, iris.analysis.SUM)
else:
cube = cube.aggregated_by(periods, iris.analysis.SUM)
durations_cube = durations_cube.aggregated_by(periods,
iris.analysis.SUM)
# divide by aggregated weights
if durations_cube.data.shape == ():
cube.data /= durations_cube.data
else:
cube.data /= durations_cube.data[tuple(idx_obj)]
# correct cell methods
cube.cell_methods = orig_cell_methods
time_averaging_method = iris.coords.CellMethod(
method='mean', coords=periods)
cube.add_cell_method(time_averaging_method)
return cube
def durations(time_coord):
"""Return durations of time periods."""
assert time_coord.has_bounds(), 'No bounds. Do not guess.'
durs = np.array(
[bounds[1] - bounds[0] for bounds in time_coord.bounds])
    return durs

# File: Nasdaq Data Link-1.0.4/nasdaqdatalink/model/merged_dataset.py
from more_itertools import unique_everseen
import pandas as pd
from six import string_types
from .model_base import ModelBase
from nasdaqdatalink.util import Util
from .merged_data_list import MergedDataList
from .data import Data
from nasdaqdatalink.message import Message
from .dataset import Dataset
class MergedDataset(ModelBase):
def __init__(self, dataset_codes, **options):
self.dataset_codes = dataset_codes
self._datasets = None
self._raw_data = None
self.options = options
@property
def column_names(self):
return self._merged_column_names_from(self.__dataset_objects__())
@property
def oldest_available_date(self):
return min(self._get_dataset_attribute('oldest_available_date'))
@property
def newest_available_date(self):
return max(self._get_dataset_attribute('newest_available_date'))
def data(self, **options):
# if there is only one column_index, use the api to fetch
# else fetch all the data and filter column indexes requested locally
dataset_data_list = [self._get_dataset_data(dataset, **options)
for dataset in self.__dataset_objects__()]
# build data frames and filter locally when necessary
data_frames = [dataset_data.to_pandas(
keep_column_indexes=self._keep_column_indexes(index))
for index, dataset_data in enumerate(dataset_data_list)]
merged_data_frame = pd.DataFrame()
for index, data_frame in enumerate(data_frames):
metadata = self.__dataset_objects__()[index]
# use code to prevent metadata api call
data_frame.rename(
columns=lambda x: self._rename_columns(metadata.code, x), inplace=True)
merged_data_frame = pd.merge(
merged_data_frame, data_frame, right_index=True, left_index=True, how='outer')
merged_data_metadata = self._build_data_meta(dataset_data_list, merged_data_frame)
# check if descending was explicitly set
# if set we need to sort in descending order
# since panda merged dataframe will
# by default sort everything in ascending
return MergedDataList(
Data, merged_data_frame, merged_data_metadata,
ascending=self._order_is_ascending(**options))
# for MergeDataset data calls
def _get_dataset_data(self, dataset, **options):
updated_options = options
# if we have only one column index, let the api
# handle the column filtering since the api supports this
if len(dataset.requested_column_indexes) == 1:
params = {'column_index': dataset.requested_column_indexes[0]}
# only change the options per request
updated_options = options.copy()
updated_options = Util.merge_options('params', params, **updated_options)
return dataset.data(**updated_options)
def _build_data_meta(self, dataset_data_list, df):
merged_data_metadata = {}
# for sanity check if list has items
if dataset_data_list:
# meta should be the same for every individual Dataset
# request, just take the first one
merged_data_metadata = dataset_data_list[0].meta.copy()
# set the start_date and end_date to
# the actual values we got back from data
num_rows = len(df.index)
if num_rows > 0:
merged_data_metadata['start_date'] = df.index[0].date()
merged_data_metadata['end_date'] = df.index[num_rows - 1].date()
# remove column_index if it exists because this would be per request data
merged_data_metadata.pop('column_index', None)
# don't use self.column_names to prevent metadata api call
# instead, get the column_names from the dataset_data_objects
merged_data_metadata['column_names'] = self._merged_column_names_from(dataset_data_list)
return merged_data_metadata
def _keep_column_indexes(self, index):
# no need to filter if we only have one column_index
# since leveraged the server to do the filtering
col_index = self.__dataset_objects__()[index].requested_column_indexes
if len(self.__dataset_objects__()[index].requested_column_indexes) == 1:
# empty array for no filtering
col_index = []
return col_index
def _rename_columns(self, code, original_column_name):
return code + ' - ' + original_column_name
def _get_dataset_attribute(self, k):
elements = []
for dataset in self.__dataset_objects__():
elements.append(dataset.__get_raw_data__()[k])
return list(unique_everseen(elements))
def _order_is_ascending(self, **options):
return not (self._in_query_param('order', **options) and
options['params']['order'] == 'desc')
def _in_query_param(self, name, **options):
return ('params' in options and
name in options['params'])
# can take in a list of dataset_objects
# or a list of dataset_data_objects
def _merged_column_names_from(self, dataset_list):
elements = []
for idx_dataset, dataset in enumerate(dataset_list):
# require getting the code from the dataset object always
code = self.__dataset_objects__()[idx_dataset].code
for index, column_name in enumerate(dataset.column_names):
# only include column names that are not filtered out
# by specification of the column_indexes list
if self._include_column(dataset, index):
# first index is the date, don't modify the date name
if index > 0:
elements.append(self._rename_columns(code, column_name))
else:
elements.append(column_name)
return list(unique_everseen(elements))
def _include_column(self, dataset_metadata, column_index):
# non-pandas/dataframe:
# keep column 0 around because we want to keep Date
if (hasattr(dataset_metadata, 'requested_column_indexes') and
len(dataset_metadata.requested_column_indexes) > 0 and
column_index != 0):
return column_index in dataset_metadata.requested_column_indexes
return True
def _initialize_raw_data(self):
datasets = self.__dataset_objects__()
self._raw_data = {}
if not datasets:
return self._raw_data
self._raw_data = datasets[0].__get_raw_data__().copy()
for k, v in list(self._raw_data.items()):
self._raw_data[k] = getattr(self, k)
return self._raw_data
def _build_dataset_object(self, dataset_code, **options):
options_copy = options.copy()
# data_codes are tuples
        # e.g., ('WIKI/AAPL', {'column_index': [1, 2]})
# or strings
# e.g., 'NSE/OIL'
code = self._get_request_dataset_code(dataset_code)
dataset = Dataset(code, None, **options_copy)
# save column_index param requested dynamically
# used later on to determine:
# if column_index is an array, fetch all data and use locally to filter columns
# if column_index is an empty array, fetch all data and don't filter columns
dataset.requested_column_indexes = self._get_req_dataset_col_indexes(dataset_code, code)
return dataset
def _get_req_dataset_col_indexes(self, dataset_code, code_str):
# ensure if column_index dict is specified, value is a list
params = self._get_request_params(dataset_code)
if 'column_index' in params:
column_index = params['column_index']
if not isinstance(column_index, list):
raise ValueError(
Message.ERROR_COLUMN_INDEX_LIST % code_str)
return column_index
# default, no column indexes to filter
return []
def _get_request_dataset_code(self, dataset_code):
if isinstance(dataset_code, tuple):
return dataset_code[0]
elif isinstance(dataset_code, string_types):
return dataset_code
else:
raise ValueError(Message.ERROR_ARGUMENTS_LIST_FORMAT)
def _get_request_params(self, dataset_code):
if isinstance(dataset_code, tuple):
return dataset_code[1]
return {}
def __getattr__(self, k):
if k[0] == '_' and k != '_raw_data':
raise AttributeError(k)
elif hasattr(MergedDataset, k):
return super(MergedDataset, self).__getattr__(k)
elif k in self.__dataset_objects__()[0].__get_raw_data__():
return self._get_dataset_attribute(k)
return super(MergedDataset, self).__getattr__(k)
def __get_raw_data__(self):
if self._raw_data is None:
self._initialize_raw_data()
return ModelBase.__get_raw_data__(self)
def __dataset_objects__(self):
if self._datasets:
return self._datasets
if not isinstance(self.dataset_codes, list):
raise ValueError('dataset codes must be specified in a list')
# column_index is handled by individual dataset get's
if 'params' in self.options:
self.options['params'].pop("column_index", None)
self._datasets = list([self._build_dataset_object(dataset_code, **self.options)
for dataset_code in self.dataset_codes])
        return self._datasets
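

# --- Illustrative usage sketch (not part of the module) ---
# Dataset codes, column indexes and query params below are examples only.
#   merged = MergedDataset([
#       ('WIKI/AAPL', {'column_index': [1]}),
#       'NSE/OIL',
#   ])
#   data_list = merged.data(params={'start_date': '2015-01-01', 'order': 'desc'})
#   frame = data_list.to_pandas()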

# File: MatrixDemos-0.3/matrixdemos/scripts/TextSwarm.py
"""Display swarms of text on the matrix"""
import time
import random
from optparse import OptionParser
from rgbmatrix import RGBMatrix, RGBMatrixOptions
from matrixdemos.scripts.utils import *
from PIL import ImageDraw
from pygame.time import Clock
parser = OptionParser()
parser.set_description("""Show lots of text swarming about""")
parser.add_option("-t", "--text", dest="text",
help="the text to show", default="Text")
parser.add_option("-c", "--color", dest="color",
help="the color of the text", default="PURPLE")
parser.add_option("-r", "--rcolor", dest="rcolor",
help="the color the rear text gets blended to", default=None)
parser.add_option("-b", "--bgcolor", dest="bgcolor",
help="the color of the background", default="BLACK")
(options, args) = parser.parse_args()
# Configuration for the matrix
_options = RGBMatrixOptions()
_options.drop_privileges = False
_options.rows = 32
_options.chain_length = 1
_options.parallel = 1
_options.hardware_mapping = 'adafruit-hat' # If you have an Adafruit HAT: 'adafruit-hat'
matrix = RGBMatrix(options=_options)
REAR_COLOR = options.rcolor
if REAR_COLOR == None:
REAR_COLOR = options.bgcolor
BACKGROUND_COLOR = options.bgcolor
SIZE = 8
FONT = "monospace"
SPEED = 30
NUM_TEXT = 15
FPS = 35
def p(pix):
if pix:
return 1
return 0
class Text:
def __init__(self, canvas, start=False):
self.order = random.randint(1, 255)
self.text = options.text
self.color = options.color
self.size = round(SIZE + random.randint(-2, 2) + (2 * self.order / 255))
self.orient = random.randint(0, 1) # 0: Horz, 1: Vert
text_size = canvas.textsize(self.text, font=get_font(FONT, self.size))
self.image = Image.new("RGB", [max(text_size)] * 2, 0)
self.canvas = ImageDraw.Draw(self.image)
second_axis = random.randint(-self.size, 32 + self.size)
self.pos = [0, 0]
self.pos[not self.orient] = second_axis
if start:
self.pos[self.orient] = random.randint(-32, 32)
else:
self.pos[self.orient] = -text_size[0]
self.speed = 30 # px/second
DrawText(self.canvas, (0, 0), self.size, self.text, color_fade(self.color, REAR_COLOR, self.order), font=FONT, bold=True)
if self.orient:
self.image = self.image.rotate(90)
self.mask = self.image.convert("L").point(p, "1")
self.dead = False
def get_order(self):
return self.order
def draw(self, image):
pos = (int(self.pos[0]), int(self.pos[1]))
image.paste(self.image, pos, self.mask)
def update(self, time_passed):
lag = (self.order / 200)
if lag > 1:
lag = 1
self.pos[self.orient] += self.speed * time_passed * lag
if self.pos[self.orient] > 32:
self.dead = True
def run():
time_passed = 0
texts = []
dead_text = []
clock = Clock()
image, canvas = new_canvas("RGB", BACKGROUND_COLOR)
for x in range(NUM_TEXT):
texts.append(Text(canvas, start=True))
while True:
canvas.rectangle(((0, 0), (32, 32)), BACKGROUND_COLOR)
texts.sort(key=Text.get_order)
for text in texts:
text.draw(image)
text.update(time_passed)
if text.dead:
dead_text.append(text)
for dead in dead_text:
texts.remove(dead)
texts.append(Text(canvas))
dead_text.clear()
matrix.SetImage(image)
time_passed = clock.tick(FPS) / 1000
def main():
try:
run()
except KeyboardInterrupt:
print()
if __name__ == "__main__":
    main()
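
# Illustrative invocation (option values are examples only); driving the LED
# matrix typically requires root privileges:
#   sudo python TextSwarm.py -t "Hello" -c RED -b BLACK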

// File: GeoNode-3.2.0/geonode/static/geonode/js/ol-2.13/lib/OpenLayers/Layer/ArcIMS.js
/**
 * @requires OpenLayers/Layer/Grid.js
* @requires OpenLayers/Format/ArcXML.js
* @requires OpenLayers/Request.js
*/
/**
* Class: OpenLayers.Layer.ArcIMS
* Instances of OpenLayers.Layer.ArcIMS are used to display data from ESRI ArcIMS
* Mapping Services. Create a new ArcIMS layer with the <OpenLayers.Layer.ArcIMS>
* constructor.
*
* Inherits from:
* - <OpenLayers.Layer.Grid>
*/
OpenLayers.Layer.ArcIMS = OpenLayers.Class(OpenLayers.Layer.Grid, {
/**
* Constant: DEFAULT_PARAMS
* {Object} Default query string parameters.
*/
DEFAULT_PARAMS: {
ClientVersion: "9.2",
ServiceName: ''
},
/**
* APIProperty: featureCoordSys
* {String} Code for feature coordinate system. Default is "4326".
*/
featureCoordSys: "4326",
/**
* APIProperty: filterCoordSys
* {String} Code for filter coordinate system. Default is "4326".
*/
filterCoordSys: "4326",
/**
* APIProperty: layers
* {Array} An array of objects with layer properties.
*/
layers: null,
/**
* APIProperty: async
* {Boolean} Request images asynchronously. Default is true.
*/
async: true,
/**
* APIProperty: name
* {String} Layer name. Default is "ArcIMS".
*/
name: "ArcIMS",
/**
* APIProperty: isBaseLayer
* {Boolean} The layer is a base layer. Default is true.
*/
isBaseLayer: true,
/**
* Constant: DEFAULT_OPTIONS
* {Object} Default layers properties.
*/
DEFAULT_OPTIONS: {
tileSize: new OpenLayers.Size(512, 512),
featureCoordSys: "4326",
filterCoordSys: "4326",
layers: null,
isBaseLayer: true,
async: true,
name: "ArcIMS"
},
/**
* Constructor: OpenLayers.Layer.ArcIMS
* Create a new ArcIMS layer object.
*
* Example:
* (code)
* var arcims = new OpenLayers.Layer.ArcIMS(
* "Global Sample",
* "http://sample.avencia.com/servlet/com.esri.esrimap.Esrimap",
* {
* service: "OpenLayers_Sample",
* layers: [
* // layers to manipulate
* {id: "1", visible: true}
* ]
* }
* );
* (end)
*
* Parameters:
* name - {String} A name for the layer
* url - {String} Base url for the ArcIMS server
* options - {Object} Optional object with properties to be set on the
* layer.
*/
initialize: function(name, url, options) {
this.tileSize = new OpenLayers.Size(512, 512);
// parameters
this.params = OpenLayers.Util.applyDefaults(
{ServiceName: options.serviceName},
this.DEFAULT_PARAMS
);
this.options = OpenLayers.Util.applyDefaults(
options, this.DEFAULT_OPTIONS
);
OpenLayers.Layer.Grid.prototype.initialize.apply(
this, [name, url, this.params, options]
);
//layer is transparent
if (this.transparent) {
// unless explicitly set in options, make layer an overlay
if (!this.isBaseLayer) {
this.isBaseLayer = false;
}
// jpegs can never be transparent, so intelligently switch the
// format, depending on the browser's capabilities
if (this.format == "image/jpeg") {
this.format = OpenLayers.Util.alphaHack() ? "image/gif" : "image/png";
}
}
// create an empty layer list if no layers specified in the options
if (this.options.layers === null) {
this.options.layers = [];
}
},
/**
* Method: getURL
* Return an image url this layer.
*
* Parameters:
* bounds - {<OpenLayers.Bounds>} A bounds representing the bbox for the
* request.
*
* Returns:
* {String} A string with the map image's url.
*/
getURL: function(bounds) {
var url = "";
bounds = this.adjustBounds(bounds);
// create an arcxml request to generate the image
var axlReq = new OpenLayers.Format.ArcXML(
OpenLayers.Util.extend(this.options, {
requesttype: "image",
envelope: bounds.toArray(),
tileSize: this.tileSize
})
);
// create a synchronous ajax request to get an arcims image
var req = new OpenLayers.Request.POST({
url: this.getFullRequestString(),
data: axlReq.write(),
async: false
});
// if the response exists
if (req != null) {
var doc = req.responseXML;
if (!doc || !doc.documentElement) {
doc = req.responseText;
}
// create a new arcxml format to read the response
var axlResp = new OpenLayers.Format.ArcXML();
var arcxml = axlResp.read(doc);
url = this.getUrlOrImage(arcxml.image.output);
}
return url;
},
/**
* Method: getURLasync
* Get an image url this layer asynchronously, and execute a callback
* when the image url is generated.
*
* Parameters:
* bounds - {<OpenLayers.Bounds>} A bounds representing the bbox for the
* request.
* callback - {Function} Function to call when image url is retrieved.
* scope - {Object} The scope of the callback method.
*/
getURLasync: function(bounds, callback, scope) {
bounds = this.adjustBounds(bounds);
// create an arcxml request to generate the image
var axlReq = new OpenLayers.Format.ArcXML(
OpenLayers.Util.extend(this.options, {
requesttype: "image",
envelope: bounds.toArray(),
tileSize: this.tileSize
})
);
// create an asynchronous ajax request to get an arcims image
OpenLayers.Request.POST({
url: this.getFullRequestString(),
async: true,
data: axlReq.write(),
callback: function(req) {
// process the response from ArcIMS, and call the callback function
// to set the image URL
var doc = req.responseXML;
if (!doc || !doc.documentElement) {
doc = req.responseText;
}
// create a new arcxml format to read the response
var axlResp = new OpenLayers.Format.ArcXML();
var arcxml = axlResp.read(doc);
callback.call(scope, this.getUrlOrImage(arcxml.image.output));
},
scope: this
});
},
/**
* Method: getUrlOrImage
* Extract a url or image from the ArcXML image output.
*
* Parameters:
* output - {Object} The image.output property of the object returned from
* the ArcXML format read method.
*
* Returns:
* {String} A URL for an image (potentially with the data protocol).
*/
getUrlOrImage: function(output) {
var ret = "";
if(output.url) {
// If the image response output url is a string, then the image
// data is not inline.
ret = output.url;
} else if(output.data) {
// The image data is inline and base64 encoded, create a data
// url for the image. This will only work for small images,
// due to browser url length limits.
ret = "data:image/" + output.type +
";base64," + output.data;
}
return ret;
},
/**
* Method: setLayerQuery
* Set the query definition on this layer. Query definitions are used to
* render parts of the spatial data in an image, and can be used to
* filter features or layers in the ArcIMS service.
*
* Parameters:
* id - {String} The ArcIMS layer ID.
* querydef - {Object} The query definition to apply to this layer.
*/
setLayerQuery: function(id, querydef) {
// find the matching layer, if it exists
for (var lyr = 0; lyr < this.options.layers.length; lyr++) {
if (id == this.options.layers[lyr].id) {
// replace this layer definition
this.options.layers[lyr].query = querydef;
return;
}
}
// no layer found, create a new definition
this.options.layers.push({id: id, visible: true, query: querydef});
},
/**
* Method: getFeatureInfo
* Get feature information from ArcIMS. Using the applied geometry, apply
* the options to the query (buffer, area/envelope intersection), and
* query the ArcIMS service.
*
* A note about accuracy:
* ArcIMS interprets the accuracy attribute in feature requests to be
* something like the 'modulus' operator on feature coordinates,
* applied to the database geometry of the feature. It doesn't round,
* so your feature coordinates may be up to (1 x accuracy) offset from
* the actual feature coordinates. If the accuracy of the layer is not
* specified, the accuracy will be computed to be approximately 1
* feature coordinate per screen pixel.
*
* Parameters:
* geometry - {<OpenLayers.LonLat>} or {<OpenLayers.Geometry.Polygon>} The
* geometry to use when making the query. This should be a closed
* polygon for behavior approximating a free selection.
* layer - {Object} The ArcIMS layer definition. This is an anonymous object
* that looks like:
* (code)
* {
* id: "ArcXML layer ID", // the ArcXML layer ID
* query: {
* where: "STATE = 'PA'", // the where clause of the query
* accuracy: 100 // the accuracy of the returned feature
* }
* }
* (end)
* options - {Object} Object with non-default properties to set on the layer.
* Supported properties are buffer, callback, scope, and any other
* properties applicable to the ArcXML format. Set the 'callback' and
 *     'scope' for an object and function to receive the parsed features
* from ArcIMS.
*/
getFeatureInfo: function(geometry, layer, options) {
// set the buffer to 1 unit (dd/m/ft?) by default
var buffer = options.buffer || 1;
// empty callback by default
var callback = options.callback || function() {};
// default scope is window (global)
var scope = options.scope || window;
// apply these option to the request options
var requestOptions = {};
OpenLayers.Util.extend(requestOptions, this.options);
// this is a feature request
requestOptions.requesttype = "feature";
if (geometry instanceof OpenLayers.LonLat) {
// create an envelope if the geometry is really a lon/lat
requestOptions.polygon = null;
requestOptions.envelope = [
geometry.lon - buffer,
geometry.lat - buffer,
geometry.lon + buffer,
geometry.lat + buffer
];
} else if (geometry instanceof OpenLayers.Geometry.Polygon) {
// use the polygon assigned, and empty the envelope
requestOptions.envelope = null;
requestOptions.polygon = geometry;
}
// create an arcxml request to get feature requests
var arcxml = new OpenLayers.Format.ArcXML(requestOptions);
// apply any get feature options to the arcxml request
OpenLayers.Util.extend(arcxml.request.get_feature, options);
arcxml.request.get_feature.layer = layer.id;
if (typeof layer.query.accuracy == "number") {
// set the accuracy if it was specified
arcxml.request.get_feature.query.accuracy = layer.query.accuracy;
} else {
// guess that the accuracy is 1 per screen pixel
var mapCenter = this.map.getCenter();
var viewPx = this.map.getViewPortPxFromLonLat(mapCenter);
viewPx.x++;
var mapOffCenter = this.map.getLonLatFromPixel(viewPx);
arcxml.request.get_feature.query.accuracy = mapOffCenter.lon - mapCenter.lon;
}
// set the get_feature query to be the same as the layer passed in
arcxml.request.get_feature.query.where = layer.query.where;
// use area_intersection
arcxml.request.get_feature.query.spatialfilter.relation = "area_intersection";
// create a new asynchronous request to get the feature info
OpenLayers.Request.POST({
url: this.getFullRequestString({'CustomService': 'Query'}),
data: arcxml.write(),
callback: function(request) {
// parse the arcxml response
var response = arcxml.parseResponse(request.responseText);
if (!arcxml.iserror()) {
// if the arcxml is not an error, call the callback with the features parsed
callback.call(scope, response.features);
} else {
// if the arcxml is an error, return null features selected
callback.call(scope, null);
}
}
});
},
/**
* Method: clone
* Create a clone of this layer
*
* Returns:
* {<OpenLayers.Layer.ArcIMS>} An exact clone of this layer
*/
clone: function (obj) {
if (obj == null) {
obj = new OpenLayers.Layer.ArcIMS(this.name,
this.url,
this.getOptions());
}
//get all additions from superclasses
obj = OpenLayers.Layer.Grid.prototype.clone.apply(this, [obj]);
// copy/set any non-init, non-simple values here
return obj;
},
CLASS_NAME: "OpenLayers.Layer.ArcIMS"
});

# File: HyperKitty-1.3.7/hyperkitty/lib/mockup.py
class Email(object):
""" Email class containing the information needed to store and
display email threads.
"""
def __init__(self):
""" Constructor.
        Instantiate the default attributes of the object.
"""
self.email_id = ''
self.title = ''
self.body = ''
self.tags = []
self.category = 'question'
self.category_tag = None
self.participants = set(['Pierre-Yves Chibon'])
self.answers = []
self.liked = 0
self.author = ''
self.avatar = None
self.age = '6 days'
class Author(object):
""" Author class containing the information needed to get the top
author of the month!
"""
def __init__(self):
""" Constructor.
        Instantiate the default attributes of the object.
"""
self.name = None
self.kudos = 0
self.avatar = None
def get_email_tag(tag):
threads = generate_random_thread()
output = []
for email in threads:
if tag in email.tags or tag in email.category:
output.append(email)
elif email.category_tag and tag in email.category_tag:
output.append(email)
return output
def generate_thread_per_category():
threads = generate_random_thread()
categories = {}
for thread in threads:
category = thread.category
if thread.category_tag:
category = thread.category_tag
if category in categories.keys():
categories[category].append(thread)
else:
categories[category] = [thread]
return categories
def generate_top_author():
authors = []
author = Author()
author.name = 'Pierre-Yves Chibon'
author.avatar = ('https://secure.gravatar.com/avatar/'
'072b4416fbfad867a44bc7a5be5eddb9')
author.kudos = 3
authors.append(author)
author = Author()
author.name = 'Stanislav Ochotnický'
author.avatar = 'http://sochotni.fedorapeople.org/sochotni.jpg'
author.kudos = 4
authors.append(author)
author = Author()
author.name = 'Toshio Kuratomi'
author.avatar = ('https://secure.gravatar.com/avatar/'
'7a9c1d88f484c9806bceca0d6d91e948')
author.kudos = 5
authors.append(author)
return authors
def generate_random_thread():
threads = []
# 1
email = Email()
email.email_id = 1
email.title = 'Headsup! krb5 ccache defaults are changing in Rawhide'
email.age = '6 days'
email.body = '''Dear fellow developers,
with the upcoming Fedora 18 release (currently Rawhide) we are going to
change the place where krb5 credential cache files are saved by default.
The new default for credential caches will be the /run/user/username directory.
'''
email.tags.extend(['rawhide', 'krb5'])
email.participants = set([
'Stephen Gallagher', 'Toshio Kuratomi', 'Kevin Fenzi', 'Seth Vidal',
])
email.answers.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
email.liked = 1
email.author = 'Stephen Gallagher'
email.avatar = 'http://fedorapeople.org/~sgallagh/karrde712.png'
threads.append(email)
# 2
email = Email()
email.email_id = 2
email.title = 'Problem in packaging kicad'
email.age = '6 days'
email.body = '''Paragraph 1: Lorem ipsum dolor sit amet, consectetur
adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore
magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. '''
email.tags.extend(['packaging', 'kicad'])
email.participants = set([
'Pierre-Yves Chibon', 'Tom "spot" Callaway', 'Toshio Kuratomi',
'Kevin Fenzi'])
email.answers.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
email.liked = 0
email.author = 'Pierre-Yves Chibon'
email.avatar = ('https://secure.gravatar.com/avatar/'
'072b4416fbfad867a44bc7a5be5eddb9')
threads.append(email)
# 3
email = Email()
email.email_id = 3
email.title = 'Update Java Guideline'
email.age = '6 days'
email.body = '''Paragraph 1: Lorem ipsum dolor sit amet, consectetur
adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore
magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. '''
email.tags.extend(['rawhide', 'krb5'])
email.participants = set([
'Stanislav Ochotnický', 'Tom "spot" Callaway', 'Stephen Gallagher',
'Jason Tibbitts', 'Rex Dieter', 'Toshio Kuratomi'])
email.answers.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19])
email.liked = 5
email.category = 'todo'
email.author = 'Stanislav Ochotnický'
email.avatar = 'http://sochotni.fedorapeople.org/sochotni.jpg'
threads.append(email)
# 4
email = Email()
email.email_id = 4
email.title = 'Agenda for the next Board Meeting'
email.age = '6 days'
email.body = '''Paragraph 1: Lorem ipsum dolor sit amet, consectetur
adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore
magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. '''
email.tags.extend(['agenda', 'board'])
email.participants = set([
'Toshio Kuratomi', 'Tom "spot" Callaway', 'Robyn Bergeron',
'Max Spevack'])
email.answers.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
email.liked = 20
email.category = 'agenda'
email.author = 'Toshio Kuratomi'
email.avatar = ('https://secure.gravatar.com/avatar/'
'7a9c1d88f484c9806bceca0d6d91e948')
threads.append(email)
# 5
email = Email()
email.email_id = 5
email.title = 'I told you so! '
email.age = '6 days'
email.body = '''Paragraph 1: Lorem ipsum dolor sit amet, consectetur
adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore
magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. '''
email.tags.extend(['systemd', 'mp3', 'pulseaudio'])
email.participants = set(['Pierre-Yves Chibon'])
email.answers.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
email.liked = 0
email.author = 'Pierre-Yves Chibon'
email.avatar = ('https://secure.gravatar.com/avatar/'
'072b4416fbfad867a44bc7a5be5eddb9')
email.category = 'shut down'
email.category_tag = 'dead'
threads.append(email)
    return threads
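
# Illustrative calls (the data above is hard-coded mock content, so these simply
# return the canned threads, categories and authors):
#   threads = generate_random_thread()
#   by_category = generate_thread_per_category()
#   top_authors = generate_top_author()
#   krb5_threads = get_email_tag('krb5')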

# File: KratosSwimmingDEMApplication-9.2.0/KratosMultiphysics/SwimmingDEMApplication/custom_body_force/manufactured_solution.py
import KratosMultiphysics
def CreateManufacturedSolution(custom_settings):
return ManufacturedSolution(custom_settings)
class ManufacturedSolution():
def __init__(self, settings):
'''
This is a base class to build manufactured fluid solutions.
At least, it should return the body force and the velocity.
The input viscosity is the DYNAMIC viscosity
NOTE: the operators are implemented for the 2D case. It could be extended to the 3D case.
'''
default_settings = KratosMultiphysics.Parameters("""
{
"viscosity" : 1.0e-2,
"density" : 1.0
}
"""
)
settings.ValidateAndAssignDefaults(default_settings)
self.rho = settings["density"].GetDouble()
self.nu = settings["viscosity"].GetDouble() / self.rho
# Public methods
def BodyForce(self, x1, x2, x3, t):
return [self.body_force1(x1, x2, t), self.body_force2(x1, x2, t), self.body_force3(x1, x2, t)]
def Velocity(self, x1, x2, x3, t):
return [self.u1(x1, x2, t), self.u2(x1, x2, t), self.u3(x1, x2, t)]
def Pressure(self, x1, x2, x3, t):
return self.p(x1, x2, t)
# Operators
def body_force1(self, x1, x2, t):
return self.du1dt(x1, x2, t) + self.convective1(x1, x2, t) + 1 / self.rho * self.press_grad1(x1, x2, t) - self.nu * self.laplacian1(x1, x2, t)
def body_force2(self, x1, x2, t):
return self.du2dt(x1, x2, t) + self.convective2(x1, x2, t) + 1 / self.rho * self.press_grad2(x1, x2, t) - self.nu * self.laplacian2(x1, x2, t)
def body_force3(self, x1, x2, t):
return 0.0
def convective1(self, x1, x2, t):
return self.u1(x1, x2, t) * self.du11(x1, x2, t) + self.u2(x1, x2, t) * self.du12(x1, x2, t)
def convective2(self, x1, x2, t):
return self.u1(x1, x2, t) * self.du21(x1, x2, t) + self.u2(x1, x2, t) * self.du22(x1, x2, t)
def laplacian1(self, x1, x2, t):
return self.du111(x1, x2, t) + self.du122(x1, x2, t)
def laplacian2(self, x1, x2, t):
return self.du211(x1, x2, t) + self.du222(x1, x2, t)
def press_grad1(self, x1, x2, t):
return self.dp1(x1, x2, t)
def press_grad2(self, x1, x2, t):
return self.dp2(x1, x2, t)
# Velocity and derivatives
def u1(self, x1, x2, t):
""" Velocity
"""
raise Exception("Method not implemented")
def u2(self, x1, x2, t):
""" Velocity
"""
raise Exception("Method not implemented")
def u3(self, x1, x2, t):
return 0.0
def du1dt(self, x1, x2, t):
raise Exception("Method not implemented")
def du2dt(self, x1, x2, t):
raise Exception("Method not implemented")
def du11(self, x1, x2, t):
raise Exception("Method not implemented")
def du12(self, x1, x2, t):
raise Exception("Method not implemented")
def du21(self, x1, x2, t):
raise Exception("Method not implemented")
def du22(self, x1, x2, t):
raise Exception("Method not implemented")
def du111(self, x1, x2, t):
raise Exception("Method not implemented")
def du122(self, x1, x2, t):
raise Exception("Method not implemented")
def du211(self, x1, x2, t):
raise Exception("Method not implemented")
def du222(self, x1, x2, t):
raise Exception("Method not implemented")
# Pressure and derivatives
def p(self, x1, x2, t):
'''
By default, pressure is 0
'''
return 0.0
def dp1(self, x1, x2, t):
'''
By default, pressure is 0
'''
return 0.0
def dp2(self, x1, x2, t):
'''
By default, pressure is 0
'''
        return 0.0
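

# --- Illustrative subclass (not part of the module) ---
# A minimal manufactured solution for a rigid-rotation velocity field
# u = (-x2, x1); every derivative is constant, so the body force reduces to the
# centripetal term (-x1, -x2). The class name is an assumption for illustration.
class RigidRotationManufacturedSolution(ManufacturedSolution):
    # Velocity components
    def u1(self, x1, x2, t): return -x2
    def u2(self, x1, x2, t): return x1
    # Time derivatives (steady field)
    def du1dt(self, x1, x2, t): return 0.0
    def du2dt(self, x1, x2, t): return 0.0
    # First spatial derivatives
    def du11(self, x1, x2, t): return 0.0
    def du12(self, x1, x2, t): return -1.0
    def du21(self, x1, x2, t): return 1.0
    def du22(self, x1, x2, t): return 0.0
    # Second spatial derivatives (zero for a linear field)
    def du111(self, x1, x2, t): return 0.0
    def du122(self, x1, x2, t): return 0.0
    def du211(self, x1, x2, t): return 0.0
    def du222(self, x1, x2, t): return 0.0

# e.g. (hypothetical): solution = RigidRotationManufacturedSolution(KratosMultiphysics.Parameters('{}'))
# then solution.BodyForce(x1, x2, x3, t) returns [-x1, -x2, 0.0].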

# File: Flask-Gulp-0.3.0/flask_gulp/static.py
from __future__ import print_function
import os
import sys
from collections import OrderedDict
from flask import url_for
from jinja2 import Markup
from . import wildcard, File, Task
from .extensions import extensions
from .watcher import Watcher
class Static(object):
def __init__(self, app=None):
if app is not None:
self.init_app(app)
else:
self.app = None
self.tasks = OrderedDict()
def init_app(self, app):
app.config.setdefault('STATIC_WATCHER_INTERVAL', 1)
app.config.setdefault('STATIC_INITIAL_PATH', app.root_path)
app.config.setdefault('STATIC_GENERATED_LINKS_PATH', app.static_folder)
app.config.setdefault('STATIC_RUN_ON_REFRESH', False)
app.config.setdefault('STATIC_DEBUG', app.debug)
self.app = app
@app.context_processor
def context_processor():
def build_html(wrapper, *tasks):
root = app.config.get('STATIC_GENERATED_LINKS_PATH')
markup = ''
for task in tasks:
markup += Markup('<!-- %s -->\n' % task)
markup += Markup('\n'.join(
(wrapper %
url_for('static', filename=os.path
.relpath(item, root).replace('\\', '/'))
for item in self.tasks[task].items)))
markup += '\n'
return markup
def css(*tasks):
"""
Create links to style files using results from task
"""
run_tasks = self.app.config.get('STATIC_RUN_ON_REFRESH')
# run unwatched tasks
if run_tasks:
self.run(*(task for task in tasks
if not self.tasks[task].watched))
return build_html('<link rel="stylesheet" href="%s"/>', *tasks)
def js(*tasks, **options):
"""
Create links to script files using results from task
"""
run_tasks = self.app.config.get('STATIC_RUN_ON_REFRESH')
options.setdefault('defer', False)
options.setdefault('asynchro', False)
attrs = ['src="%s"']
if options['defer']:
attrs.append('defer')
if options['asynchro']:
attrs.append('async')
# run unwatched tasks
if run_tasks:
self.run(*(task for task in tasks
if not self.tasks[task].watched))
return build_html("<script %s></script>" % ' '.join(attrs),
*tasks)
return dict(js=js, css=css)
def task(self, name):
"""
Decorator to create tasks
        Inside the decorated function scope the extensions will be available as
        globals, as well as the `src` function, which returns the object used to
        create the pipeline.
"""
def decorator(f):
self.tasks[name] = Task(function=f, items=[], watched=False)
            def wrapper(*args, **kwargs):
                return f(*args, **kwargs)
            return wrapper
        return decorator
def watch(self, paths, *tasks):
for task in tasks:
self.tasks[task] = self.tasks[task]._replace(watched=True)
watcher = Watcher(paths, self, tasks,
debug=self.app.config.get('STATIC_DEBUG'),
interval=self.app.config
.get('STATIC_WATCHER_INTERVAL'))
self.run(*tasks)
watcher.daemon = True
watcher.start()
def findFiles(self, *paths):
if self.app is None:
raise ValueError('You should pass a valid application')
root = self.app.config.get('STATIC_INITIAL_PATH')
for path in paths:
for filename in wildcard.wildcard(os.path.join(root, path)):
yield filename
def __loadResources(self, *paths):
res = StaticResources()
for filename, relativ in self.findFiles(*paths):
res.add(filename, relativ)
return res
def run(self, *tasks):
def src(*paths):
global res
res = self.__loadResources(*paths)
return res
for task in tasks:
t = self.tasks[task]
# extend function scope
t.function.__globals__.update(extensions)
t.function.__globals__['src'] = src
if self.app.config.get('STATIC_DEBUG'):
print('[*] running %s...' % task)
t.function()
res.close()
self.tasks[task] = t._replace(items=[f.filename
for f in res
if f.filename])
# retrieve normal scope
del t.function.__globals__['src']
for k in extensions:
del t.function.__globals__[k]
def runall(self):
self.run(*(task for task in self.tasks))
class StaticResources(object):
def __init__(self, *files):
self.resources = []
for f in files:
self.add(f)
self.gen = None
def pipe(self, extension):
if self.gen:
self.close()
self.gen = extension(self.resources)
return self
def close(self):
self.resources = []
for generated in self.gen:
if not generated.filename and generated.content:
print(generated.content, file=sys.stderr)
else:
self.resources.append(generated)
def add(self, filename, rel):
self.resources.append(File(filename, rel, None))
def __iter__(self):
return iter(self.resources) | PypiClean |
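# ----------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original file).  The extension
# name ``jsmin`` is an assumption used only for illustration; per
# ``StaticResources.pipe`` any callable registered in
# ``flask_gulp.extensions.extensions`` that accepts the list of File resources
# and yields processed File objects can be piped.
#
#   from flask import Flask
#   from flask_gulp.static import Static
#
#   app = Flask(__name__)
#   static = Static(app)
#
#   @static.task('scripts')
#   def scripts():
#       # ``src`` and the extensions are injected into this scope by
#       # ``Static.run`` right before the task function is called.
#       src('assets/js/*.js').pipe(jsmin)
#
#   static.watch('assets/js/*.js', 'scripts')   # re-run the task on changes
#
# In a Jinja template the generated files are then linked through the context
# functions registered in ``init_app``, e.g. {{ js('scripts') }} for script
# tasks and {{ css(...) }} for style tasks.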
/ExcelAlchemy-1.1.0.tar.gz/ExcelAlchemy-1.1.0/excelalchemy/types/value/date.py | import logging
from datetime import datetime
from typing import Any
from typing import cast
import pendulum
from pendulum import DateTime
from excelalchemy.const import DATE_FORMAT_TO_HINT_MAPPING
from excelalchemy.const import MILLISECOND_TO_SECOND
from excelalchemy.const import DataRangeOption
from excelalchemy.exc import ConfigError
from excelalchemy.types.abstract import ABCValueType
from excelalchemy.types.field import FieldMetaInfo
class Date(ABCValueType, datetime):
__name__ = '日期选择'
@classmethod
def comment(cls, field_meta: FieldMetaInfo) -> str:
if not field_meta.date_format:
raise ConfigError('日期格式未定义')
return '\n'.join(
[
field_meta.comment_required,
field_meta.comment_date_format,
field_meta.comment_date_range_option,
field_meta.comment_hint,
]
)
@classmethod
def serialize(cls, value: str | DateTime | Any, field_meta: FieldMetaInfo) -> datetime | Any:
if isinstance(value, DateTime):
logging.info('类型【%s】无需序列化: %s, 返回原值 %s ', cls.__name__, field_meta.label, value)
return value
if not field_meta.date_format:
raise ConfigError('日期格式未定义')
value = str(value).strip()
try:
# pyright: reportPrivateImportUsage=false
# pyright: reportUnknownMemberType=false
# pyright: reportGeneralTypeIssues=false
            v = value.replace('/', '-')  # pendulum does not accept '/' as a date separator
dt: DateTime = cast(DateTime, pendulum.parse(v))
return dt.replace(tzinfo=field_meta.timezone)
except Exception as exc:
logging.warning('ValueType 类型 <%s> 无法解析 Excel 输入,返回原值:%s,原因:%s', cls.__name__, value, exc)
return value
@classmethod
def deserialize(cls, value: str | datetime | None | Any, field_meta: FieldMetaInfo) -> str:
match value:
case None | '':
return ''
case datetime():
return value.strftime(field_meta.python_date_format)
case int() | float():
return datetime.fromtimestamp(int(value) / MILLISECOND_TO_SECOND).strftime(
field_meta.python_date_format
)
case _:
return str(value) if value is not None else ''
@classmethod
def __validate__(cls, value: Any, field_meta: FieldMetaInfo) -> int:
if field_meta.date_format is None:
raise ConfigError('日期格式未定义')
if not isinstance(value, datetime):
raise ValueError(f'请输入格式为{DATE_FORMAT_TO_HINT_MAPPING[field_meta.date_format]}的日期')
parsed = cls._parse_date(value, field_meta)
errors = cls._validate_date_range(parsed, field_meta)
if errors:
raise ValueError(*errors)
else:
return int(parsed.timestamp() * MILLISECOND_TO_SECOND)
@staticmethod
def _parse_date(v: datetime, field_meta: FieldMetaInfo) -> datetime:
format_ = field_meta.python_date_format
parsed = pendulum.parse(v.strftime(format_)).replace(tzinfo=field_meta.timezone) # type: ignore
return parsed
@staticmethod
def _validate_date_range(parsed: datetime, field_meta: FieldMetaInfo) -> list[str]:
now = datetime.now(tz=field_meta.timezone)
errors: list[str] = []
match field_meta.date_range_option:
case DataRangeOption.PRE:
if parsed > now:
errors.append('需早于当前时间(含当前时间)')
case DataRangeOption.NEXT:
if parsed < now:
errors.append('需晚于当前时间(含当前时间)')
case DataRangeOption.NONE | None:
...
return errors | PypiClean |
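# ----------------------------------------------------------------------------
# Descriptive note (not part of the original file) on the round trip above:
# ``serialize`` parses the raw Excel cell (string or DateTime) into a
# timezone-aware datetime via pendulum, ``__validate__`` enforces the optional
# DataRangeOption (PRE: not after now, NEXT: not before now) and stores the
# value as epoch milliseconds, and ``deserialize`` renders a datetime or an
# epoch-millisecond number back into the configured display format.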
/FlexGet-3.9.6-py3-none-any.whl/flexget/plugins/clients/deluge.py | import base64
import os
import re
import sys
import time
from pathlib import Path
from loguru import logger
from flexget import plugin
from flexget.entry import Entry
from flexget.event import event
from flexget.utils.pathscrub import pathscrub
from flexget.utils.template import RenderError
logger = logger.bind(name='deluge')
class DelugePlugin:
"""Base class for deluge plugins, contains settings and methods for connecting to a deluge daemon."""
def on_task_start(self, task, config):
"""Fail early if we can't import/configure the deluge client."""
self.setup_client(config)
def setup_client(self, config):
try:
from deluge_client import DelugeRPCClient
except ImportError as e:
logger.debug('Error importing deluge-client: {}', e)
raise plugin.DependencyError(
'deluge',
'deluge-client',
'deluge-client >=1.5 is required. `pip install deluge-client` to install.',
logger,
)
config = self.prepare_config(config)
if config['host'] in ['localhost', '127.0.0.1'] and not config.get('username'):
            # If a username is not specified, we have to look up the localclient username/password
auth = self.get_localhost_auth(config.get('config_path'))
if auth and auth[0]:
config['username'], config['password'] = auth
if not config.get('username') or not config.get('password'):
raise plugin.PluginError(
                    'Unable to get authentication info for Deluge. You may need to '
                    'specify a username and password from your Deluge auth file.'
)
return DelugeRPCClient(
config['host'],
config['port'],
config['username'],
config['password'],
decode_utf8=True,
)
def prepare_config(self, config):
config.setdefault('host', 'localhost')
config.setdefault('port', 58846)
return config
@staticmethod
def get_localhost_auth(config_path=None):
if config_path is None:
if sys.platform.startswith('win'):
auth_file = os.path.join(os.getenv('APPDATA'), 'deluge', 'auth')
else:
auth_file = os.path.expanduser('~/.config/deluge/auth')
else:
auth_file = os.path.join(config_path, 'auth')
if not os.path.isfile(auth_file):
return None
with open(auth_file) as auth:
for line in auth:
line = line.strip()
if line.startswith('#') or not line:
# This is a comment or blank line
continue
lsplit = line.split(':')
if lsplit[0] == 'localclient':
username, password = lsplit[:2]
return username, password
class InputDeluge(DelugePlugin):
"""Create entries for torrents in the deluge session."""
# Fields we provide outside of the deluge_ prefixed namespace
settings_map = {
'name': 'title',
'hash': 'torrent_info_hash',
'num_peers': 'torrent_peers',
'num_seeds': 'torrent_seeds',
'total_size': 'content_size',
'files': ('content_files', lambda file_dicts: [f['path'] for f in file_dicts]),
}
schema = {
'anyOf': [
{'type': 'boolean'},
{
'type': 'object',
'properties': {
'host': {'type': 'string'},
'port': {'type': 'integer'},
'username': {'type': 'string'},
'password': {'type': 'string'},
'config_path': {'type': 'string', 'format': 'path'},
'filter': {
'type': 'object',
'properties': {
'label': {'type': 'string'},
'state': {
'type': 'string',
'enum': ['active', 'downloading', 'seeding', 'queued', 'paused'],
},
},
'additionalProperties': False,
},
},
'additionalProperties': False,
},
]
}
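    # Illustrative task snippet (not part of the original file); the label is a
    # placeholder.  It exercises the filter schema above:
    #
    #   tasks:
    #     seeding-report:
    #       from_deluge:
    #         filter:
    #           label: tv
    #           state: seeding
    #       accept_all: yes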
def on_task_start(self, task, config):
config = self.prepare_config(config)
super().on_task_start(task, config)
def prepare_config(self, config):
if isinstance(config, bool):
config = {}
if 'filter' in config:
filter = config['filter']
if 'label' in filter:
filter['label'] = filter['label'].lower()
if 'state' in filter:
filter['state'] = filter['state'].capitalize()
super().prepare_config(config)
return config
def on_task_input(self, task, config):
"""Generates and returns a list of entries from the deluge daemon."""
config = self.prepare_config(config)
# Reset the entries list
client = self.setup_client(config)
try:
client.connect()
except ConnectionError as exc:
raise plugin.PluginError(
f'Error connecting to deluge daemon: {exc}', logger=logger
) from exc
entries = self.generate_entries(client, config)
client.disconnect()
return entries
def generate_entries(self, client, config):
entries = []
filter = config.get('filter', {})
torrents = client.call('core.get_torrents_status', filter or {}, [])
for hash, torrent_dict in torrents.items():
# Make sure it has a url so no plugins crash
entry = Entry(deluge_id=hash, url='')
if config.get('config_path'):
config_path = Path(config['config_path']).expanduser()
torrent_path = config_path / 'state' / f'{hash}.torrent'
if torrent_path.is_file():
entry['location'] = str(torrent_path)
entry['url'] = torrent_path.as_uri()
else:
logger.warning('Did not find torrent file at {}', torrent_path)
# Pieces is just a really long list, cluttering up the entry and --dump output
blacklist_fields = ['pieces']
for key, value in torrent_dict.items():
# All fields (except a few) provided by deluge get placed under the deluge_ namespace
if key in blacklist_fields:
continue
entry['deluge_' + key] = value
# Some fields also get special handling
if key in self.settings_map:
flexget_key = self.settings_map[key]
if isinstance(flexget_key, tuple):
flexget_key, format_func = flexget_key
value = format_func(value)
entry[flexget_key] = value
entries.append(entry)
return entries
class OutputDeluge(DelugePlugin):
"""Add the torrents directly to deluge, supporting custom save paths."""
schema = {
'anyOf': [
{'type': 'boolean'},
{
'type': 'object',
'properties': {
'host': {'type': 'string'},
'port': {'type': 'integer'},
'username': {'type': 'string'},
'password': {'type': 'string'},
'config_path': {'type': 'string', 'format': 'path'},
'action': {
'type': 'string',
'enum': ['add', 'remove', 'purge', 'pause', 'resume'],
},
'path': {'type': 'string'},
'move_completed_path': {'type': 'string'},
'label': {'type': 'string'},
'queue_to_top': {'type': 'boolean'},
'auto_managed': {'type': 'boolean'},
'max_up_speed': {'type': 'number'},
'max_down_speed': {'type': 'number'},
'max_connections': {'type': 'integer'},
'max_up_slots': {'type': 'integer'},
'ratio': {'type': 'number'},
'remove_at_ratio': {'type': 'boolean'},
'add_paused': {'type': 'boolean'},
'compact': {'type': 'boolean'},
'content_filename': {'type': 'string'},
'main_file_only': {'type': 'boolean'},
'main_file_ratio': {'type': 'number'},
'magnetization_timeout': {'type': 'integer'},
'keep_subs': {'type': 'boolean'},
'hide_sparse_files': {'type': 'boolean'},
'enabled': {'type': 'boolean'},
'container_directory': {'type': 'string'},
'force_recheck': {'type': 'boolean'},
},
'additionalProperties': False,
},
]
}
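    # Illustrative task configuration (not part of the original file); the feed
    # URL and paths are placeholders.  Every key below maps onto the schema above:
    #
    #   tasks:
    #     tv:
    #       rss: http://example.com/feed.rss
    #       accept_all: yes
    #       deluge:
    #         host: localhost
    #         port: 58846
    #         path: /data/incoming
    #         move_completed_path: /data/complete
    #         label: flexget
    #         main_file_only: yes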
def prepare_config(self, config):
if isinstance(config, bool):
config = {'enabled': config}
super().prepare_config(config)
config.setdefault('enabled', True)
config.setdefault('action', 'add')
config.setdefault('path', '')
config.setdefault('move_completed_path', '')
config.setdefault('label', '')
config.setdefault('main_file_ratio', 0.90)
config.setdefault('magnetization_timeout', 0)
config.setdefault(
'keep_subs', True
) # does nothing without 'content_filename' or 'main_file_only' enabled
config.setdefault(
'hide_sparse_files', False
) # does nothing without 'main_file_only' enabled
config.setdefault('force_recheck', False)
return config
def __init__(self):
self.deluge_version = None
self.options = {
'max_up_speed': 'max_upload_speed',
'max_down_speed': 'max_download_speed',
'max_connections': 'max_connections',
'max_up_slots': 'max_upload_slots',
'auto_managed': 'auto_managed',
'ratio': 'stop_ratio',
'remove_at_ratio': 'remove_at_ratio',
'add_paused': 'add_paused',
'compact': 'compact_allocation',
}
@plugin.priority(120)
def on_task_download(self, task, config):
"""
Call download plugin to generate the temp files we will load into deluge
then verify they are valid torrents
"""
config = self.prepare_config(config)
if not config['enabled']:
return
# If the download plugin is not enabled, we need to call it to get our temp .torrent files
if 'download' not in task.config:
download = plugin.get('download', self)
for entry in task.accepted:
if entry.get('deluge_id'):
# The torrent is already loaded in deluge, we don't need to get anything
continue
if config['action'] != 'add' and entry.get('torrent_info_hash'):
                    # If we aren't adding the torrent as new, the info hash is all we need
continue
download.get_temp_file(task, entry, handle_magnets=True)
@plugin.priority(135)
def on_task_output(self, task, config):
"""Add torrents to deluge at exit."""
config = self.prepare_config(config)
client = self.setup_client(config)
# don't add when learning
if task.options.learn:
return
if not config['enabled'] or not (task.accepted or task.options.test):
return
try:
client.connect()
except ConnectionError as exc:
raise plugin.PluginError(
f'Error connecting to deluge daemon: {exc}', logger=logger
) from exc
if task.options.test:
logger.debug('Test connection to deluge daemon successful.')
client.disconnect()
return
# loop through entries to get a list of labels to add
labels = set()
for entry in task.accepted:
label = entry.get('label') or config.get('label')
if label and label.lower() != 'no label':
try:
label = self._format_label(entry.render(label))
logger.debug('Rendered label: {}', label)
except RenderError as e:
logger.error('Error rendering label `{}`: {}', label, e)
continue
labels.add(label)
if labels:
# Make sure the label plugin is available and enabled, then add appropriate labels
enabled_plugins = client.call('core.get_enabled_plugins')
label_enabled = 'Label' in enabled_plugins
if not label_enabled:
available_plugins = client.call('core.get_available_plugins')
if 'Label' in available_plugins:
logger.debug('Enabling label plugin in deluge')
label_enabled = client.call('core.enable_plugin', 'Label')
else:
logger.error('Label plugin is not installed in deluge')
if label_enabled:
d_labels = client.call('label.get_labels')
for label in labels:
if label not in d_labels:
logger.debug('Adding the label `{}` to deluge', label)
client.call('label.add', label)
# add the torrents
torrent_ids = client.call('core.get_session_state')
for entry in task.accepted:
# Generate deluge options dict for torrent add
add_opts = {}
try:
path = entry.render(entry.get('path') or config['path'])
if path:
add_opts['download_location'] = pathscrub(os.path.expanduser(path))
except RenderError as e:
logger.error('Could not set path for {}: {}', entry['title'], e)
for fopt, dopt in self.options.items():
value = entry.get(fopt, config.get(fopt))
if value is not None:
add_opts[dopt] = value
if fopt == 'ratio':
add_opts['stop_at_ratio'] = True
            # Make another set of options that get set after the torrent has been added
modify_opts = {
'queue_to_top': entry.get('queue_to_top', config.get('queue_to_top')),
'main_file_only': entry.get('main_file_only', config.get('main_file_only', False)),
'main_file_ratio': entry.get('main_file_ratio', config.get('main_file_ratio')),
'hide_sparse_files': entry.get(
'hide_sparse_files', config.get('hide_sparse_files', True)
),
'keep_subs': entry.get('keep_subs', config.get('keep_subs', True)),
'container_directory': config.get('container_directory', ''),
'force_recheck': entry.get('force_recheck', config.get('force_recheck')),
}
try:
label = entry.render(entry.get('label') or config['label'])
modify_opts['label'] = self._format_label(label)
except RenderError as e:
logger.error('Error setting label for `{}`: {}', entry['title'], e)
try:
move_completed_path = entry.render(
entry.get('move_completed_path') or config['move_completed_path']
)
modify_opts['move_completed_path'] = pathscrub(
os.path.expanduser(move_completed_path)
)
except RenderError as e:
logger.error('Error setting move_completed_path for {}: {}', entry['title'], e)
try:
content_filename = entry.get('content_filename') or config.get(
'content_filename', ''
)
modify_opts['content_filename'] = pathscrub(entry.render(content_filename))
except RenderError as e:
logger.error('Error setting content_filename for {}: {}', entry['title'], e)
torrent_id = entry.get('deluge_id') or entry.get('torrent_info_hash')
torrent_id = torrent_id and torrent_id.lower()
if torrent_id in torrent_ids:
logger.info('{} is already loaded in deluge, setting options', entry['title'])
# Entry has a deluge id, verify the torrent is still in the deluge session and apply options
# Since this is already loaded in deluge, we may also need to change the path
modify_opts['path'] = add_opts.pop('download_location', None)
client.call('core.set_torrent_options', [torrent_id], add_opts)
self._set_torrent_options(client, torrent_id, entry, modify_opts)
elif config['action'] != 'add':
logger.warning(
'Cannot {} {}, because it is not loaded in deluge.',
config['action'],
entry['title'],
)
continue
else:
magnet, filedump = None, None
if entry.get('url', '').startswith('magnet:'):
magnet = entry['url']
else:
if not os.path.exists(entry['file']):
entry.fail('Downloaded temp file \'%s\' doesn\'t exist!' % entry['file'])
del entry['file']
return
with open(entry['file'], 'rb') as f:
filedump = base64.encodebytes(f.read())
logger.verbose('Adding {} to deluge.', entry['title'])
added_torrent = None
if magnet:
try:
added_torrent = client.call('core.add_torrent_magnet', magnet, add_opts)
except Exception as exc:
logger.error('{} was not added to deluge! {}', entry['title'], exc)
logger.opt(exception=True).debug('Error adding magnet:')
entry.fail('Could not be added to deluge')
else:
if config.get('magnetization_timeout'):
timeout = config['magnetization_timeout']
logger.verbose(
'Waiting {} seconds for "{}" to magnetize', timeout, entry['title']
)
for _ in range(timeout):
time.sleep(1)
try:
status = client.call(
'core.get_torrent_status', added_torrent, ['files']
)
except Exception as err:
logger.error('wait_for_metadata Error: {}', err)
break
if status.get('files'):
logger.info('"{}" magnetization successful', entry['title'])
break
else:
logger.warning(
'"{}" did not magnetize before the timeout elapsed, '
'file list unavailable for processing.',
entry['title'],
)
else:
try:
added_torrent = client.call(
'core.add_torrent_file', entry['title'], filedump, add_opts
)
except Exception as e:
logger.error('{} was not added to deluge! {}', entry['title'], e)
entry.fail('Could not be added to deluge')
if not added_torrent:
logger.error('There was an error adding {} to deluge.', entry['title'])
else:
logger.info('{} successfully added to deluge.', entry['title'])
self._set_torrent_options(client, added_torrent, entry, modify_opts)
if config['action'] in ('remove', 'purge'):
client.call('core.remove_torrent', torrent_id, config['action'] == 'purge')
logger.info('{} removed from deluge.', entry['title'])
elif config['action'] == 'pause':
client.call('core.pause_torrent', [torrent_id])
logger.info('{} has been paused in deluge.', entry['title'])
elif config['action'] == 'resume':
client.call('core.resume_torrent', [torrent_id])
logger.info('{} has been resumed in deluge.', entry['title'])
client.disconnect()
def on_task_learn(self, task, config):
"""Make sure all temp files are cleaned up when entries are learned"""
# If download plugin is enabled, it will handle cleanup.
if 'download' not in task.config:
download = plugin.get('download', self)
download.cleanup_temp_files(task)
def on_task_abort(self, task, config):
"""Make sure normal cleanup tasks still happen on abort."""
self.on_task_learn(task, config)
def _format_label(self, label):
"""Makes a string compliant with deluge label naming rules"""
# "No Label" is a special identifier to unset a label
if label.lower() == 'no label':
return 'No Label'
return re.sub(r'[^\w-]+', '_', label.lower())
def _set_torrent_options(self, client, torrent_id, entry, opts):
"""Gets called when a torrent was added to the daemon."""
entry['deluge_id'] = torrent_id
if opts.get('move_completed_path'):
client.call(
'core.set_torrent_options',
[torrent_id],
{'move_completed': True, 'move_completed_path': opts['move_completed_path']},
)
logger.debug(
'{} move on complete set to {}', entry['title'], opts['move_completed_path']
)
if opts.get('label'):
client.call('label.set_torrent', torrent_id, opts['label'])
if opts.get('queue_to_top') is not None:
if opts['queue_to_top']:
client.call('core.queue_top', [torrent_id])
logger.debug('{} moved to top of queue', entry['title'])
else:
client.call('core.queue_bottom', [torrent_id])
logger.debug('{} moved to bottom of queue', entry['title'])
status_keys = [
'files',
'total_size',
'save_path',
'move_on_completed_path',
'move_on_completed',
'progress',
]
status = client.call('core.get_torrent_status', torrent_id, status_keys)
# Determine where the file should be
move_now_path = None
if opts.get('move_completed_path'):
if status['progress'] == 100:
move_now_path = opts['move_completed_path']
else:
                # Deluge will unset the move completed option if we move the storage now, so
                # forgo setting the path here in favor of keeping the proper final location.
logger.debug(
'Not moving storage for {}, as this will prevent move_completed_path.',
entry['title'],
)
elif opts.get('path'):
move_now_path = opts['path']
if move_now_path and os.path.normpath(move_now_path) != os.path.normpath(
status['save_path']
):
logger.debug('Moving storage for {} to {}', entry['title'], move_now_path)
client.call('core.move_storage', [torrent_id], move_now_path)
big_file_name = ''
if opts.get('content_filename') or opts.get('main_file_only'):
# find a file that makes up more than main_file_ratio (default: 90%) of the total size
main_file = None
for file in status['files']:
if file['size'] > (status['total_size'] * opts.get('main_file_ratio')):
main_file = file
break
def file_exists(filename):
# Checks the download path as well as the move completed path for existence of the file
if os.path.exists(os.path.join(status['save_path'], filename)):
return True
elif status.get('move_on_completed') and status.get('move_on_completed_path'):
if os.path.exists(os.path.join(status['move_on_completed_path'], filename)):
return True
else:
return False
def unused_name(name):
# If on local computer, tries appending a (#) suffix until a unique filename is found
if client.host in ['127.0.0.1', 'localhost']:
counter = 2
while file_exists(name):
name = ''.join(
[
os.path.splitext(name)[0],
" (",
str(counter),
')',
os.path.splitext(name)[1],
]
)
counter += 1
else:
logger.debug(
'Cannot ensure content_filename is unique when adding to a remote deluge daemon.'
)
return name
def rename(file, new_name):
# Renames a file in torrent
client.call('core.rename_files', torrent_id, [(file['index'], new_name)])
logger.debug('File {} in {} renamed to {}', file['path'], entry['title'], new_name)
if main_file is not None:
# proceed with renaming only if such a big file is found
# find the subtitle file
keep_subs = opts.get('keep_subs')
sub_file = None
if keep_subs:
sub_exts = [".srt", ".sub"]
for file in status['files']:
ext = os.path.splitext(file['path'])[1]
if ext in sub_exts:
sub_file = file
break
                # check for single-file torrents so we don't add unnecessary folders
top_files_dir = "/"
if os.path.dirname(main_file['path']) not in ("", "/"):
# check for top folder in user config
if (
opts.get('content_filename')
and os.path.dirname(opts['content_filename']) != ""
):
top_files_dir = os.path.dirname(opts['content_filename']) + "/"
else:
top_files_dir = os.path.dirname(main_file['path']) + "/"
if opts.get('content_filename'):
# rename the main file
big_file_name = (
top_files_dir
+ os.path.basename(opts['content_filename'])
+ os.path.splitext(main_file['path'])[1]
)
big_file_name = unused_name(big_file_name)
rename(main_file, big_file_name)
# rename subs along with the main file
if sub_file is not None and keep_subs:
sub_file_name = (
os.path.splitext(big_file_name)[0]
+ os.path.splitext(sub_file['path'])[1]
)
rename(sub_file, sub_file_name)
if opts.get('main_file_only'):
# download only the main file (and subs)
file_priorities = [
1 if f == main_file or f == sub_file and keep_subs else 0
for f in status['files']
]
client.call(
'core.set_torrent_options',
[torrent_id],
{'file_priorities': file_priorities},
)
if opts.get('hide_sparse_files'):
# hide the other sparse files that are not supposed to download but are created anyway
# http://dev.deluge-torrent.org/ticket/1827
# Made sparse files behave better with deluge http://flexget.com/ticket/2881
sparse_files = [
f
for f in status['files']
if f != main_file and (f != sub_file or not keep_subs)
]
rename_pairs = [
(
f['index'],
top_files_dir + ".sparse_files/" + os.path.basename(f['path']),
)
for f in sparse_files
]
client.call('core.rename_files', torrent_id, rename_pairs)
else:
logger.warning(
'No files in "{}" are > {:.0f}% of content size, no files renamed.',
entry['title'],
opts.get('main_file_ratio') * 100,
)
container_directory = pathscrub(
entry.render(entry.get('container_directory') or opts.get('container_directory', ''))
)
if container_directory:
if big_file_name:
folder_structure = big_file_name.split(os.sep)
elif len(status['files']) > 0:
folder_structure = status['files'][0]['path'].split(os.sep)
else:
folder_structure = []
if len(folder_structure) > 1:
logger.verbose(
'Renaming Folder {} to {}', folder_structure[0], container_directory
)
client.call(
'core.rename_folder', torrent_id, folder_structure[0], container_directory
)
else:
logger.debug(
'container_directory specified however the torrent {} does not have a directory structure; skipping folder rename',
entry['title'],
)
if opts.get('force_recheck'):
client.call('core.force_recheck', [torrent_id])
logger.debug('Forced a data recheck on {}', entry['title'])
@event('plugin.register')
def register_plugin():
plugin.register(InputDeluge, 'from_deluge', api_ver=2)
plugin.register(OutputDeluge, 'deluge', api_ver=2) | PypiClean |
/CslBot-0.21-py3-none-any.whl/cslbot/helpers/misc.py |
import logging
import os
import re
import subprocess
from datetime import datetime, timedelta
from os.path import exists, join
from random import choice, random
import pkg_resources
from . import orm
def get_users(args):
with args['handler'].data_lock:
users = list(args['handler'].channels[args['target']].users()) if args['target'] != 'private' else ['you']
return users
def parse_time(time):
time, unit = time[:-1], time[-1].lower()
if time.isdigit():
time = int(time)
else:
return None
conv = {'s': 1,
'm': 60,
'h': timedelta(hours=1).total_seconds(),
'd': timedelta(days=1).total_seconds(),
'w': timedelta(weeks=1).total_seconds(),
'y': timedelta(weeks=52).total_seconds()}
if unit in conv.keys():
return time * conv[unit]
else:
return None if unit else time
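# A few concrete values for parse_time (descriptive note, not original source):
#   parse_time('90s') -> 90         seconds pass through with a factor of 1
#   parse_time('2h')  -> 7200.0     2 * 3600 seconds
#   parse_time('1w')  -> 604800.0   one week in seconds
#   parse_time('10x') -> None       unknown unit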
def do_pull(srcdir=None, repo=None):
try:
if repo is None:
proc = subprocess.run(['git', 'pull'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True, check=True)
return proc.stdout.splitlines()[-1]
else:
proc = subprocess.run(['pip', 'install', '--no-deps', '-U', 'git+git://github.com/%s' % repo],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=os.environ.copy(),
universal_newlines=True,
check=True)
output = proc.stdout.splitlines()[-1]
# Strip ascii color codes
return re.sub(r'\x1b[^m]*h', '', output)
except subprocess.CalledProcessError as e:
for line in e.output.decode().splitlines():
logging.error(line)
raise e
def do_nuke(c, nick, target, channel):
c.privmsg(channel, "Please Stand By, Nuking " + target)
c.privmsg_many([nick, target], " ____________________ ")
c.privmsg_many([nick, target], " :-' , '; ., ) '-: ")
c.privmsg_many([nick, target], " / ( / / \\ ")
c.privmsg_many([nick, target], " / ;' \\ , . / ) \\ ")
c.privmsg_many([nick, target], " ( ( . ., ; ; ' ; ) ")
c.privmsg_many([nick, target], " \\ ,---:----------:---, / ")
c.privmsg_many([nick, target], " '--' \\ \\ / / '--' ")
c.privmsg_many([nick, target], " \\ \\ / / ")
c.privmsg_many([nick, target], " \\ / ")
c.privmsg_many([nick, target], " | . | ")
c.privmsg_many([nick, target], " |, '; | ")
c.privmsg_many([nick, target], " | ,. | ")
c.privmsg_many([nick, target], " | ., ;| ")
c.privmsg_many([nick, target], " |:; ; | ")
c.privmsg_many([nick, target], " ________/;';,.',\\ ________ ")
c.privmsg_many([nick, target], " ( ;' . ;';,.;', ; '; ; ) ")
def ping(ping_map, c, e, pongtime):
if e.arguments[1] == 'No such nick/channel':
nick = e.arguments[0]
if nick not in ping_map:
return
target = ping_map.pop(nick)
c.privmsg(target, "%s: %s" % (e.arguments[1], e.arguments[0]))
return
nick = e.source.split('!')[0]
response = e.arguments[1].replace(' ', '.')
try:
pingtime = float(response)
delta = pongtime - datetime.fromtimestamp(pingtime)
elapsed = "%s.%s seconds" % (delta.seconds, delta.microseconds)
except (ValueError, OverflowError):
elapsed = response
target = ping_map.pop(nick) if nick in ping_map else nick
c.privmsg(target, "CTCP reply from %s: %s" % (nick, elapsed))
def get_channels(chanlist, nick):
channels = []
for name, channel in chanlist.items():
if nick in channel.users():
channels.append(name)
return channels
def get_cmdchar(config, connection, msg, msgtype):
cmdchar = config['core']['cmdchar']
botnick = '%s: ' % connection.real_nickname
if msg.startswith(botnick):
msg = msg.replace(botnick, cmdchar, 1)
altchars = [x.strip() for x in config['core']['altcmdchars'].split(',')]
if altchars and altchars[0] != '':
for i in altchars:
if msg.startswith(i):
msg = msg.replace(i, cmdchar, 1)
# Don't require cmdchar in PMs.
if msgtype == 'privmsg' and not msg.startswith(cmdchar):
msg = cmdchar + msg
return msg
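# Example of the rewriting done by get_cmdchar (descriptive note, not original
# source), assuming cmdchar is '!' and the bot's nick is 'cslbot':
#   'cslbot: fortune' -> '!fortune'   (addressing the bot by nick)
#   'fortune' in a PM -> '!fortune'   (no prefix required in private messages)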
def parse_header(header, msg):
proc = subprocess.run(['gcc', '-include', '%s.h' % header, '-fdirectives-only', '-E', '-xc', '/dev/null'],
stdout=subprocess.PIPE,
universal_newlines=True,
check=True)
if header == 'errno':
defines = re.findall('^#define (E[A-Z]*) ([0-9]+)', proc.stdout, re.MULTILINE)
else:
defines = re.findall('^#define (SIG[A-Z]*) ([0-9]+)', proc.stdout, re.MULTILINE)
deftoval = dict((x, y) for x, y in defines)
valtodef = dict((y, x) for x, y in defines)
if not msg:
msg = choice(list(valtodef.keys()))
if msg == 'list':
return ", ".join(sorted(deftoval.keys()))
elif msg in deftoval:
return '#define %s %s' % (msg, deftoval[msg])
elif msg in valtodef:
return '#define %s %s' % (valtodef[msg], msg)
else:
return "%s not found in %s.h" % (msg, header)
def list_fortunes(offensive=False):
cmd = ['fortune', '-f']
if offensive:
cmd.append('-o')
proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True, check=True)
output = re.sub(r'[0-9]{1,2}\.[0-9]{2}%', '', proc.stdout)
fortunes = [x.strip() for x in output.splitlines()[1:]]
if offensive:
fortunes = ['off/%s' % x for x in fortunes]
return sorted(fortunes)
def get_fortune(msg, name='fortune'):
fortunes = list_fortunes() + list_fortunes(True)
cmd = ['fortune', '-s']
match = re.match('(-[ao])( .+|$)', msg)
if match:
cmd.append(match.group(1))
msg = match.group(2).strip()
if 'bofh' in name or 'excuse' in name:
if random() < 0.05:
return "BOFH Excuse #1337:\nYou don't exist, go away!"
cmd.append('bofh-excuses')
elif msg in fortunes:
cmd.append(msg)
elif msg:
return "%s is not a valid fortune module" % msg
return subprocess.check_output(cmd).decode()
def ignore(session, nick):
row = session.query(orm.Ignore).filter(orm.Ignore.nick == nick).first()
if row is None:
# FIXME: support expiration times for ignores
session.add(orm.Ignore(nick=nick, expire=datetime.min))
return "Now ignoring %s" % nick
else:
return "%s is already ignored." % nick
def get_version(srcdir):
gitdir = join(srcdir, ".git")
if not exists(gitdir):
return None, pkg_resources.get_distribution('CslBot').version
try:
commit = subprocess.check_output(['git', '--git-dir=%s' % gitdir, 'rev-parse', 'HEAD']).decode().splitlines()[0]
version = subprocess.check_output(['git', '--git-dir=%s' % gitdir, 'describe', '--tags']).decode().splitlines()[0]
return commit, version
except subprocess.CalledProcessError:
return None, None
def split_msg(msgs, max_len):
"""Splits as close to the end as possible."""
msg = ""
while len(msg.encode()) < max_len:
if len(msg.encode()) + len(msgs[0]) > max_len:
return msg, msgs
char = msgs.pop(0).decode()
# If we have a space within 15 chars of the length limit, split there to avoid words being broken up.
if char == ' ' and len(msg.encode()) > max_len - 15:
return msg, msgs
msg += char
return msg, msgs
def truncate_msg(msg, max_len):
if len(msg.encode()) > max_len:
msg = [x.encode() for x in msg]
msg, _ = split_msg(msg, max_len - 3)
return msg + "..."
return msg
def escape(data):
    # Handle arguments that end in '\', which is valid in IRC but causes issues with SQL.
return data.replace('\\', '\\\\') | PypiClean |
/Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/lib/scons-2.3.2/SCons/Tool/filesystem.py |
__revision__ = "src/engine/SCons/Tool/filesystem.py 2014/07/05 09:42:21 garyo"
import SCons
from SCons.Tool.install import copyFunc
copyToBuilder, copyAsBuilder = None, None
def copyto_emitter(target, source, env):
""" changes the path of the source to be under the target (which
are assumed to be directories.
"""
n_target = []
for t in target:
n_target = n_target + [t.File( str( s ) ) for s in source]
return (n_target, source)
def copy_action_func(target, source, env):
assert( len(target) == len(source) ), "\ntarget: %s\nsource: %s" %(list(map(str, target)),list(map(str, source)))
for t, s in zip(target, source):
if copyFunc(t.get_path(), s.get_path(), env):
return 1
return 0
def copy_action_str(target, source, env):
return env.subst_target_source(env['COPYSTR'], 0, target, source)
copy_action = SCons.Action.Action( copy_action_func, copy_action_str )
def generate(env):
try:
env['BUILDERS']['CopyTo']
env['BUILDERS']['CopyAs']
except KeyError, e:
global copyToBuilder
if copyToBuilder is None:
copyToBuilder = SCons.Builder.Builder(
action = copy_action,
target_factory = env.fs.Dir,
source_factory = env.fs.Entry,
multi = 1,
emitter = [ copyto_emitter, ] )
global copyAsBuilder
if copyAsBuilder is None:
copyAsBuilder = SCons.Builder.Builder(
action = copy_action,
target_factory = env.fs.Entry,
source_factory = env.fs.Entry )
env['BUILDERS']['CopyTo'] = copyToBuilder
env['BUILDERS']['CopyAs'] = copyAsBuilder
env['COPYSTR'] = 'Copy file(s): "$SOURCES" to "$TARGETS"'
def exists(env):
return 1
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | PypiClean |
/Flask-Project-0.1.1.tar.gz/Flask-Project-0.1.1/flask_project/templates/default/web_site/static/js/bootstrap.js | if (typeof jQuery === 'undefined') { throw new Error('Bootstrap\'s JavaScript requires jQuery') }
/* ========================================================================
* Bootstrap: transition.js v3.2.0
* http://getbootstrap.com/javascript/#transitions
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// CSS TRANSITION SUPPORT (Shoutout: http://www.modernizr.com/)
// ============================================================
function transitionEnd() {
var el = document.createElement('bootstrap')
var transEndEventNames = {
WebkitTransition : 'webkitTransitionEnd',
MozTransition : 'transitionend',
OTransition : 'oTransitionEnd otransitionend',
transition : 'transitionend'
}
for (var name in transEndEventNames) {
if (el.style[name] !== undefined) {
return { end: transEndEventNames[name] }
}
}
return false // explicit for ie8 ( ._.)
}
// http://blog.alexmaccaw.com/css-transitions
$.fn.emulateTransitionEnd = function (duration) {
var called = false
var $el = this
$(this).one('bsTransitionEnd', function () { called = true })
var callback = function () { if (!called) $($el).trigger($.support.transition.end) }
setTimeout(callback, duration)
return this
}
$(function () {
$.support.transition = transitionEnd()
if (!$.support.transition) return
$.event.special.bsTransitionEnd = {
bindType: $.support.transition.end,
delegateType: $.support.transition.end,
handle: function (e) {
if ($(e.target).is(this)) return e.handleObj.handler.apply(this, arguments)
}
}
})
}(jQuery);
/* ========================================================================
* Bootstrap: alert.js v3.2.0
* http://getbootstrap.com/javascript/#alerts
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// ALERT CLASS DEFINITION
// ======================
var dismiss = '[data-dismiss="alert"]'
var Alert = function (el) {
$(el).on('click', dismiss, this.close)
}
Alert.VERSION = '3.2.0'
Alert.prototype.close = function (e) {
var $this = $(this)
var selector = $this.attr('data-target')
if (!selector) {
selector = $this.attr('href')
selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
}
var $parent = $(selector)
if (e) e.preventDefault()
if (!$parent.length) {
$parent = $this.hasClass('alert') ? $this : $this.parent()
}
$parent.trigger(e = $.Event('close.bs.alert'))
if (e.isDefaultPrevented()) return
$parent.removeClass('in')
function removeElement() {
// detach from parent, fire event then clean up data
$parent.detach().trigger('closed.bs.alert').remove()
}
$.support.transition && $parent.hasClass('fade') ?
$parent
.one('bsTransitionEnd', removeElement)
.emulateTransitionEnd(150) :
removeElement()
}
// ALERT PLUGIN DEFINITION
// =======================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.alert')
if (!data) $this.data('bs.alert', (data = new Alert(this)))
if (typeof option == 'string') data[option].call($this)
})
}
var old = $.fn.alert
$.fn.alert = Plugin
$.fn.alert.Constructor = Alert
// ALERT NO CONFLICT
// =================
$.fn.alert.noConflict = function () {
$.fn.alert = old
return this
}
// ALERT DATA-API
// ==============
$(document).on('click.bs.alert.data-api', dismiss, Alert.prototype.close)
}(jQuery);
/* ========================================================================
* Bootstrap: button.js v3.2.0
* http://getbootstrap.com/javascript/#buttons
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// BUTTON PUBLIC CLASS DEFINITION
// ==============================
var Button = function (element, options) {
this.$element = $(element)
this.options = $.extend({}, Button.DEFAULTS, options)
this.isLoading = false
}
Button.VERSION = '3.2.0'
Button.DEFAULTS = {
loadingText: 'loading...'
}
Button.prototype.setState = function (state) {
var d = 'disabled'
var $el = this.$element
var val = $el.is('input') ? 'val' : 'html'
var data = $el.data()
state = state + 'Text'
if (data.resetText == null) $el.data('resetText', $el[val]())
$el[val](data[state] == null ? this.options[state] : data[state])
// push to event loop to allow forms to submit
setTimeout($.proxy(function () {
if (state == 'loadingText') {
this.isLoading = true
$el.addClass(d).attr(d, d)
} else if (this.isLoading) {
this.isLoading = false
$el.removeClass(d).removeAttr(d)
}
}, this), 0)
}
Button.prototype.toggle = function () {
var changed = true
var $parent = this.$element.closest('[data-toggle="buttons"]')
if ($parent.length) {
var $input = this.$element.find('input')
if ($input.prop('type') == 'radio') {
if ($input.prop('checked') && this.$element.hasClass('active')) changed = false
else $parent.find('.active').removeClass('active')
}
if (changed) $input.prop('checked', !this.$element.hasClass('active')).trigger('change')
}
if (changed) this.$element.toggleClass('active')
}
// BUTTON PLUGIN DEFINITION
// ========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.button')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.button', (data = new Button(this, options)))
if (option == 'toggle') data.toggle()
else if (option) data.setState(option)
})
}
var old = $.fn.button
$.fn.button = Plugin
$.fn.button.Constructor = Button
// BUTTON NO CONFLICT
// ==================
$.fn.button.noConflict = function () {
$.fn.button = old
return this
}
// BUTTON DATA-API
// ===============
$(document).on('click.bs.button.data-api', '[data-toggle^="button"]', function (e) {
var $btn = $(e.target)
if (!$btn.hasClass('btn')) $btn = $btn.closest('.btn')
Plugin.call($btn, 'toggle')
e.preventDefault()
})
}(jQuery);
/* ========================================================================
* Bootstrap: carousel.js v3.2.0
* http://getbootstrap.com/javascript/#carousel
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// CAROUSEL CLASS DEFINITION
// =========================
var Carousel = function (element, options) {
this.$element = $(element).on('keydown.bs.carousel', $.proxy(this.keydown, this))
this.$indicators = this.$element.find('.carousel-indicators')
this.options = options
this.paused =
this.sliding =
this.interval =
this.$active =
this.$items = null
this.options.pause == 'hover' && this.$element
.on('mouseenter.bs.carousel', $.proxy(this.pause, this))
.on('mouseleave.bs.carousel', $.proxy(this.cycle, this))
}
Carousel.VERSION = '3.2.0'
Carousel.DEFAULTS = {
interval: 5000,
pause: 'hover',
wrap: true
}
Carousel.prototype.keydown = function (e) {
switch (e.which) {
case 37: this.prev(); break
case 39: this.next(); break
default: return
}
e.preventDefault()
}
Carousel.prototype.cycle = function (e) {
e || (this.paused = false)
this.interval && clearInterval(this.interval)
this.options.interval
&& !this.paused
&& (this.interval = setInterval($.proxy(this.next, this), this.options.interval))
return this
}
Carousel.prototype.getItemIndex = function (item) {
this.$items = item.parent().children('.item')
return this.$items.index(item || this.$active)
}
Carousel.prototype.to = function (pos) {
var that = this
var activeIndex = this.getItemIndex(this.$active = this.$element.find('.item.active'))
if (pos > (this.$items.length - 1) || pos < 0) return
if (this.sliding) return this.$element.one('slid.bs.carousel', function () { that.to(pos) }) // yes, "slid"
if (activeIndex == pos) return this.pause().cycle()
return this.slide(pos > activeIndex ? 'next' : 'prev', $(this.$items[pos]))
}
Carousel.prototype.pause = function (e) {
e || (this.paused = true)
if (this.$element.find('.next, .prev').length && $.support.transition) {
this.$element.trigger($.support.transition.end)
this.cycle(true)
}
this.interval = clearInterval(this.interval)
return this
}
Carousel.prototype.next = function () {
if (this.sliding) return
return this.slide('next')
}
Carousel.prototype.prev = function () {
if (this.sliding) return
return this.slide('prev')
}
Carousel.prototype.slide = function (type, next) {
var $active = this.$element.find('.item.active')
var $next = next || $active[type]()
var isCycling = this.interval
var direction = type == 'next' ? 'left' : 'right'
var fallback = type == 'next' ? 'first' : 'last'
var that = this
if (!$next.length) {
if (!this.options.wrap) return
$next = this.$element.find('.item')[fallback]()
}
if ($next.hasClass('active')) return (this.sliding = false)
var relatedTarget = $next[0]
var slideEvent = $.Event('slide.bs.carousel', {
relatedTarget: relatedTarget,
direction: direction
})
this.$element.trigger(slideEvent)
if (slideEvent.isDefaultPrevented()) return
this.sliding = true
isCycling && this.pause()
if (this.$indicators.length) {
this.$indicators.find('.active').removeClass('active')
var $nextIndicator = $(this.$indicators.children()[this.getItemIndex($next)])
$nextIndicator && $nextIndicator.addClass('active')
}
var slidEvent = $.Event('slid.bs.carousel', { relatedTarget: relatedTarget, direction: direction }) // yes, "slid"
if ($.support.transition && this.$element.hasClass('slide')) {
$next.addClass(type)
$next[0].offsetWidth // force reflow
$active.addClass(direction)
$next.addClass(direction)
$active
.one('bsTransitionEnd', function () {
$next.removeClass([type, direction].join(' ')).addClass('active')
$active.removeClass(['active', direction].join(' '))
that.sliding = false
setTimeout(function () {
that.$element.trigger(slidEvent)
}, 0)
})
.emulateTransitionEnd($active.css('transition-duration').slice(0, -1) * 1000)
} else {
$active.removeClass('active')
$next.addClass('active')
this.sliding = false
this.$element.trigger(slidEvent)
}
isCycling && this.cycle()
return this
}
// CAROUSEL PLUGIN DEFINITION
// ==========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.carousel')
var options = $.extend({}, Carousel.DEFAULTS, $this.data(), typeof option == 'object' && option)
var action = typeof option == 'string' ? option : options.slide
if (!data) $this.data('bs.carousel', (data = new Carousel(this, options)))
if (typeof option == 'number') data.to(option)
else if (action) data[action]()
else if (options.interval) data.pause().cycle()
})
}
var old = $.fn.carousel
$.fn.carousel = Plugin
$.fn.carousel.Constructor = Carousel
// CAROUSEL NO CONFLICT
// ====================
$.fn.carousel.noConflict = function () {
$.fn.carousel = old
return this
}
// CAROUSEL DATA-API
// =================
$(document).on('click.bs.carousel.data-api', '[data-slide], [data-slide-to]', function (e) {
var href
var $this = $(this)
var $target = $($this.attr('data-target') || (href = $this.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '')) // strip for ie7
if (!$target.hasClass('carousel')) return
var options = $.extend({}, $target.data(), $this.data())
var slideIndex = $this.attr('data-slide-to')
if (slideIndex) options.interval = false
Plugin.call($target, options)
if (slideIndex) {
$target.data('bs.carousel').to(slideIndex)
}
e.preventDefault()
})
$(window).on('load', function () {
$('[data-ride="carousel"]').each(function () {
var $carousel = $(this)
Plugin.call($carousel, $carousel.data())
})
})
}(jQuery);
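/* Illustrative initialisation (not part of the original file): besides the
 * data-api hooks above, the carousel can be driven programmatically, e.g.
 *
 *   $('#myCarousel').carousel({ interval: 2000, pause: 'hover', wrap: true })
 *   $('#myCarousel').carousel('next')   // or 'prev', 'pause', 'cycle', or a slide index
 */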
/* ========================================================================
* Bootstrap: collapse.js v3.2.0
* http://getbootstrap.com/javascript/#collapse
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// COLLAPSE PUBLIC CLASS DEFINITION
// ================================
var Collapse = function (element, options) {
this.$element = $(element)
this.options = $.extend({}, Collapse.DEFAULTS, options)
this.transitioning = null
if (this.options.parent) this.$parent = $(this.options.parent)
if (this.options.toggle) this.toggle()
}
Collapse.VERSION = '3.2.0'
Collapse.DEFAULTS = {
toggle: true
}
Collapse.prototype.dimension = function () {
var hasWidth = this.$element.hasClass('width')
return hasWidth ? 'width' : 'height'
}
Collapse.prototype.show = function () {
if (this.transitioning || this.$element.hasClass('in')) return
var startEvent = $.Event('show.bs.collapse')
this.$element.trigger(startEvent)
if (startEvent.isDefaultPrevented()) return
var actives = this.$parent && this.$parent.find('> .panel > .in')
if (actives && actives.length) {
var hasData = actives.data('bs.collapse')
if (hasData && hasData.transitioning) return
Plugin.call(actives, 'hide')
hasData || actives.data('bs.collapse', null)
}
var dimension = this.dimension()
this.$element
.removeClass('collapse')
.addClass('collapsing')[dimension](0)
this.transitioning = 1
var complete = function () {
this.$element
.removeClass('collapsing')
.addClass('collapse in')[dimension]('')
this.transitioning = 0
this.$element
.trigger('shown.bs.collapse')
}
if (!$.support.transition) return complete.call(this)
var scrollSize = $.camelCase(['scroll', dimension].join('-'))
this.$element
.one('bsTransitionEnd', $.proxy(complete, this))
.emulateTransitionEnd(350)[dimension](this.$element[0][scrollSize])
}
Collapse.prototype.hide = function () {
if (this.transitioning || !this.$element.hasClass('in')) return
var startEvent = $.Event('hide.bs.collapse')
this.$element.trigger(startEvent)
if (startEvent.isDefaultPrevented()) return
var dimension = this.dimension()
this.$element[dimension](this.$element[dimension]())[0].offsetHeight
this.$element
.addClass('collapsing')
.removeClass('collapse')
.removeClass('in')
this.transitioning = 1
var complete = function () {
this.transitioning = 0
this.$element
.trigger('hidden.bs.collapse')
.removeClass('collapsing')
.addClass('collapse')
}
if (!$.support.transition) return complete.call(this)
this.$element
[dimension](0)
.one('bsTransitionEnd', $.proxy(complete, this))
.emulateTransitionEnd(350)
}
Collapse.prototype.toggle = function () {
this[this.$element.hasClass('in') ? 'hide' : 'show']()
}
// COLLAPSE PLUGIN DEFINITION
// ==========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.collapse')
var options = $.extend({}, Collapse.DEFAULTS, $this.data(), typeof option == 'object' && option)
if (!data && options.toggle && option == 'show') option = !option
if (!data) $this.data('bs.collapse', (data = new Collapse(this, options)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.collapse
$.fn.collapse = Plugin
$.fn.collapse.Constructor = Collapse
// COLLAPSE NO CONFLICT
// ====================
$.fn.collapse.noConflict = function () {
$.fn.collapse = old
return this
}
// COLLAPSE DATA-API
// =================
$(document).on('click.bs.collapse.data-api', '[data-toggle="collapse"]', function (e) {
var href
var $this = $(this)
var target = $this.attr('data-target')
|| e.preventDefault()
|| (href = $this.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '') // strip for ie7
var $target = $(target)
var data = $target.data('bs.collapse')
var option = data ? 'toggle' : $this.data()
var parent = $this.attr('data-parent')
var $parent = parent && $(parent)
if (!data || !data.transitioning) {
if ($parent) $parent.find('[data-toggle="collapse"][data-parent="' + parent + '"]').not($this).addClass('collapsed')
$this[$target.hasClass('in') ? 'addClass' : 'removeClass']('collapsed')
}
Plugin.call($target, option)
})
}(jQuery);
/* ========================================================================
* Bootstrap: dropdown.js v3.2.0
* http://getbootstrap.com/javascript/#dropdowns
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// DROPDOWN CLASS DEFINITION
// =========================
var backdrop = '.dropdown-backdrop'
var toggle = '[data-toggle="dropdown"]'
var Dropdown = function (element) {
$(element).on('click.bs.dropdown', this.toggle)
}
Dropdown.VERSION = '3.2.0'
Dropdown.prototype.toggle = function (e) {
var $this = $(this)
if ($this.is('.disabled, :disabled')) return
var $parent = getParent($this)
var isActive = $parent.hasClass('open')
clearMenus()
if (!isActive) {
if ('ontouchstart' in document.documentElement && !$parent.closest('.navbar-nav').length) {
// if mobile we use a backdrop because click events don't delegate
$('<div class="dropdown-backdrop"/>').insertAfter($(this)).on('click', clearMenus)
}
var relatedTarget = { relatedTarget: this }
$parent.trigger(e = $.Event('show.bs.dropdown', relatedTarget))
if (e.isDefaultPrevented()) return
$this.trigger('focus')
$parent
.toggleClass('open')
.trigger('shown.bs.dropdown', relatedTarget)
}
return false
}
Dropdown.prototype.keydown = function (e) {
if (!/(38|40|27)/.test(e.keyCode)) return
var $this = $(this)
e.preventDefault()
e.stopPropagation()
if ($this.is('.disabled, :disabled')) return
var $parent = getParent($this)
var isActive = $parent.hasClass('open')
if (!isActive || (isActive && e.keyCode == 27)) {
if (e.which == 27) $parent.find(toggle).trigger('focus')
return $this.trigger('click')
}
var desc = ' li:not(.divider):visible a'
var $items = $parent.find('[role="menu"]' + desc + ', [role="listbox"]' + desc)
if (!$items.length) return
var index = $items.index($items.filter(':focus'))
if (e.keyCode == 38 && index > 0) index-- // up
if (e.keyCode == 40 && index < $items.length - 1) index++ // down
if (!~index) index = 0
$items.eq(index).trigger('focus')
}
function clearMenus(e) {
if (e && e.which === 3) return
$(backdrop).remove()
$(toggle).each(function () {
var $parent = getParent($(this))
var relatedTarget = { relatedTarget: this }
if (!$parent.hasClass('open')) return
$parent.trigger(e = $.Event('hide.bs.dropdown', relatedTarget))
if (e.isDefaultPrevented()) return
$parent.removeClass('open').trigger('hidden.bs.dropdown', relatedTarget)
})
}
function getParent($this) {
var selector = $this.attr('data-target')
if (!selector) {
selector = $this.attr('href')
selector = selector && /#[A-Za-z]/.test(selector) && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
}
var $parent = selector && $(selector)
return $parent && $parent.length ? $parent : $this.parent()
}
// DROPDOWN PLUGIN DEFINITION
// ==========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.dropdown')
if (!data) $this.data('bs.dropdown', (data = new Dropdown(this)))
if (typeof option == 'string') data[option].call($this)
})
}
var old = $.fn.dropdown
$.fn.dropdown = Plugin
$.fn.dropdown.Constructor = Dropdown
// DROPDOWN NO CONFLICT
// ====================
$.fn.dropdown.noConflict = function () {
$.fn.dropdown = old
return this
}
// APPLY TO STANDARD DROPDOWN ELEMENTS
// ===================================
$(document)
.on('click.bs.dropdown.data-api', clearMenus)
.on('click.bs.dropdown.data-api', '.dropdown form', function (e) { e.stopPropagation() })
.on('click.bs.dropdown.data-api', toggle, Dropdown.prototype.toggle)
.on('keydown.bs.dropdown.data-api', toggle + ', [role="menu"], [role="listbox"]', Dropdown.prototype.keydown)
}(jQuery);
/* ========================================================================
* Bootstrap: modal.js v3.2.0
* http://getbootstrap.com/javascript/#modals
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// MODAL CLASS DEFINITION
// ======================
var Modal = function (element, options) {
this.options = options
this.$body = $(document.body)
this.$element = $(element)
this.$backdrop =
this.isShown = null
this.scrollbarWidth = 0
if (this.options.remote) {
this.$element
.find('.modal-content')
.load(this.options.remote, $.proxy(function () {
this.$element.trigger('loaded.bs.modal')
}, this))
}
}
Modal.VERSION = '3.2.0'
Modal.DEFAULTS = {
backdrop: true,
keyboard: true,
show: true
}
Modal.prototype.toggle = function (_relatedTarget) {
return this.isShown ? this.hide() : this.show(_relatedTarget)
}
Modal.prototype.show = function (_relatedTarget) {
var that = this
var e = $.Event('show.bs.modal', { relatedTarget: _relatedTarget })
this.$element.trigger(e)
if (this.isShown || e.isDefaultPrevented()) return
this.isShown = true
this.checkScrollbar()
this.$body.addClass('modal-open')
this.setScrollbar()
this.escape()
this.$element.on('click.dismiss.bs.modal', '[data-dismiss="modal"]', $.proxy(this.hide, this))
this.backdrop(function () {
var transition = $.support.transition && that.$element.hasClass('fade')
if (!that.$element.parent().length) {
that.$element.appendTo(that.$body) // don't move modals dom position
}
that.$element
.show()
.scrollTop(0)
if (transition) {
that.$element[0].offsetWidth // force reflow
}
that.$element
.addClass('in')
.attr('aria-hidden', false)
that.enforceFocus()
var e = $.Event('shown.bs.modal', { relatedTarget: _relatedTarget })
transition ?
that.$element.find('.modal-dialog') // wait for modal to slide in
.one('bsTransitionEnd', function () {
that.$element.trigger('focus').trigger(e)
})
.emulateTransitionEnd(300) :
that.$element.trigger('focus').trigger(e)
})
}
Modal.prototype.hide = function (e) {
if (e) e.preventDefault()
e = $.Event('hide.bs.modal')
this.$element.trigger(e)
if (!this.isShown || e.isDefaultPrevented()) return
this.isShown = false
this.$body.removeClass('modal-open')
this.resetScrollbar()
this.escape()
$(document).off('focusin.bs.modal')
this.$element
.removeClass('in')
.attr('aria-hidden', true)
.off('click.dismiss.bs.modal')
$.support.transition && this.$element.hasClass('fade') ?
this.$element
.one('bsTransitionEnd', $.proxy(this.hideModal, this))
.emulateTransitionEnd(300) :
this.hideModal()
}
Modal.prototype.enforceFocus = function () {
$(document)
.off('focusin.bs.modal') // guard against infinite focus loop
.on('focusin.bs.modal', $.proxy(function (e) {
if (this.$element[0] !== e.target && !this.$element.has(e.target).length) {
this.$element.trigger('focus')
}
}, this))
}
Modal.prototype.escape = function () {
if (this.isShown && this.options.keyboard) {
this.$element.on('keyup.dismiss.bs.modal', $.proxy(function (e) {
e.which == 27 && this.hide()
}, this))
} else if (!this.isShown) {
this.$element.off('keyup.dismiss.bs.modal')
}
}
Modal.prototype.hideModal = function () {
var that = this
this.$element.hide()
this.backdrop(function () {
that.$element.trigger('hidden.bs.modal')
})
}
Modal.prototype.removeBackdrop = function () {
this.$backdrop && this.$backdrop.remove()
this.$backdrop = null
}
Modal.prototype.backdrop = function (callback) {
var that = this
var animate = this.$element.hasClass('fade') ? 'fade' : ''
if (this.isShown && this.options.backdrop) {
var doAnimate = $.support.transition && animate
this.$backdrop = $('<div class="modal-backdrop ' + animate + '" />')
.appendTo(this.$body)
this.$element.on('click.dismiss.bs.modal', $.proxy(function (e) {
if (e.target !== e.currentTarget) return
this.options.backdrop == 'static'
? this.$element[0].focus.call(this.$element[0])
: this.hide.call(this)
}, this))
if (doAnimate) this.$backdrop[0].offsetWidth // force reflow
this.$backdrop.addClass('in')
if (!callback) return
doAnimate ?
this.$backdrop
.one('bsTransitionEnd', callback)
.emulateTransitionEnd(150) :
callback()
} else if (!this.isShown && this.$backdrop) {
this.$backdrop.removeClass('in')
var callbackRemove = function () {
that.removeBackdrop()
callback && callback()
}
$.support.transition && this.$element.hasClass('fade') ?
this.$backdrop
.one('bsTransitionEnd', callbackRemove)
.emulateTransitionEnd(150) :
callbackRemove()
} else if (callback) {
callback()
}
}
Modal.prototype.checkScrollbar = function () {
if (document.body.clientWidth >= window.innerWidth) return
this.scrollbarWidth = this.scrollbarWidth || this.measureScrollbar()
}
Modal.prototype.setScrollbar = function () {
var bodyPad = parseInt((this.$body.css('padding-right') || 0), 10)
if (this.scrollbarWidth) this.$body.css('padding-right', bodyPad + this.scrollbarWidth)
}
Modal.prototype.resetScrollbar = function () {
this.$body.css('padding-right', '')
}
Modal.prototype.measureScrollbar = function () { // thx walsh
var scrollDiv = document.createElement('div')
scrollDiv.className = 'modal-scrollbar-measure'
this.$body.append(scrollDiv)
var scrollbarWidth = scrollDiv.offsetWidth - scrollDiv.clientWidth
this.$body[0].removeChild(scrollDiv)
return scrollbarWidth
}
// MODAL PLUGIN DEFINITION
// =======================
function Plugin(option, _relatedTarget) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.modal')
var options = $.extend({}, Modal.DEFAULTS, $this.data(), typeof option == 'object' && option)
if (!data) $this.data('bs.modal', (data = new Modal(this, options)))
if (typeof option == 'string') data[option](_relatedTarget)
else if (options.show) data.show(_relatedTarget)
})
}
var old = $.fn.modal
$.fn.modal = Plugin
$.fn.modal.Constructor = Modal
// MODAL NO CONFLICT
// =================
$.fn.modal.noConflict = function () {
$.fn.modal = old
return this
}
// MODAL DATA-API
// ==============
$(document).on('click.bs.modal.data-api', '[data-toggle="modal"]', function (e) {
var $this = $(this)
var href = $this.attr('href')
var $target = $($this.attr('data-target') || (href && href.replace(/.*(?=#[^\s]+$)/, ''))) // strip for ie7
var option = $target.data('bs.modal') ? 'toggle' : $.extend({ remote: !/#/.test(href) && href }, $target.data(), $this.data())
if ($this.is('a')) e.preventDefault()
$target.one('show.bs.modal', function (showEvent) {
if (showEvent.isDefaultPrevented()) return // only register focus restorer if modal will actually get shown
$target.one('hidden.bs.modal', function () {
$this.is(':visible') && $this.trigger('focus')
})
})
Plugin.call($target, option, this)
})
}(jQuery);
/* ========================================================================
* Bootstrap: tooltip.js v3.2.0
* http://getbootstrap.com/javascript/#tooltip
* Inspired by the original jQuery.tipsy by Jason Frame
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// TOOLTIP PUBLIC CLASS DEFINITION
// ===============================
var Tooltip = function (element, options) {
this.type =
this.options =
this.enabled =
this.timeout =
this.hoverState =
this.$element = null
this.init('tooltip', element, options)
}
Tooltip.VERSION = '3.2.0'
Tooltip.DEFAULTS = {
animation: true,
placement: 'top',
selector: false,
template: '<div class="tooltip" role="tooltip"><div class="tooltip-arrow"></div><div class="tooltip-inner"></div></div>',
trigger: 'hover focus',
title: '',
delay: 0,
html: false,
container: false,
viewport: {
selector: 'body',
padding: 0
}
}
Tooltip.prototype.init = function (type, element, options) {
this.enabled = true
this.type = type
this.$element = $(element)
this.options = this.getOptions(options)
this.$viewport = this.options.viewport && $(this.options.viewport.selector || this.options.viewport)
var triggers = this.options.trigger.split(' ')
for (var i = triggers.length; i--;) {
var trigger = triggers[i]
if (trigger == 'click') {
this.$element.on('click.' + this.type, this.options.selector, $.proxy(this.toggle, this))
} else if (trigger != 'manual') {
var eventIn = trigger == 'hover' ? 'mouseenter' : 'focusin'
var eventOut = trigger == 'hover' ? 'mouseleave' : 'focusout'
this.$element.on(eventIn + '.' + this.type, this.options.selector, $.proxy(this.enter, this))
this.$element.on(eventOut + '.' + this.type, this.options.selector, $.proxy(this.leave, this))
}
}
this.options.selector ?
(this._options = $.extend({}, this.options, { trigger: 'manual', selector: '' })) :
this.fixTitle()
}
Tooltip.prototype.getDefaults = function () {
return Tooltip.DEFAULTS
}
Tooltip.prototype.getOptions = function (options) {
options = $.extend({}, this.getDefaults(), this.$element.data(), options)
if (options.delay && typeof options.delay == 'number') {
options.delay = {
show: options.delay,
hide: options.delay
}
}
return options
}
Tooltip.prototype.getDelegateOptions = function () {
var options = {}
var defaults = this.getDefaults()
this._options && $.each(this._options, function (key, value) {
if (defaults[key] != value) options[key] = value
})
return options
}
Tooltip.prototype.enter = function (obj) {
var self = obj instanceof this.constructor ?
obj : $(obj.currentTarget).data('bs.' + this.type)
if (!self) {
self = new this.constructor(obj.currentTarget, this.getDelegateOptions())
$(obj.currentTarget).data('bs.' + this.type, self)
}
clearTimeout(self.timeout)
self.hoverState = 'in'
if (!self.options.delay || !self.options.delay.show) return self.show()
self.timeout = setTimeout(function () {
if (self.hoverState == 'in') self.show()
}, self.options.delay.show)
}
Tooltip.prototype.leave = function (obj) {
var self = obj instanceof this.constructor ?
obj : $(obj.currentTarget).data('bs.' + this.type)
if (!self) {
self = new this.constructor(obj.currentTarget, this.getDelegateOptions())
$(obj.currentTarget).data('bs.' + this.type, self)
}
clearTimeout(self.timeout)
self.hoverState = 'out'
if (!self.options.delay || !self.options.delay.hide) return self.hide()
self.timeout = setTimeout(function () {
if (self.hoverState == 'out') self.hide()
}, self.options.delay.hide)
}
Tooltip.prototype.show = function () {
var e = $.Event('show.bs.' + this.type)
if (this.hasContent() && this.enabled) {
this.$element.trigger(e)
var inDom = $.contains(document.documentElement, this.$element[0])
if (e.isDefaultPrevented() || !inDom) return
var that = this
var $tip = this.tip()
var tipId = this.getUID(this.type)
this.setContent()
$tip.attr('id', tipId)
this.$element.attr('aria-describedby', tipId)
if (this.options.animation) $tip.addClass('fade')
var placement = typeof this.options.placement == 'function' ?
this.options.placement.call(this, $tip[0], this.$element[0]) :
this.options.placement
var autoToken = /\s?auto?\s?/i
var autoPlace = autoToken.test(placement)
if (autoPlace) placement = placement.replace(autoToken, '') || 'top'
$tip
.detach()
.css({ top: 0, left: 0, display: 'block' })
.addClass(placement)
.data('bs.' + this.type, this)
this.options.container ? $tip.appendTo(this.options.container) : $tip.insertAfter(this.$element)
var pos = this.getPosition()
var actualWidth = $tip[0].offsetWidth
var actualHeight = $tip[0].offsetHeight
if (autoPlace) {
var orgPlacement = placement
var $parent = this.$element.parent()
var parentDim = this.getPosition($parent)
placement = placement == 'bottom' && pos.top + pos.height + actualHeight - parentDim.scroll > parentDim.height ? 'top' :
placement == 'top' && pos.top - parentDim.scroll - actualHeight < 0 ? 'bottom' :
placement == 'right' && pos.right + actualWidth > parentDim.width ? 'left' :
placement == 'left' && pos.left - actualWidth < parentDim.left ? 'right' :
placement
$tip
.removeClass(orgPlacement)
.addClass(placement)
}
var calculatedOffset = this.getCalculatedOffset(placement, pos, actualWidth, actualHeight)
this.applyPlacement(calculatedOffset, placement)
var complete = function () {
that.$element.trigger('shown.bs.' + that.type)
that.hoverState = null
}
$.support.transition && this.$tip.hasClass('fade') ?
$tip
.one('bsTransitionEnd', complete)
.emulateTransitionEnd(150) :
complete()
}
}
Tooltip.prototype.applyPlacement = function (offset, placement) {
var $tip = this.tip()
var width = $tip[0].offsetWidth
var height = $tip[0].offsetHeight
// manually read margins because getBoundingClientRect includes difference
var marginTop = parseInt($tip.css('margin-top'), 10)
var marginLeft = parseInt($tip.css('margin-left'), 10)
// we must check for NaN for ie 8/9
if (isNaN(marginTop)) marginTop = 0
if (isNaN(marginLeft)) marginLeft = 0
offset.top = offset.top + marginTop
offset.left = offset.left + marginLeft
// $.fn.offset doesn't round pixel values
// so we use setOffset directly with our own function B-0
$.offset.setOffset($tip[0], $.extend({
using: function (props) {
$tip.css({
top: Math.round(props.top),
left: Math.round(props.left)
})
}
}, offset), 0)
$tip.addClass('in')
// check to see if placing tip in new offset caused the tip to resize itself
var actualWidth = $tip[0].offsetWidth
var actualHeight = $tip[0].offsetHeight
if (placement == 'top' && actualHeight != height) {
offset.top = offset.top + height - actualHeight
}
var delta = this.getViewportAdjustedDelta(placement, offset, actualWidth, actualHeight)
if (delta.left) offset.left += delta.left
else offset.top += delta.top
var arrowDelta = delta.left ? delta.left * 2 - width + actualWidth : delta.top * 2 - height + actualHeight
var arrowPosition = delta.left ? 'left' : 'top'
var arrowOffsetPosition = delta.left ? 'offsetWidth' : 'offsetHeight'
$tip.offset(offset)
this.replaceArrow(arrowDelta, $tip[0][arrowOffsetPosition], arrowPosition)
}
Tooltip.prototype.replaceArrow = function (delta, dimension, position) {
this.arrow().css(position, delta ? (50 * (1 - delta / dimension) + '%') : '')
}
Tooltip.prototype.setContent = function () {
var $tip = this.tip()
var title = this.getTitle()
$tip.find('.tooltip-inner')[this.options.html ? 'html' : 'text'](title)
$tip.removeClass('fade in top bottom left right')
}
Tooltip.prototype.hide = function () {
var that = this
var $tip = this.tip()
var e = $.Event('hide.bs.' + this.type)
this.$element.removeAttr('aria-describedby')
function complete() {
if (that.hoverState != 'in') $tip.detach()
that.$element.trigger('hidden.bs.' + that.type)
}
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
$tip.removeClass('in')
$.support.transition && this.$tip.hasClass('fade') ?
$tip
.one('bsTransitionEnd', complete)
.emulateTransitionEnd(150) :
complete()
this.hoverState = null
return this
}
Tooltip.prototype.fixTitle = function () {
var $e = this.$element
if ($e.attr('title') || typeof ($e.attr('data-original-title')) != 'string') {
$e.attr('data-original-title', $e.attr('title') || '').attr('title', '')
}
}
Tooltip.prototype.hasContent = function () {
return this.getTitle()
}
Tooltip.prototype.getPosition = function ($element) {
$element = $element || this.$element
var el = $element[0]
var isBody = el.tagName == 'BODY'
return $.extend({}, (typeof el.getBoundingClientRect == 'function') ? el.getBoundingClientRect() : null, {
scroll: isBody ? document.documentElement.scrollTop || document.body.scrollTop : $element.scrollTop(),
width: isBody ? $(window).width() : $element.outerWidth(),
height: isBody ? $(window).height() : $element.outerHeight()
}, isBody ? { top: 0, left: 0 } : $element.offset())
}
Tooltip.prototype.getCalculatedOffset = function (placement, pos, actualWidth, actualHeight) {
return placement == 'bottom' ? { top: pos.top + pos.height, left: pos.left + pos.width / 2 - actualWidth / 2 } :
placement == 'top' ? { top: pos.top - actualHeight, left: pos.left + pos.width / 2 - actualWidth / 2 } :
placement == 'left' ? { top: pos.top + pos.height / 2 - actualHeight / 2, left: pos.left - actualWidth } :
/* placement == 'right' */ { top: pos.top + pos.height / 2 - actualHeight / 2, left: pos.left + pos.width }
}
Tooltip.prototype.getViewportAdjustedDelta = function (placement, pos, actualWidth, actualHeight) {
var delta = { top: 0, left: 0 }
if (!this.$viewport) return delta
var viewportPadding = this.options.viewport && this.options.viewport.padding || 0
var viewportDimensions = this.getPosition(this.$viewport)
if (/right|left/.test(placement)) {
var topEdgeOffset = pos.top - viewportPadding - viewportDimensions.scroll
var bottomEdgeOffset = pos.top + viewportPadding - viewportDimensions.scroll + actualHeight
if (topEdgeOffset < viewportDimensions.top) { // top overflow
delta.top = viewportDimensions.top - topEdgeOffset
} else if (bottomEdgeOffset > viewportDimensions.top + viewportDimensions.height) { // bottom overflow
delta.top = viewportDimensions.top + viewportDimensions.height - bottomEdgeOffset
}
} else {
var leftEdgeOffset = pos.left - viewportPadding
var rightEdgeOffset = pos.left + viewportPadding + actualWidth
if (leftEdgeOffset < viewportDimensions.left) { // left overflow
delta.left = viewportDimensions.left - leftEdgeOffset
} else if (rightEdgeOffset > viewportDimensions.width) { // right overflow
delta.left = viewportDimensions.left + viewportDimensions.width - rightEdgeOffset
}
}
return delta
}
Tooltip.prototype.getTitle = function () {
var title
var $e = this.$element
var o = this.options
title = $e.attr('data-original-title')
|| (typeof o.title == 'function' ? o.title.call($e[0]) : o.title)
return title
}
Tooltip.prototype.getUID = function (prefix) {
do prefix += ~~(Math.random() * 1000000)
while (document.getElementById(prefix))
return prefix
}
Tooltip.prototype.tip = function () {
return (this.$tip = this.$tip || $(this.options.template))
}
Tooltip.prototype.arrow = function () {
return (this.$arrow = this.$arrow || this.tip().find('.tooltip-arrow'))
}
Tooltip.prototype.validate = function () {
if (!this.$element[0].parentNode) {
this.hide()
this.$element = null
this.options = null
}
}
Tooltip.prototype.enable = function () {
this.enabled = true
}
Tooltip.prototype.disable = function () {
this.enabled = false
}
Tooltip.prototype.toggleEnabled = function () {
this.enabled = !this.enabled
}
Tooltip.prototype.toggle = function (e) {
var self = this
if (e) {
self = $(e.currentTarget).data('bs.' + this.type)
if (!self) {
self = new this.constructor(e.currentTarget, this.getDelegateOptions())
$(e.currentTarget).data('bs.' + this.type, self)
}
}
self.tip().hasClass('in') ? self.leave(self) : self.enter(self)
}
Tooltip.prototype.destroy = function () {
clearTimeout(this.timeout)
this.hide().$element.off('.' + this.type).removeData('bs.' + this.type)
}
// TOOLTIP PLUGIN DEFINITION
// =========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.tooltip')
var options = typeof option == 'object' && option
if (!data && option == 'destroy') return
if (!data) $this.data('bs.tooltip', (data = new Tooltip(this, options)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.tooltip
$.fn.tooltip = Plugin
$.fn.tooltip.Constructor = Tooltip
// TOOLTIP NO CONFLICT
// ===================
$.fn.tooltip.noConflict = function () {
$.fn.tooltip = old
return this
}
}(jQuery);
/* ========================================================================
* Bootstrap: popover.js v3.2.0
* http://getbootstrap.com/javascript/#popovers
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// POPOVER PUBLIC CLASS DEFINITION
// ===============================
var Popover = function (element, options) {
this.init('popover', element, options)
}
if (!$.fn.tooltip) throw new Error('Popover requires tooltip.js')
Popover.VERSION = '3.2.0'
Popover.DEFAULTS = $.extend({}, $.fn.tooltip.Constructor.DEFAULTS, {
placement: 'right',
trigger: 'click',
content: '',
template: '<div class="popover" role="tooltip"><div class="arrow"></div><h3 class="popover-title"></h3><div class="popover-content"></div></div>'
})
// NOTE: POPOVER EXTENDS tooltip.js
// ================================
Popover.prototype = $.extend({}, $.fn.tooltip.Constructor.prototype)
Popover.prototype.constructor = Popover
Popover.prototype.getDefaults = function () {
return Popover.DEFAULTS
}
Popover.prototype.setContent = function () {
var $tip = this.tip()
var title = this.getTitle()
var content = this.getContent()
$tip.find('.popover-title')[this.options.html ? 'html' : 'text'](title)
$tip.find('.popover-content').empty()[ // we use append for html objects to maintain js events
this.options.html ? (typeof content == 'string' ? 'html' : 'append') : 'text'
](content)
$tip.removeClass('fade top bottom left right in')
// IE8 doesn't accept hiding via the `:empty` pseudo selector, we have to do
// this manually by checking the contents.
if (!$tip.find('.popover-title').html()) $tip.find('.popover-title').hide()
}
Popover.prototype.hasContent = function () {
return this.getTitle() || this.getContent()
}
Popover.prototype.getContent = function () {
var $e = this.$element
var o = this.options
return $e.attr('data-content')
|| (typeof o.content == 'function' ?
o.content.call($e[0]) :
o.content)
}
Popover.prototype.arrow = function () {
return (this.$arrow = this.$arrow || this.tip().find('.arrow'))
}
Popover.prototype.tip = function () {
if (!this.$tip) this.$tip = $(this.options.template)
return this.$tip
}
// POPOVER PLUGIN DEFINITION
// =========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.popover')
var options = typeof option == 'object' && option
if (!data && option == 'destroy') return
if (!data) $this.data('bs.popover', (data = new Popover(this, options)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.popover
$.fn.popover = Plugin
$.fn.popover.Constructor = Popover
// POPOVER NO CONFLICT
// ===================
$.fn.popover.noConflict = function () {
$.fn.popover = old
return this
}
}(jQuery);
/* ========================================================================
* Bootstrap: scrollspy.js v3.2.0
* http://getbootstrap.com/javascript/#scrollspy
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// SCROLLSPY CLASS DEFINITION
// ==========================
function ScrollSpy(element, options) {
var process = $.proxy(this.process, this)
this.$body = $('body')
this.$scrollElement = $(element).is('body') ? $(window) : $(element)
this.options = $.extend({}, ScrollSpy.DEFAULTS, options)
this.selector = (this.options.target || '') + ' .nav li > a'
this.offsets = []
this.targets = []
this.activeTarget = null
this.scrollHeight = 0
this.$scrollElement.on('scroll.bs.scrollspy', process)
this.refresh()
this.process()
}
ScrollSpy.VERSION = '3.2.0'
ScrollSpy.DEFAULTS = {
offset: 10
}
ScrollSpy.prototype.getScrollHeight = function () {
return this.$scrollElement[0].scrollHeight || Math.max(this.$body[0].scrollHeight, document.documentElement.scrollHeight)
}
ScrollSpy.prototype.refresh = function () {
var offsetMethod = 'offset'
var offsetBase = 0
if (!$.isWindow(this.$scrollElement[0])) {
offsetMethod = 'position'
offsetBase = this.$scrollElement.scrollTop()
}
this.offsets = []
this.targets = []
this.scrollHeight = this.getScrollHeight()
var self = this
this.$body
.find(this.selector)
.map(function () {
var $el = $(this)
var href = $el.data('target') || $el.attr('href')
var $href = /^#./.test(href) && $(href)
return ($href
&& $href.length
&& $href.is(':visible')
&& [[$href[offsetMethod]().top + offsetBase, href]]) || null
})
.sort(function (a, b) { return a[0] - b[0] })
.each(function () {
self.offsets.push(this[0])
self.targets.push(this[1])
})
}
ScrollSpy.prototype.process = function () {
var scrollTop = this.$scrollElement.scrollTop() + this.options.offset
var scrollHeight = this.getScrollHeight()
var maxScroll = this.options.offset + scrollHeight - this.$scrollElement.height()
var offsets = this.offsets
var targets = this.targets
var activeTarget = this.activeTarget
var i
if (this.scrollHeight != scrollHeight) {
this.refresh()
}
if (scrollTop >= maxScroll) {
return activeTarget != (i = targets[targets.length - 1]) && this.activate(i)
}
if (activeTarget && scrollTop <= offsets[0]) {
return activeTarget != (i = targets[0]) && this.activate(i)
}
for (i = offsets.length; i--;) {
activeTarget != targets[i]
&& scrollTop >= offsets[i]
&& (!offsets[i + 1] || scrollTop <= offsets[i + 1])
&& this.activate(targets[i])
}
}
ScrollSpy.prototype.activate = function (target) {
this.activeTarget = target
$(this.selector)
.parentsUntil(this.options.target, '.active')
.removeClass('active')
var selector = this.selector +
'[data-target="' + target + '"],' +
this.selector + '[href="' + target + '"]'
var active = $(selector)
.parents('li')
.addClass('active')
if (active.parent('.dropdown-menu').length) {
active = active
.closest('li.dropdown')
.addClass('active')
}
active.trigger('activate.bs.scrollspy')
}
// SCROLLSPY PLUGIN DEFINITION
// ===========================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.scrollspy')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.scrollspy', (data = new ScrollSpy(this, options)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.scrollspy
$.fn.scrollspy = Plugin
$.fn.scrollspy.Constructor = ScrollSpy
// SCROLLSPY NO CONFLICT
// =====================
$.fn.scrollspy.noConflict = function () {
$.fn.scrollspy = old
return this
}
// SCROLLSPY DATA-API
// ==================
$(window).on('load.bs.scrollspy.data-api', function () {
$('[data-spy="scroll"]').each(function () {
var $spy = $(this)
Plugin.call($spy, $spy.data())
})
})
}(jQuery);
/* ========================================================================
* Bootstrap: tab.js v3.2.0
* http://getbootstrap.com/javascript/#tabs
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// TAB CLASS DEFINITION
// ====================
var Tab = function (element) {
this.element = $(element)
}
Tab.VERSION = '3.2.0'
Tab.prototype.show = function () {
var $this = this.element
var $ul = $this.closest('ul:not(.dropdown-menu)')
var selector = $this.data('target')
if (!selector) {
selector = $this.attr('href')
selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
}
if ($this.parent('li').hasClass('active')) return
var previous = $ul.find('.active:last a')[0]
var e = $.Event('show.bs.tab', {
relatedTarget: previous
})
$this.trigger(e)
if (e.isDefaultPrevented()) return
var $target = $(selector)
this.activate($this.closest('li'), $ul)
this.activate($target, $target.parent(), function () {
$this.trigger({
type: 'shown.bs.tab',
relatedTarget: previous
})
})
}
Tab.prototype.activate = function (element, container, callback) {
var $active = container.find('> .active')
var transition = callback
&& $.support.transition
&& $active.hasClass('fade')
function next() {
$active
.removeClass('active')
.find('> .dropdown-menu > .active')
.removeClass('active')
element.addClass('active')
if (transition) {
element[0].offsetWidth // reflow for transition
element.addClass('in')
} else {
element.removeClass('fade')
}
if (element.parent('.dropdown-menu')) {
element.closest('li.dropdown').addClass('active')
}
callback && callback()
}
transition ?
$active
.one('bsTransitionEnd', next)
.emulateTransitionEnd(150) :
next()
$active.removeClass('in')
}
// TAB PLUGIN DEFINITION
// =====================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.tab')
if (!data) $this.data('bs.tab', (data = new Tab(this)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.tab
$.fn.tab = Plugin
$.fn.tab.Constructor = Tab
// TAB NO CONFLICT
// ===============
$.fn.tab.noConflict = function () {
$.fn.tab = old
return this
}
// TAB DATA-API
// ============
$(document).on('click.bs.tab.data-api', '[data-toggle="tab"], [data-toggle="pill"]', function (e) {
e.preventDefault()
Plugin.call($(this), 'show')
})
}(jQuery);
/* ========================================================================
* Bootstrap: affix.js v3.2.0
* http://getbootstrap.com/javascript/#affix
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// AFFIX CLASS DEFINITION
// ======================
var Affix = function (element, options) {
this.options = $.extend({}, Affix.DEFAULTS, options)
this.$target = $(this.options.target)
.on('scroll.bs.affix.data-api', $.proxy(this.checkPosition, this))
.on('click.bs.affix.data-api', $.proxy(this.checkPositionWithEventLoop, this))
this.$element = $(element)
this.affixed =
this.unpin =
this.pinnedOffset = null
this.checkPosition()
}
Affix.VERSION = '3.2.0'
Affix.RESET = 'affix affix-top affix-bottom'
Affix.DEFAULTS = {
offset: 0,
target: window
}
Affix.prototype.getPinnedOffset = function () {
if (this.pinnedOffset) return this.pinnedOffset
this.$element.removeClass(Affix.RESET).addClass('affix')
var scrollTop = this.$target.scrollTop()
var position = this.$element.offset()
return (this.pinnedOffset = position.top - scrollTop)
}
Affix.prototype.checkPositionWithEventLoop = function () {
setTimeout($.proxy(this.checkPosition, this), 1)
}
Affix.prototype.checkPosition = function () {
if (!this.$element.is(':visible')) return
var scrollHeight = $(document).height()
var scrollTop = this.$target.scrollTop()
var position = this.$element.offset()
var offset = this.options.offset
var offsetTop = offset.top
var offsetBottom = offset.bottom
if (typeof offset != 'object') offsetBottom = offsetTop = offset
if (typeof offsetTop == 'function') offsetTop = offset.top(this.$element)
if (typeof offsetBottom == 'function') offsetBottom = offset.bottom(this.$element)
var affix = this.unpin != null && (scrollTop + this.unpin <= position.top) ? false :
offsetBottom != null && (position.top + this.$element.height() >= scrollHeight - offsetBottom) ? 'bottom' :
offsetTop != null && (scrollTop <= offsetTop) ? 'top' : false
if (this.affixed === affix) return
if (this.unpin != null) this.$element.css('top', '')
var affixType = 'affix' + (affix ? '-' + affix : '')
var e = $.Event(affixType + '.bs.affix')
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
this.affixed = affix
this.unpin = affix == 'bottom' ? this.getPinnedOffset() : null
this.$element
.removeClass(Affix.RESET)
.addClass(affixType)
.trigger($.Event(affixType.replace('affix', 'affixed')))
if (affix == 'bottom') {
this.$element.offset({
top: scrollHeight - this.$element.height() - offsetBottom
})
}
}
// AFFIX PLUGIN DEFINITION
// =======================
function Plugin(option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.affix')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.affix', (data = new Affix(this, options)))
if (typeof option == 'string') data[option]()
})
}
var old = $.fn.affix
$.fn.affix = Plugin
$.fn.affix.Constructor = Affix
// AFFIX NO CONFLICT
// =================
$.fn.affix.noConflict = function () {
$.fn.affix = old
return this
}
// AFFIX DATA-API
// ==============
$(window).on('load', function () {
$('[data-spy="affix"]').each(function () {
var $spy = $(this)
var data = $spy.data()
data.offset = data.offset || {}
if (data.offsetBottom) data.offset.bottom = data.offsetBottom
if (data.offsetTop) data.offset.top = data.offsetTop
Plugin.call($spy, data)
})
})
}(jQuery);
/Mashup-Govind-102016060-0.0.1.tar.gz/Mashup-Govind-102016060-0.0.1/README.md | Name - Govind Singla
Group - CS10
Roll No. - 102016060
Title - Mashup
## Description
This module creates an audio mashup of songs by any singer you choose, of any duration you want, with a single command.
## Requirements
pip install pytube
pip install pydub
Note - to run this program, you must place these 3 files in your working directory -
1. ffmpeg.exe
2. ffplay.exe
3. ffprobe.exe
## Install
Install the package using pip -
pip install Mashup-Govind-102016060
## Running the code
Use the following command -
102016060 102016060.py "Sharry Mann" 20 20 102016060-output.mp3
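The script itself is not reproduced in this README. As a rough idea of the pipeline it describes (search the singer, download the first N audio streams with pytube, keep Y seconds of each with pydub, join everything into one mp3), here is a hypothetical sketch; the `make_mashup` name and its internals are illustrative assumptions, not the package's actual code.

```python
# Hypothetical sketch only; requires ffmpeg/ffprobe on PATH (see the note above).
from pytube import Search
from pydub import AudioSegment


def make_mashup(singer, n_videos, clip_seconds, output_file):
    """Download n_videos results for `singer`, keep `clip_seconds` seconds of
    each, and join them into a single mp3 named `output_file`."""
    clips = []
    for video in Search(singer).results[:n_videos]:
        # Grab the audio-only stream and download it to the working directory.
        stream = video.streams.filter(only_audio=True).first()
        path = stream.download(filename_prefix="tmp_")
        # Keep only the first `clip_seconds` seconds (pydub slices are in ms).
        clips.append(AudioSegment.from_file(path)[: clip_seconds * 1000])
    # Concatenate all clips and export the mashup as one file.
    mashup = sum(clips[1:], clips[0])
    mashup.export(output_file, format="mp3")


if __name__ == "__main__":
    make_mashup("Sharry Mann", 20, 20, "102016060-output.mp3")
```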
| PypiClean |
/Flask-Breadcrumbs-0.5.1.tar.gz/Flask-Breadcrumbs-0.5.1/README.rst | ===================
Flask-Breadcrumbs
===================
.. image:: https://travis-ci.org/inveniosoftware/flask-breadcrumbs.png?branch=master
:target: https://travis-ci.org/inveniosoftware/flask-breadcrumbs
.. image:: https://coveralls.io/repos/inveniosoftware/flask-breadcrumbs/badge.png?branch=master
:target: https://coveralls.io/r/inveniosoftware/flask-breadcrumbs
.. image:: https://pypip.in/v/Flask-Breadcrumbs/badge.png
:target: https://pypi.python.org/pypi/Flask-Breadcrumbs/
.. image:: https://pypip.in/d/Flask-Breadcrumbs/badge.png
:target: https://pypi.python.org/pypi/Flask-Breadcrumbs/
About
=====
Flask-Breadcrumbs is a Flask extension that adds support for
generating site breadcrumb navigation.
Installation
============
Flask-Breadcrumbs is on PyPI so all you need is: ::
pip install Flask-Breadcrumbs
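Usage
=====
A minimal sketch of typical usage (the ``index`` view below is illustrative;
the entry points used are the ``Breadcrumbs`` extension object and the
``register_breadcrumb`` decorator): ::
    from flask import Flask
    from flask_breadcrumbs import Breadcrumbs, register_breadcrumb

    app = Flask(__name__)
    Breadcrumbs(app=app)

    @app.route('/')
    @register_breadcrumb(app, '.', 'Home')
    def index():
        return 'Index page'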
Documentation
=============
Documentation is readable at http://flask-breadcrumbs.readthedocs.io/ or
can be built using Sphinx: ::
git submodule init
git submodule update
pip install Sphinx
python setup.py build_sphinx
Testing
=======
Running the test suite is as simple as: ::
python setup.py test
or, to also show code coverage: ::
./run-tests.sh
| PypiClean |
/GenIce2-2.1.7.1.tar.gz/GenIce2-2.1.7.1/genice2/lattices/13.py |
import numpy as np
from genice2 import CIF
from genice2.cell import cellvectors
import genice2.lattices
desc = {"ref": {"13": 'Salzmann 2006'},
"usage": "No options available.",
"brief": "Ice XIII, a hydrogen-ordered counterpart of ice V."
}
class Lattice(genice2.lattices.Lattice):
def __init__(self):
atoms = """
O1 0.2541(6) 0.5629(5) 0.2517(5)
O2 0.4771(6) 0.7992(5) 0.4089(5)
O3 0.0503(6) 0.8082(6) 0.0941(5)
O4 0.2613(5) 0.4045(6) 0.4992(5)
O5 0.2113(4) 0.4029(5) 0.0034(5)
O6 0.4147(5) 0.1103(7) 0.2336(4)
O7 0.1245(5) 0.1142(6) 0.2643(4)
D8 0.3444(4) 0.6427(5) 0.3008(3)
D10 0.2458(5) 0.4942(5) 0.3299(5)
D13 0.1074(4) 0.7187(5) 0.1563(4)
D16 0.4820(4) 0.9075(5) 0.3558(4)
D18 0.5763(5) 0.7499(5) 0.4437(4)
D19 0.9486(5) 0.7508(5) 0.0478(4)
D21 0.2372(3) 0.4543(5) 0.0989(4)
D24 0.3043(4) 0.4904(6) 0.5777(4)
D26 0.1708(4) 0.3555(6) 0.5137(4)
D27 0.3072(4) 0.3737(6) 0.9904(3)
D29 0.0781(4) 0.0194(6) 0.1989(4)
D30 0.3250(5) 0.1374(5) 0.2554(5)
D32 0.3823(5) 0.0496(6) 0.1467(5)
D35 0.0509(4) 0.2082(6) 0.2548(5)
"""
# space group: P2_1/a No. 14
# http://img.chem.ucl.ac.uk/sgp/large/014dy1.htm
symops = """
x y z
1/2-x 1/2+y -z
-x -y -z
1/2+x 1/2-y z
"""
a = 9.2417 / 10.0 # nm
b = 7.4724 / 10.0 # nm
c = 10.2970 / 10.0 # nm
A = 90
B = 109.6873
C = 90
self.cell = cellvectors(a, b, c, A, B, C)
# helper routines to make from CIF-like data
atomd = CIF.atomdic(atoms)
sops = CIF.symmetry_operators(symops)
self.waters, self.fixed = CIF.waters_and_pairs(self.cell, atomd, sops)
# set self.pairs in this way for hydrogen-ordered ices.
self.pairs = self.fixed
self.density = 18 * len(self.waters) / 6.022e23 / \
(np.linalg.det(self.cell) * 1e-21)
        self.coord = "relative"
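# Hypothetical usage note (not part of the original file): GenIce2 lattice
# plugins such as this one are normally selected by name on the command line,
# e.g. `genice2 13 > ice13.gro`; the exact CLI invocation may differ by version.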
/OBITools-1.2.13.tar.gz/OBITools-1.2.13/src/obitools/dnahash/__init__.py | _A=[0]
_C=[1]
_G=[2]
_T=[3]
_R= _A + _G
_Y= _C + _T
_M= _C + _A
_K= _T + _G
_W= _T + _A
_S= _C + _G
_B= _C + _G + _T
_D= _A + _G + _T
_H= _A + _C + _T
_V= _A + _C + _G
_N= _A + _C + _G + _T
_dnahash={'a':_A,
'c':_C,
'g':_G,
't':_T,
'r':_R,
'y':_Y,
'm':_M,
'k':_K,
'w':_W,
's':_S,
'b':_B,
'd':_D,
'h':_H,
'v':_V,
'n':_N,
}
def hashCodeIterator(sequence,wsize,degeneratemax=0,offset=0):
errors = 0
emask = [0] * wsize
epointer = 0
size = 0
position = offset
hashs = set([0])
hashmask = 0
for i in xrange(wsize):
hashmask <<= 2
hashmask +=3
for l in sequence:
l = l.lower()
hl = _dnahash[l]
if emask[epointer]:
errors-=1
emask[epointer]=0
if len(hl) > 1:
errors +=1
emask[epointer]=1
epointer+=1
epointer%=wsize
if errors > degeneratemax:
hl=set([hl[0]])
hashs=set((((hc<<2) | cl) & hashmask)
for hc in hashs
for cl in hl)
if size < wsize:
size+=1
if size==wsize:
if errors <= degeneratemax:
yield (position,hashs,errors)
position+=1
def hashSequence(sequence,wsize,degeneratemax=0,offset=0,hashs=None):
if hashs is None:
hashs=[[] for x in xrange(4**wsize)]
for pos,keys,errors in hashCodeIterator(sequence, wsize, degeneratemax, offset):
for k in keys:
hashs[k].append(pos)
return hashs
def hashSequences(sequences,wsize,maxpos,degeneratemax=0):
hashs=None
offsets=[]
offset=0
for s in sequences:
offsets.append(offset)
        # accumulate word positions from every sequence into one shared table
        hashs = hashSequence(s,wsize,degeneratemax=degeneratemax,offset=offset,hashs=hashs)
offset+=len(s)
    return hashs,offsets
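# Hypothetical usage sketch (not part of the original module): iterate over the
# 4-mer hash codes of a short sequence, tolerating one degenerate position.
#
#   for pos, codes, errors in hashCodeIterator("acgtnacg", 4, degeneratemax=1):
#       print pos, sorted(codes), errors
#
# hashSequences builds one shared table for several sequences (the `maxpos`
# argument is accepted but unused by the current implementation):
#
#   table, offsets = hashSequences(["acgtacgt", "ttgcaacg"], 4, None)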
/Mocha-0.12.1.tar.gz/Mocha-0.12.1/mocha/contrib/auth/__init__.py | import logging
from . import signals, exceptions, oauth
import flask_login
from flask import current_app
from mocha.exceptions import AppError
import mocha.cli
from mocha import (_,
utc_now,
config,
abort,
send_mail,
url_for,
views,
models,
utils,
request,
redirect,
flash,
session,
init_app,
decorators as h_deco)
import models as auth_models
from flask_login import current_user
__version__ = "1.0.0"
__options__ = utils.dict_dot({})
# USER ROLES
ROLES_SUPERADMIN = ["SUPERADMIN"]
ROLES_ADMIN = ROLES_SUPERADMIN + ["ADMIN"]
ROLES_MANAGER = ROLES_ADMIN + ["MANAGER"]
ROLES_CONTRIBUTOR = ROLES_MANAGER + ["EDITOR", "CONTRIBUTOR"]
ROLES_MODERATOR = ROLES_CONTRIBUTOR + ["MODERATOR"]
ACTIONS = {
"USERNAME": "USERNAME",
"PASSWORD": "PASSWORD",
"EMAIL": "EMAIL",
"STATUS": "STATUS",
"PROFILE_IMAGE": "PROFILE_IMAGE",
"UPDATE": "UPDATE"
}
# LOGIN MANAGER: Flask Login
login_manager = flask_login.LoginManager()
login_manager.login_message_category = "error"
init_app(login_manager.init_app)
@login_manager.user_loader
def load_user(userid):
return get_user_by_id(userid)
# def init(app):
# @app.before_request
# def force_password_change():
# print("THIS IS BEFORE REQUEST")
# _ = __options__.get("require_password_change_exclude_endpoints")
# _ = [] if not isinstance(_, list) else _
#
# exclude_endpoints = ["static", "ContactPage:index", "Index:index",
# "AuthLogin:logout"] + _
#
# if current_user and current_user.is_authenticated:
# if request.endpoint \
# and request.endpoint not in exclude_endpoints:
# if request.endpoint != "AuthAccount:change_password" \
# and session_get_require_password_change():
# flash("Password Change is required", "info")
# return redirect(views.auth.Account.account_info, edit_password=1)
# ------------------------------------------------------------------------------
def is_authenticated():
""" A shortcut to check if a user is authenticated """
return current_user and current_user.is_authenticated and current_user.is_active
def not_authenticated():
""" A shortcut to check if user not authenticated."""
return not is_authenticated()
def get_random_password(length=8):
return utils.generate_random_string(length)
# ------------------------------------------------------------------------------
# VISIBILITY:
# The methods below return bool that are meant be pass in the
# nav_title(visible=fn) `visible` args
#
def visible_to_roles(*roles):
"""
    This is a @nav_title specific function to set the visibility of a menu entry
    based on roles
:param roles:
:return: callback fn
"""
if is_authenticated():
return True if current_user.has_any_roles(*roles) else False
return False
# Alias
def visible_to_superadmins():
return visible_to_roles(*ROLES_SUPERADMIN)
def visible_to_admins():
return visible_to_roles(*ROLES_ADMIN)
def visible_to_managers():
return visible_to_roles(*ROLES_MANAGER)
def visible_to_contributors():
return visible_to_roles(*ROLES_CONTRIBUTOR)
def visible_to_moderators():
return visible_to_roles(*ROLES_MODERATOR)
def visible_to_authenticated():
return is_authenticated()
def visible_to_non_authenticated():
return not_authenticated()
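# Hypothetical usage sketch, following the nav_title(visible=fn) convention
# described above (the decorator wiring is illustrative, not an exact API):
#
#   @nav_title("Admin area", visible=visible_to_admins)
#   class AdminView(Mocha):
#       ...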
# ------------------------------------------------------------------------------
# TOKENIZATION
def get_jwt_secret():
"""
Get the JWT secret
:return: str
"""
secret_key = __options__.get("jwt_secret") or config("JWT_SECRET") or config("SECRET_KEY")
if not secret_key:
raise exceptions.AuthError("Missing config JWT/SECRET_KEY")
return secret_key
def get_jwt_salt():
"""
Get the JWT salt
:return: str
"""
return __options__.get("jwt_salt", "mocha:contrib:auth")
def get_jwt_ttl():
"""
Get JWT time to live
:return:
"""
return __options__.get("jwt_ttl", 3600)
# ------------------------------------------------------------------------------
# SIGNUP + LOGIN
def _user(user):
"""
Factory function to AuthUser
:param user: AuthUser
:return:
"""
return UserModel(user) if user else None
def create_user(username, password=None, email=None, first_name="", last_name="",
role="MEMBER", login_method=None):
"""
Create a new user
:param username:
:param password:
:param email:
:param first_name:
:param last_name:
:param role: str
:return: AuthUser
"""
if not login_method:
login_method = "email" if "@" in username else "username"
def cb():
return _user(models.AuthUser.new(username=username,
password=password,
email=email,
first_name=first_name,
last_name=last_name,
login_method=login_method,
role=role))
return signals.create_user(cb)
def get_user(id=None, username=None, email=None, federated_id=None, provider=None, jwt=None):
"""
Retrieve the user based on the data provided
:param id:
:param username:
:param email:
:param federated_id:
:param provider:
:param jwt:
:return: AuthUser
"""
if id:
return _user(models.AuthUser.get(id))
elif username:
return _user(models.AuthUser.get_by_username(username))
elif email:
return _user(models.AuthUser.get_by_email(email))
elif federated_id and provider:
user = models.AuthUserFederation.get_user(provider, federated_id)
return _user(user) if user else None
elif jwt:
pass
def get_user_by_auth_token(token=None):
"""
    Return the AuthUser associated with the token, otherwise return None.
    If no token is provided, it is pulled from the "Authorization" request header.
    Exception:
        Along with AuthError, it may propagate signature or expiry errors
        raised while unsigning the token.
:param token:
:return: AuthUser
"""
if not token:
token = request.get_auth_token()
secret_key = get_jwt_secret()
s = utils.unsign_jwt(token=token,
secret_key=secret_key,
salt=get_jwt_salt())
if "id" not in s:
raise exceptions.AuthError("Invalid Authorization Bearer Token")
return get_user_by_id(int(s["id"]))
def get_user_by_action_token(action, token):
"""
Get the user by action token
:param action: str
:param token: str
:return: AuthUser
"""
data = utils.unsign_url_safe(token,
secret_key=get_jwt_secret(),
salt=action)
if data is None:
raise exceptions.AuthError("Invalid Token")
return get_user_by_id(int(data))
def with_username(username, password):
"""
To authenticate a user with user and password
*** authenticate doesn't create a session. To create a session,
use login_user
:param username:
:param password:
:return: UserModel
"""
user = models.AuthUser.get_by_username(username)
return _user(user) if user and user.password_matched(password) else None
def with_federation(provider, federated_id):
"""
To authenticate with Federated Login
:param provider:
:param federated_id:
:return: UserModel
"""
return get_user(federated_id=federated_id, provider=provider)
def create_session(user):
"""
Create the login session
:param user: UserModel
:return:
"""
def cb():
if user:
if __options__.get("require_email_verification") and not user.email_verified:
raise exceptions.VerifyEmailError()
if flask_login.login_user(user):
user.update(last_login_at=utc_now())
return user
return None
return signals.user_login(cb)
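# Hypothetical end-to-end login sketch using the helpers above (route wiring
# and error handling omitted; the credential values are illustrative):
#
#   user = with_username("[email protected]", "s3cret")
#   if not user:
#       raise exceptions.AuthError("Invalid username or password")
#   create_session(user)            # creates the Flask-Login session
#   api_token = user.create_jwt()   # optionally issue a bearer token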
#
class UserModel(flask_login.UserMixin):
def __init__(self, user):
self.user = user.user if isinstance(user, self.__class__) else user
self.user_salt = "USER:%s" % self.user.id
def __getattr__(self, item):
return getattr(self.user, item)
# ------ FLASK-LOGIN REQUIRED METHODS ----------------------------------
@property
def is_active(self):
return self.active
# ---------- END FLASK-LOGIN REQUIREMENTS ------------------------------
def change_username(self, username):
"""
        Change the user's login username. When the login method is "email",
        the login email is updated to match as well.
        :param username:
:return:
"""
def cb():
if self.login_method == "username" and "@" in username:
raise exceptions.AuthError(_("Username can't be an email"))
elif self.login_method == "email" and "@" not in username:
raise exceptions.AuthError(_("Invalid email login"))
if "@" in username:
if not utils.is_email_valid(username):
raise exceptions.AuthError("Email address invalid")
elif not utils.is_username_valid(username):
raise exceptions.AuthError("Username invalid")
# Change both email and
if self.login_method == "email":
if not models.AuthUser.get_by_username(username) \
and not models.AuthUser.get_by_email(username):
self.user.change_username(username)
self.user.change_email(username)
else:
self.user.change_username(username)
return username
return signals.user_update(self, ACTIONS["USERNAME"], cb)
def change_email(self, email):
"""
        Change the user's login email
        :param email:
:return:
"""
def cb():
if not utils.is_email_valid(email):
raise exceptions.AuthError("Email address invalid")
self.user.change_email(email)
return email
return signals.user_update(self, ACTIONS["EMAIL"], cb,
{"email": self.email})
def update_info(self, _action=None, **kwargs):
"""
        Update the user's profile info
        :param _action:
        :param kwargs:
:return:
"""
def cb():
kwargs.pop("email", None)
kwargs.pop("username", None)
kwargs.pop("password_hash", None)
kwargs.pop("require_password_change", None)
self.user.update(**kwargs)
return kwargs
_action = ACTIONS["UPDATE"] if _action is None else _action
return signals.user_update(self, _action, cb, data=self.to_dict())
def change_password(self, password):
"""
Change a user's password
:param user:
:param password:
:param password_confirm:
:return:
"""
def cb():
if not utils.is_password_valid(password):
raise exceptions.AuthError("Invalid Password")
self.user.change_password(password)
return True
return signals.user_update(self, ACTIONS["PASSWORD"], cb)
def reset_password(self):
"""
        Reset the password to a new random one and return it
        :return: string - the new password
"""
def cb():
password = get_random_password()
self.change_password(password)
return password
return signals.user_update(self, ACTIONS["PASSWORD"], cb)
def change_status(self, status):
"""
Change the user's status
        :param status:
:return:
"""
def cb():
self.user.update(status=status)
return status
return signals.user_update(self, ACTIONS["STATUS"], cb,
data={"status": self.status})
def create_jwt(self, expires_in=None):
"""
Create a secure timed JWT token that can be passed. It save the user id,
which later will be used to retrieve the data
:param user: AuthUser, the user's object
:param expires_in: - time in second for the token to expire
:return: string
"""
s = utils.sign_jwt(data={"id": self.user.id},
secret_key=get_jwt_secret(),
salt=get_jwt_salt(),
expires_in=expires_in or get_jwt_ttl())
return s
def create_action_token(self, action, expires_in):
"""
Create a url safe action token attached to the user
:param action:
:param expires_in:
:return:
"""
return utils.sign_url_safe(self.user.id,
secret_key=get_jwt_secret(),
salt=action,
expires_in=expires_in)
def sign_data(self, data, expires_in=None, url_safe=True):
"""
To safely sign a user data. It will be signed with the user key
:param data: mixed
:param expires_in: The time for it to expire
:param url_safe: bool. If true it will allow it to be passed in URL
:return: str - the token/signed data
"""
if url_safe:
return utils.sign_url_safe(data,
secret_key=self.secret_key,
salt=self.user_salt,
expires_in=expires_in)
else:
return utils.sign_data(data,
secret_key=self.secret_key,
salt=self.user_salt,
expires_in=expires_in)
def unsign_data(self, data, url_safe=True):
"""
Retrieve the signed data. If it is expired, it will throw an exception
:param data: token/signed data
:param url_safe: bool. If true it will allow it to be passed in URL
:return: mixed, the data in its original form
"""
if url_safe:
return utils.unsign_url_safe(data,
secret_key=self.secret_key,
salt=self.user_salt)
else:
return utils.unsign_data(data,
secret_key=self.secret_key,
salt=self.user_salt)
def signed_data_match(self, data, matched_data, url_safe=True):
"""
See if a data matched a signed one
:param data:
:param matched_data:
:param url_safe:
:return:
"""
try:
u_data = self.unsign_data(data, url_safe=url_safe)
return u_data == matched_data
except Exception as e:
return False
def send_email(self, template, **kwargs):
"""
To send email to user
:param template:
:param kwargs:
:return:
"""
user_data = {
"id": self.id,
"username": self.username,
"name": self.name,
"first_name": self.first_name,
"last_name": self.last_name,
"email": self.email
}
kwargs.pop("user", None)
send_mail(to=self.email, template=template, user=user_data, **kwargs)
def send_password_reset(self, base_url=None, view_class=None, **kw):
"""
Reset a password and send email
:param user: AuthUser
:param email: str - The auth user login email
:param base_url: str - By default it will use the current url, base_url will allow custom url
:param template: str - The email template
:param method: str - token or email - The method to reset the password
:param view_class: obj - The view instance of the login
:param kwargs: Any data to pass
:return:
"""
view = view_class or views.auth.Login
endpoint_reset = getattr(view, "reset_password")
endpoint_login = getattr(view, "login")
action = "reset-password"
method = __options__.get("reset_password_method", "TOKEN")
template = __options__.get("email_templates.reset_password",
"auth/reset-password.txt")
new_password = None
if method.upper() == "TOKEN":
expires_in = __options__.get("reset_password_token_ttl", 1)
action_token = self.create_action_token(action, expires_in)
signed_data = self.sign_data(action, expires_in=expires_in)
url = _url_for_email(endpoint_reset,
base_url=base_url,
action_token=action_token,
signed_data=signed_data)
else:
new_password = self.reset_password()
url = _url_for_email(endpoint_login, base_url=base_url)
self.send_email(template=template,
action={
"reset_method": method.upper(),
"url": url,
"new_password": new_password
},
data=kw)
def send_verification_email(self, base_url=None, view_class=None, **kw):
template = __options__.get("email_templates.verify_email",
"auth/verify-email.txt")
url = self._create_verify_email_token_url(base_url=base_url,
view_class=view_class)
self.send_email(template=template,
action={
"url": url,
},
data=kw)
def send_welcome_email(self, base_url=None, view_class=None, **kw):
verify_email = __options__.get("require_email_verification") or False
template = __options__.get("email_templates.welcome",
"auth/welcome.txt")
url = self._create_verify_email_token_url(base_url=base_url,
view_class=view_class)
self.send_email(template=template,
action={
"url": url,
"require_email_verification": verify_email,
},
data=kw)
def _create_verify_email_token_url(self, base_url=None, view_class=None):
"""
To create a verify email token url
:param user: (object) AuthUser
:param base_url: a base_url to use instead of the native one
:param view_class: (obj) the view class, to allow build the url
:return: string
"""
view = view_class or views.auth.Login
endpoint = getattr(view, "verify_email")
action = "verify-email"
expires_in = __options__.get("verify_email_token_ttl") or (60 * 24)
action_token = self.create_action_token(action, expires_in)
signed_data = self.sign_data(action, expires_in=expires_in)
url = _url_for_email(endpoint,
base_url=base_url,
action_token=action_token,
signed_data=signed_data)
return url
def add_federation(self, provider, federated_id):
"""
Add federated login to the current user
:param provider:
:param federated_id:
:return:
"""
models.AuthUserFederation.new(user=self,
provider=provider,
federated_id=federated_id)
# ------------------------------------------------------------------------------
# EMAIL SENDING
def _url_for_email(endpoint, base_url=None, **kw):
"""
Create an external url_for by using a custom base_url different from the domain we
are on
:param endpoint:
:param base_url:
:param kw:
:return:
"""
base_url = base_url or config("MAIL_EXTERNAL_BASE_URL")
_external = True if not base_url else False
url = url_for(endpoint, _external=_external, **kw)
if base_url and not _external:
url = "%s/%s" % (base_url.strip("/"), url.lstrip("/"))
return url
def session_set_require_password_change(change=True):
session["auth:require_password_change"] = change
def session_get_require_password_change():
return session.get("auth:require_password_change")
# ------------------------------------------------------------------------------
# CLI
class CLI(mocha.cli.Manager):
def __init__(self, command, click):
@command("auth:create-super-admin")
@click.argument("email")
def create_super_admin(email):
"""
To create a super admin by providing the email address
"""
print("-" * 80)
print("Mocha Auth: Create Super Admin")
print("Email: %s" % email)
try:
password = get_random_password()
user = create_user(username=email,
password=password,
first_name="SuperAdmin",
role="Superadmin")
user.update(require_password_change=True)
print("Password: %s" % password)
except Exception as e:
print("ERROR: %s" % e)
print("Done!")
@command("auth:reset-password")
@click.argument("email")
def reset_password(email):
"""
To reset password by email
"""
print("-" * 80)
print("Mocha Auth: Reset Password")
try:
ul = models.AuthUserLogin.get_by_email(email)
if not ul:
raise Exception("Email '%s' doesn't exist" % email)
password = get_random_password()
ul.change_password(password)
ul.update(require_password_change=True)
print("Email: %s" % email)
print("New Password: %s" % password)
except Exception as e:
print("ERROR: %s" % e)
print("Done!")
@command("auth:user-info")
@click.option("--email")
@click.option("--id")
        def user_info(email=None, id=None):
"""
Get the user info by email or ID
"""
print("-" * 80)
print("Mocha Auth: User Info")
print("")
try:
if email:
ul = models.AuthUserLogin.get_by_email(email)
if not ul:
raise Exception("Invalid Email address")
user_info = ul.user
elif id:
user_info = models.AuthUser.get(id)
if not user_info:
raise Exception("Invalid User ID")
k = [
("ID", "id"), ("Name", "name"),
("First Name", "first_name"),
("Last Name", "last_name"), ("Signup", "created_at"),
("Last Login", "last_login"),
("Signup Method", "register_method"),
("Status", "status")
]
print("Email: %s" % user_info.get_email_login().email)
for _ in k:
print("%s : %s" % (_[0], getattr(user_info, _[1])))
except Exception as e:
print("ERROR: %s" % e)
print("")
print("Done!")
# ---
from .decorators import *
# -----
# DEPRECATED
def get_user_by_id(id):
# Deprecated
return get_user(id=id)
def get_user_by_username(username):
# Deprecated
return get_user(username=username)
def get_user_by_email(email):
# Deprecated
return get_user(email=email)
def authenticate(username, password):
# deprecated
return with_username(username, password)
def login_user(user):
# Deprecated
return create_session(user)
def get_user_by_jwt(token):
# Deprecated
    return get_user_by_auth_token(token)
/DpmModule-1.1.0-py3-none-any.whl/dpmModule/jobs/demonslayer.py | from ..kernel import core
from ..kernel.core import VSkillModifier as V
from ..character import characterKernel as ck
from functools import partial
from ..status.ability import Ability_tool
from . import globalSkill
###### Passive Skill ######
class JobGenerator(ck.JobGenerator):
def __init__(self):
super(JobGenerator, self).__init__()
self.buffrem = False
self.jobtype = "str"
self.vEnhanceNum = 15
self.preEmptiveSkills = 1
self.ability_list = Ability_tool.get_ability_set('boss_pdamage', 'reuse', 'mess')
def get_modifier_optimization_hint(self):
return core.CharacterModifier(armor_ignore = 20)
def get_passive_skill_list(self):
        # Demon Fury: +15% boss damage; applied via the link skill, so not counted here.
DeathCurse = core.InformedCharacterModifier("데스 커스",pdamage = 1)
Outrage = core.InformedCharacterModifier("아웃레이지",att = 50, crit = 20)
PhisicalTraining = core.InformedCharacterModifier("피지컬 트레이닝",stat_main = 30, stat_sub = 30)
Concentration = core.InformedCharacterModifier("컨센트레이션",pdamage_indep = 25)
AdvancedWeaponMastery = core.InformedCharacterModifier("어드밴스드 웨폰 마스터리",att = 50, crit_damage = 15)
DarkBindPassive = core.InformedCharacterModifier("다크 바인드(패시브)", armor_ignore = 30)
return [DeathCurse, Outrage, PhisicalTraining, Concentration, AdvancedWeaponMastery, DarkBindPassive]
def get_not_implied_skill_list(self):
WeaponConstant = core.InformedCharacterModifier("무기상수", pdamage_indep = 20)
Mastery = core.InformedCharacterModifier("숙련도",pdamage_indep = -5)
        EvilTorture = core.InformedCharacterModifier("이블 토쳐",pdamage_indep = 15, crit = 15) # Only while the target has an abnormal status.
return [WeaponConstant, Mastery, EvilTorture]
def generate(self, vEhc, chtr : ck.AbstractCharacter, combat : bool = False):
        '''
        Core enhancement order:
        Slash - Impact - Cerberus - Explosion - Metamorphosis - Devil Cry
        ##### Hyper skills #####
        # Demon Slash - Reinforce, Remain Time Reinforce
        # Demon Impact - Reinforce, Bonus Attack, Reduce Force
        '''
#Buff skills
Booster = core.BuffSkill("부스터", 600, 180*1000, rem = True).wrap(core.BuffSkillWrapper)
DemonSlash1 = core.DamageSkill("데몬 슬래시(1타)", 390, 110, 2, modifier = core.CharacterModifier(pdamage = 370)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlash2 = core.DamageSkill("데몬 슬래시(2타)", 330, 110, 2, modifier = core.CharacterModifier(pdamage = 370)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlash3 = core.DamageSkill("데몬 슬래시(3타)", 330, 100, 3, modifier = core.CharacterModifier(pdamage = 370)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlash4 = core.DamageSkill("데몬 슬래시(4타)", 330, 100, 4, modifier = core.CharacterModifier(pdamage = 370)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAW1 = core.DamageSkill("데몬 슬래시 강화(1타)", 390, 600, 3, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAW2 = core.DamageSkill("데몬 슬래시 강화(2타)", 300, 600, 3, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAW3 = core.DamageSkill("데몬 슬래시 강화(3타)", 210, 700, 3, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAW4 = core.DamageSkill("데몬 슬래시 강화(4타)", 210, 800, 3, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAWBB1 = core.DamageSkill("데몬 슬래시 강화(1타)블블", 390, 600, 3*1.9, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAWBB2 = core.DamageSkill("데몬 슬래시 강화(2타)블블", 300, 600, 3*1.9, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAWBB3 = core.DamageSkill("데몬 슬래시 강화(3타)블블", 210, 700, 3*1.9, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonSlashAWBB4 = core.DamageSkill("데몬 슬래시 강화(4타)블블", 210, 800, 3*1.9, modifier = core.CharacterModifier(pdamage = 370+50, armor_ignore = 50)).setV(vEhc, 0, 2, False).wrap(core.DamageSkillWrapper)
DemonImpact = core.DamageSkill("데몬 임팩트", 660, 460, (6+1)*1.9, modifier = core.CharacterModifier(crit = 100, armor_ignore = 30, boss_pdamage = 40, pdamage = 20)).setV(vEhc, 1, 2, False).wrap(core.DamageSkillWrapper)
        DevilCry = core.DamageSkill("데빌 크라이", 1620, 515, 7*1.9, cooltime = 20 * 1000).setV(vEhc, 5, 2, False).wrap(core.DamageSkillWrapper) # Must be used to keep Evil Torture active.
DevilCryBuff = core.BuffSkill("데빌 크라이(위협)", 0, 20000, cooltime = -1).wrap(core.BuffSkillWrapper)
InfinityForce = core.BuffSkill("인피니티 포스", 990, 50*1000, cooltime = 200 * 1000).wrap(core.BuffSkillWrapper)
Metamorphosis = core.BuffSkill("메타모포시스", 1680, 180*1000, rem = True, pdamage = 35).wrap(core.BuffSkillWrapper)
MetamorphosisSummon = core.SummonSkill("메타모포시스(소환)", 0, 500, 250, 1, 180*1000, cooltime = -1).setV(vEhc, 4, 2, False).wrap(core.SummonSkillWrapper)
        # Blue Blood does not apply to summons.
        BlueBlood = core.BuffSkill("블루 블러드", 1020, 60000, cooltime = 120000 - 60000).wrap(core.BuffSkillWrapper) # Every attack triggers an extra hit at 90% of final damage. Cooldown -3 s per 50 Force gained; during Infinity Force, -2 s every 4 s; Force cost of all skills reduced by 20%.
        Cerberus = core.DamageSkill("서버러스", 900, 450, 6, cooltime = 5000, modifier = core.CharacterModifier(boss_pdamage = 50, armor_ignore = 50)).setV(vEhc, 2, 2, False).wrap(core.DamageSkillWrapper)# Absorbs an extra 50 Force.
DemonFortitude = core.BuffSkill("데몬 포티튜드", 0, 60000, cooltime = 120000).wrap(core.BuffSkillWrapper)
CallMastema = core.SummonSkill("콜 마스테마", 690, 5000, 1100, 8, (30+vEhc.getV(4,4))*1000, cooltime = 150*1000).isV(vEhc,4,4).wrap(core.SummonSkillWrapper)
        #CallMastemaAnother = core.SummonSkill("콜 마스테마+", 0, ).wrap(core.BuffSkillWrapper) # Lovely Territory.. deals no damage.
DemonAwakning = core.BuffSkill("데몬 어웨이크닝", 1110, (35 + vEhc.getV(0,0))*1000, cooltime = 120 * 1000, crit = (50 + int(0.5*vEhc.getV(0,0)))).isV(vEhc,0,0).wrap(core.BuffSkillWrapper)
DemonAwakningSummon = core.SummonSkill("데몬 어웨이크닝(더미)", 0, 8000, 0, 0, (35 + vEhc.getV(0,0))*1000, cooltime = -1).isV(vEhc,0,0).wrap(core.SummonSkillWrapper)
SpiritOfRage = core.SummonSkill("요르문간드", 810, 1080, (850+34*vEhc.getV(3,3)), 12, (10+int(0.2*vEhc.getV(3,3)))*1000, cooltime = (120 - int(0.5*vEhc.getV(3,3)))*1000, modifier = core.CharacterModifier(crit = 100, armor_ignore = 50)).isV(vEhc,3,3).wrap(core.SummonSkillWrapper)
SpiritOfRageEnd = core.DamageSkill("요르문간드(종료)", 0, 900+36*vEhc.getV(3,3), 15, cooltime = -1).isV(vEhc,3,3).wrap(core.DamageSkillWrapper)
Orthros = core.SummonSkill("오르트로스(네메아)", 1000, 2000, 400+16*vEhc.getV(1,1), 12, 40000, cooltime = 120*1000, modifier = core.CharacterModifier(crit = 100, armor_ignore = 50)).isV(vEhc,1,1).wrap(core.SummonSkillWrapper)
Orthros_ = core.SummonSkill("오르트로스(게리온)", 0, 3000, 900+36*vEhc.getV(1,1), 10, 40000, cooltime = -1, modifier = core.CharacterModifier(crit = 100, armor_ignore = 50)).isV(vEhc,1,1).wrap(core.SummonSkillWrapper)
###### Skill Wrapper ######
        '''Damage cycle summary
        With Awakening active -> Demon Slash
        Without Awakening -> Demon Impact
        Everything else is cast whenever it comes off cooldown
        Devil Cry is cast every 20 seconds
        Cerberus is only cast through its automatic trigger
        The rest is cast as it becomes available
        Assumption: 100% Blue Blood uptime
        TODO --> simulate Blue Blood duration with Force consumption taken into account (probably very hard)
        '''
DemonSlashAWBB1.onAfter(DemonSlashAWBB2)
DemonSlashAWBB2.onAfter(DemonSlashAWBB3)
DemonSlashAWBB3.onAfter(DemonSlashAWBB4)
BasicAttack = core.OptionalElement(DemonAwakning.is_active, DemonSlashAWBB1, DemonImpact, name = "어웨이크닝 ON")
BasicAttackWrapper = core.DamageSkill('기본 공격', 0,0,0).wrap(core.DamageSkillWrapper)
BasicAttackWrapper.onAfter(BasicAttack)
DevilCry.onAfter(DevilCryBuff)
DemonAwakning.onAfter(DemonAwakningSummon)
DemonAwakningSummon.onTick(Cerberus)
SpiritOfRage.onAfter(SpiritOfRageEnd.controller((10+int(0.2*vEhc.getV(3,3)))*1000))
Orthros.onAfter(Orthros_)
Metamorphosis.onAfter(MetamorphosisSummon)
        # Aura Weapon
auraweapon_builder = globalSkill.AuraWeaponBuilder(vEhc, 3, 2)
for sk in [DemonSlashAWBB1, DemonSlashAWBB2, DemonSlashAWBB3, DemonSlashAWBB4, DemonImpact]:
auraweapon_builder.add_aura_weapon(sk)
AuraWeaponBuff, AuraWeaponCooltimeDummy = auraweapon_builder.get_buff()
return(BasicAttackWrapper,
[globalSkill.maple_heros(chtr.level), globalSkill.useful_sharp_eyes(),
Booster, DevilCryBuff, InfinityForce, Metamorphosis, BlueBlood, DemonFortitude, AuraWeaponBuff, DemonAwakning,
globalSkill.soul_contract()] +\
[Cerberus, DevilCry, SpiritOfRageEnd] +\
[MetamorphosisSummon, CallMastema, DemonAwakningSummon, SpiritOfRage, Orthros, Orthros_] +\
[AuraWeaponCooltimeDummy] +\
[BasicAttackWrapper]) | PypiClean |
/observations-0.1.4.tar.gz/observations-0.1.4/observations/r/urine.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def urine(path):
"""Urine Analysis Data
The `urine` data frame has 79 rows and 7 columns.
79 urine specimens were analyzed in an effort to determine if certain
physical characteristics of the urine might be related to the formation
of calcium oxalate crystals.
This data frame contains the following columns:
`r`
Indicator of the presence of calcium oxalate crystals.
`gravity`
The specific gravity of the urine.
`ph`
The pH reading of the urine.
`osmo`
The osmolarity of the urine. Osmolarity is proportional to the
concentration of molecules in solution.
`cond`
The conductivity of the urine. Conductivity is proportional to the
concentration of charged ions in solution.
`urea`
The urea concentration in millimoles per litre.
`calc`
The calcium concentration in millimoles per litre.
The data were obtained from
Andrews, D.F. and Herzberg, A.M. (1985) *Data: A Collection of Problems
from Many Fields for the Student and Research Worker*. Springer-Verlag.
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `urine.csv`.
Returns:
Tuple of np.ndarray `x_train` with 79 rows and 7 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'urine.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/boot/urine.csv'
maybe_download_and_extract(path, url,
save_file_name='urine.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata | PypiClean |
/FastGets-0.3.5.tar.gz/FastGets-0.3.5/fastgets/web/static/dist/plugins/advlist/plugin.js | (function () {
var defs = {}; // id -> {dependencies, definition, instance (possibly undefined)}
// Used when there is no 'main' module.
// The name is probably (hopefully) unique so minification removes for releases.
var register_3795 = function (id) {
var module = dem(id);
var fragments = id.split('.');
var target = Function('return this;')();
for (var i = 0; i < fragments.length - 1; ++i) {
if (target[fragments[i]] === undefined) { target[fragments[i]] = {}; }
target = target[fragments[i]];
}
target[fragments[fragments.length - 1]] = module;
};
var instantiate = function (id) {
var actual = defs[id];
var dependencies = actual.deps;
var definition = actual.defn;
var len = dependencies.length;
var instances = new Array(len);
for (var i = 0; i < len; ++i) { instances[i] = dem(dependencies[i]); }
var defResult = definition.apply(null, instances);
if (defResult === undefined) { throw 'module [' + id + '] returned undefined'; }
actual.instance = defResult;
};
var def = function (id, dependencies, definition) {
if (typeof id !== 'string') { throw 'module id must be a string'; } else if (dependencies === undefined) { throw 'no dependencies for ' + id; } else if (definition === undefined) { throw 'no definition function for ' + id; }
defs[id] = {
deps: dependencies,
defn: definition,
instance: undefined
};
};
var dem = function (id) {
var actual = defs[id];
if (actual === undefined) { throw 'module [' + id + '] was undefined'; } else if (actual.instance === undefined) { instantiate(id); }
return actual.instance;
};
var req = function (ids, callback) {
var len = ids.length;
var instances = new Array(len);
for (var i = 0; i < len; ++i) { instances[i] = dem(ids[i]); }
callback.apply(null, instances);
};
var ephox = {};
ephox.bolt = {
module: {
api: {
define: def,
require: req,
demand: dem
}
}
};
var define = def;
var require = req;
var demand = dem;
// this helps with minification when using a lot of global references
var defineGlobal = function (id, ref) {
define(id, [], function () { return ref; });
};
/* jsc
["tinymce.plugins.advlist.Plugin","tinymce.core.PluginManager","tinymce.core.util.Tools","tinymce.plugins.advlist.api.Commands","tinymce.plugins.advlist.ui.Buttons","global!tinymce.util.Tools.resolve","tinymce.plugins.advlist.core.Actions","tinymce.plugins.advlist.api.Settings","tinymce.plugins.advlist.core.ListUtils","tinymce.plugins.advlist.ui.ListStyles"]
jsc */
defineGlobal('global!tinymce.util.Tools.resolve', tinymce.util.Tools.resolve);
/**
* ResolveGlobal.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.core.PluginManager',
[
'global!tinymce.util.Tools.resolve'
],
function (resolve) {
return resolve('tinymce.PluginManager');
}
);
/**
* ResolveGlobal.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.core.util.Tools',
[
'global!tinymce.util.Tools.resolve'
],
function (resolve) {
return resolve('tinymce.util.Tools');
}
);
/**
* Actions.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.core.Actions',
[
],
function () {
var applyListFormat = function (editor, listName, styleValue) {
var cmd = listName === 'UL' ? 'InsertUnorderedList' : 'InsertOrderedList';
editor.execCommand(cmd, false, styleValue === false ? null : { 'list-style-type': styleValue });
};
return {
applyListFormat: applyListFormat
};
}
);
/**
* Commands.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.api.Commands',
[
'tinymce.plugins.advlist.core.Actions'
],
function (Actions) {
var register = function (editor) {
editor.addCommand('ApplyUnorderedListStyle', function (ui, value) {
Actions.applyListFormat(editor, 'UL', value['list-style-type']);
});
editor.addCommand('ApplyOrderedListStyle', function (ui, value) {
Actions.applyListFormat(editor, 'OL', value['list-style-type']);
});
};
return {
register: register
};
}
);
/**
* Settings.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.api.Settings',
[
],
function () {
var getNumberStyles = function (editor) {
var styles = editor.getParam('advlist_number_styles', 'default,lower-alpha,lower-greek,lower-roman,upper-alpha,upper-roman');
return styles ? styles.split(/[ ,]/) : [];
};
var getBulletStyles = function (editor) {
var styles = editor.getParam('advlist_bullet_styles', 'default,circle,disc,square');
return styles ? styles.split(/[ ,]/) : [];
};
return {
getNumberStyles: getNumberStyles,
getBulletStyles: getBulletStyles
};
}
);
/**
* ListUtils.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.core.ListUtils',
[
],
function () {
var isChildOfBody = function (editor, elm) {
return editor.$.contains(editor.getBody(), elm);
};
var isListNode = function (editor) {
return function (node) {
return node && (/^(OL|UL|DL)$/).test(node.nodeName) && isChildOfBody(editor, node);
};
};
var getSelectedStyleType = function (editor) {
var listElm = editor.dom.getParent(editor.selection.getNode(), 'ol,ul');
return editor.dom.getStyle(listElm, 'listStyleType') || '';
};
return {
isListNode: isListNode,
getSelectedStyleType: getSelectedStyleType
};
}
);
/**
* ListStyles.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.ui.ListStyles',
[
'tinymce.core.util.Tools'
],
function (Tools) {
var styleValueToText = function (styleValue) {
return styleValue.replace(/\-/g, ' ').replace(/\b\w/g, function (chr) {
return chr.toUpperCase();
});
};
var toMenuItems = function (styles) {
return Tools.map(styles, function (styleValue) {
var text = styleValueToText(styleValue);
var data = styleValue === 'default' ? '' : styleValue;
return { text: text, data: data };
});
};
return {
toMenuItems: toMenuItems
};
}
);
/**
* Buttons.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.ui.Buttons',
[
'tinymce.core.util.Tools',
'tinymce.plugins.advlist.api.Settings',
'tinymce.plugins.advlist.core.Actions',
'tinymce.plugins.advlist.core.ListUtils',
'tinymce.plugins.advlist.ui.ListStyles'
],
function (Tools, Settings, Actions, ListUtils, ListStyles) {
var listState = function (editor, listName) {
return function (e) {
var ctrl = e.control;
editor.on('NodeChange', function (e) {
var lists = Tools.grep(e.parents, ListUtils.isListNode(editor));
ctrl.active(lists.length > 0 && lists[0].nodeName === listName);
});
};
};
var updateSelection = function (editor) {
return function (e) {
var listStyleType = ListUtils.getSelectedStyleType(editor);
e.control.items().each(function (ctrl) {
ctrl.active(ctrl.settings.data === listStyleType);
});
};
};
var addSplitButton = function (editor, id, tooltip, cmd, nodeName, styles) {
editor.addButton(id, {
type: 'splitbutton',
tooltip: tooltip,
menu: ListStyles.toMenuItems(styles),
onPostRender: listState(editor, nodeName),
onshow: updateSelection(editor),
onselect: function (e) {
Actions.applyListFormat(editor, nodeName, e.control.settings.data);
},
onclick: function () {
editor.execCommand(cmd);
}
});
};
var addButton = function (editor, id, tooltip, cmd, nodeName, styles) {
editor.addButton(id, {
type: 'button',
tooltip: tooltip,
onPostRender: listState(editor, nodeName),
onclick: function () {
editor.execCommand(cmd);
}
});
};
var addControl = function (editor, id, tooltip, cmd, nodeName, styles) {
if (styles.length > 0) {
addSplitButton(editor, id, tooltip, cmd, nodeName, styles);
} else {
addButton(editor, id, tooltip, cmd, nodeName, styles);
}
};
var register = function (editor) {
addControl(editor, 'numlist', 'Numbered list', 'InsertOrderedList', 'OL', Settings.getNumberStyles(editor));
addControl(editor, 'bullist', 'Bullet list', 'InsertUnorderedList', 'UL', Settings.getBulletStyles(editor));
};
return {
register: register
};
}
);
/**
* Plugin.js
*
* Released under LGPL License.
* Copyright (c) 1999-2017 Ephox Corp. All rights reserved
*
* License: http://www.tinymce.com/license
* Contributing: http://www.tinymce.com/contributing
*/
define(
'tinymce.plugins.advlist.Plugin',
[
'tinymce.core.PluginManager',
'tinymce.core.util.Tools',
'tinymce.plugins.advlist.api.Commands',
'tinymce.plugins.advlist.ui.Buttons'
],
function (PluginManager, Tools, Commands, Buttons) {
PluginManager.add('advlist', function (editor) {
var hasPlugin = function (editor, plugin) {
var plugins = editor.settings.plugins ? editor.settings.plugins : '';
return Tools.inArray(plugins.split(/[ ,]/), plugin) !== -1;
};
if (hasPlugin(editor, 'lists')) {
Buttons.register(editor);
Commands.register(editor);
}
});
return function () { };
}
);
dem('tinymce.plugins.advlist.Plugin')();
})(); | PypiClean |
/ConsenSys-Utils-0.2.0b1.tar.gz/ConsenSys-Utils-0.2.0b1/consensys_utils/gunicorn/workers.py | import errno
import ssl
import gunicorn.http as http
import gunicorn.util as util
from gunicorn.workers.sync import SyncWorker, StopWaiting
from ..exceptions import PauseIteration
class SyncIteratingWorker(SyncWorker):
"""A Gunicorn synchronous worker that allows to run an iterable WSGI application.
It allows to run a loop process that iterates over a WSGI application object
while allowing to process HTTP requests.
Since the worker is synchronous it is thread safe to modify
the WSGI object either when iterating or when handling an HTTP request.
**Remark**
Such a worker should not be considered highly performing as HTTP server but
for dealing with a few requests to control the iterable WSGI application
it is well suited.
"""
def accept(self, listener): # pragma: no cover
client, address = listener.accept()
        # :class:`SyncIteratingWorker` uses non-blocking connection sockets, so we
        # fall back on iteration directly when no data is available on the connection
client.setblocking(False)
util.close_on_exec(client)
self.handle(listener, client, address)
def iterate(self):
"""Iterate on WSGI object"""
next(self.wsgi)
def handle(self, listener, client, address): # noqa: C901, pragma: no cover
"""Handle a request
        This method is almost identical to the :class:`gunicorn.workers.sync.SyncWorker` one.
        We need to override it because we use non-blocking socket connections and
        are therefore more sensitive to :data:`errno.EAGAIN` errors.
"""
req = None
try:
if self.cfg.is_ssl:
client = ssl.wrap_socket(client, server_side=True, **self.cfg.ssl_options)
parser = http.RequestParser(self.cfg, client)
req = next(parser)
self.handle_request(listener, req, client, address)
except http.errors.NoMoreData as e:
self.log.debug("Ignored premature client disconnection. %s", e)
except StopIteration as e:
self.log.debug("Closing connection. %s", e)
except ssl.SSLError as e:
if e.args[0] == ssl.SSL_ERROR_EOF:
self.log.debug("ssl connection closed")
client.close()
else:
self.log.debug("Error processing SSL request.")
self.handle_error(req, client, address, e)
except EnvironmentError as e:
# Added in ConsenSys-Utils: we do not log exception on :meth:`errno.EAGAIN`
if e.errno not in (errno.EPIPE, errno.ECONNRESET, errno.EAGAIN):
self.log.exception("Socket error processing request.")
else:
if e.errno == errno.ECONNRESET:
self.log.debug("Ignoring connection reset")
elif e.errno == errno.EAGAIN:
self.log.debug("Ignoring EAGAIN")
else:
self.log.debug("Ignoring EPIPE")
except Exception as e:
self.handle_error(req, client, address, e)
finally:
util.close(client)
def run(self): # noqa: C901
"""Run the main worker loop
        At each step of the loop it
        1. Handles an incoming socket request if one is available
        2. Iterates on the WSGI iterable object
        If a :class:`consensys_utils.exceptions.PauseIteration` is caught while iterating
        on the WSGI object, the loop waits in a stale state, freeing up CPU.
        Receiving an HTTP request immediately gets the loop out of the stale state.
"""
# self.socket appears to lose its blocking status after
# we fork in the arbiter. Reset it here.
for s in self.sockets:
s.setblocking(0)
listener = self.sockets[0]
while self.alive: # pragma: no branch
self.notify()
# Accept a connection. If we get an error telling us
# that no connection is waiting we fall back to iteration
try:
self.accept(listener)
# Keep processing client until no one is waiting
continue
except EnvironmentError as e:
if e.errno not in (errno.EAGAIN, errno.ECONNABORTED, errno.EWOULDBLOCK): # pragma: no cover
raise
# If no client is waiting we fall back on iteration
try:
self.iterate()
# Keep iterating until an error is raised
continue
except PauseIteration as e:
timeout = e.timeout or self.timeout or 1
except StopIteration: # pragma: no cover
self.log.info("Stop iteration")
raise
except Exception:
self.log.exception("Error during iteration")
raise
if not self.is_parent_alive():
return
try:
# We wait until it is time to iterate again or
# we have received a message through the socket
self.log.debug("Pausing iteration for %s seconds" % timeout)
self.wait(timeout)
except StopWaiting: # pragma: no cover
return | PypiClean |
/HDXrate-0.2.0.tar.gz/HDXrate-0.2.0/docs/installation.rst | .. highlight:: shell
============
Installation
============
Stable release
--------------
To install HDXrate, run this command in your terminal:
.. code-block:: console
$ pip install hdxrate
Or
.. code-block:: console
$ conda install -c conda-forge hdxrate
From sources
------------
The sources for HDXrate can be downloaded from the `Github repo`_.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/Jhsmit/hdxrate
Or download the `tarball`_:
.. code-block:: console
$ curl -OJL https://github.com/Jhsmit/hdxrate/tarball/master
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ python setup.py install
.. _Github repo: https://github.com/Jhsmit/hdxrate
.. _tarball: https://github.com/Jhsmit/hdxrate/tarball/master
| PypiClean |
/ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/kcompress.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified July 16, 2018
Description: Compresses sequence data into a fasta file containing each kmer
exactly once. Allows arbitrary kmer set operations via multiple passes.
Usage: kcompress.sh in=<reads> out=<contigs> min=<1> max=<2147483647>
Input parameters:
in=<file> Primary input file for reads to use as kmer data.
in2=<file> Second input file for paired data.
reads=-1 Only process this number of reads, then quit (-1 means all).
Output parameters:
out=<file> Write contigs (in contig mode).
showstats=t Print assembly statistics after writing contigs.
fuse=0 Fuse output sequences into chunks at least this long,
padded with 1 N between sequences.
Prefiltering parameters:
prefilter=0 If set to a positive integer, use a countmin sketch
to ignore kmers with depth of that value or lower.
prehashes=2 Number of hashes for prefilter.
prefiltersize=0.2 (pff) Fraction of memory to use for prefilter.
minprobprefilter=t (mpp) Use minprob for the prefilter.
prepasses=1 Use this many prefiltering passes; higher be more thorough
if the filter is very full. Set to 'auto' to iteratively
prefilter until the remaining kmers will fit in memory.
Hashing parameters:
k=31 Kmer length (1 to 31).
prealloc=t Pre-allocate memory rather than dynamically growing;
faster and more memory-efficient. A float fraction (0-1)
may be specified; default is 1.
minprob=0.5 Ignore kmers with overall probability of correctness below this.
minprobmain=t (mpm) Use minprob for the primary kmer counts.
threads=X Spawn X threads (default is number of logical processors).
Assembly parameters:
mincount=1 (min) Only retain kmers that occur at least this many times.
maxcount=BIG (max) Only retain kmers that occur at most this many times.
requiresamecount (rsc) Only build contigs from kmers with exactly the same count.
rcomp=t Store forward and reverse kmers together. Setting this to
false will only use forward kmers.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
"
}
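# Illustrative invocation (file names are placeholders, not part of this script):
# kcompress.sh in=reads.fq out=kmers.fa k=31 min=2 fuse=8000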
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx14g"
z2="-Xms14g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 15000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
kcompress() {
local CMD="java $EA $EOOM $z $z2 -cp $CP assemble.KmerCompressor $@"
echo $CMD >&2
eval $CMD
}
kcompress "$@" | PypiClean |
/ForIocCrawler-1.2.1.tar.gz/ForIocCrawler-1.2.1/crawler/crawlererr.py |
## --------------------------------------------------------------------------------------------------------------------
## Cralwer base exception
class CrawlerError(Exception):
    'Base class for crawler exceptions'
def __init__(self, what):
self.msg = '[!] CrawlerError. Error message: ' + what
## Exception for config errors
class CrawlerConfigError(CrawlerError):
def __init__(self, what):
self.msg = '[!] Error while loading config file. Error message: ' + what
## Exception for setting new configuration settings
class CrawelrSetNewConfigError(CrawlerError):
def __init__(self, file, what):
self.msg = '[!] Error while setting up the new config file: \"' + file + '\". ' + what
## Exception for config attribute errors
class CralwerConfigAttributeError(CrawlerError):
def __init__(self, attribute, file, msg):
self.msg = '[!] Error while reading the attribute \"' + attribute + '\" of the config file \"' + file + '\". ' + msg
## Exception for pattern errors
class CrawlerPatternError(CrawlerError):
def __init__(self, what):
self.msg = '[!] Error while loading pattern file. Error message: ' + what
class CrawlerMatchError(CrawlerError):
def __init__(self, what):
self.msg = '[!] Error while processing pattern match. Error message: ' + what
## Exception for file handling errors
class CrawlerFileReadError(CrawlerError):
def __init__(self, what, fileNameSrc=None):
if fileNameSrc:
self.msg = '[!] Error while reading file : %s. Error message: %s' % (fileNameSrc, what)
else:
self.msg = '[!] Error while reading source. Error message: %s' % (what)
## Exception for processing errors
class CrawlerProcessError(CrawlerError):
def __init__(self, what):
self.msg = '[!] Error while processing files. Message: ' + what
## Exception for export errors
class CrawlerExportError(CrawlerError):
def __init__(self, what):
self.msg = '[!] Error while export. Message: ' + what | PypiClean |
/LWTools-1.0.5.tar.gz/LWTools-1.0.5/LWT/lmtanalysis/BuildEventGroup3.py | import sqlite3
from time import *
from lmtanalysis.Chronometer import Chronometer
from lmtanalysis.Animal import *
from lmtanalysis.Detection import *
from lmtanalysis.Measure import *
import matplotlib.pyplot as plt
import numpy as np
from lmtanalysis.Event import *
from lmtanalysis.Measure import *
from lmtanalysis.EventTimeLineCache import EventTimeLineCached
def flush( connection ):
''' flush event in database '''
deleteEventTimeLineInBase(connection, "Group3" )
def reBuildEvent( connection, file, tmin=None, tmax=None, pool = None ):
pool = AnimalPool( )
pool.loadAnimals( connection )
#pool.loadDetection( start = tmin, end = tmax )
if pool.getNbAnimals() <= 2:
print( "Not enough animals to process group 3")
return
contact = {}
for animal in range( 1 , 5 ):
for idAnimalB in range( 1 , 5 ):
if ( animal == idAnimalB ):
continue
contact[animal, idAnimalB] = EventTimeLineCached( connection, file, "Contact", animal, idAnimalB, minFrame=tmin, maxFrame=tmax )
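    # A "Group3" event for (A, B, C) is recorded at frames where the three animals form
    # a connected contact group (A-B together with A-C or B-C, or A-C together with B-C)
    # while the fourth animal D is not in contact with any of them.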
for animal in range( 1 , 5 ):
for idAnimalB in range( 1 , 5 ):
if( animal == idAnimalB ):
continue
for idAnimalC in range( 1 , 5 ):
if( animal == idAnimalC ):
continue
if( idAnimalB == idAnimalC ):
continue
for idAnimalD in range( 1 , 5 ):
if( animal == idAnimalD ):
continue
if( idAnimalB == idAnimalD ):
continue
if( idAnimalC == idAnimalD ):
continue
eventName = "Group3"
print ( eventName )
groupTimeLine = EventTimeLine( None, eventName , animal , idAnimalB , idAnimalC , None , loadEvent=False )
result={}
dicAB = contact[ animal , idAnimalB ].getDictionnary()
dicAC = contact[ animal , idAnimalC ].getDictionnary()
dicAD = contact[ animal , idAnimalD ].getDictionnary()
dicBC = contact[ idAnimalB , idAnimalC ].getDictionnary()
dicBD = contact[ idAnimalB , idAnimalD ].getDictionnary()
dicCD = contact[ idAnimalC , idAnimalD ].getDictionnary()
for t in dicAB.keys():
if ( t in dicAC or t in dicBC ):
if ( t in dicAD or t in dicBD or t in dicCD ):
continue
else:
result[t]=True
for t in dicAC.keys():
if ( t in dicBC ):
if ( t in dicAD or t in dicBD or t in dicCD ):
continue
else:
result[t]=True
groupTimeLine.reBuildWithDictionnary( result )
groupTimeLine.endRebuildEventTimeLine(connection)
# log process
from lmtanalysis.TaskLogger import TaskLogger
t = TaskLogger( connection )
t.addLog( "Build Event Group 3" , tmin=tmin, tmax=tmax )
print( "Rebuild event finished." ) | PypiClean |
/Flask-AppBuilder-jwi078-2.1.13.tar.gz/Flask-AppBuilder-jwi078-2.1.13/flask_appbuilder/models/group.py | from __future__ import unicode_literals
import calendar
import datetime
from functools import reduce
from itertools import groupby
import logging
from flask_appbuilder._compat import as_unicode
from flask_babel import lazy_gettext as _
from .. import const as c
log = logging.getLogger(__name__)
def aggregate(label=""):
"""
Use this decorator to set a label for your aggregation functions on charts.
:param label:
The label to complement with the column
"""
def wrap(f):
f._label = label
return f
return wrap
@aggregate(_("Count of"))
def aggregate_count(items, col):
"""
Function to use on Group by Charts.
accepts a list and returns the count of the list's items
"""
return len(list(items))
@aggregate(_("Sum of"))
def aggregate_sum(items, col):
"""
Function to use on Group by Charts.
accepts a list and returns the sum of the list's items
"""
return sum(getattr(item, col) for item in items)
@aggregate(_("Avg. of"))
def aggregate_avg(items, col):
"""
Function to use on Group by Charts.
accepts a list and returns the average of the list's items
"""
try:
return aggregate_sum(items, col) / aggregate_count(items, col)
except Exception:
log.warning(c.LOGMSG_WAR_DBI_AVG_ZERODIV)
return 0.0
class BaseGroupBy(object):
column_name = ""
name = ""
aggregate_func = None
aggregate_col = ""
def __init__(
self, column_name, name, aggregate_func=aggregate_count, aggregate_col=""
):
"""
Constructor.
:param column_name:
Model field name
:param name:
The group by name
"""
self.column_name = column_name
self.name = name
self.aggregate_func = aggregate_func
self.aggregate_col = aggregate_col
def apply(self, data):
"""
Override this to implement you own new filters
"""
pass
def get_group_col(self, item):
return getattr(item, self.column_name)
def get_format_group_col(self, item):
return item
def get_aggregate_col_name(self):
if self.aggregate_col:
return self.aggregate_func.__name__ + "_" + self.aggregate_col
else:
return self.aggregate_func.__name__
def __repr__(self):
return self.name
class GroupByCol(BaseGroupBy):
def _apply(self, data):
data = sorted(data, key=self.get_group_col)
json_data = dict()
json_data["cols"] = [
{"id": self.column_name, "label": self.column_name, "type": "string"},
{
"id": self.aggregate_func.__name__ + "_" + self.column_name,
"label": self.aggregate_func.__name__ + "_" + self.column_name,
"type": "number",
},
]
json_data["rows"] = []
for (grouped, items) in groupby(data, self.get_group_col):
aggregate_value = self.aggregate_func(items, self.aggregate_col)
json_data["rows"].append(
{
"c": [
{"v": self.get_format_group_col(grouped)},
{"v": aggregate_value},
]
}
)
return json_data
def apply(self, data):
data = sorted(data, key=self.get_group_col)
return [
[
self.get_format_group_col(grouped),
self.aggregate_func(items, self.aggregate_col),
]
for (grouped, items) in groupby(data, self.get_group_col)
]
class GroupByDateYear(BaseGroupBy):
def apply(self, data):
data = sorted(data, key=self.get_group_col)
return [
[
self.get_format_group_col(grouped),
self.aggregate_func(items, self.aggregate_col),
]
for (grouped, items) in groupby(data, self.get_group_col)
]
def get_group_col(self, item):
value = getattr(item, self.column_name)
if value:
return value.year
class GroupByDateMonth(BaseGroupBy):
def apply(self, data):
data = sorted(data, key=self.get_group_col)
return [
[
self.get_format_group_col(grouped),
self.aggregate_func(items, self.aggregate_col),
]
for (grouped, items) in groupby(data, self.get_group_col)
if grouped
]
def get_group_col(self, item):
value = getattr(item, self.column_name)
if value:
return value.year, value.month
def get_format_group_col(self, item):
return calendar.month_name[item[1]] + " " + str(item[0])
class BaseProcessData(object):
"""
Base class to process data.
It will group data by one or many columns or functions.
The aggregation is made by an already defined function, or by a custom function
:group_bys_cols: A list of columns or functions to group data.
:aggr_by_cols: A list of tuples [(<AGGR FUNC>,'<COLNAME>'),...].
:formatter_by_cols: A dict.
"""
group_bys_cols = None
# ['<COLNAME>',<FUNC>, ....]
aggr_by_cols = None
# [(<AGGR FUNC>,'<COLNAME>'),...]
formatter_by_cols = {}
# {<FUNC>: '<COLNAME>',...}
def __init__(self, group_by_cols, aggr_by_cols, formatter_by_cols):
self.group_bys_cols = group_by_cols
self.aggr_by_cols = aggr_by_cols
self.formatter_by_cols = formatter_by_cols
def attrgetter(self, *items):
if len(items) == 1:
attr = items[0]
def g(obj):
return self.resolve_attr(obj, attr)
else:
def g(obj):
return tuple(self.resolve_attr(obj, attr) for attr in items)
return g
def resolve_attr(self, obj, attr):
if not hasattr(obj, attr):
# it's an inner obj attr
return reduce(getattr, attr.split("."), obj)
if hasattr(getattr(obj, attr), "__call__"):
# its a function
return getattr(obj, attr)()
else:
# it's an attribute
return getattr(obj, attr)
def format_columns(self, *values):
if len(values) == 1:
return self.format_column(self.group_bys_cols[0], values[0])
else:
return tuple(
self.format_column(item, value)
for item, value in (self.group_bys_cols, values)
)
def format_column(self, item, value):
if item in self.formatter_by_cols:
return self.formatter_by_cols[item](value)
else:
return value
def apply(self, data):
pass
def to_dict(self, data):
ret = []
for item in data:
row = {}
if not isinstance(item[0], tuple):
row[self.group_bys_cols[0]] = str(item[0])
else:
                for i, group_col_data in enumerate(item[0]):
                    row[self.group_bys_cols[i]] = str(group_col_data)
            for i, col_data in enumerate(item[1:]):
                log.debug("{0},{1}".format(col_data, i))
                key = self.aggr_by_cols[i][0].__name__ + self.aggr_by_cols[i][1]
if isinstance(col_data, datetime.date):
row[key] = str(col_data)
else:
row[key] = col_data
ret.append(row)
return ret
def to_json(self, data, labels=None):
"""
Will return a dict with Google JSON structure for charts
The Google structure::
{
cols: [{id:<COL_NAME>, label:<LABEL FOR COL>, type: <COL TYPE>}, ...]
rows: [{c: [{v: <COL VALUE}, ...], ... ]
}
:param data:
        :param labels: dict with labels to include in the Google JSON structure
:return: dict with Google JSON structure
"""
labels = labels or dict()
json_data = dict()
json_data["cols"] = []
# Create Structure to identify the grouped columns
for group_col in self.group_bys_cols:
label = "" or as_unicode(labels[group_col])
json_data["cols"].append(
{"id": group_col, "label": label, "type": "string"}
)
# Create Structure to identify the Aggregated columns
for aggr_col in self.aggr_by_cols:
if isinstance(aggr_col, tuple):
label_key = aggr_col[0].__name__ + aggr_col[1]
aggr_col = aggr_col[1]
else:
label_key = aggr_col
label = "" or as_unicode(labels[label_key])
json_data["cols"].append({"id": aggr_col, "label": label, "type": "number"})
# Create Structure with the data
json_data["rows"] = []
for item in data:
row = {"c": []}
if not isinstance(item[0], tuple):
row["c"].append({"v": "{0}".format(item[0])})
else:
for group_col_data in item[0]:
row["c"].append({"v": "{0}".format(group_col_data)})
for col_data in item[1:]:
if isinstance(col_data, datetime.date):
row["c"].append({"v": "{0}".format(col_data)})
else:
row["c"].append({"v": col_data})
json_data["rows"].append(row)
return json_data
class DirectProcessData(BaseProcessData):
def apply(self, data, sort=True):
group_by = self.group_bys_cols[0]
if sort:
data = sorted(data, key=self.attrgetter(group_by))
result = []
for item in data:
result_item = [self.format_columns(self.attrgetter(group_by)(item))]
for aggr_by_col in self.aggr_by_cols:
result_item.append(self.attrgetter(aggr_by_col)(item))
result.append(result_item)
return result
class GroupByProcessData(BaseProcessData):
"""
    Groups data by the chosen columns (property group_bys_cols).
:data: A list of objects
:sort: boolean, if true python will sort the data
:return: A List of lists with group column and aggregation
"""
def apply(self, data, sort=True):
if sort:
data = sorted(data, key=self.attrgetter(*self.group_bys_cols))
result = []
for (grouped, items) in groupby(
data, key=self.attrgetter(*self.group_bys_cols)
):
items = list(items)
result_item = [self.format_columns(grouped)]
for aggr_by_col in self.aggr_by_cols:
result_item.append(aggr_by_col[0](items, aggr_by_col[1]))
result.append(result_item)
return result | PypiClean |
/Bis-Miner-3.11.1.tar.gz/Bis-Miner-3.11.0/Orange/data/variable.py | import collections
import re
from datetime import datetime, timedelta, timezone
from numbers import Number, Real, Integral
from math import isnan, floor
from pickle import PickleError
import numpy as np
from Orange.data import _variable
from Orange.util import Registry, color_to_hex, hex_to_color, Reprable
__all__ = ["Unknown", "MISSING_VALUES", "make_variable", "is_discrete_values",
"Value", "Variable", "ContinuousVariable", "DiscreteVariable",
"StringVariable", "TimeVariable"]
# For storing unknowns
Unknown = ValueUnknown = float("nan")
# For checking for unknowns
MISSING_VALUES = {np.nan, "?", "nan", ".", "", "NA", "~", None}
DISCRETE_MAX_VALUES = 3 # == 2 + nan
def make_variable(cls, compute_value, *args):
if compute_value is not None:
return cls(*args, compute_value=compute_value)
return cls.make(*args)
def is_discrete_values(values):
"""
    Return the set of unique values if `values` is an iterable of discrete values,
    else False if non-discrete, or None if indeterminate.
Note
----
Assumes consistent type of items of `values`.
"""
if not len(values):
return None
# If the first few values are, or can be converted to, floats,
# the type is numeric
try:
isinstance(next(iter(values)), Number) or \
[float(v) for _, v in zip(range(min(3, len(values))), values)]
except ValueError:
is_numeric = False
max_values = int(round(len(values)**.7))
else:
is_numeric = True
max_values = DISCRETE_MAX_VALUES
# If more than max values => not discrete
unique = set()
for i in values:
unique.add(i)
if len(unique) > max_values:
return False
# Strip NaN from unique
unique = {i for i in unique
if (not i in MISSING_VALUES and
not (isinstance(i, Number) and np.isnan(i)))}
# All NaNs => indeterminate
if not unique:
return None
# Strings with |values| < max_unique
if not is_numeric:
return unique
# Handle numbers
try:
unique_float = set(map(float, unique))
except ValueError:
# Converting all the values to floats resulted in an error.
# Since the values have enough unique values, they are probably
# string values and discrete.
return unique
# If only values are {0, 1} or {1, 2} (or a subset of those sets) => discrete
return (not (unique_float - {0, 1}) or
not (unique_float - {1, 2})) and unique
class Value(float):
"""
The class representing a value. The class is not used to store values but
only to return them in contexts in which we want the value to be accompanied
with the descriptor, for instance to print the symbolic value of discrete
variables.
The class is derived from `float`, with an additional attribute `variable`
which holds the descriptor of type :obj:`Orange.data.Variable`. If the
value continuous or discrete, it is stored as a float. Other types of
values, like strings, are stored in the attribute `value`.
The class overloads the methods for printing out the value:
`variable.repr_val` and `variable.str_val` are used to get a suitable
representation of the value.
Equivalence operator is overloaded as follows:
- unknown values are equal; if one value is unknown and the other is not,
they are different;
- if the value is compared with the string, the value is converted to a
string using `variable.str_val` and the two strings are compared
- if the value is stored in attribute `value`, it is compared with the
given other value
- otherwise, the inherited comparison operator for `float` is called.
Finally, value defines a hash, so values can be put in sets and appear as
keys in dictionaries.
.. attribute:: variable (:obj:`Orange.data.Variable`)
Descriptor; used for printing out and for comparing with strings
.. attribute:: value
Value; the value can be of arbitrary type and is used only for variables
that are neither discrete nor continuous. If `value` is `None`, the
derived `float` value is used.
"""
__slots__ = "variable", "_value"
def __new__(cls, variable, value=Unknown):
"""
Construct a new instance of Value with the given descriptor and value.
If the argument `value` can be converted to float, it is stored as
`float` and the attribute `value` is set to `None`. Otherwise, the
inherited float is set to `Unknown` and the value is held by the
attribute `value`.
:param variable: descriptor
:type variable: Orange.data.Variable
:param value: value
"""
if variable.is_primitive():
self = super().__new__(cls, value)
self.variable = variable
self._value = None
else:
isunknown = value == variable.Unknown
self = super().__new__(
cls, np.nan if isunknown else np.finfo(float).min)
self.variable = variable
self._value = value
return self
def __init__(self, _, __=Unknown):
pass
def __repr__(self):
return "Value('%s', %s)" % (self.variable.name,
self.variable.repr_val(self))
def __str__(self):
return self.variable.str_val(self)
def __eq__(self, other):
if isinstance(self, Real) and isnan(self):
return (isinstance(other, Real) and isnan(other)
or other in self.variable.unknown_str)
if isinstance(other, str):
return self.variable.str_val(self) == other
if isinstance(other, Value):
return self.value == other.value
return super().__eq__(other)
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other):
if self.variable.is_primitive():
if isinstance(other, str):
return super().__lt__(self.variable.to_val(other))
else:
return super().__lt__(other)
else:
if isinstance(other, str):
return self.value < other
else:
return self.value < other.value
def __le__(self, other):
return self.__lt__(other) or self.__eq__(other)
def __gt__(self, other):
return not self.__le__(other)
def __ge__(self, other):
return not self.__lt__(other)
def __contains__(self, other):
if (self._value is not None
and isinstance(self._value, str)
and isinstance(other, str)):
return other in self._value
raise TypeError("invalid operation on Value()")
def __hash__(self):
if self._value is None:
return super().__hash__()
else:
return hash((super().__hash__(), self._value))
@property
def value(self):
if self.variable.is_discrete:
return Unknown if isnan(self) else self.variable.values[int(self)]
if self.variable.is_string:
return self._value
return float(self)
def __getnewargs__(self):
return self.variable, float(self)
def __getstate__(self):
return dict(value=getattr(self, '_value', None))
def __setstate__(self, state):
self._value = state.get('value', None)
class VariableMeta(Registry):
def __new__(cls, name, bases, attrs):
obj = super().__new__(cls, name, bases, attrs)
if not hasattr(obj, '_all_vars') or obj._all_vars is Variable._all_vars:
obj._all_vars = {}
return obj
class _predicatedescriptor(property):
"""
A property that behaves as a class method if accessed via a class
>>> class A:
... foo = False
... @_predicatedescriptor
... def is_foo(self):
... return self.foo
...
>>> a = A()
>>> a.is_foo
False
>>> A.is_foo(a)
False
"""
def __get__(self, instance, objtype=None):
if instance is None:
return self.fget
else:
return super().__get__(instance, objtype)
class Variable(Reprable, metaclass=VariableMeta):
"""
The base class for variable descriptors contains the variable's
name and some basic properties.
.. attribute:: name
The name of the variable.
.. attribute:: unknown_str
A set of values that represent unknowns in conversion from textual
formats. Default is `{"?", ".", "", "NA", "~", None}`.
.. attribute:: compute_value
A function for computing the variable's value when converting from
another domain which does not contain this variable. The base class
defines a static method `compute_value`, which returns `Unknown`.
Non-primitive variables must redefine it to return `None`.
.. attribute:: sparse
A flag about sparsity of the variable. When set, the variable suggests
it should be stored in a sparse matrix.
.. attribute:: source_variable
An optional descriptor of the source variable - if any - from which
this variable is derived and computed via :obj:`compute_value`.
.. attribute:: attributes
A dictionary with user-defined attributes of the variable
.. attribute:: master
The variable that this variable is a copy of. If a copy is made from a
copy, the copy has a reference to the original master. If the variable
is not a copy, it is its own master.
"""
Unknown = ValueUnknown
def __init__(self, name="", compute_value=None, *, sparse=False):
"""
Construct a variable descriptor.
"""
self.name = name
self._compute_value = compute_value
self.unknown_str = MISSING_VALUES
self.source_variable = None
self.sparse = sparse
self.attributes = {}
self.master = self
if name and compute_value is None:
if isinstance(self._all_vars, collections.defaultdict):
self._all_vars[name].append(self)
else:
self._all_vars[name] = self
self._colors = None
def make_proxy(self):
"""
Copy the variable and set the master to `self.master` or to `self`.
:return: copy of self
:rtype: Variable
"""
var = self.__class__()
var.__dict__.update(self.__dict__)
var.attributes = dict(self.attributes)
var.master = self.master
return var
def __eq__(self, other):
"""Two variables are equivalent if the originate from the same master"""
return hasattr(other, "master") and self.master is other.master
def __hash__(self):
if self.master is not self:
return hash(self.master)
else:
return super().__hash__()
@classmethod
def make(cls, name):
"""
Return an existing continuous variable with the given name, or
construct and return a new one.
"""
if not name:
raise ValueError("Variables without names cannot be stored or made")
var = cls._all_vars.get(name) or cls(name)
return var.make_proxy()
@classmethod
def _clear_cache(cls):
"""
Clear the list of variables for reuse by :obj:`make`.
"""
cls._all_vars.clear()
@staticmethod
def _clear_all_caches():
"""
Clears list of stored variables for all subclasses
"""
for cls in Variable.registry.values():
cls._clear_cache()
@classmethod
def is_primitive(cls, var=None):
"""
`True` if the variable's values are stored as floats.
Non-primitive variables can appear in the data only as meta attributes.
"""
to_check = cls if var is None else type(var)
return issubclass(to_check, (DiscreteVariable, ContinuousVariable))
@_predicatedescriptor
def is_discrete(self):
return isinstance(self, DiscreteVariable)
@_predicatedescriptor
def is_continuous(self):
return isinstance(self, ContinuousVariable)
@_predicatedescriptor
def is_string(self):
return isinstance(self, StringVariable)
@_predicatedescriptor
def is_time(self):
return isinstance(self, TimeVariable)
def repr_val(self, val):
"""
Return a textual representation of variable's value `val`. Argument
`val` must be a float (for primitive variables) or an arbitrary
Python object (for non-primitives).
Derived classes must overload the function.
"""
raise RuntimeError("variable descriptors must overload repr_val()")
str_val = repr_val
def to_val(self, s):
"""
Convert the given argument to a value of the variable. The
argument can be a string, a number or `None`. For primitive variables,
the base class provides a method that returns
:obj:`~Orange.data.Unknown` if `s` is found in
:obj:`~Orange.data.Variable.unknown_str`, and raises an exception
otherwise. For non-primitive variables it returns the argument itself.
Derived classes of primitive variables must overload the function.
:param s: value, represented as a number, string or `None`
:type s: str, float or None
:rtype: float or object
"""
if not self.is_primitive():
return s
if s in self.unknown_str:
return Unknown
raise RuntimeError(
"primitive variable descriptors must overload to_val()")
def val_from_str_add(self, s):
"""
Convert the given string to a value of the variable. The method
is similar to :obj:`to_val` except that it only accepts strings and
that it adds new values to the variable's domain where applicable.
The base class method calls `to_val`.
:param s: symbolic representation of the value
:type s: str
:rtype: float or object
"""
return self.to_val(s)
def __str__(self):
return self.name
@property
def compute_value(self):
return self._compute_value
def __reduce__(self):
if not self.name:
raise PickleError("Variables without names cannot be pickled")
# Use make to unpickle variables.
# "master" attribute is removed from the dict since make will point
# it to the correct variable. If we did not remove it, the (pickled)
# value would replace the one set by make.
__dict__ = dict(self.__dict__)
__dict__.pop("master", None)
return make_variable, (self.__class__, self._compute_value, self.name), __dict__
def copy(self, compute_value):
var = type(self)(self.name, compute_value=compute_value, sparse=self.sparse)
var.attributes = dict(self.attributes)
return var
del _predicatedescriptor
class ContinuousVariable(Variable):
"""
Descriptor for continuous variables.
.. attribute:: number_of_decimals
The number of decimals when the value is printed out (default: 3).
.. attribute:: adjust_decimals
A flag regulating whether the `number_of_decimals` is being adjusted
by :obj:`to_val`.
The value of `number_of_decimals` is set to 3 and `adjust_decimals`
is set to 2. When :obj:`val_from_str_add` is called for the first
time with a string as an argument, `number_of_decimals` is set to the
number of decimals in the string and `adjust_decimals` is set to 1.
    In subsequent calls of `to_val`, the number of decimals is
increased if the string argument has a larger number of decimals.
If the `number_of_decimals` is set manually, `adjust_decimals` is
set to 0 to prevent changes by `to_val`.
"""
TYPE_HEADERS = ('continuous', 'c', 'numeric', 'n')
def __init__(self, name="", number_of_decimals=None, compute_value=None, *, sparse=False):
"""
Construct a new continuous variable. The number of decimals is set to
three, but adjusted at the first call of :obj:`to_val`.
"""
super().__init__(name, compute_value, sparse=sparse)
if number_of_decimals is None:
self.number_of_decimals = 3
self.adjust_decimals = 2
else:
self.number_of_decimals = number_of_decimals
@property
def number_of_decimals(self):
return self._number_of_decimals
@property
def colors(self):
if self._colors is None:
try:
col1, col2, black = self.attributes["colors"]
self._colors = (hex_to_color(col1), hex_to_color(col2), black)
except (KeyError, ValueError):
# Stored colors were not available or invalid, use defaults
self._colors = ((0, 0, 255), (255, 255, 0), False)
return self._colors
@colors.setter
def colors(self, value):
col1, col2, black = self._colors = value
self.attributes["colors"] = \
[color_to_hex(col1), color_to_hex(col2), black]
# noinspection PyAttributeOutsideInit
@number_of_decimals.setter
def number_of_decimals(self, x):
self._number_of_decimals = x
self.adjust_decimals = 0
self._out_format = "%.{}f".format(self.number_of_decimals)
def to_val(self, s):
"""
Convert a value, given as an instance of an arbitrary type, to a float.
"""
if s in self.unknown_str:
return Unknown
return float(s)
def val_from_str_add(self, s):
"""
Convert a value from a string and adjust the number of decimals if
`adjust_decimals` is non-zero.
"""
return _variable.val_from_str_add_cont(self, s)
def repr_val(self, val):
"""
Return the value as a string with the prescribed number of decimals.
"""
if isnan(val):
return "?"
return self._out_format % val
str_val = repr_val
def copy(self, compute_value=None):
var = type(self)(self.name, self.number_of_decimals, compute_value, sparse=self.sparse)
var.attributes = dict(self.attributes)
return var
class DiscreteVariable(Variable):
"""
Descriptor for symbolic, discrete variables. Values of discrete variables
are stored as floats; the numbers corresponds to indices in the list of
values.
.. attribute:: values
A list of variable's values.
.. attribute:: ordered
Some algorithms (and, in particular, visualizations) may
sometime reorder the values of the variable, e.g. alphabetically.
This flag hints that the given order of values is "natural"
(e.g. "small", "middle", "large") and should not be changed.
.. attribute:: base_value
The index of the base value, or -1 if there is none. The base value is
used in some methods like, for instance, when creating dummy variables
for regression.
"""
TYPE_HEADERS = ('discrete', 'd', 'categorical')
_all_vars = collections.defaultdict(list)
presorted_values = []
def __init__(self, name="", values=(), ordered=False, base_value=-1,
compute_value=None, *, sparse=False):
""" Construct a discrete variable descriptor with the given values. """
self.values = list(values)
if not all(isinstance(value, str) for value in self.values):
raise TypeError("values of DiscreteVariables must be strings")
super().__init__(name, compute_value, sparse=sparse)
self.ordered = ordered
self.base_value = base_value
@property
def colors(self):
if self._colors is None:
from Orange.widgets.utils.colorpalette import ColorPaletteGenerator
self._colors = ColorPaletteGenerator.palette(self)
colors = self.attributes.get('colors')
if colors:
self._colors[:len(colors)] = [hex_to_color(color) for color in colors]
self._colors.flags.writeable = False
return self._colors
@colors.setter
def colors(self, value):
self._colors = value
self._colors.flags.writeable = False
self.attributes["colors"] = [color_to_hex(col) for col in value]
def set_color(self, i, color):
self.colors = self.colors
self._colors.flags.writeable = True
self._colors[i, :] = color
self._colors.flags.writeable = False
self.attributes["colors"][i] = color_to_hex(color)
def to_val(self, s):
"""
Convert the given argument to a value of the variable (`float`).
If the argument is numeric, its value is returned without checking
whether it is integer and within bounds. `Unknown` is returned if the
argument is one of the representations for unknown values. Otherwise,
the argument must be a string and the method returns its index in
:obj:`values`.
:param s: value, represented as a number, string or `None`
:rtype: float
"""
if s is None:
return ValueUnknown
if isinstance(s, Integral):
return s
if isinstance(s, Real):
return s if isnan(s) else floor(s + 0.25)
if s in self.unknown_str:
return ValueUnknown
if not isinstance(s, str):
raise TypeError('Cannot convert {} to value of "{}"'.format(
type(s).__name__, self.name))
return self.values.index(s)
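# Illustrative sketch (assumes the same Orange.data import path): strings map
# to their index in `values`, numbers pass through, and unknown markers map to
# NaN, mirroring the branches of `to_val` above.
#
#     size = DiscreteVariable("size", values=["small", "medium", "large"])
#     size.to_val("medium")   # -> 1
#     size.to_val(None)       # -> ValueUnknown (NaN)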
def add_value(self, s):
""" Add a value `s` to the list of values.
"""
if not isinstance(s, str):
raise TypeError("values of DiscreteVariables must be strings")
self.values.append(s)
self._colors = None
def val_from_str_add(self, s):
"""
Similar to :obj:`to_val`, except that it accepts only strings and that
it adds the value to the list if it does not exist yet.
:param s: symbolic representation of the value
:type s: str
:rtype: float
"""
s = str(s) if s is not None else s
try:
return ValueUnknown if s in self.unknown_str \
else self.values.index(s)
except ValueError:
self.add_value(s)
return len(self.values) - 1
def repr_val(self, val):
"""
Return a textual representation of the value (`self.values[int(val)]`)
or "?" if the value is unknown.
:param val: value
:type val: float (should be whole number)
:rtype: str
"""
if isnan(val):
return "?"
return '{}'.format(self.values[int(val)])
str_val = repr_val
def __reduce__(self):
if not self.name:
raise PickleError("Variables without names cannot be pickled")
return make_variable, (self.__class__, self._compute_value, self.name,
self.values, self.ordered, self.base_value), \
self.__dict__
@classmethod
def make(cls, name, values=(), ordered=False, base_value=-1):
"""
Return a variable with the given name and other properties. The method
first looks for a compatible existing variable: the existing
variable must have the same name and both variables must have either
ordered or unordered values. If values are ordered, the order must be
compatible: all common values must have the same order. If values are
unordered, the existing variable must have at least one common value
with the new one, except when any of the two lists of values is empty.
If a compatible variable is found, it is returned, with missing values
appended to the end of the list. If there is no explicit order, the
values are ordered using :obj:`ordered_values`. Otherwise, it
constructs and returns a new variable descriptor.
:param name: the name of the variable
:type name: str
:param values: symbolic values for the variable
:type values: list
:param ordered: tells whether the order of values is fixed
:type ordered: bool
:param base_value: the index of the base value, or -1 if there is none
:type base_value: int
:returns: an existing compatible variable or `None`
"""
if not name:
raise ValueError("Variables without names cannot be stored or made")
var = cls._find_compatible(
name, values, ordered, base_value)
if var:
return var
if not ordered:
base_value_rep = base_value != -1 and values[base_value]
values = cls.ordered_values(values)
if base_value != -1:
base_value = values.index(base_value_rep)
return cls(name, values, ordered, base_value)
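# Sketch of the reuse behaviour documented above (hypothetical variable names):
# a second `make` call with a compatible name and value set returns the same
# descriptor, with any missing values appended to it.
#
#     a = DiscreteVariable.make("grade", values=["low", "high"])
#     b = DiscreteVariable.make("grade", values=["high", "medium"])
#     a is b                 # -> True, the existing variable is reused
#     "medium" in b.values   # -> True, the new value was added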
@classmethod
def _find_compatible(cls, name, values=(), ordered=False, base_value=-1):
"""
Return a compatible existing variable, or `None` if there is none.
See :obj:`make` for details; this function differs by returning `None`
instead of constructing a new descriptor. (Method :obj:`make` calls
this function.)
:param name: the name of the variable
:type name: str
:param values: symbolic values for the variable
:type values: list
:param ordered: tells whether the order of values is fixed
:type ordered: bool
:param base_value: the index of the base value, or -1 if there is none
:type base_value: int
:returns: an existing compatible variable or `None`
"""
base_rep = base_value != -1 and values[base_value]
existing = cls._all_vars.get(name)
if existing is None:
return None
if not ordered:
values = cls.ordered_values(values)
for var in existing:
if (var.ordered != ordered or
var.base_value != -1
and var.values[var.base_value] != base_rep):
continue
if not values:
break # we have the variable - any existing values are OK
if not set(var.values) & set(values):
continue # empty intersection of values; not compatible
if ordered:
i = 0
for val in var.values:
if values[i] == val:
i += 1
if i == len(values):
break # we have all the values
else: # we have some remaining values: check them, add them
if set(values[i:]) & set(var.values):
continue # next var in existing
for val in values[i:]:
var.add_value(val)
break # we have the variable
else: # not ordered
vv = set(var.values)
for val in values:
if val not in vv:
var.add_value(val)
break # we have the variable
else:
return None
if base_value != -1 and var.base_value == -1:
var.base_value = var.values.index(base_rep)
return var
@staticmethod
def ordered_values(values):
"""
Return a sorted list of values. If there exists a prescribed order for
such a set of values, it is returned. Otherwise, values are sorted
alphabetically.
"""
for presorted in DiscreteVariable.presorted_values:
if set(values) == set(presorted):
return presorted
try:
return sorted(values, key=float)
except ValueError:
return sorted(values)
def copy(self, compute_value=None):
var = DiscreteVariable(self.name, self.values, self.ordered,
self.base_value, compute_value, sparse=self.sparse)
var.attributes = dict(self.attributes)
return var
class StringVariable(Variable):
"""
Descriptor for string variables. String variables can only appear as
meta attributes.
"""
Unknown = ""
TYPE_HEADERS = ('string', 's', 'text')
def to_val(self, s):
"""
Return the value as a string. If it is already a string, the same
object is returned.
"""
if s is None:
return ""
if isinstance(s, str):
return s
return str(s)
val_from_str_add = to_val
@staticmethod
def str_val(val):
"""Return a string representation of the value."""
if val == "":
return "?"
if isinstance(val, Value):
if val.value == "":
return "?"
val = val.value
return str(val)
def repr_val(self, val):
"""Return a string representation of the value."""
return '"{}"'.format(self.str_val(val))
class TimeVariable(ContinuousVariable):
"""
TimeVariable is a continuous variable with Unix epoch
(1970-01-01 00:00:00+0000) as the origin (0.0). Later dates are positive
real numbers (equivalent to Unix timestamp, with microseconds in the
fraction part), and the dates before it map to the negative real numbers.
Unfortunately, due to limitations of Python's datetime module, only dates
with year >= 1 (A.D.) are supported.
If time is specified without a date, Unix epoch is assumed.
If time is specified without a UTC offset, local time is assumed.
"""
_all_vars = {}
TYPE_HEADERS = ('time', 't')
UNIX_EPOCH = datetime(1970, 1, 1)
_ISO_FORMATS = [
# have_date, have_time, format_str
# in order of decreased probability
(1, 1, '%Y-%m-%d %H:%M:%S%z'),
(1, 1, '%Y-%m-%d %H:%M:%S'),
(1, 1, '%Y-%m-%d %H:%M'),
(1, 1, '%Y-%m-%dT%H:%M:%S%z'),
(1, 1, '%Y-%m-%dT%H:%M:%S'),
(1, 0, '%Y-%m-%d'),
(1, 1, '%Y-%m-%d %H:%M:%S.%f'),
(1, 1, '%Y-%m-%dT%H:%M:%S.%f'),
(1, 1, '%Y-%m-%d %H:%M:%S.%f%z'),
(1, 1, '%Y-%m-%dT%H:%M:%S.%f%z'),
(1, 1, '%Y%m%dT%H%M%S%z'),
(1, 1, '%Y%m%d%H%M%S%z'),
(0, 1, '%H:%M:%S.%f'),
(0, 1, '%H:%M:%S'),
(0, 1, '%H:%M'),
# These parse as continuous features (plain numbers)
(1, 1, '%Y%m%dT%H%M%S'),
(1, 1, '%Y%m%d%H%M%S'),
(1, 0, '%Y%m%d'),
(1, 0, '%Y%j'),
(1, 0, '%Y'),
(0, 1, '%H%M%S.%f'),
# BUG: In Python as in C, %j doesn't necessitate 0-padding,
# so these two lines must be in this order
(1, 0, '%Y-%m'),
(1, 0, '%Y-%j'),
]
# The regex that matches all above formats
REGEX = (r'^('
r'\d{1,4}-\d{2}-\d{2}([ T]\d{2}:\d{2}(:\d{2}(\.\d+)?([+-]\d{4})?)?)?|'
r'\d{1,4}\d{2}\d{2}(T?\d{2}\d{2}\d{2}([+-]\d{4})?)?|'
r'\d{2}:\d{2}(:\d{2}(\.\d+)?)?|'
r'\d{2}\d{2}\d{2}\.\d+|'
r'\d{1,4}(-?\d{2,3})?'
r')$')
_matches_iso_format = re.compile(REGEX).match
# UTC offset and associated timezone. If parsed datetime values provide an
# offset, it is used for display. If not all values have the same offset,
# +0000 (=UTC) timezone is used and utc_offset is set to False.
utc_offset = None
timezone = timezone.utc
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.have_date = 0
self.have_time = 0
def copy(self, compute_value=None):
copy = super().copy(compute_value=compute_value)
copy.have_date = self.have_date
copy.have_time = self.have_time
return copy
@staticmethod
def _tzre_sub(s, _subtz=re.compile(r'([+-])(\d\d):(\d\d)$').sub):
# Replace +ZZ:ZZ with ISO-compatible +ZZZZ, or strip +0000
return s[:-6] if s.endswith(('+00:00', '-00:00')) else _subtz(r'\1\2\3', s)
def repr_val(self, val):
if isnan(val):
return '?'
if not self.have_date and not self.have_time:
# The time is relative, unitless. The value is absolute.
return str(val.value) if isinstance(val, Value) else str(val)
# If you know how to simplify this, be my guest
seconds = int(val)
microseconds = int(round((val - seconds) * 1e6))
if val < 0:
if microseconds:
seconds, microseconds = seconds - 1, int(1e6) + microseconds
date = datetime.fromtimestamp(0, tz=self.timezone) + timedelta(seconds=seconds)
else:
date = datetime.fromtimestamp(seconds, tz=self.timezone)
date = str(date.replace(microsecond=microseconds))
if self.have_date and not self.have_time:
date = date.split()[0]
elif not self.have_date and self.have_time:
date = date.split()[1]
date = self._tzre_sub(date)
return date
str_val = repr_val
def parse(self, datestr):
"""
Return `datestr`, a datetime provided in one of ISO 8601 formats,
parsed as a real number. Value 0 marks the Unix epoch, positive values
are the dates after it, negative before.
If date is unspecified, epoch date is assumed.
If time is unspecified, 00:00:00.0 is assumed.
If timezone is unspecified, local time is assumed.
"""
if datestr in MISSING_VALUES:
return Unknown
datestr = datestr.strip().rstrip('Z')
ERROR = ValueError("Invalid datetime format '{}'. "
"Only ISO 8601 supported.".format(datestr))
if not self._matches_iso_format(datestr):
try:
# If it is a number, assume it is a unix timestamp
value = float(datestr)
self.have_date = self.have_time = 1
return value
except ValueError:
raise ERROR
for i, (have_date, have_time, fmt) in enumerate(self._ISO_FORMATS):
try:
dt = datetime.strptime(datestr, fmt)
except ValueError:
continue
else:
# Pop this most-recently-used format to front
if 0 < i < len(self._ISO_FORMATS) - 2:
self._ISO_FORMATS[i], self._ISO_FORMATS[0] = \
self._ISO_FORMATS[0], self._ISO_FORMATS[i]
self.have_date |= have_date
self.have_time |= have_time
if not have_date:
dt = dt.replace(self.UNIX_EPOCH.year,
self.UNIX_EPOCH.month,
self.UNIX_EPOCH.day)
break
else:
raise ERROR
# Remember UTC offset. If not all parsed values share the same offset,
# remember none of it.
offset = dt.utcoffset()
if self.utc_offset is not False:
if offset and self.utc_offset is None:
self.utc_offset = offset
self.timezone = timezone(offset)
elif self.utc_offset != offset:
self.utc_offset = False
self.timezone = timezone.utc
# Convert time to UTC timezone. In dates without timezone,
# localtime is assumed. See also:
# https://docs.python.org/3.4/library/datetime.html#datetime.datetime.timestamp
if dt.tzinfo:
dt -= dt.utcoffset()
dt = dt.replace(tzinfo=timezone.utc)
# Unix epoch is the origin, older dates are negative
try:
return dt.timestamp()
except OverflowError:
return -(self.UNIX_EPOCH - dt).total_seconds()
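# Illustrative sketch of `parse` (explicit UTC offsets are used so the result
# does not depend on the local timezone of the machine running it):
#
#     t = TimeVariable("timestamp")
#     t.parse("1970-01-01 00:00:01+0000")   # -> 1.0 (one second after the epoch)
#     t.parse("1969-12-31 23:59:59+0000")   # -> -1.0 (one second before it)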
def to_val(self, s):
"""
Convert a value, given as an instance of an arbitrary type, to a float.
"""
if isinstance(s, str):
return self.parse(s)
else:
return super().to_val(s)
/Flask-Dropbox-0.3.tar.gz/Flask-Dropbox-0.3/README.rst

=============
Flask-Dropbox
=============
.. image:: https://travis-ci.org/playpauseandstop/Flask-Dropbox.png?branch=master
:target: https://travis-ci.org/playpauseandstop/Flask-Dropbox
.. image:: https://pypip.in/v/Flask-Dropbox/badge.png
:target: https://crate.io/packages/Flask-Dropbox
Dropbox Python SDK support for Flask applications.
Requirements
============
* `Python <http://www.python.org/>`_ 2.6 or 2.7
* `Flask <http://flask.pocoo.org/>`_ 0.8 or higher
* `Dropbox Python SDK <http://pypi.python.org/pypi/dropbox>`_ 1.4 or higher
Installation
============
::
$ pip install Flask-Dropbox
License
=======
``Flask-Dropbox`` is licensed under the `BSD License
<https://github.com/playpauseandstop/Flask-Dropbox/blob/master/LICENSE>`_.
Configuration
=============
SECRET_KEY
----------
**REQUIRED.** Because the token is stored in Flask's `session
<http://flask.pocoo.org/docs/quickstart/#sessions>`_, you need to
configure a secret key for your application.
DROPBOX_KEY
-----------
**REQUIRED.** App key from Dropbox developer site.
DROPBOX_SECRET
--------------
**REQUIRED.** Secret key from Dropbox developer site.
DROPBOX_ACCESS_TYPE
-------------------
**REQUIRED.** Should be ``'dropbox'`` or ``'app_folder'`` as configured for
your app.
DROPBOX_CALLBACK_URL
--------------------
By default you don't need to provide this setting: ``Flask-Dropbox`` will set
up the callback URL automatically using the current host and request type.
If needed, you can still override this setting manually.
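
For example (the URL below is purely illustrative)::

    DROPBOX_CALLBACK_URL = 'https://example.com/dropbox/callback'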
DROPBOX_CALLBACK_TEMPLATE
-------------------------
Template used for showing errors while processing the oAuth callback from the
Dropbox API. By default: ``'dropbox/callback.html'``.
The following boolean variables may be passed to the template:
* ``error_oauth_token`` - Dropbox API didn't return an oAuth token.
* ``error_not_equal_tokens`` - the oAuth token from the Dropbox API does not
  match the request token stored in the Flask session.
* ``error_response`` - Dropbox API returned an ``ErrorResponse`` instance. In
  this case the actual exception is also passed to the template as ``error``.
DROPBOX_LOGIN_REDIRECT
----------------------
Page to redirect to after user successfully logged in with Dropbox account. By
default: ``/``.
DROPBOX_LOGOUT_REDIRECT
-----------------------
Page to redirect to after user logged out from authenticated Dropbox session.
By default: ``/``.
DROPBOX_CACHE_STORAGE
---------------------
.. versionadded:: 0.3
Where to store the account info, Dropbox client and Dropbox session instances.
In 0.2 and lower all of this was stored on the ``flask_dropbox.Dropbox``
instance, which isn't thread safe; from 0.3 onwards these values are stored on
``flask.g``. If you need custom storage, you can override this setting with an
object or with an import string that will be imported at runtime.
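
For example, a settings module could point the extension at a custom storage
class (``myapp.storage.RedisStorage`` below is purely hypothetical)::

    DROPBOX_CACHE_STORAGE = 'myapp.storage.RedisStorage'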
Usage
=====
``app.py``::
from flask import Flask
from flask.ext.dropbox import Dropbox, DropboxBlueprint
import settings
app = Flask(__name__)
app.config.from_object(settings)
dropbox = Dropbox(app)
dropbox.register_blueprint(url_prefix='/dropbox')
``settings.py``::
SECRET_KEY = 'some-secret-key'
DROPBOX_KEY = 'dropbox-app-key'
DROPBOX_SECRET = 'dropbox-app-secret'
DROPBOX_ACCESS_TYPE = 'app_folder'
``views.py``::
from flask import url_for, redirect, request
from werkzeug import secure_filename
from app import app, dropbox
@app.route('/')
def home():
return u'Click <a href="%s">here</a> to login with Dropbox.' % \
dropbox.login_url
@app.route('/success/<path:filename>')
def success(filename):
return u'File successfully uploaded as /%s' % filename
@app.route('/upload', methods=('GET', 'POST'))
def upload():
if not dropbox.is_authenticated:
return redirect(url_for('home'))
if request.method == 'POST':
file_obj = request.files['file']
if file_obj:
client = dropbox.client
filename = secure_filename(file_obj.filename)
# Actual uploading process
result = client.put_file('/' + filename, file_obj.read())
path = result['path'].lstrip('/')
return redirect(url_for('success', filename=path))
return u'<form action="" method="post">' \
u'<input name="file" type="file">' \
u'<input type="submit" value="Upload">' \
u'</form>'
Bugs, feature requests?
=======================
If you find a bug in the ``Flask-Dropbox`` library, please add a new issue to
the project's `GitHub issues
<https://github.com/playpauseandstop/Flask-Dropbox/issues>`_.
ChangeLog
=========
0.3
---
+ Flask 0.10 support
+ Store account info, Dropbox client and session in thread-safe ``flask.g``
storage instead of ``flask_dropbox.Dropbox`` instance
+ Introduce ``DROPBOX_CACHE_STORAGE`` setting
0.2
---
+ Add ``init_app`` method to ``Dropbox`` extension class.
+ Do not pass the ``dropbox`` instance when initializing the
  ``DropboxBlueprint`` class.
+ Use ``current_app.extensions['dropbox']`` statement in views for getting
initialized ``Dropbox`` instance.
0.1.5
-----
+ Add ``register_blueprint`` shortcut to initialize ``DropboxBlueprint`` with
default values in one line.
+ Move the ``Dropbox`` class from ``flask.ext.dropbox.utils`` to the
  ``flask.ext.dropbox.extension`` module. This won't affect your code if you
  used ``from flask.ext.dropbox import Dropbox`` imports.
0.1.4
-----
+ Add ``dropbox`` library as install requirement in ``setup.py``.
+ Update project short description.
0.1.3
-----
+ Fix handling of templates when installing via ``setup.py``.
0.1.2
-----
+ Add support of Dropbox SDK 1.4.1
0.1.1
-----
+ Check that the access token is an instance of the ``oauth.OAuthToken`` class
  if it exists in the session.
0.1
---
* Initial release.
/AyiinXd-0.0.8-cp311-cp311-macosx_10_9_universal2.whl/fipper/methods/chats/ban_chat_member.py
from datetime import datetime
from typing import Union
import fipper
from fipper import raw, utils
from fipper import types
class BanChatMember:
async def ban_chat_member(
self: "fipper.Client",
chat_id: Union[int, str],
user_id: Union[int, str],
until_date: datetime = utils.zero_datetime()
) -> Union["types.Message", bool]:
"""Ban a user from a group, a supergroup or a channel.
In the case of supergroups and channels, the user will not be able to return to the group on their own using
invite links, etc., unless unbanned first. You must be an administrator in the chat for this to work and must
have the appropriate admin rights.
Note:
In regular groups (non-supergroups), this method will only work if the "All Members Are Admins" setting is
off in the target group. Otherwise members may only be removed by the group's creator or by the member
that added them.
.. include:: /_includes/usable-by/users-bots.rst
Parameters:
chat_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target chat.
user_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target user.
For a contact that exists in your Telegram address book you can use his phone number (str).
until_date (:py:obj:`~datetime.datetime`, *optional*):
Date when the user will be unbanned.
If user is banned for more than 366 days or less than 30 seconds from the current time they are
considered to be banned forever. Defaults to epoch (ban forever).
Returns:
:obj:`~fipper.types.Message` | ``bool``: On success, a service message will be returned (when applicable),
otherwise, in case a message object couldn't be returned, True is returned.
Example:
.. code-block:: python
from datetime import datetime, timedelta
# Ban chat member forever
await app.ban_chat_member(chat_id, user_id)
# Ban chat member and automatically unban after 24h
await app.ban_chat_member(chat_id, user_id, datetime.now() + timedelta(days=1))
"""
chat_peer = await self.resolve_peer(chat_id)
user_peer = await self.resolve_peer(user_id)
if isinstance(chat_peer, raw.types.InputPeerChannel):
r = await self.invoke(
raw.functions.channels.EditBanned(
channel=chat_peer,
participant=user_peer,
banned_rights=raw.types.ChatBannedRights(
until_date=utils.datetime_to_timestamp(until_date),
view_messages=True,
send_messages=True,
send_media=True,
send_stickers=True,
send_gifs=True,
send_games=True,
send_inline=True,
embed_links=True
)
)
)
else:
r = await self.invoke(
raw.functions.messages.DeleteChatUser(
chat_id=abs(chat_id),
user_id=user_peer
)
)
for i in r.updates:
if isinstance(i, (raw.types.UpdateNewMessage, raw.types.UpdateNewChannelMessage)):
return await types.Message._parse(
self, i.message,
{i.id: i for i in r.users},
{i.id: i for i in r.chats}
)
else:
return True | PypiClean |
/DACBench-0.2.0.tar.gz/DACBench-0.2.0/dacbench/envs/sgd.py

import json
import math
import numbers
import random
import warnings
from enum import IntEnum, auto
from functools import reduce
import numpy as np
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad
from torchvision import datasets, transforms
from dacbench import AbstractEnv
warnings.filterwarnings("ignore")
def reward_range(frange):
def wrapper(f):
f.frange = frange
return f
return wrapper
class Reward(IntEnum):
TrainingLoss = auto()
ValidationLoss = auto()
LogTrainingLoss = auto()
LogValidationLoss = auto()
DiffTraining = auto()
DiffValidation = auto()
LogDiffTraining = auto()
LogDiffValidation = auto()
FullTraining = auto()
def __call__(self, f):
if hasattr(self, "func"):
raise ValueError("Can not assign the same reward to a different function!")
self.func = f
return f
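# Illustrative sketch (hypothetical config values): the benchmark config may
# name the reward either as a `Reward` member or as its string name; the
# environment then exposes the decorated function's range as `reward_range`.
#
#     config.reward_type = Reward.LogTrainingLoss   # or the string "LogTrainingLoss"
#     env = SGDEnv(config)
#     env.reward_range                              # -> [-(10 ** 9), 10 ** 9]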
class SGDEnv(AbstractEnv):
"""
Environment to control the learning rate of the optimizer (Adam, RMSprop or momentum SGD)
"""
def __init__(self, config):
"""
Initialize SGD Env
Parameters
-------
config : objdict
Environment configuration
"""
super(SGDEnv, self).__init__(config)
self.batch_size = config.training_batch_size
self.validation_batch_size = config.validation_batch_size
self.no_cuda = config.no_cuda
self.current_batch_size = config.training_batch_size
self.on_features = config.features
self.cd_paper_reconstruction = config.cd_paper_reconstruction
self.cd_bias_correction = config.cd_bias_correction
self.crashed = False
self.terminate_on_crash = config.terminate_on_crash
self.crash_penalty = config.crash_penalty
if isinstance(config.reward_type, Reward):
self.reward_type = config.reward_type
elif isinstance(config.reward_type, str):
try:
self.reward_type = getattr(Reward, config.reward_type)
except AttributeError:
raise ValueError(f"{config.reward_type} is not a valid reward type!")
else:
raise ValueError(f"Type {type(config.reward_type)} is not valid!")
self.use_cuda = not self.no_cuda and torch.cuda.is_available()
self.device = torch.device("cuda" if self.use_cuda else "cpu")
self.training_validation_ratio = config.train_validation_ratio
self.dataloader_shuffle = config.dataloader_shuffle
# self.test_dataset = None
self.train_dataset = None
self.validation_dataset = None
self.train_loader = None
# self.test_loader = None
self.validation_loader = None
self.train_loader_it = None
self.validation_loader_it = None
self.train_batch_index = 0
self.epoch_index = 0
self.current_training_loss = None
self.loss_batch = None
self.prev_training_loss = None
self._current_validation_loss = torch.zeros(
1, device=self.device, requires_grad=False
)
self._current_validation_loss.calculated = False
self.prev_validation_loss = torch.zeros(
1, device=self.device, requires_grad=False
)
self.model = None
self.val_model = None
# TODO:
"""
TODO: Samuel Mueller (PhD student in our group) also uses backpack and has ran into a similar memory leak.
He solved it calling this custom made RECURSIVE memory_cleanup function:
# from backpack import memory_cleanup
# def recursive_backpack_memory_cleanup(module: torch.nn.Module):
# memory_cleanup(module)
# for m in module.modules():
# memory_cleanup(m)
(calling this after computing the training loss/gradients and after validation loss should suffice)
"""
self.parameter_count = 0
self.layer_sizes = []
self.loss_function = config.loss_function(**config.loss_function_kwargs)
self.loss_function = extend(self.loss_function)
self.val_loss_function = config.loss_function(**config.val_loss_function_kwargs)
self.initial_lr = config.lr * torch.ones(
1, device=self.device, requires_grad=False
)
self.current_lr = config.lr * torch.ones(
1, device=self.device, requires_grad=False
)
self.optimizer_name = config.optimizer
self.beta1 = config.beta1
self.beta2 = config.beta2
self.epsilon = config.epsilon
# RMSprop parameters
self.beta2 = config.beta2
self.m = 0
self.v = 0
# Momentum parameters
self.sgd_momentum_v = 0
self.sgd_rho = 0.9
self.clip_grad = config.clip_grad
self.t = 0
self.step_count = torch.zeros(1, device=self.device, requires_grad=False)
self.prev_direction = None
self.current_direction = None
self.predictiveChangeVarDiscountedAverage = torch.zeros(
1, device=self.device, requires_grad=False
)
self.predictiveChangeVarUncertainty = torch.zeros(
1, device=self.device, requires_grad=False
)
self.lossVarDiscountedAverage = torch.zeros(
1, device=self.device, requires_grad=False
)
self.lossVarUncertainty = torch.zeros(
1, device=self.device, requires_grad=False
)
self.discount_factor = config.discount_factor
self.firstOrderMomentum = torch.zeros(
1, device=self.device, requires_grad=False
)
self.secondOrderMomentum = torch.zeros(
1, device=self.device, requires_grad=False
)
if self.optimizer_name == "adam":
self.get_optimizer_direction = self.get_adam_direction
elif self.optimizer_name == "rmsprop":
self.get_optimizer_direction = self.get_rmsprop_direction
elif self.optimizer_name == "momentum":
self.get_optimizer_direction = self.get_momentum_direction
else:
raise NotImplementedError
if "reward_function" in config.keys():
self._get_reward = config["reward_function"]
else:
self._get_reward = self.reward_type.func
if "state_method" in config.keys():
self.get_state = config["state_method"]
else:
self.get_state = self.get_default_state
self.reward_range = self.reward_type.func.frange
def get_reward(self):
return self._get_reward(self)
@reward_range([-(10**9), 0])
@Reward.TrainingLoss
def get_training_reward(self):
return -self.current_training_loss.item()
@reward_range([-(10**9), 0])
@Reward.ValidationLoss
def get_validation_reward(self):
return -self.current_validation_loss.item()
@reward_range([-(10**9), (10**9)])
@Reward.LogTrainingLoss
def get_log_training_reward(self):
return -torch.log(self.current_training_loss).item()
@reward_range([-(10**9), (10**9)])
@Reward.LogValidationLoss
def get_log_validation_reward(self):
return -torch.log(self.current_validation_loss).item()
@reward_range([-(10**9), (10**9)])
@Reward.LogDiffTraining
def get_log_diff_training_reward(self):
return -(
torch.log(self.current_training_loss) - torch.log(self.prev_training_loss)
).item()
@reward_range([-(10**9), (10**9)])
@Reward.LogDiffValidation
def get_log_diff_validation_reward(self):
return -(
torch.log(self.current_validation_loss)
- torch.log(self.prev_validation_loss)
).item()
@reward_range([-(10**9), (10**9)])
@Reward.DiffTraining
def get_diff_training_reward(self):
return (self.current_training_loss - self.prev_training_loss).item()
@reward_range([-(10**9), (10**9)])
@Reward.DiffValidation
def get_diff_validation_reward(self):
return (self.current_validation_loss - self.prev_validation_loss).item()
@reward_range([-(10**9), 0])
@Reward.FullTraining
def get_full_training_reward(self):
return -self._get_full_training_loss(loader=self.train_loader).item()
def get_full_training_loss(self):
return -self.get_full_training_reward()
@property
def crash(self):
self.crashed = True
truncated = False
terminated = False
if self.c_step >= self.n_steps:
truncated = True
else:
terminated = self.terminate_on_crash
return self.get_state(self), self.crash_penalty, terminated, truncated, {}
def seed(self, seed=None, seed_action_space=False):
"""
Set rng seed
Parameters
----------
seed:
seed for rng
seed_action_space: bool, default False
whether to seed the action space as well
"""
(seed,) = super().seed(seed, seed_action_space)
if seed is not None:
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
return [seed]
def step(self, action):
"""
Execute environment step
Parameters
----------
action : list
action to execute
Returns
-------
np.array, float, bool, bool, dict
state, reward, terminated, truncated, info
"""
truncated = super(SGDEnv, self).step_()
self.step_count += 1
index = 0
if not isinstance(action, int) and not isinstance(action, float):
action = action.item()
if not isinstance(action, numbers.Number):
action = action[0]
if np.isnan(action):
return self.crash
new_lr = torch.Tensor([action]).to(self.device)
self.current_lr = new_lr
direction = self.get_optimizer_direction()
if np.isnan(direction).any():
return self.crash
self.current_direction = direction
delta_w = torch.mul(new_lr, direction)
for i, p in enumerate(self.model.parameters()):
layer_size = self.layer_sizes[i]
p.data = p.data - delta_w[index : index + layer_size].reshape(
shape=p.data.shape
)
index += layer_size
self.model.zero_grad()
self.prev_training_loss = self.current_training_loss
if self._current_validation_loss.calculated:
self.prev_validation_loss = self.current_validation_loss
self.train_network()
reward = self.get_reward()
if np.isnan(reward):
return self.crash
state = self.get_state(self)
for value in state.values():
if np.isnan(value):
return self.crash
return state, reward, False, truncated, {}
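# Usage sketch (hypothetical benchmark setup; the action is simply the new
# learning rate applied before the wrapped optimizer step):
#
#     state, info = env.reset()
#     state, reward, terminated, truncated, info = env.step(1e-3)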
def _architecture_constructor(self, arch_str):
layer_specs = []
layer_strs = arch_str.split("-")
for layer_str in layer_strs:
idx = layer_str.find("(")
if idx == -1:
nn_module_name = layer_str
vargs = []
else:
nn_module_name = layer_str[:idx]
vargs_json_str = '{"tmp": [' + layer_str[idx + 1 : -1] + "]}"
vargs = json.loads(vargs_json_str)["tmp"]
layer_specs.append((getattr(torch.nn, nn_module_name), vargs))
def model_constructor():
layers = [cls(*vargs) for cls, vargs in layer_specs]
return torch.nn.Sequential(*layers)
return model_constructor
def reset(self, seed=None, options={}):
"""
Reset environment
Returns
-------
np.array
Environment state
"""
super(SGDEnv, self).reset_(seed)
dataset = self.instance[0]
instance_seed = self.instance[1]
construct_model = self._architecture_constructor(self.instance[2])
self.n_steps = self.instance[3]
dataset_size = self.instance[4]
self.crashed = False
self.seed(instance_seed)
self.model = construct_model().to(self.device)
self.val_model = construct_model().to(self.device)
def init_weights(m):
if type(m) == torch.nn.Linear or type(m) == torch.nn.Conv2d:
torch.nn.init.xavier_normal(m.weight)
m.bias.data.fill_(0.0)
if self.cd_paper_reconstruction:
self.model.apply(init_weights)
train_dataloader_args = {
"batch_size": self.batch_size,
"drop_last": True,
"shuffle": self.dataloader_shuffle,
}
validation_dataloader_args = {
"batch_size": self.validation_batch_size,
"drop_last": True,
"shuffle": False,
} # SA: shuffling empty data loader causes exception
if self.use_cuda:
param = {"num_workers": 1, "pin_memory": True}
train_dataloader_args.update(param)
validation_dataloader_args.update(param)
if dataset == "MNIST":
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
train_dataset = datasets.MNIST(
"../data", train=True, download=True, transform=transform
)
# self.test_dataset = datasets.MNIST('../data', train=False, transform=transform)
elif dataset == "CIFAR":
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize(
(0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)
),
]
)
train_dataset = datasets.CIFAR10(
"../data", train=True, download=True, transform=transform
)
# self.test_dataset = datasets.MNIST('../data', train=False, transform=transform)
else:
raise NotImplementedError
if dataset_size is not None:
train_dataset = torch.utils.data.Subset(
train_dataset, range(0, dataset_size)
)
training_dataset_limit = math.floor(
len(train_dataset) * self.training_validation_ratio
)
validation_dataset_limit = len(train_dataset)
self.train_dataset = torch.utils.data.Subset(
train_dataset, range(0, training_dataset_limit - 1)
)
self.validation_dataset = torch.utils.data.Subset(
train_dataset, range(training_dataset_limit, validation_dataset_limit)
)
self.train_loader = torch.utils.data.DataLoader(
self.train_dataset, **train_dataloader_args
)
# self.test_loader = torch.utils.data.DataLoader(self.test_dataset, **train_dataloader_args)
self.validation_loader = torch.utils.data.DataLoader(
self.validation_dataset, **validation_dataloader_args
)
self.train_batch_index = 0
self.epoch_index = 0
self.train_loader_it = iter(self.train_loader)
self.validation_loader_it = iter(self.validation_loader)
self.parameter_count = 0
self.layer_sizes = []
for p in self.model.parameters():
layer_size = reduce(lambda x, y: x * y, p.shape)
self.layer_sizes.append(layer_size)
self.parameter_count += layer_size
self.model = extend(self.model)
self.model.zero_grad()
self.model.train()
self.val_model.eval()
self.current_training_loss = None
self.loss_batch = None
# Momentum parameters
self.m = 0
self.v = 0
self.sgd_momentum_v = 0
self.t = 0
self.step_count = torch.zeros(1, device=self.device, requires_grad=False)
self.current_lr = self.initial_lr
self.prev_direction = torch.zeros(
(self.parameter_count,), device=self.device, requires_grad=False
)
self.current_direction = torch.zeros(
(self.parameter_count,), device=self.device, requires_grad=False
)
self.predictiveChangeVarDiscountedAverage = torch.zeros(
1, device=self.device, requires_grad=False
)
self.predictiveChangeVarUncertainty = torch.zeros(
1, device=self.device, requires_grad=False
)
self.lossVarDiscountedAverage = torch.zeros(
1, device=self.device, requires_grad=False
)
self.lossVarUncertainty = torch.zeros(
1, device=self.device, requires_grad=False
)
self.firstOrderMomentum = torch.zeros(
1, device=self.device, requires_grad=False
)
self.secondOrderMomentum = torch.zeros(
1, device=self.device, requires_grad=False
)
self._current_validation_loss = torch.zeros(
1, device=self.device, requires_grad=False
)
self._current_validation_loss.calculated = False
self.prev_validation_loss = torch.zeros(
1, device=self.device, requires_grad=False
)
self.train_network()
return self.get_state(self), {}
def set_writer(self, writer):
self.writer = writer
def close(self):
"""
No additional cleanup necessary
Returns
-------
bool
Cleanup flag
"""
return True
def render(self, mode: str = "human"):
"""
Render env in human mode
Parameters
----------
mode : str
Execution mode
"""
if mode != "human":
raise NotImplementedError
pass
def get_default_state(self, _):
"""
Gather state description
Returns
-------
dict
Environment state
"""
self.gradients = self._get_gradients()
self.gradients = self.gradients.clip(*self.clip_grad)
(
self.firstOrderMomentum,
self.secondOrderMomentum,
self.sgdMomentum,
) = self._get_momentum(self.gradients)
if (
"predictiveChangeVarDiscountedAverage" in self.on_features
or "predictiveChangeVarUncertainty" in self.on_features
):
(
predictiveChangeVarDiscountedAverage,
predictiveChangeVarUncertainty,
) = self._get_predictive_change_features(self.current_lr)
if (
"lossVarDiscountedAverage" in self.on_features
or "lossVarUncertainty" in self.on_features
):
lossVarDiscountedAverage, lossVarUncertainty = self._get_loss_features()
if "alignment" in self.on_features:
alignment = self._get_alignment()
state = {}
if "predictiveChangeVarDiscountedAverage" in self.on_features:
state[
"predictiveChangeVarDiscountedAverage"
] = predictiveChangeVarDiscountedAverage.item()
if "predictiveChangeVarUncertainty" in self.on_features:
state[
"predictiveChangeVarUncertainty"
] = predictiveChangeVarUncertainty.item()
if "lossVarDiscountedAverage" in self.on_features:
state["lossVarDiscountedAverage"] = lossVarDiscountedAverage.item()
if "lossVarUncertainty" in self.on_features:
state["lossVarUncertainty"] = lossVarUncertainty.item()
if "currentLR" in self.on_features:
state["currentLR"] = self.current_lr.item()
if "trainingLoss" in self.on_features:
if self.crashed:
state["trainingLoss"] = 0.0
else:
state["trainingLoss"] = self.current_training_loss.item()
if "validationLoss" in self.on_features:
if self.crashed:
state["validationLoss"] = 0.0
else:
state["validationLoss"] = self.current_validation_loss.item()
if "step" in self.on_features:
state["step"] = self.step_count.item()
if "alignment" in self.on_features:
state["alignment"] = alignment.item()
if "crashed" in self.on_features:
state["crashed"] = self.crashed
return state
def _train_batch_(self):
(data, target) = next(self.train_loader_it)
data, target = data.to(self.device), target.to(self.device)
self.current_batch_size = data.size()[0]
output = self.model(data)
loss = self.loss_function(output, target)
with backpack(BatchGrad()):
loss.mean().backward()
loss_value = loss.mean()
self.loss_batch = loss
self.current_training_loss = torch.unsqueeze(loss_value.detach(), dim=0)
self.train_batch_index += 1
self._current_validation_loss.calculated = False
def train_network(self):
try:
self._train_batch_()
except StopIteration:
self.train_batch_index = 0
self.epoch_index += 1
self.train_loader_it = iter(self.train_loader)
self._train_batch_()
def _get_full_training_loss(self, loader):
for target_param, param in zip(
self.val_model.parameters(), self.model.parameters()
):
target_param.data.copy_(param.data)
loss = torch.zeros(1, device=self.device, requires_grad=False)
with torch.no_grad():
for data, target in loader:
data, target = data.to(self.device), target.to(self.device)
output = self.val_model(data)
loss += self.val_loss_function(output, target).sum().detach()
loss /= len(loader.dataset)
return loss
@property
def current_validation_loss(self):
if not self._current_validation_loss.calculated:
self._current_validation_loss = self._get_validation_loss()
self._current_validation_loss.calculated = True
return self._current_validation_loss
def _get_validation_loss_(self):
with torch.no_grad():
(data, target) = next(self.validation_loader_it)
data, target = data.to(self.device), target.to(self.device)
output = self.val_model(data)
validation_loss = self.val_loss_function(output, target).mean()
validation_loss = torch.unsqueeze(validation_loss.detach(), dim=0)
return validation_loss
def _get_validation_loss(self):
for target_param, param in zip(
self.val_model.parameters(), self.model.parameters()
):
target_param.data.copy_(param.data)
try:
validation_loss = self._get_validation_loss_()
except StopIteration:
self.validation_loader_it = iter(self.validation_loader)
validation_loss = self._get_validation_loss_()
return validation_loss
def _get_gradients(self):
gradients = []
for p in self.model.parameters():
if p.grad is None:
continue
gradients.append(p.grad.flatten())
gradients = torch.cat(gradients, dim=0)
return gradients
def _get_momentum(self, gradients):
self.t += 1
self.m = self.beta1 * self.m + (1 - self.beta1) * gradients
self.v = self.beta2 * self.v + (1 - self.beta2) * torch.square(gradients)
bias_corrected_m = self.m / (1 - self.beta1**self.t)
bias_corrected_v = self.v / (1 - self.beta2**self.t)
self.sgd_momentum_v = self.sgd_rho * self.sgd_momentum_v + gradients
return bias_corrected_m, bias_corrected_v, self.sgd_momentum_v
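# The estimates above are the standard Adam moments:
#     m_t = beta1 * m_{t-1} + (1 - beta1) * g_t,     m_hat = m_t / (1 - beta1 ** t)
#     v_t = beta2 * v_{t-1} + (1 - beta2) * g_t**2,  v_hat = v_t / (1 - beta2 ** t)
# while `sgd_momentum_v` keeps a separate, non-bias-corrected momentum buffer.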
def get_adam_direction(self):
return self.firstOrderMomentum / (
torch.sqrt(self.secondOrderMomentum) + self.epsilon
)
def get_rmsprop_direction(self):
return self.gradients / (torch.sqrt(self.secondOrderMomentum) + self.epsilon)
def get_momentum_direction(self):
return self.sgd_momentum_v
def _get_loss_features(self):
if self.crashed:
return torch.tensor(0.0), torch.tensor(0.0)
bias_correction = (
(1 - self.discount_factor ** (self.c_step + 1))
if self.cd_bias_correction
else 1
)
with torch.no_grad():
loss_var = torch.log(torch.var(self.loss_batch))
self.lossVarDiscountedAverage = (
self.discount_factor * self.lossVarDiscountedAverage
+ (1 - self.discount_factor) * loss_var
)
self.lossVarUncertainty = (
self.discount_factor * self.lossVarUncertainty
+ (1 - self.discount_factor)
* (loss_var - self.lossVarDiscountedAverage / bias_correction) ** 2
)
return (
self.lossVarDiscountedAverage / bias_correction,
self.lossVarUncertainty / bias_correction,
)
def _get_predictive_change_features(self, lr):
if self.crashed:
return torch.tensor(0.0), torch.tensor(0.0)
bias_correction = (
(1 - self.discount_factor ** (self.c_step + 1))
if self.cd_bias_correction
else 1
)
batch_gradients = []
for i, (name, param) in enumerate(self.model.named_parameters()):
grad_batch = param.grad_batch.reshape(
self.current_batch_size, self.layer_sizes[i]
)
batch_gradients.append(grad_batch)
batch_gradients = torch.cat(batch_gradients, dim=1)
update_value = torch.mul(lr, self.get_optimizer_direction())
predictive_change = torch.log(
torch.var(-1 * torch.matmul(batch_gradients, update_value))
)
self.predictiveChangeVarDiscountedAverage = (
self.discount_factor * self.predictiveChangeVarDiscountedAverage
+ (1 - self.discount_factor) * predictive_change
)
self.predictiveChangeVarUncertainty = (
self.discount_factor * self.predictiveChangeVarUncertainty
+ (1 - self.discount_factor)
* (
predictive_change
- self.predictiveChangeVarDiscountedAverage / bias_correction
)
** 2
)
return (
self.predictiveChangeVarDiscountedAverage / bias_correction,
self.predictiveChangeVarUncertainty / bias_correction,
)
def _get_alignment(self):
if self.crashed:
return torch.tensor(0.0)
alignment = torch.mean(
torch.sign(torch.mul(self.prev_direction, self.current_direction))
)
alignment = torch.unsqueeze(alignment, dim=0)
self.prev_direction = self.current_direction
return alignment
def generate_instance_file(self, file_name, mode="test", n=100):
header = ["ID", "dataset", "architecture", "seed", "steps"]
# dataset name, architecture, dataset size, sample dimension, number of max pool layers, hidden layers, test architecture convolutional layers
architectures = [
(
"MNIST",
"Conv2d(1, {0}, 3, 1, 1)-MaxPool2d(2, 2)-Conv2d({0}, {1}, 3, 1, 1)-MaxPool2d(2, 2)-Conv2d({1}, {2}, 3, 1, 1)-ReLU-Flatten-Linear({3}, 10)-LogSoftmax(1)",
60000,
28,
2,
3,
[20, 50, 500],
),
(
"CIFAR",
"Conv2d(3, {0}, 3, 1, 1)-MaxPool2d(2, 2)-ReLU-Conv2d({0}, {1}, 3, 1, 1)-ReLU-MaxPool2d(2, 2)-Conv2d({1}, {2}, 3, 1, 1)-ReLU-MaxPool2d(2, 2)-Conv2d({2}, {3}, 3, 1, 1)-ReLU-Flatten-Linear({4}, 10)-LogSoftmax(1)",
60000,
32,
3,
4,
[32, 32, 64, 64],
),
]
if mode == "test":
seed_list = [random.randrange(start=0, stop=1e9) for _ in range(n)]
for i in range(len(architectures)):
fname = file_name + "_" + architectures[i][0].lower() + ".csv"
steps = int(1e8)
conv = architectures[i][6]
hidden_layers = architectures[i][5]
sample_size = architectures[i][3]
pool_layer_count = architectures[i][4]
linear_layer_size = conv[-1] * pow(
sample_size / pow(2, pool_layer_count), 2
)
linear_layer_size = int(round(linear_layer_size))
dataset = architectures[i][0]
if hidden_layers == 3:
architecture = architectures[i][1].format(
conv[0], conv[1], conv[2], linear_layer_size
)
else:
architecture = architectures[i][1].format(
conv[0], conv[1], conv[2], conv[3], linear_layer_size
)
# args = conv
# args.append(linear_layer_size)
# # architecture = architectures[i][1].format(**conv)
# args = {0: conv[0], 1: conv[1], 2: conv[2], 3: linear_layer_size}
# architecture = architectures[i][1].format(**args)
with open(fname, "w", encoding="UTF8") as f:
for h in header:
f.write(h + ";")
f.write("\n")
for id in range(0, n):
f.write(str(id) + ";")
f.write(dataset + ";")
f.write(architecture + ";")
seed = seed_list[id]
f.write(str(seed) + ";")
f.write(str(steps) + ";")
f.write("\n")
f.close()
else:
dataset_index = 0
dataset_size_start = 0.1
dataset_size_stop = 0.5
steps_start = 300
steps_stop = 1000
conv1_start = 2
conv1_stop = 10
conv2_start = 5
conv2_stop = 25
conv3_start = 50
conv3_stop = 250
dataset_list = [dataset_index for _ in range(n)]
dataset_size_list = [
random.uniform(dataset_size_start, dataset_size_stop) for _ in range(n)
]
seed_list = [random.randrange(start=0, stop=1e9) for _ in range(n)]
steps_list = [
random.randrange(start=steps_start, stop=steps_stop) for _ in range(n)
]
conv1_list = [
random.randrange(start=conv1_start, stop=conv1_stop) for _ in range(n)
]
conv2_list = [
random.randrange(start=conv2_start, stop=conv2_stop) for _ in range(n)
]
conv3_list = [
random.randrange(start=conv3_start, stop=conv3_stop) for _ in range(n)
]
fname = file_name + ".csv"
with open(fname, "w", encoding="UTF8") as f:
for h in header:
f.write(h + ";")
f.write("\n")
for id in range(0, n):
f.write(str(id) + ";")
sample_size = architectures[dataset_list[id]][3]
pool_layer_count = architectures[dataset_list[id]][4]
linear_layer_size = conv3_list[id] * pow(
sample_size / pow(2, pool_layer_count), 2
)
linear_layer_size = int(round(linear_layer_size))
dataset_size = int(
dataset_size_list[id] * architectures[dataset_list[id]][2]
)
dataset = (
architectures[dataset_list[id]][0] + "_" + str(dataset_size)
)
architecture = architectures[dataset_list[id]][1].format(
conv1_list[id],
conv2_list[id],
conv3_list[id],
linear_layer_size,
)
f.write(dataset + ";")
f.write(architecture + ";")
seed = seed_list[id]
f.write(str(seed) + ";")
steps = steps_list[id]
f.write(str(steps) + ";")
f.write("\n")
f.close()
/Attendance%20Module-0.3.tar.gz/Attendance Module-0.3/Attendance/app.py

import json
import os.path
from pathlib import Path
from typing import Dict
import requests
import yaml
from flask import Flask # For web server
from flask import request, send_file, send_from_directory
from flask_cors import CORS, cross_origin
from pydantic import BaseModel
from Attendance.attendance import Attendance
from Attendance.database import AttendanceAlreadyExists, Database
from Attendance.external_connector import ExternalConnector, ExternalConnectorStub
# APP Initialization ################
app = Flask(__name__)
cors = CORS(app)
CORS(app)
CONFIG_FILE = Path("attendance_config.yaml")
if CONFIG_FILE.is_file():
with CONFIG_FILE.open(mode="r", encoding="utf-8") as f:
app.config.update(**yaml.safe_load(f))
else:
app.config.update(**{"debug": False})
DB = Database()
if not app.config.get("debug", False):
app.config["services"]: ExternalConnector = ExternalConnector()
else:
app.config["services"]: ExternalConnector = ExternalConnectorStub()
STATIC_DIRECTORY = Path(os.path.dirname(__file__)) / "static"
##################################
@app.route("/test")
def test_view():
return send_file(STATIC_DIRECTORY / "test.html")
@app.route("/")
def teacher_view():
return send_file(STATIC_DIRECTORY / "teacher-view.html")
@app.route("/images/<path:path>")
def send_image(path: Path):
return send_from_directory(STATIC_DIRECTORY / "images", path)
@app.route("/scripts/<path:path>")
def send_script(path: Path):
return send_from_directory(STATIC_DIRECTORY / "scripts", path)
@app.route("/style/<path:path>")
def send_style(path: Path):
return send_from_directory(STATIC_DIRECTORY / "style", path)
@app.route("/api/attendance", methods=["GET"])
def get_summary_attendance():
"""Get Summary List of Attendances"""
result = DB.get_summary_attendance()
return {"ids": result}, 200 # tuple, return code
@app.route("/api/attendance/<string:attendance_id>", methods=["GET", "POST"])
def api_attendance(attendance_id):
if request.method == "GET":
val = DB.get_attendance(attendance_id)
return val.json()
if request.method == "POST":
request_json = request.get_json()
attendance_object = Attendance(
id=request_json.get("id"), records=request_json.get("records")
)
try:
DB.create_attendance(attendance_object)
except AttendanceAlreadyExists:
return "Attendance Item already exists", 400
return "Successfully added attendance item", 201
@app.route("/api/classlist", methods=["GET"])
def get_classlist():
try:
response = app.config["services"].getClasslist()
return response, 200
except Exception as e:
return str(e), 500
@app.route("/api/calendar", methods=["GET"])
def get_calendar():
try:
response = app.config["services"].getCalendar()
return response, 200
except Exception as e:
return str(e), 500
/AbrIO-0.0.5.tar.gz/AbrIO-0.0.5/abriocli/component/component.py

import click, zipfile, json
import requests
import datetime
from colorclass import Color
from terminaltables import AsciiTable
from requests.auth import HTTPBasicAuth
from requests_toolbelt import MultipartEncoder, MultipartEncoderMonitor
from clint.textui.progress import Bar as ProgressBar
from ..util.file import *
from ..util.checker import *
from ..conf.conf import config, get_full_path
from ..conf.conf import config,errors
@click.option('--version', prompt="Enter component version", default="0.0.1" )
@click.option('--public', is_flag=True, prompt='Do you want to mark this component as public?' )
@click.option('--name', prompt="Enter component name", default="Test")
@click.command()
def init(name, public, version):
'''
Create new abrio component.
'''
if not ensure_abrio_root() :
click.secho('\nAbrio Root Directory Not Detected.\n' , fg="red", bold=True)
return
if os.path.exists(name) :
click.secho("\nDirecotry with name <{0}> already exists.\n".format(name), fg="red", bold=True)
return
click.secho("\nConnection to sever..." , bold=True, fg="yellow")
project_config = load_project_config()
email = project_config['email']
pwd = project_config['password']
name = name
is_private = public
response = requests.post(
config['server']['host']+'component/create',
auth=HTTPBasicAuth(email, pwd),
json={'isPrivate': is_private, 'name': name}
)
if response.status_code == 201 :
pkey = response.json()['token']
os.mkdir(name)
zip_file = zipfile.ZipFile(os.path.join(get_full_path('data', config['sdk_file'])))
zip_file.extractall(name)
component_config = {
'pkey': pkey,
'public': public,
'version': version,
'name': name,
'last_uploaded': ''
}
# with open(os.path.join(name, (name+'.json')), 'w') as config_file :
# config_file.write(json.dumps(component_config, indent=4, separators=(',', ': ')))
add_component_project(component_config)
click.secho("\nComponent <{0}> created.\n".format(name), bold=True, fg='green')
else :
click.secho(errors["UNKNOWN_NETWORK"],bold=True, fg="red")
@click.command()
@click.argument('name')
def upload(name) :
'''
Upload Abrio component to server.
'''
if not ensure_abrio_root():
click.secho('\nAbrio Root Directory Not Detected.\n', fg="red", bold=True)
return
if not ensure_component_exists(name):
click.secho("\nComponent <{0}> does not exist.\n".format(name), bold=True, fg="red")
build_dir = '/sample/build/libs/'
os.system('cd {0} && gradle jar && cd ..'.format(name))
jar_dir = name + build_dir + name + '.jar'
os.rename(name + build_dir + 'sample.jar',jar_dir)
encoder = create_upload(jar_dir)
callback = create_callback(encoder)
monitor = MultipartEncoderMonitor(encoder, callback)
component_config = load_component_config(name)
component_config['last_uploaded'] = str(datetime.datetime.now())
write_component_config(name, component_config)
headers = {
'Content-Type': monitor.content_type,
'private key': component_config['pkey'],
'version' : component_config['version']
}
upload_response = requests.post(
config['server']['host'] + "component/upload",
data=monitor,
# auth=HTTPBasicAuth(email, pwd),
headers=headers)
if upload_response.status_code == 200 :
click.secho('\n\n\nComponent uploaded\n', bold=True, fg="green")
else :
click.secho(errors["UNKNOWN_NETWORK"], bold=True, fg="red")
@click.option('--sure', prompt="Are you sure you want to delete this component", is_flag=True)
@click.argument('name')
@click.command()
def rm(name, sure) :
'''
Delete Abrio Component.
'''
if not ensure_abrio_root():
click.secho('\nAbrio Root Directory Not Detected.\n', fg="red", bold=True)
return
if sure :
if ensure_component_exists(name) :
os.system('rm -Rf {0}'.format(name))
remove_component_project(name)
# todo delete from server
click.secho("\nComponent <{0}> deleted.\n".format(name), bold=True, fg="yellow")
else :
click.secho("\nComponent <{0}> does not exist.\n".format(name), bold=True, fg="red")
@click.command()
def ls() :
'''
List Available Abrio components
'''
if not ensure_abrio_root():
click.secho('\nAbrio Root Directory Not Detected.\n', fg="red", bold=True)
return
project_config = load_project_config()
response = requests.get(
config['server']['host'] + "project/list_components",
json={'private_key': project_config['private_key']}
)
if response.status_code == 200 :
component_table = [
['Component Name', 'Version', 'Public', "Last Upload" , "Type"]] + \
[
[
component['name'],
component['version'],
str(component['public']),
component['last_uploaded'],
Color('{autoyellow}Local{/autoyellow}')
] for component in project_config['components']
]
component_table += [
[
comp['name'],
comp['deploy_version'],
str(not comp['private']),
"---",
Color('{autocyan}Online{/autocyan}')
] for comp in json.loads(response.content)['result']]
table = AsciiTable(component_table)
click.echo(table.table)
else :
click.secho(errors["UNKNOWN_NETWORK"], bold=True, fg="red")
def create_callback(encoder):
encoder_len = encoder.len
bar = ProgressBar(expected_size=encoder_len, filled_char='=')
def callback(monitor):
bar.show(monitor.bytes_read)
return callback
def create_upload(file_path):
file_name = file_path.split("/")[-1]
return MultipartEncoder({'files':(file_name,open(file_path, 'rb'))})
/DLTA-AI-1.1.tar.gz/DLTA-AI-1.1/DLTA_AI_app/mmdetection/tools/deployment/pytorch2onnx.py

import argparse
import os.path as osp
import warnings
from functools import partial
import numpy as np
import onnx
import torch
from mmcv import Config, DictAction
from mmdet.core.export import build_model_from_cfg, preprocess_example_input
from mmdet.core.export.model_wrappers import ONNXRuntimeDetector
def pytorch2onnx(model,
input_img,
input_shape,
normalize_cfg,
opset_version=11,
show=False,
output_file='tmp.onnx',
verify=False,
test_img=None,
do_simplify=False,
dynamic_export=None,
skip_postprocess=False):
input_config = {
'input_shape': input_shape,
'input_path': input_img,
'normalize_cfg': normalize_cfg
}
# prepare input
one_img, one_meta = preprocess_example_input(input_config)
img_list, img_meta_list = [one_img], [[one_meta]]
if skip_postprocess:
warnings.warn('Not all models support export onnx without post '
'process, especially two stage detectors!')
model.forward = model.forward_dummy
torch.onnx.export(
model,
one_img,
output_file,
input_names=['input'],
export_params=True,
keep_initializers_as_inputs=True,
do_constant_folding=True,
verbose=show,
opset_version=opset_version)
print(f'Successfully exported ONNX model without '
f'post process: {output_file}')
return
# replace original forward function
origin_forward = model.forward
model.forward = partial(
model.forward,
img_metas=img_meta_list,
return_loss=False,
rescale=False)
output_names = ['dets', 'labels']
if model.with_mask:
output_names.append('masks')
input_name = 'input'
dynamic_axes = None
if dynamic_export:
dynamic_axes = {
input_name: {
0: 'batch',
2: 'height',
3: 'width'
},
'dets': {
0: 'batch',
1: 'num_dets',
},
'labels': {
0: 'batch',
1: 'num_dets',
},
}
if model.with_mask:
dynamic_axes['masks'] = {0: 'batch', 1: 'num_dets'}
torch.onnx.export(
model,
img_list,
output_file,
input_names=[input_name],
output_names=output_names,
export_params=True,
keep_initializers_as_inputs=True,
do_constant_folding=True,
verbose=show,
opset_version=opset_version,
dynamic_axes=dynamic_axes)
model.forward = origin_forward
if do_simplify:
import onnxsim
from mmdet import digit_version
min_required_version = '0.4.0'
assert digit_version(onnxsim.__version__) >= digit_version(
min_required_version
), f'Requires to install onnxsim>={min_required_version}'
model_opt, check_ok = onnxsim.simplify(output_file)
if check_ok:
onnx.save(model_opt, output_file)
print(f'Successfully simplified ONNX model: {output_file}')
else:
warnings.warn('Failed to simplify ONNX model.')
print(f'Successfully exported ONNX model: {output_file}')
if verify:
# check by onnx
onnx_model = onnx.load(output_file)
onnx.checker.check_model(onnx_model)
# wrap onnx model
onnx_model = ONNXRuntimeDetector(output_file, model.CLASSES, 0)
if dynamic_export:
# scale up to test dynamic shape
h, w = [int((_ * 1.5) // 32 * 32) for _ in input_shape[2:]]
h, w = min(1344, h), min(1344, w)
input_config['input_shape'] = (1, 3, h, w)
if test_img is None:
input_config['input_path'] = input_img
# prepare input once again
one_img, one_meta = preprocess_example_input(input_config)
img_list, img_meta_list = [one_img], [[one_meta]]
# get pytorch output
with torch.no_grad():
pytorch_results = model(
img_list,
img_metas=img_meta_list,
return_loss=False,
rescale=True)[0]
img_list = [_.cuda().contiguous() for _ in img_list]
if dynamic_export:
img_list = img_list + [_.flip(-1).contiguous() for _ in img_list]
img_meta_list = img_meta_list * 2
# get onnx output
onnx_results = onnx_model(
img_list, img_metas=img_meta_list, return_loss=False)[0]
# visualize predictions
score_thr = 0.3
if show:
out_file_ort, out_file_pt = None, None
else:
out_file_ort, out_file_pt = 'show-ort.png', 'show-pt.png'
show_img = one_meta['show_img']
model.show_result(
show_img,
pytorch_results,
score_thr=score_thr,
show=True,
win_name='PyTorch',
out_file=out_file_pt)
onnx_model.show_result(
show_img,
onnx_results,
score_thr=score_thr,
show=True,
win_name='ONNXRuntime',
out_file=out_file_ort)
# compare a part of result
if model.with_mask:
compare_pairs = list(zip(onnx_results, pytorch_results))
else:
compare_pairs = [(onnx_results, pytorch_results)]
err_msg = 'The numerical values are different between Pytorch' + \
' and ONNX, but it does not necessarily mean the' + \
' exported ONNX model is problematic.'
# check the numerical value
for onnx_res, pytorch_res in compare_pairs:
for o_res, p_res in zip(onnx_res, pytorch_res):
np.testing.assert_allclose(
o_res, p_res, rtol=1e-03, atol=1e-05, err_msg=err_msg)
print('The numerical values are the same between Pytorch and ONNX')
def parse_normalize_cfg(test_pipeline):
transforms = None
for pipeline in test_pipeline:
if 'transforms' in pipeline:
transforms = pipeline['transforms']
break
assert transforms is not None, 'Failed to find `transforms`'
norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']
assert len(norm_config_li) == 1, '`norm_config` should only have one'
norm_config = norm_config_li[0]
return norm_config
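# For a typical COCO config the dict returned above looks roughly like the following
# (values shown for illustration; they match the deprecated --mean/--std defaults below):
# dict(type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)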
def parse_args():
parser = argparse.ArgumentParser(
description='Convert MMDetection models to ONNX')
parser.add_argument('config', help='test config file path')
parser.add_argument('checkpoint', help='checkpoint file')
parser.add_argument('--input-img', type=str, help='Images for input')
parser.add_argument(
'--show',
action='store_true',
help='Show onnx graph and detection outputs')
parser.add_argument('--output-file', type=str, default='tmp.onnx')
parser.add_argument('--opset-version', type=int, default=11)
parser.add_argument(
'--test-img', type=str, default=None, help='Images for test')
parser.add_argument(
'--dataset',
type=str,
default='coco',
help='Dataset name. This argument is deprecated and will be removed \
in future releases.')
parser.add_argument(
'--verify',
action='store_true',
help='verify the onnx model output against pytorch output')
parser.add_argument(
'--simplify',
action='store_true',
help='Whether to simplify onnx model.')
parser.add_argument(
'--shape',
type=int,
nargs='+',
default=[800, 1216],
help='input image size')
parser.add_argument(
'--mean',
type=float,
nargs='+',
default=[123.675, 116.28, 103.53],
help='mean value used for preprocess input data.This argument \
is deprecated and will be removed in future releases.')
parser.add_argument(
'--std',
type=float,
nargs='+',
default=[58.395, 57.12, 57.375],
help='variance value used for preprocess input data. '
'This argument is deprecated and will be removed in future releases.')
parser.add_argument(
'--cfg-options',
nargs='+',
action=DictAction,
help='Override some settings in the used config, the key-value pair '
'in xxx=yyy format will be merged into config file. If the value to '
'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
'Note that the quotation marks are necessary and that no white space '
'is allowed.')
parser.add_argument(
'--dynamic-export',
action='store_true',
help='Whether to export onnx with dynamic axis.')
parser.add_argument(
'--skip-postprocess',
action='store_true',
help='Whether to export model without post process. Experimental '
'option. We do not guarantee the correctness of the exported '
'model.')
args = parser.parse_args()
return args
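# Illustrative invocation (a sketch only; the config/checkpoint paths are hypothetical
# placeholders, and the flags are the ones defined in parse_args above):
#   python tools/deployment/pytorch2onnx.py \
#       configs/some_detector_config.py checkpoints/some_detector.pth \
#       --output-file detector.onnx --shape 800 1216 --verify --dynamic-export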
if __name__ == '__main__':
args = parse_args()
warnings.warn('Arguments like `--mean`, `--std`, `--dataset` would be \
parsed directly from config file and are deprecated and \
will be removed in future releases.')
assert args.opset_version == 11, 'MMDet only support opset 11 now'
try:
from mmcv.onnx.symbolic import register_extra_symbolics
except ModuleNotFoundError:
raise NotImplementedError('please update mmcv to version>=v1.0.4')
register_extra_symbolics(args.opset_version)
cfg = Config.fromfile(args.config)
if args.cfg_options is not None:
cfg.merge_from_dict(args.cfg_options)
if args.shape is None:
img_scale = cfg.test_pipeline[1]['img_scale']
input_shape = (1, 3, img_scale[1], img_scale[0])
elif len(args.shape) == 1:
input_shape = (1, 3, args.shape[0], args.shape[0])
elif len(args.shape) == 2:
input_shape = (1, 3) + tuple(args.shape)
else:
raise ValueError('invalid input shape')
# build the model and load checkpoint
model = build_model_from_cfg(args.config, args.checkpoint,
args.cfg_options)
if not args.input_img:
args.input_img = osp.join(osp.dirname(__file__), '../../demo/demo.jpg')
normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)
# convert model to onnx file
pytorch2onnx(
model,
args.input_img,
input_shape,
normalize_cfg,
opset_version=args.opset_version,
show=args.show,
output_file=args.output_file,
verify=args.verify,
test_img=args.test_img,
do_simplify=args.simplify,
dynamic_export=args.dynamic_export,
skip_postprocess=args.skip_postprocess)
# Following strings of text style are from colorama package
bright_style, reset_style = '\x1b[1m', '\x1b[0m'
red_text, blue_text = '\x1b[31m', '\x1b[34m'
white_background = '\x1b[107m'
msg = white_background + bright_style + red_text
msg += 'DeprecationWarning: This tool will be deprecated in future. '
msg += blue_text + 'Welcome to use the unified model deployment toolbox '
msg += 'MMDeploy: https://github.com/open-mmlab/mmdeploy'
msg += reset_style
warnings.warn(msg) | PypiClean |
/BuzzAlgoTrade-0.0.2.tar.gz/BuzzAlgoTrade-0.0.2/pyalgotrade/barfeed/googlefeed.py | from pyalgotrade.barfeed import csvfeed
from pyalgotrade.barfeed import common
from pyalgotrade.utils import dt
from pyalgotrade import bar
import datetime
######################################################################
# Google Finance CSV parser
# Each bar must be on its own line and fields must be separated by comma (,).
#
# Bars Format:
# Date,Open,High,Low,Close,Volume
#
# The csv Date column must have the following format: D-B-YY
def parse_date(date):
# Sample: 3-Dec-05
# This custom parsing works faster than:
# datetime.datetime.strptime(date, "%d-%b-%y")
month_abbr = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4,
'May': 5, 'Jun': 6, 'Jul': 7, 'Aug': 8,
'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
date = date.split("-")
year = int(date[2]) + 2000
if year > datetime.datetime.today().year:
# it's probably 20th century
year -= 100
month = int(month_abbr[date[1]])
day = int(date[0])
ret = datetime.datetime(year, month, day)
return ret
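# Quick sanity examples for parse_date (dates invented for illustration):
# parse_date("3-Dec-05") -> datetime.datetime(2005, 12, 3, 0, 0)
# parse_date("28-Feb-99") -> datetime.datetime(1999, 2, 28, 0, 0), because 2099 is later
# than the current year, so the century correction above subtracts 100.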
class RowParser(csvfeed.RowParser):
def __init__(self, dailyBarTime, frequency, timezone=None, sanitize=False):
self.__dailyBarTime = dailyBarTime
self.__frequency = frequency
self.__timezone = timezone
self.__sanitize = sanitize
def __parseDate(self, dateString):
ret = parse_date(dateString)
# Time on Google Finance CSV files is empty. If told to set one, do it.
if self.__dailyBarTime is not None:
ret = datetime.datetime.combine(ret, self.__dailyBarTime)
# Localize the datetime if a timezone was given.
if self.__timezone:
ret = dt.localize(ret, self.__timezone)
return ret
def getFieldNames(self):
# It is expected for the first row to have the field names.
return None
def getDelimiter(self):
return ","
def parseBar(self, csvRowDict):
dateTime = self.__parseDate(csvRowDict["Date"])
close = float(csvRowDict["Close"])
open_ = float(csvRowDict["Open"])
high = float(csvRowDict["High"])
low = float(csvRowDict["Low"])
volume = float(csvRowDict["Volume"])
adjClose = None
if self.__sanitize:
open_, high, low, close = common.sanitize_ohlc(open_, high, low, close)
return bar.BasicBar(dateTime, open_, high, low, close, volume,
adjClose, self.__frequency)
class Feed(csvfeed.BarFeed):
"""A :class:`pyalgotrade.barfeed.csvfeed.BarFeed` that loads bars from CSV files downloaded from Google Finance.
:param frequency: The frequency of the bars. Only **pyalgotrade.bar.Frequency.DAY** is currently supported.
:param timezone: The default timezone to use to localize bars. Check :mod:`pyalgotrade.marketsession`.
:type timezone: A pytz timezone.
:param maxLen: The maximum number of values that the :class:`pyalgotrade.dataseries.bards.BarDataSeries` will hold.
Once a bounded length is full, when new items are added, a corresponding number of items are discarded from the
opposite end. If None then dataseries.DEFAULT_MAX_LEN is used.
:type maxLen: int.
.. note::
Google Finance csv files lack timezone information.
When working with multiple instruments:
* If all the instruments loaded are in the same timezone, then the timezone parameter may not be specified.
* If any of the instruments loaded are in different timezones, then the timezone parameter must be set.
"""
def __init__(self, frequency=bar.Frequency.DAY, timezone=None, maxLen=None):
if frequency not in [bar.Frequency.DAY]:
raise Exception("Invalid frequency.")
super(Feed, self).__init__(frequency, maxLen)
self.__timezone = timezone
self.__sanitizeBars = False
def sanitizeBars(self, sanitize):
self.__sanitizeBars = sanitize
def barsHaveAdjClose(self):
return False
def addBarsFromCSV(self, instrument, path, timezone=None):
"""Loads bars for a given instrument from a CSV formatted file.
The instrument gets registered in the bar feed.
:param instrument: Instrument identifier.
:type instrument: string.
:param path: The path to the CSV file.
:type path: string.
:param timezone: The timezone to use to localize bars. Check :mod:`pyalgotrade.marketsession`.
:type timezone: A pytz timezone.
"""
if timezone is None:
timezone = self.__timezone
rowParser = RowParser(self.getDailyBarTime(), self.getFrequency(), timezone, self.__sanitizeBars)
super(Feed, self).addBarsFromCSV(instrument, path, rowParser) | PypiClean |
/MTGProxyPrinter-0.25.0.tar.gz/MTGProxyPrinter-0.25.0/mtg_proxy_printer/document_controller/move_cards.py |
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import functools
import itertools
import typing
from PyQt5.QtCore import QModelIndex
from ._interface import DocumentAction, IllegalStateError, Self
from mtg_proxy_printer.logger import get_logger
if typing.TYPE_CHECKING:
from mtg_proxy_printer.model.document_page import Page
from mtg_proxy_printer.model.document import Document
logger = get_logger(__name__)
del get_logger
__all__ = [
"ActionMoveCards",
]
class ActionMoveCards(DocumentAction):
"""
Moves a sequence of cards from a source page to a target page.
Values of consecutive card ranges are inclusive.
"""
COMPARISON_ATTRIBUTES = ["source_page", "target_page", "card_ranges_to_move"]
def __init__(self, source: int, cards_to_move: typing.Sequence[int], target: int):
self.source_page = source
self.target_page = target
self.card_ranges_to_move = self._to_list_of_ranges(cards_to_move)
def apply(self, document: "Document") -> Self:
source_page = document.pages[self.source_page]
target_page = document.pages[self.target_page]
if not target_page.accepts_card(source_page[0].card.requested_page_type()):
raise IllegalStateError(
f"Can not move card requesting page type {source_page.page_type()} "
f"onto a page with type {target_page.page_type()}"
)
source_index = document.index(self.source_page, 0)
target_index = document.index(self.target_page, 0)
source_page_type = source_page.page_type()
target_page_type = target_page.page_type()
destination_row = len(target_page)
for source_row_first, source_row_last in reversed(self.card_ranges_to_move):
self._move_cards_to_target_page(
document, source_index, source_page, source_row_first, source_row_last, target_index,
target_page, destination_row
)
if source_page.page_type() != source_page_type:
document.page_type_changed.emit(source_index)
if target_page.page_type() != target_page_type:
document.page_type_changed.emit(target_index)
return super().apply(document)
@staticmethod
def _move_cards_to_target_page(
document: "Document",
source_index: QModelIndex, source_page: "Page", source_row_first: int, source_row_last: int,
target_index: QModelIndex, target_page: "Page", destination_row: int):
document.beginMoveRows(source_index, source_row_first, source_row_last, target_index, destination_row)
target_page[destination_row:destination_row] = source_page[source_row_first:source_row_last + 1]
for item in source_page[source_row_first:source_row_last + 1]:
item.parent = target_page
del source_page[source_row_first:source_row_last + 1]
document.endMoveRows()
def undo(self, document: "Document") -> Self:
source_page = document.pages[self.target_page] # Swap source and target page for undo
target_page = document.pages[self.source_page]
source_index = document.index(self.target_page, 0) # Same for the model index
target_index = document.index(self.source_page, 0)
source_page_type = source_page.page_type()
target_page_type = target_page.page_type()
# During apply(), all cards were appended to the target page. During undo, the ranges are extracted in order
# from the source page. Thus, the first source row is now constant across all ranges
source_row_first = len(source_page) - self._total_moved_cards()
for target_row_first, target_row_last in self.card_ranges_to_move:
source_row_last = source_row_first + target_row_last - target_row_first
self._move_cards_to_target_page(
document, source_index, source_page, source_row_first, source_row_last, target_index,
target_page, target_row_first
)
if source_page.page_type() != source_page_type:
document.page_type_changed.emit(source_index)
if target_page.page_type() != target_page_type:
document.page_type_changed.emit(target_index)
return self
@staticmethod
def _to_list_of_ranges(sequence: typing.Sequence[int]) -> typing.List[typing.Tuple[int, int]]:
ranges: typing.List[typing.Tuple[int, int]] = []
sequence = itertools.chain(sequence, (sentinel := object(),))
lower = upper = next(sequence)
for item in sequence:
if item is sentinel or upper != item-1:
ranges.append((lower, upper))
lower = upper = item
else:
upper = item
return ranges
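# Worked example for _to_list_of_ranges (input invented for illustration): the card rows
# [1, 2, 3, 7, 8, 10] collapse into the inclusive ranges [(1, 3), (7, 8), (10, 10)],
# which apply()/undo() then move as contiguous blocks.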
def _total_moved_cards(self) -> int:
return sum(last-first+1 for first, last in self.card_ranges_to_move)
@functools.cached_property
def as_str(self):
return f"Move {sum(upper-lower+1 for lower, upper in self.card_ranges_to_move)} cards " \
f"from page {self.source_page+1} to page {self.target_page+1}" | PypiClean |
/CsuPMTD-1.0.27.tar.gz/CsuPMTD-1.0.27/PMTD/maskrcnn_benchmark/modeling/make_layers.py | import torch
from torch import nn
from PMTD.maskrcnn_benchmark.config import cfg
from PMTD.maskrcnn_benchmark.layers import Conv2d
def get_group_gn(dim, dim_per_gp, num_groups):
"""get number of groups used by GroupNorm, based on number of channels."""
assert dim_per_gp == -1 or num_groups == -1, \
"GroupNorm: can only specify G or C/G."
if dim_per_gp > 0:
assert dim % dim_per_gp == 0, \
"dim: {}, dim_per_gp: {}".format(dim, dim_per_gp)
group_gn = dim // dim_per_gp
else:
assert dim % num_groups == 0, \
"dim: {}, num_groups: {}".format(dim, num_groups)
group_gn = num_groups
return group_gn
def group_norm(out_channels, affine=True, divisor=1):
out_channels = out_channels // divisor
dim_per_gp = cfg.MODEL.GROUP_NORM.DIM_PER_GP // divisor
num_groups = cfg.MODEL.GROUP_NORM.NUM_GROUPS // divisor
eps = cfg.MODEL.GROUP_NORM.EPSILON # default: 1e-5
return torch.nn.GroupNorm(
get_group_gn(out_channels, dim_per_gp, num_groups),
out_channels,
eps,
affine
)
def make_conv3x3(
in_channels,
out_channels,
dilation=1,
stride=1,
use_gn=False,
use_relu=False,
kaiming_init=True
):
conv = Conv2d(
in_channels,
out_channels,
kernel_size=3,
stride=stride,
padding=dilation,
dilation=dilation,
bias=False if use_gn else True
)
if kaiming_init:
nn.init.kaiming_normal_(
conv.weight, mode="fan_out", nonlinearity="relu"
)
else:
torch.nn.init.normal_(conv.weight, std=0.01)
if not use_gn:
nn.init.constant_(conv.bias, 0)
module = [conv,]
if use_gn:
module.append(group_norm(out_channels))
if use_relu:
module.append(nn.ReLU(inplace=True))
if len(module) > 1:
return nn.Sequential(*module)
return conv
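# Hypothetical usage sketch (channel sizes invented for illustration):
# block = make_conv3x3(256, 256, use_gn=True, use_relu=True)
# yields nn.Sequential(Conv2d(bias=False), GroupNorm, ReLU); with both flags False the
# bare Conv2d (kaiming_normal_-initialized) is returned instead.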
def make_fc(dim_in, hidden_dim, use_gn=False):
'''
Caffe2 implementation uses XavierFill, which in fact
corresponds to kaiming_uniform_ in PyTorch
'''
if use_gn:
fc = nn.Linear(dim_in, hidden_dim, bias=False)
nn.init.kaiming_uniform_(fc.weight, a=1)
return nn.Sequential(fc, group_norm(hidden_dim))
fc = nn.Linear(dim_in, hidden_dim)
nn.init.kaiming_uniform_(fc.weight, a=1)
nn.init.constant_(fc.bias, 0)
return fc
def conv_with_kaiming_uniform(use_gn=False, use_relu=False):
def make_conv(
in_channels, out_channels, kernel_size, stride=1, dilation=1
):
conv = Conv2d(
in_channels,
out_channels,
kernel_size=kernel_size,
stride=stride,
padding=dilation * (kernel_size - 1) // 2,
dilation=dilation,
bias=False if use_gn else True
)
# Caffe2 implementation uses XavierFill, which in fact
# corresponds to kaiming_uniform_ in PyTorch
nn.init.kaiming_uniform_(conv.weight, a=1)
if not use_gn:
nn.init.constant_(conv.bias, 0)
module = [conv,]
if use_gn:
module.append(group_norm(out_channels))
if use_relu:
module.append(nn.ReLU(inplace=True))
if len(module) > 1:
return nn.Sequential(*module)
return conv
return make_conv | PypiClean |
/CoilMQ-1.0.1.tar.gz/CoilMQ-1.0.1/coilmq/store/__init__.py | import abc
import logging
import threading
from coilmq.util.concurrency import synchronized
__authors__ = ['"Hans Lellelid" <[email protected]>']
__copyright__ = "Copyright 2009 Hans Lellelid"
__license__ = """Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License."""
lock = threading.RLock()
class QueueStore(object):
"""
Abstract base class for queue storage.
Extensions/implementations of this class must be thread-safe.
@ivar log: A logger for this class.
@type log: C{logging.Logger}
"""
__metaclass__ = abc.ABCMeta
def __init__(self):
"""
A base constructor that sets up logging.
If you extend this class, you should either call this method or at minimum make sure these values
get set.
"""
self.log = logging.getLogger('%s.%s' % (
self.__module__, self.__class__.__name__))
@abc.abstractmethod
@synchronized(lock)
def enqueue(self, destination, frame):
"""
Store message (frame) for specified destination.
@param destination: The destination queue name for this message (frame).
@type destination: C{str}
@param frame: The message (frame) to send to specified destination.
@type frame: C{stompclient.frame.Frame}
"""
@abc.abstractmethod
@synchronized(lock)
def dequeue(self, destination):
"""
Removes and returns an item from the queue (or C{None} if no items in queue).
@param destination: The queue name (destination).
@type destination: C{str}
@return: The first frame in the specified queue, or C{None} if there are none.
@rtype: C{stompclient.frame.Frame}
"""
@synchronized(lock)
def requeue(self, destination, frame):
"""
Requeue a message (frame) for storing at specified destination.
@param destination: The destination queue name for this message (frame).
@type destination: C{str}
@param frame: The message (frame) to send to specified destination.
@type frame: C{stompclient.frame.Frame}
"""
self.enqueue(destination, frame)
@synchronized(lock)
def size(self, destination):
"""
Size of the queue for specified destination.
@param destination: The queue destination (e.g. /queue/foo)
@type destination: C{str}
@return: The number of frames in specified queue.
@rtype: C{int}
"""
raise NotImplementedError()
@synchronized(lock)
def has_frames(self, destination):
"""
Whether specified destination has any frames.
Default implementation uses L{QueueStore.size} to determine if there
are any frames in queue. Subclasses may choose to optimize this.
@param destination: The queue destination (e.g. /queue/foo)
@type destination: C{str}
@return: Whether the specified queue has any frames.
@rtype: C{bool}
"""
return self.size(destination) > 0
@synchronized(lock)
def destinations(self):
"""
Provides a set of destinations (queue "addresses") available.
@return: A set of the destinations available.
@rtype: C{set}
"""
raise NotImplementedError
@synchronized(lock)
def close(self):
"""
May be implemented to perform any necessary cleanup operations when store is closed.
"""
pass
# This is intentionally not synchronized, since it does not directly
# expose any shared data.
def frames(self, destination):
"""
Returns an iterator for frames in specified queue.
The iterator simply wraps calls to L{dequeue} method, so the order of the
frames from the iterator will be the reverse of the order in which the
frames were enqueued.
@param destination: The queue destination (e.g. /queue/foo)
@type destination: C{str}
"""
return QueueFrameIterator(self, destination)
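# Minimal in-memory sketch of this interface (illustrative only; CoilMQ's own store
# backends live in separate modules). It assumes `from collections import defaultdict, deque`
# and omits the locking a real store implementation should add:
#
#   class DictQueueStore(QueueStore):
#       def __init__(self):
#           QueueStore.__init__(self)
#           self._queues = defaultdict(deque)
#       def enqueue(self, destination, frame):
#           self._queues[destination].append(frame)
#       def dequeue(self, destination):
#           q = self._queues[destination]
#           return q.popleft() if q else None
#       def size(self, destination):
#           return len(self._queues[destination])
#       def destinations(self):
#           return set(self._queues)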
class QueueFrameIterator(object):
"""
Provides an C{iterable} over the frames for a specified destination in a queue.
@ivar store: The queue store.
@type store: L{coilmq.store.QueueStore}
@ivar destination: The destination for this iterator.
@type destination: C{str}
"""
def __init__(self, store, destination):
self.store = store
self.destination = destination
def __iter__(self):
return self
def next(self):
return self.__next__()
def __next__(self):
frame = self.store.dequeue(self.destination)
if not frame:
raise StopIteration()
return frame
def __len__(self):
return self.store.size(self.destination)
class TopicStore(object):
"""
Abstract base class for non-durable topic storage.
"""
class DurableTopicStore(TopicStore):
"""
Abstract base class for durable topic storage.
""" | PypiClean |
/MezzanineFor1.7-3.1.10.tar.gz/MezzanineFor1.7-3.1.10/mezzanine/blog/migrations/south/0010_category_site_allow_comments.py |
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
from django.contrib.sites.models import Site
try:
from django.contrib.auth import get_user_model
except ImportError: # django < 1.5
from django.contrib.auth.models import User
else:
User = get_user_model()
user_orm_label = '%s.%s' % (User._meta.app_label, User._meta.object_name)
user_model_label = '%s.%s' % (User._meta.app_label, User._meta.module_name)
class Migration(SchemaMigration):
def forwards(self, orm):
site = Site.objects.get_current()
# Adding field 'BlogCategory.site'
db.add_column('blog_blogcategory', 'site', self.gf('django.db.models.fields.related.ForeignKey')(default=site.pk, to=orm['sites.Site']), keep_default=False)
# Adding field 'BlogPost.allow_comments'
db.add_column('blog_blogpost', 'allow_comments', self.gf('django.db.models.fields.BooleanField')(default=True), keep_default=False)
def backwards(self, orm):
# Deleting field 'BlogCategory.site'
db.delete_column('blog_blogcategory', 'site_id')
# Deleting field 'BlogPost.allow_comments'
db.delete_column('blog_blogpost', 'allow_comments')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
user_model_label: {
'Meta': {'object_name': User.__name__, 'db_table': "'%s'" % User._meta.db_table},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'blog.blogcategory': {
'Meta': {'object_name': 'BlogCategory'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['sites.Site']"}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'blog.blogpost': {
'Meta': {'ordering': "('-publish_date',)", 'object_name': 'BlogPost'},
'allow_comments': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'categories': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'blogposts'", 'blank': 'True', 'to': "orm['blog.BlogCategory']"}),
#'comments': ('mezzanine.generic.fields.CommentsField', [], {'object_id_field': "'object_pk'", 'to': "orm['generic.ThreadedComment']"}),
'comments_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'content': ('mezzanine.core.fields.RichTextField', [], {}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'expiry_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
#'keywords': ('mezzanine.generic.fields.KeywordsField', [], {'object_id_field': "'object_pk'", 'to': "orm['generic.AssignedKeyword']"}),
'keywords_string': ('django.db.models.fields.CharField', [], {'max_length': '500', 'blank': 'True'}),
'publish_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
#'rating': ('mezzanine.generic.fields.RatingField', [], {'object_id_field': "'object_pk'", 'to': "orm['generic.Rating']"}),
'rating_average': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'rating_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'short_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['sites.Site']"}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'blogposts'", 'to': "orm['%s']" % user_orm_label})
},
'comments.comment': {
'Meta': {'ordering': "('submit_date',)", 'object_name': 'Comment', 'db_table': "'django_comments'"},
'comment': ('django.db.models.fields.TextField', [], {'max_length': '3000'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'content_type_set_for_comment'", 'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ip_address': ('django.db.models.fields.IPAddressField', [], {'max_length': '15', 'null': 'True', 'blank': 'True'}),
'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_removed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'object_pk': ('django.db.models.fields.TextField', [], {}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['sites.Site']"}),
'submit_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'comment_comments'", 'null': 'True', 'to': "orm['%s']" % user_orm_label}),
'user_email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'user_name': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'user_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'generic.assignedkeyword': {
'Meta': {'object_name': 'AssignedKeyword'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'keyword': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'assignments'", 'to': "orm['generic.Keyword']"}),
'object_pk': ('django.db.models.fields.IntegerField', [], {})
},
'generic.keyword': {
'Meta': {'object_name': 'Keyword'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['sites.Site']"}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'generic.rating': {
'Meta': {'object_name': 'Rating'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_pk': ('django.db.models.fields.IntegerField', [], {}),
'value': ('django.db.models.fields.IntegerField', [], {})
},
'generic.threadedcomment': {
'Meta': {'ordering': "('submit_date',)", 'object_name': 'ThreadedComment', '_ormbases': ['comments.Comment']},
'by_author': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'comment_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['comments.Comment']", 'unique': 'True', 'primary_key': 'True'}),
'email_hash': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'replied_to': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'comments'", 'null': 'True', 'to': "orm['generic.ThreadedComment']"})
},
'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['blog'] | PypiClean |
/LWTools-1.0.5.tar.gz/LWTools-1.0.5/LWT/lmtanalysis/CorrectDetectionIntegrity.py | import sqlite3
from time import *
from lmtanalysis.Animal import *
from lmtanalysis.Detection import *
from lmtanalysis.Measure import *
import matplotlib.pyplot as plt
import numpy as np
from lmtanalysis.Event import *
from lmtanalysis.Measure import *
from lmtanalysis.Chronometer import Chronometer
from lmtanalysis.FileUtil import getFilesToProcess
def loadDetectionMap(connection, animal, start=None, end=None):
chrono = Chronometer("Correct detection integrity: Load detection map")
print("processing animal ID: {}".format(animal))
result = {}
cursor = connection.cursor()
query = "SELECT FRAMENUMBER FROM DETECTION WHERE ANIMALID={}".format(animal)
if (start != None):
query += " AND FRAMENUMBER>={}".format(start)
if (end != None):
query += " AND FRAMENUMBER<={}".format(end)
print(query)
cursor.execute(query)
rows = cursor.fetchall()
cursor.close()
for row in rows:
frameNumber = row[0]
result[frameNumber] = True;
print(" detections loaded in {} seconds.".format(chrono.getTimeI()))
return result
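# The returned map is simply {frameNumber: True, ...} for every frame in [start, end] where
# this animal has a detection; correct() below intersects these maps across all animals to
# find the frames in which every expected animal is detected.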
def correct(connection, tmin=None, tmax=None):
pool = AnimalPool( )
pool.loadAnimals( connection )
#pool.loadDetection( start = tmin, end = tmax )
cursor = connection.cursor()
if tmin is None:
query = "SELECT MIN(FRAMENUMBER) FROM FRAME"
cursor.execute(query)
minFrames = cursor.fetchall()
for minFrame in minFrames:
tmin = minFrame[0]
if tmax is None:
query = "SELECT MAX(FRAMENUMBER) FROM FRAME"
cursor.execute(query)
maxFrames = cursor.fetchall()
for maxFrame in maxFrames:
tmax = maxFrame[0]
#Select the MAX ID of DETECTION
query = "SELECT MAX(ID) FROM DETECTION"
cursor.execute(query)
maxIDtemp = cursor.fetchall()
maxID = maxIDtemp[0][0]
print(maxID)
'''
get the number of expected animals
if not all expected detections are present, switch all to anonymous
'''
validDetectionTimeLine = EventTimeLine(None, "IDs integrity ok", None, None, None, None, loadEvent=False)
validDetectionTimeLineDictionnary = {}
detectionTimeLine = {}
for idAnimal in pool.getAnimalDictionnary():
detectionTimeLine[idAnimal] = loadDetectionMap(connection, idAnimal, tmin, tmax)
for t in range(tmin, tmax +1):
valid = True
for idAnimal in detectionTimeLine.keys():
if not (t in detectionTimeLine[idAnimal]):
valid = False
if (valid):
validDetectionTimeLineDictionnary[t] = True
'''
rebuild detection set
'''
#cursor = connection.cursor()
countCorrection = 0
for idAnimal in detectionTimeLine.keys():
for t in range ( tmin , tmax +1 ):
if ( t in detectionTimeLine[idAnimal] ):
if not ( t in validDetectionTimeLineDictionnary ):
query = "UPDATE `DETECTION` SET `ANIMALID`=NULL WHERE `FRAMENUMBER`='{}';".format( t )
cursor.execute( query )
print ( f"{countCorrection}: {query}" )
countCorrection += 1
connection.commit()
cursor.close()
validDetectionTimeLine.reBuildWithDictionnary( validDetectionTimeLineDictionnary )
validDetectionTimeLine.endRebuildEventTimeLine(connection )
print(f" => THERE WERE {countCorrection} CORRECTIONS IN THE DATABASE")
print(f" This represents approximately {countCorrection/maxID*100}% of alterations to the database")
# log process
from lmtanalysis.TaskLogger import TaskLogger
t = TaskLogger(connection)
t.addLog("Correct detection integrity", tmin=tmin, tmax=tmax)
print( "Rebuild event finished." )
if __name__ == '__main__':
files = getFilesToProcess()
for file in files:
print("Processing file", file)
connection = sqlite3.connect(file) # connect to database
animalPool = AnimalPool() # create an animalPool, which basically contains your animals
animalPool.loadAnimals(connection) # load infos about the animals
correct(connection)
connection.close()
print("******* ALL JOBS DONE !!! *******") | PypiClean |
/Gerador_ficticia-1.0.4.zip/Gerador_ficticia-1.0.4/Projeto1.py | __author__ = 'Jeferson de Souza'
from calculos import *
import random
HNO3 = 58.3
CACO3 = 99.9
NO3 = 65.8
# Correction factors
SODA = 1.0751
EDTA = 0.8
HCL = 0.9756
# Tank volumes
Acido = 2900
Coagulante = 3900
Composto = 0
Talco = 2000
file_name = 'Calculos.txt'
def calculo_solucao(nome, arquivar_op=0):
print('Calculo solido %s' % nome)
parametro_up = float(input('Valor minimo: '))
parametro_dow = float(input('Valor maximo: '))
quantidade = int(input('Quantos calculos gostaria: '))
if arquivar_op == 1:
file_tes = open(file_name, 'a+')
print('\n\n Calculo solido %s' % nome, file=file_tes)
for i in range(quantidade):
placa1_a = sort_num(66.001, 101.992)
placa1_b = sort_num(66.001, 101.992)
amostra_a = sort_num(1.500, 2.500)
amostra_b = sort_num(1.500, 2.500)
resultado_a = sort_num(parametro_dow, parametro_up) # replace with the parameter
resultado_b = sort_num(parametro_dow, parametro_up) # replace with the parameter
placa3a = find_c(resultado_a, amostra_a, placa1_a)
placa3b = find_c(resultado_b, amostra_b, placa1_b)
print('|%.3f' % placa1_a, '\t', '%.3f|' % placa1_b, file=file_tes)
print('|%.3f' % amostra_a, '\t\t', '%.3f|' % amostra_b, file=file_tes)
print('|%.3f' % placa3a, ' \t', '%.3f|' % placa3b, file=file_tes)
print('|%.1f' % resultado_a, '\t\t', '%.1f|' % resultado_b, file=file_tes)
print('|\t (%.1f)' % ((resultado_a + resultado_b) / 2), file=file_tes)
print('---------------------', file=file_tes)
file_tes.close()
for i in range(quantidade):
placa1_a = sort_num(66.001, 101.992)
placa1_b = sort_num(66.001, 101.992)
amostra_a = sort_num(1.500, 2.500)
amostra_b = sort_num(1.500, 2.500)
resultado_a = sort_num(parametro_dow, parametro_up) # replace with the parameter
resultado_b = sort_num(parametro_dow, parametro_up) # replace with the parameter
placa3a = find_c(resultado_a, amostra_a, placa1_a)
placa3b = find_c(resultado_b, amostra_b, placa1_b)
print('|%.3f' % placa1_a, '\t', '%.3f|' % placa1_b)
print('|%.3f' % amostra_a, '\t\t', '%.3f|' % amostra_b)
print('|%.3f' % placa3a, '\t', '%.3f|' % placa3b)
print('|%.1f' % resultado_a, '\t\t ', '%.1f|' % resultado_b)
print('|%.1f' % ((resultado_a + resultado_b) / 2))
print('----------------------')
def calculo_nitrato(nome, arquivar_op=0):
print('\n\n Calculo de concentração %s' % nome)
parametro_no3_a = float(input('Valor Minimo: '))
parametro_no3_b = float(input('Valor Maximo: '))
quantidade = int(input('Quantos calculos gostaria: '))
if arquivar_op == 1:
file_tes = open(file_name, 'a+')
print('Calculo de concentração %s' % nome, file=file_tes)
for i in range(quantidade):
print('NO3', file=file_tes)
resultado_no3 = sort_num(parametro_no3_a, parametro_no3_b) # replace with the parameter
volume_edta = resultado_no3 / EDTA
print('|%.1f|' % volume_edta, file=file_tes)
print('|%.1f|' % resultado_no3, file=file_tes)
print('--------', file=file_tes)
file_tes.close()
for i in range(quantidade):
print('NO3')
resultado_no3 = sort_num(parametro_no3_a, parametro_no3_b) # replace with the parameter
volume_edta = resultado_no3 / EDTA
print('|%.1f|' % volume_edta)
print('|%.1f|' % resultado_no3)
print('--------')
def calculo_acido(nome, arquivar=0):
print('\n\nCalculo concentração de %s' % nome)
quantidade = int(input('Quantos calculos gostaria: '))
if arquivar == 1:
file_tes = open(file_name, 'a+')
print('Calculo concentração de %s' % nome, file=file_tes)
for i in range(quantidade):
peso = sort_num(3501, 4000)
volume = reverse_ac(sort_num(1.0, 1.4), sort_num(3500, 4100), SODA)
print('|%d|' % peso, file=file_tes)
print('|%.1f|' % volume, file=file_tes)
print('|%.1f|' % analise_ac(volume, peso, SODA), file=file_tes)
print('--------', file=file_tes)
file_tes.close()
for i in range(quantidade):
peso = sort_num(3501, 4000)
volume = reverse_ac(sort_num(1.0, 1.4), sort_num(3500, 4100), SODA)
print('|%d|' % peso)
print('|%.1f|' % volume)
print('|%.1f|' % analise_ac(volume, peso, SODA))
print('--------')
def calculo_carbonato(nome, arquivar_op=0):
print('\n\n Calculo solido %s' % nome)
parametro_up = float(input('Valor minimo: '))
parametro_dow = float(input('Valor maximo: '))
quantidade = int(input('Quantos calculos gostaria: '))
if arquivar_op == 1:
file_tes = open(file_name, 'a+')
print('\n\n Calculo solido %s' % nome, file=file_tes)
for i in range(quantidade):
placa1_a = sort_num(0.099, 2.990)
placa1_b = sort_num(0.099, 2.990)
amostra_a = sort_num(12.000, 14.000)
amostra_b = sort_num(12.000, 14.000)
resultado_a = sort_num(parametro_dow, parametro_up) # replace with the parameter
resultado_b = sort_num(parametro_dow, parametro_up) # replace with the parameter
placa3a = find_c(resultado_a, amostra_a, placa1_a)
placa3b = find_c(resultado_b, amostra_b, placa1_b)
print('|%.3f|' % placa1_a, ' | ', '%.3f|' % placa1_b, file=file_tes)
print('|%.3f|' % amostra_a, ' | ', '%.3f|' % amostra_b, file=file_tes)
print('|%.3f|' % placa3a, ' | ', '%.3f|' % placa3b, file=file_tes)
print('|%.1f|' % resultado_a, ' | ', '%.1f|' % resultado_b, file=file_tes)
print('|%.1f|' % ((resultado_a + resultado_b) / 2), file=file_tes)
print('-----------------------', file=file_tes)
file_tes.close()
for i in range(quantidade):
placa1_a = sort_num(0.099, 2.990)
placa1_b = sort_num(0.099, 2.990)
amostra_a = sort_num(12.000, 14.000)
amostra_b = sort_num(12.000, 14.000)
resultado_a = sort_num(parametro_dow, parametro_up) # replace with the parameter
resultado_b = sort_num(parametro_dow, parametro_up) # replace with the parameter
placa3a = find_c(resultado_a, amostra_a, placa1_a)
placa3b = find_c(resultado_b, amostra_b, placa1_b)
print('|%.3f|' % placa1_a, ' |', '%.3f|' % placa1_b)
print('|%.3f|' % amostra_a, ' |', '%.3f|' % amostra_b)
print('|%.3f|' % placa3a, ' |', '%.3f|' % placa3b)
print('|%.1f|' % resultado_a, ' |', '%.1f|' % resultado_b)
print('|%.1f|' % ((resultado_a + resultado_b) / 2))
print('-----------------------')
if __name__ == '__main__':
print('-----------------------------------')
print('|Gerador de Calculos Ficticios. |')
print('|V - 1.0.4 |')
print('|Desenvolvido por: Jeferson S. |')
print('|Email: [email protected] |')
print('-----------------------------------')
solucao = 1000
while solucao != 0:
print('Escolha uma OP: ')
print('[1] - Ctf-3f')
print('[2] - Talco')
print('[3] - Composto')
print('[4] - Polimero')
print('[5] - Acido')
print('[6] - Nitrato')
print('[7] - Cabonato')
print('[0] - SAIR')
solucao = int(input('Opcao: '))
if solucao == 0:
break
salvar_op = input('Deseja Gerar um txt? [s/n]: ')
salvar_op = salvar_op.upper()
op_num = 0
if salvar_op == 'S':
op_num = 1
elif salvar_op == 'N':
op_num = 2
else:
print('Valor invalido, somente [S = Sim | N = Não]')
if solucao == 1:
nome = 'CTF-3F'
if salvar_op == 'S':
calculo_solucao(nome, op_num)
else:
calculo_solucao(nome, op_num)
elif solucao == 2:
nome = 'Talco'
if salvar_op == 'S':
calculo_solucao(nome, op_num)
else:
calculo_solucao(nome)
elif solucao == 3:
nome = 'Composto'
if salvar_op == 'S':
calculo_solucao(nome, op_num)
else:
calculo_solucao(nome)
elif solucao == 4:
nome = 'Polimero'
if salvar_op == 'S':
calculo_solucao(nome, op_num)
else:
calculo_solucao(nome)
elif solucao == 5:
nome = 'Acido'
if salvar_op == 'S':
calculo_acido(nome, op_num)
else:
calculo_acido(nome)
elif solucao == 6:
nome = 'Nitrato'
if salvar_op == 'S':
calculo_nitrato(nome, op_num)
else:
calculo_nitrato(nome)
elif solucao == 7:
nome = 'Carbonato'
if salvar_op == 'S':
calculo_carbonato(nome, op_num)
else:
calculo_carbonato(nome)
else:
print('Valor Inv??lido!! \n\n') | PypiClean |
/MegEngine-1.13.1-cp37-cp37m-macosx_10_14_x86_64.whl/megengine/xla/compile.py | import dataclasses
import os
from typing import Any, Callable, Dict, List, Optional, Protocol, Sequence, Set, Union
import numpy as np
from .. import tensor
from ..distributed import is_distributed
from ..utils.dlpack import from_dlpack, to_dlpack
from . import ir_utils
from .lib import xla_bridge as xb
from .lib import xla_client as xc
from .lib.mlir import ir
from .sharding import (
_get_normalized_avals_and_shardings,
_get_op_sharding_shardings_from_executable,
_get_pmap_sharding,
_is_unspecified,
_pmap_sharding_spec,
is_op_sharding_replicated,
pmap_lib,
shard_args,
)
from .utils import safe_zip, unzip2
xla_extension = xc._xla
xe = xla_extension
def compile_impl(backend, computation: ir.Module, compile_options, host_callbacks):
sym_name = computation.operation.attributes["sym_name"]
module_name = ir.StringAttr(sym_name).value
serialized_computation: Union[str, bytes, ir.Module]
if getattr(backend, "needs_str_ir", True):
serialized_computation = ir_utils.module_to_bytecode(computation)
else:
serialized_computation = computation
supported_platforms = ["gpu"]
if "--xla_cpu_use_xla_runtime=true" in os.environ.get("XLA_FLAGS", ""):
supported_platforms.append("cpu")
def backend_compile(backend, built_c, options, host_callbacks):
if host_callbacks:
return backend.compile(
built_c, compile_options=options, host_callbacks=host_callbacks
)
return backend.compile(built_c, compile_options=options)
return backend_compile(
backend, serialized_computation, compile_options, host_callbacks
)
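# Rough usage sketch (hypothetical; assumes `module` is already lowered to StableHLO and
# `backend` was obtained from xla_bridge):
#   options = xb.get_compile_options(num_replicas=1, num_partitions=1)
#   loaded = compile_impl(backend, module, options, host_callbacks=[])
# MeshComputation/UnloadedMeshExecutable below drive this same path with the full option set.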
class InputsHandler:
__slots__ = ("handler", "local_devices", "in_shardings", "input_indices")
def __init__(self, local_devices, in_shardings, input_indices):
self.handler = shard_args
self.local_devices = local_devices
self.in_shardings = in_shardings
self.input_indices = input_indices
def from_dlpack(self, dlpack):
return xe.dlpack_managed_tensor_to_buffer(
dlpack, None, self.local_devices[0].client
)
def __call__(self, input_buffers):
rst = []
for idx, i in enumerate(input_buffers):
if i._is_external_value():
rst.append([i._external_obj()])
else:
capsule = to_dlpack(i)
xla_array = self.from_dlpack(capsule)
rst.append([xla_array])
return rst
def __str__(self):
return (
"InputsHandler(\n"
f"local_devices={self.local_devices},\n"
f"in_shardings={self.in_shardings},\n"
f"input_indices={self.input_indices})"
)
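# Data-path note (descriptive of __call__ above): tensors already wrapping an external XLA
# value are passed through untouched; every other MegEngine tensor is exported with
# to_dlpack() and re-imported as an XLA buffer on local_devices[0] via
# dlpack_managed_tensor_to_buffer, avoiding an explicit host copy where DLPack allows it.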
class ResultsHandler:
__slots__ = ("handlers", "out_shardings", "out_avals", "return_device_array")
def __init__(
self,
handlers=None,
out_shardings=None,
out_avals=None,
return_device_array=False,
):
self.return_device_array = return_device_array
if handlers is None:
def out_handler(bufs):
assert isinstance(bufs, list) and len(bufs) == 1
assert isinstance(bufs[0], xe.ArrayImpl)
if not self.return_device_array:
return np.asarray(bufs[0])
else:
return bufs[0]
self.handlers = out_handler
self.out_shardings = out_shardings
self.out_avals = out_avals
def __call__(self, out_bufs):
if isinstance(self.handlers, list):
return [h(bufs) for h, bufs in safe_zip(self.handlers, out_bufs)]
else:
return [self.handlers(bufs) for bufs in out_bufs]
class Executable(Protocol):
def call(self, *args_flat):
raise NotImplementedError
def input_shardings(self):
raise NotImplementedError
def output_shardings(self):
raise NotImplementedError
def as_text(self) -> str:
raise NotImplementedError
def cost_analysis(self) -> Any:
raise NotImplementedError
def memory_analysis(self) -> Any:
raise NotImplementedError
def runtime_executable(self) -> Any:
raise NotImplementedError
def create_cpp_call(self, no_kwargs, in_tree, out_tree) -> Any:
return None
class XlaExecutable(Executable):
def xla_extension_executable(self):
raise NotImplementedError("should be overrided")
def call(self, *args_flat):
raise NotImplementedError("should be overrided")
def input_shardings(self):
raise NotImplementedError("should be overrided")
def output_shardings(self):
raise NotImplementedError("should be overrided")
def as_text(self) -> str:
xla_ext_exe = self.xla_extension_executable()
err_msg = (
"text view unsupported on current XLA backend: " f"{type(xla_ext_exe)}"
)
if not hasattr(xla_ext_exe, "hlo_modules"):
raise NotImplementedError(err_msg)
try:
return "\n\n".join([m.to_string() for m in xla_ext_exe.hlo_modules()])
except xla_extension.XlaRuntimeError as e:
msg, *_ = e.args
if type(msg) is str and msg.startswith("UNIMPLEMENTED"):
raise NotImplementedError(err_msg) from e
else:
raise
def cost_analysis(self) -> List[Dict[str, float]]:
xla_ext_exe = self.xla_extension_executable()
err_msg = (
"cost analysis unsupported on current XLA backend: " f"{type(xla_ext_exe)}"
)
if hasattr(xla_ext_exe, "client"):
try:
return [
xla_extension.hlo_module_cost_analysis(xla_ext_exe.client, m)
for m in xla_ext_exe.hlo_modules()
]
except xla_extension.XlaRuntimeError as e:
msg, *_ = e.args
if type(msg) is str and msg.startswith("UNIMPLEMENTED"):
raise NotImplementedError(err_msg) from e
else:
raise
elif hasattr(xla_ext_exe, "cost_analysis"):
try:
return xla_ext_exe.cost_analysis()
except xla_extension.XlaRuntimeError as e:
msg, *_ = e.args
if type(msg) is str and msg.startswith("UNIMPLEMENTED"):
raise NotImplementedError(err_msg) from e
else:
raise
else:
raise NotImplementedError(err_msg)
def memory_analysis(self) -> Any:
xla_ext_exe = self.xla_extension_executable()
err_msg = (
"memory analysis unsupported on current XLA backend: "
f"{type(xla_ext_exe)}"
)
if not hasattr(xla_ext_exe, "get_compiled_memory_stats"):
raise NotImplementedError(err_msg)
try:
return xla_ext_exe.get_compiled_memory_stats()
except xla_extension.XlaRuntimeError as e:
msg, *_ = e.args
if type(msg) is str and msg.startswith("UNIMPLEMENTED"):
raise NotImplementedError(err_msg) from e
else:
raise
def runtime_executable(self) -> Any:
return self.xla_extension_executable()
# The logic to shard inputs, execute a replicated model, and return the outputs
class ExecuteReplicated:
__slots__ = [
"xla_executable",
"name",
"backend",
"in_handler",
"out_handler",
"has_unordered_effects",
"ordered_effects",
"keepalive",
"has_host_callbacks",
"_local_devices",
"kept_var_idx",
"__weakref__",
]
def __init__(
self,
xla_executable,
name,
backend,
in_handler: InputsHandler,
out_handler: ResultsHandler,
unordered_effects: Any,
ordered_effects: Any,
keepalive: Any,
has_host_callbacks: bool,
kept_var_idx: Set[int],
):
self.xla_executable = xla_executable
self.name = name
self.backend = backend
self.in_handler = in_handler
self.out_handler = out_handler
self.has_unordered_effects = bool(unordered_effects)
self.ordered_effects = ordered_effects
self._local_devices = self.xla_executable.local_devices()
if ordered_effects:
assert len(self._local_devices) == 1
self.keepalive = keepalive
self.has_host_callbacks = has_host_callbacks
self.kept_var_idx = kept_var_idx
def __call__(self, *args):
args = [x for i, x in enumerate(args) if i in self.kept_var_idx]
input_bufs = self.in_handler(args)
assert not (
self.ordered_effects
or self.has_unordered_effects
or self.has_host_callbacks
)
if True or not is_distributed():
out_bufs = self.xla_executable.execute_sharded_on_local_devices(input_bufs)
return self.out_handler(out_bufs)
else:
results = self.xla_executable.execute_sharded(input_bufs)
outputs = results.disassemble_into_single_device_arrays()
assert isinstance(outputs, list)
out_bufs = []
for oup in outputs:
assert isinstance(oup, list) and len(oup) == 1
out_bufs.append(oup[0].device_buffers)
return self.out_handler(out_bufs)
@dataclasses.dataclass
class UnloadedMeshExecutable:
xla_executable: Any
trace_result: ir_utils.TraceResult
device_assignment: Sequence[xc.Device]
backend: xb.XlaBackend
input_shardings: Sequence[Any]
output_shardings: Sequence[Any]
committed: bool
are_out_shardings_from_xla: Sequence[bool]
pmap_nreps: int
name: str
unordered_effects: List[Any]
ordered_effects: List[Any]
keepalive: Sequence[Any]
host_callbacks: Sequence[Any]
kept_var_idx: Set[int]
auto_spmd_lowering: bool
return_device_array: bool = False
def load(self):
def _get_input_indices(avals, shardings):
input_indices = []
for aval, sharding in zip(avals, shardings):
proto = sharding._to_xla_op_sharding(len(aval.shape))
if is_op_sharding_replicated(proto):
index = tuple(
(slice(None),) * len(aval.shape)
for _ in range(len(sharding.addressable_devices))
)
else:
assert False
input_indices.append(index)
return input_indices
input_indices = _get_input_indices(
self.trace_result._var_inputs, self.input_shardings
)
handle_inps = InputsHandler(
self.xla_executable.local_devices(), self.input_shardings, input_indices
)
handle_oups = ResultsHandler(return_device_array=self.return_device_array)
if self.pmap_nreps > 1:
assert False
else:
unsafe_call = ExecuteReplicated(
self.xla_executable,
self.name,
self.backend,
handle_inps,
handle_oups,
self.unordered_effects,
self.ordered_effects,
self.keepalive,
bool(self.host_callbacks),
self.kept_var_idx,
)
return MeshExecutable(
self.xla_executable,
unsafe_call,
self.trace_result,
self.input_shardings,
self.output_shardings,
self.auto_spmd_lowering,
self.kept_var_idx,
self.device_assignment,
)
@staticmethod
def from_hlo(
name: str,
computation,
mesh,
trace_result: ir_utils.TraceResult,
in_shardings,
out_shardings,
spmd_lowering: bool,
tuple_args: bool,
in_is_global: Sequence[bool],
auto_spmd_lowering: bool,
_allow_propagation_to_outputs: bool,
_allow_compile_replicated: bool,
unordered_effects,
ordered_effects,
host_callbacks,
keepalive,
kept_var_idx,
backend: xb.XlaBackend,
device_assignment: Sequence[xc.Device],
committed: bool,
pmap_nreps: int = 1,
return_device_array: bool = False,
):
assert mesh == None
assert spmd_lowering == False
assert tuple_args == False
assert in_is_global == (True,) * len(trace_result.inputs)
assert auto_spmd_lowering == False
assert _allow_propagation_to_outputs == False
assert _allow_compile_replicated == True
assert unordered_effects == []
assert ordered_effects == []
assert host_callbacks == []
assert keepalive == []
assert committed == False
assert pmap_nreps == 1
dev: np.ndarray
if auto_spmd_lowering:
assert mesh is not None and spmd_lowering
dev = mesh.devices
num_replicas, num_partitions = 1, mesh.size
else:
dev = np.array(device_assignment)
if pmap_nreps > 1:
num_replicas, num_partitions = pmap_nreps, 1
elif spmd_lowering:
num_replicas, num_partitions = 1, dev.size
else:
num_replicas, num_partitions = dev.size, 1
if pmap_nreps > 1:
xla_device_assignment = None
else:
xla_device_assignment = dev.reshape((num_replicas, num_partitions))
assert num_replicas == 1 and num_partitions == 1
compile_options = xb.get_compile_options(
num_replicas=num_replicas,
num_partitions=num_partitions,
device_assignment=xla_device_assignment,
use_spmd_partitioning=spmd_lowering,
use_auto_spmd_partitioning=auto_spmd_lowering,
)
if auto_spmd_lowering:
assert False
# tuple_args is only TPU-related, so MegEngine disables it
compile_options.parameter_is_tupled_arguments = False
allow_propagation = [_allow_propagation_to_outputs]
compile_options.executable_build_options.allow_spmd_sharding_propagation_to_output = (
allow_propagation
)
assert hasattr(backend, "compile_replicated") == False
if _allow_compile_replicated and hasattr(backend, "compile_replicated"):
assert False
else:
xla_executable = compile_impl(
backend, computation, compile_options, host_callbacks
)
if auto_spmd_lowering:
assert False
elif out_shardings and any(_is_unspecified(o) for o in out_shardings):
assert mesh is None
_, out_shardings_xla = _get_op_sharding_shardings_from_executable( # type: ignore
xla_executable,
device_assignment,
len(trace_result.inputs),
len(trace_result.outputs),
)
out_shardings_tuple = [
(x, True) if _is_unspecified(o) else (o, False)
for x, o in safe_zip(out_shardings_xla, out_shardings)
]
out_shardings, are_out_shardings_from_xla = unzip2(out_shardings_tuple)
else:
are_out_shardings_from_xla = (False,) * len(trace_result.outputs)
input_avals, input_shardings = _get_normalized_avals_and_shardings(
trace_result._var_inputs, in_shardings, in_is_global
)
return UnloadedMeshExecutable(
xla_executable=xla_executable,
trace_result=trace_result,
device_assignment=device_assignment,
backend=backend,
input_shardings=input_shardings,
output_shardings=out_shardings,
committed=committed,
are_out_shardings_from_xla=are_out_shardings_from_xla,
pmap_nreps=pmap_nreps,
name=name,
unordered_effects=unordered_effects,
ordered_effects=ordered_effects,
keepalive=keepalive,
host_callbacks=host_callbacks,
kept_var_idx=kept_var_idx,
auto_spmd_lowering=auto_spmd_lowering,
return_device_array=return_device_array,
)
class MeshExecutable(XlaExecutable):
__slots__ = [
"xla_executable",
"unsafe_call",
"trace_result",
"_in_shardings",
"_out_shardings",
"_auto_spmd_lowering",
"_kept_var_idx",
"_device_assignment",
]
def __init__(
self,
xla_executable,
unsafe_call,
trace_result,
in_shardings,
out_shardings,
auto_spmd_lowering,
kept_var_idx,
device_assignment,
):
self.xla_executable = xla_executable
self.unsafe_call = unsafe_call
self.trace_result = trace_result
self._in_shardings = in_shardings
self._out_shardings = out_shardings
self._auto_spmd_lowering = auto_spmd_lowering
self._kept_var_idx = kept_var_idx
self._device_assignment = device_assignment
def xla_extension_executable(self):
return self.xla_executable
def call(self, *args):
return self.unsafe_call(*args)
def input_shardings(self):
return self._in_shardings
def output_shardings(self):
return self._out_shardings
class Lowering(Protocol):
def compile(self) -> Executable:
raise NotImplementedError
def as_text(self, dialect: Optional[str] = None) -> str:
raise NotImplementedError
def compiler_ir(self, dialect: Optional[str] = None) -> Any:
raise NotImplementedError
class XlaLowering(Lowering):
def hlo(self) -> xc.XlaComputation:
raise NotImplementedError("must override")
# Return an MHLO IR of computation
def mhlo(self) -> ir.Module:
module_str = xla_extension.mlir.stablehlo_to_mhlo(
ir_utils.module_to_bytecode(self.stablehlo())
)
with self.stablehlo().context:
return ir.Module.parse(module_str)
# Return a StableHLO IR of computation
def stablehlo(self) -> ir.Module:
raise NotImplementedError("must override")
def compile(self) -> Executable:
raise NotImplementedError("must override")
def as_text(self, dialect: Optional[str] = None) -> str:
if dialect is None:
dialect = "stablehlo"
if dialect == "mhlo":
return str(self.mhlo())
elif dialect == "stablehlo":
return str(self.stablehlo())
elif dialect == "hlo":
return self.hlo().as_hlo_text()
else:
raise ValueError(f"unknown dialect: {dialect}")
def compiler_ir(self, dialect: Optional[str] = None) -> Any:
if dialect is None:
dialect = "stablehlo"
if dialect == "mhlo":
return self.mhlo()
elif dialect == "stablehlo":
return self.stablehlo()
elif dialect == "hlo":
return self.hlo()
else:
raise ValueError(f"unknown dialect: {dialect}")
class MeshComputation(XlaLowering):
_hlo: Optional[ir.Module]
_executable: Optional[MeshExecutable]
def __init__(
self,
name: str,
hlo: Optional[ir.Module],
donated_invars: Sequence[bool],
**compile_args
):
self._name = name
self._hlo = hlo
self._donated_invars = donated_invars
self.compile_args = compile_args
self._executable = None
def _compile_unloaded(
self,
_allow_propagation_to_outputs: bool = False,
_allow_compile_replicated: bool = True,
) -> Union[UnloadedMeshExecutable, MeshExecutable]:
return UnloadedMeshExecutable.from_hlo(
self._name,
self._hlo,
**self.compile_args,
_allow_propagation_to_outputs=_allow_propagation_to_outputs,
_allow_compile_replicated=_allow_compile_replicated,
)
def hlo(self) -> xc.XlaComputation:
return xe.mlir.mlir_module_to_xla_computation(
ir_utils.module_to_string(self._hlo),
use_tuple_args=self.compile_args["tuple_args"],
)
def mhlo(self) -> ir.Module:
return super().mhlo()
def stablehlo(self) -> ir.Module:
return self._hlo
def compile(
self,
_allow_propagation_to_outputs: bool = False,
_allow_compile_replicated: bool = True,
) -> MeshExecutable:
if self._executable is None:
executable = self._compile_unloaded(
_allow_propagation_to_outputs, _allow_compile_replicated
)
if isinstance(executable, UnloadedMeshExecutable):
executable = executable.load()
self._executable = executable
return self._executable
class PmapExecutable(XlaExecutable):
__slots__ = [
"xla_executable",
"_unsafe_call",
"build_unsafe_call",
"trace_result",
"_unloaded_executable",
]
def __init__(
self, xla_executable, build_unsafe_call, trace_result, unloaded_executable,
):
self.xla_executable = xla_executable
self._unsafe_call = None
self.build_unsafe_call = build_unsafe_call
self.trace_result = trace_result
self._unloaded_executable = unloaded_executable
@property
def unsafe_call(self) -> Callable[..., Any]:
if self._unsafe_call is None:
self._unsafe_call = self.build_unsafe_call()
return self._unsafe_call
def xla_extension_executable(self):
return self.xla_executable
def call(self, *args):
return self.unsafe_call(*args)
@dataclasses.dataclass
class UnloadedPmapExecutable:
compiled: Any
trace_result: ir_utils.TraceResult
backend: xb.XlaBackend
input_shardings: Sequence[Any]
output_shardings: Sequence[Any]
unordered_effects: List[Any]
ordered_effects: List[Any]
keepalive: Sequence[Any]
host_callbacks: Sequence[Any]
kept_var_idx: Set[int]
rank: int
return_device_array: bool = False
@staticmethod
def from_hlo(
computation,
trace_result: ir_utils.TraceResult,
unordered_effects,
ordered_effects,
tuple_args, # for tpu
in_is_global,
host_callbacks,
keepalive,
kept_var_idx,
backend,
devices,
return_device_array,
world_size,
rank,
):
assert unordered_effects == []
assert ordered_effects == []
assert host_callbacks == []
assert keepalive == []
        assert not tuple_args
assert in_is_global == (True,) * len(trace_result.inputs)
assert devices is None
if devices is None:
if world_size > xb.device_count(backend):
assert (
False
), f"world_size={world_size} is bigger than device_count={xb.device_count(backend)}"
devices = [
d
for process_index in range(xb.process_count(backend))
for d in xb.local_devices(process_index, backend)
]
else:
assert False, "impossible"
device_assignment: np.ndarray = np.array(devices).reshape((world_size, 1))
use_spmd_partitioning = False
compile_options = xb.get_compile_options(
num_replicas=world_size,
num_partitions=1,
device_assignment=device_assignment,
use_spmd_partitioning=use_spmd_partitioning,
)
compile_options.parameter_is_tupled_arguments = tuple_args
compiled = compile_impl(backend, computation, compile_options, host_callbacks)
process_index = xb.process_index(backend)
local_device_assignment = np.array(
[d for d in device_assignment.flat if d.process_index == process_index]
)
ishapes = [inp.shape for inp in trace_result._var_inputs]
input_sharding_specs = [
_pmap_sharding_spec(1, 1, 1, None, ishape, 0) for ishape in ishapes
]
in_shardings = _get_pmap_sharding(local_device_assignment, input_sharding_specs)
oshapes = [out.shape for out in trace_result._var_outputs]
out_specs = [
_pmap_sharding_spec(1, 1, 1, None, oshape, 0) for oshape in oshapes
]
out_shardings = _get_pmap_sharding(local_device_assignment, out_specs)
return UnloadedPmapExecutable(
compiled=compiled,
trace_result=trace_result,
backend=backend,
input_shardings=in_shardings,
output_shardings=out_shardings,
unordered_effects=unordered_effects,
ordered_effects=ordered_effects,
keepalive=keepalive,
host_callbacks=host_callbacks,
kept_var_idx=kept_var_idx,
rank=rank,
return_device_array=return_device_array,
).load()
def build_execute_fun(self):
input_indices = []
ishapes = [inp.shape for inp in self.trace_result._var_inputs]
for ishape, isharding in safe_zip(ishapes, self.input_shardings):
spec = isharding.sharding_spec
assert len(spec.sharding) == len(ishape) + 1
assert spec.sharding[0] == pmap_lib.Unstacked(1)
assert all(isinstance(s, pmap_lib.NoSharding) for s in spec.sharding[1:])
input_indices.append(
((tuple(slice(None, None, None) for _ in range(len(ishape)))),)
)
handle_inps = InputsHandler(
self.compiled.local_devices(), self.input_shardings, input_indices
)
handle_oups = ResultsHandler(return_device_array=self.return_device_array)
execute_fun = ExecuteReplicated(
self.compiled,
"parallel computation",
self.backend,
handle_inps,
handle_oups,
self.unordered_effects,
self.ordered_effects,
self.keepalive,
bool(self.host_callbacks),
set(range(len(input_indices))),
)
return execute_fun
def load(self) -> PmapExecutable:
return PmapExecutable(
self.compiled, self.build_execute_fun, self.trace_result, self,
)
class PmapComputation(XlaLowering):
_name: str
_hlo: ir.Module
_executable: Optional[PmapExecutable]
def __init__(self, name, hlo: ir.Module, **compile_args):
self._name = name
self._executable = None
self._hlo = hlo
self.compile_args = compile_args
def hlo(self) -> xc.XlaComputation:
return xe.mlir.mlir_module_to_xla_computation(
ir_utils.module_to_string(self._hlo),
use_tuple_args=self.compile_args["tuple_args"],
)
def mhlo(self) -> ir.Module:
return super().mhlo()
def stablehlo(self) -> ir.Module:
return self._hlo
def compile(self) -> PmapExecutable:
if self._executable is None:
self._executable = UnloadedPmapExecutable.from_hlo(
self._hlo, **self.compile_args
)
return self._executable | PypiClean |
/Newcalls-0.0.1-cp37-cp37m-win_amd64.whl/newcalls/newcalls.py | import atexit
from typing import Any
from .binding import Binding
from .environment import Environment
from .handlers import HandlersHolder
from .methods import Methods
from .mtproto import MtProtoClient
from .scaffold import Scaffold
from .types import Cache
from .types.call_holder import CallHolder
from .types.update_solver import UpdateSolver
class NewCalls(Methods, Scaffold):
"""NewCalls Client, the main means
for interacting with Group Calls.
Attributes:
active_calls (List of :obj:`~newcalls.types.GroupCall`):
Get a list of active (Playing / Paused) group calls
calls (List of :obj:`~newcalls.types.GroupCall`):
Get a list of existent group calls
cache_peer (`InputPeer (P)`_ | `InputPeer (T)`_):
Get current Telegram user
ping (``int``):
Ping of NodeJS core
is_connected (``bool``):
            Check whether the NodeJS connection is alive
Parameters:
app (`Client`_ | `TelegramClient`_):
Pass the MtProto Client
cache_duration (``int``):
Cache duration of Full Chat query
overload_quiet_mode (``bool``):
Disable overload cpu messages by setting true
Raises:
InvalidMtProtoClient: You set an invalid MtProto client
"""
def __init__(
self,
app: Any,
cache_duration: int = 120,
overload_quiet_mode: bool = False,
):
super().__init__()
self._app = MtProtoClient(
cache_duration,
app,
)
self._is_running = False
self._env_checker = Environment(
self._REQUIRED_NODEJS_VERSION,
self._REQUIRED_NEWGRAM_VERSION,
self._REQUIRED_TELETHON_VERSION,
self._app.client,
)
self._call_holder = CallHolder()
self._cache_user_peer = Cache()
self._wait_result = UpdateSolver()
self._on_event_update = HandlersHolder()
self._binding = Binding(
overload_quiet_mode,
)
def cleanup():
if self._async_core is not None:
self._async_core.cancel()
atexit.register(cleanup) | PypiClean |
/DiscordGame-2021.5.21.0.tar.gz/DiscordGame-2021.5.21.0/README.md | # DiscordGame
*DiscordGame is a Python framework for making games,
from simple mini-games like Tic Tac Toe
to full-fledged Dungeons and Dragons campaigns, inside Discord.*
## Getting Started
### Installation
```shell script
$ pip install discordgame
```
Or clone the repo
```shell script
$ git clone https://github.com/GrandMoff100/DiscordGame
```
and run
```shell script
$ python setup.py install
```
### Usage
DiscordGame is structured like this:
whenever a trigger event such as a reaction (called a button) or a new message occurs while a game is active,
that event is passed to every game registered to a GameHost object,
which receives it through handler methods like on_text_event and on_button_event, as in the sketch below.
```python
import discordgame as dg
```
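
For orientation, here is a minimal, hypothetical sketch of that flow. The `Game.__init__` and `on_text_event` signatures follow the MadLib example below; the `on_button_event` parameters shown here are an assumption and may differ in your installed version.

```python
import discordgame as dg


class Clicker(dg.Game):
    """Toy game: every message or button press bumps a counter shown in the layout."""

    game_name = 'Clicker'

    def __init__(self, ctx):
        self.count = 0
        super().__init__(self.game_name, [[str(self.count)]], ctx=ctx, needs_text_input=True)

    async def on_text_event(self, player, text):
        # Called by the GameHost whenever a player sends a message while this game is active.
        self.count += 1
        await self.update_layout([[str(self.count)]])

    async def on_button_event(self, player, button):
        # Called when a player presses a reaction "button"; parameter names are assumed here.
        self.count += 1
        await self.update_layout([[str(self.count)]])
```

As with the examples below, the class would then be registered with `host.add_game(Clicker)` before `host.run(TOKEN)`.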
> Here's a couple of examples to help you get the gist of how this framework works...
> These examples assume you have cloned the repository and have the examples folder downloaded.
- *A Simple MadLib made with ``discordgame``:*
```python
import discord
import discordgame as dg
class MadLib(dg.Game):
game_name = 'MadLib'
def __init__(self, ctx):
# Creates a list of blanks
self.word_blanks = ['(blank)'] * 8
# Assign a MadLib string to a variable.
self.lib = 'The {} {}ed across the {} to get to the {} {}. It wanted to get to the {} so it could {} with a {}.'
# Initialize the Parent Game class with the MadLib specific values.
super().__init__(self.game_name, [[self.lib.format(*self.word_blanks)]], ctx=ctx, needs_text_input=True)
# Define events to be triggered on a user's message event.
async def on_text_event(self, player: discord.User, text: str):
try:
next_index = self.word_blanks.index('(blank)') # Finds the left-most blank in the list.
self.word_blanks.pop(next_index) # Pops that blank from the list.
self.word_blanks.insert(next_index, text) # Inserts the user's word into the said blank.
self.stats['Blanks to Fill ->'] = len([word for word in self.word_blanks if word == '(blank)'])
# ^^ Updates the Blanks to fill Counter.
await self.update_layout([[self.lib.format(*self.word_blanks)]]) # Sends the changes to discord.
if '(blank)' not in self.word_blanks:
self.stop()
await player.send(self.lib.format(*self.word_blanks)) # Sends the final MadLib to the channel.
except ValueError: # If there's no blank in the list.
self.stop()
await player.send(self.lib.format(*self.word_blanks)) # Sends the final MadLib to the channel.
```
- *A Cool Snake Game made with ``discordgame``:*
Still developing a frame based example (mostly because I'm lazy and some of the library features aren't implemented yet)
- And then loading the games (see examples/example.py)
```py
from discordgame import GameHost
# Import our example games from 2 other files in the examples directory.
from .snake import Snake
from .madlib import MadLib
host = GameHost('*')
# Add our Games to the GameHost so users can play them.
host.add_game(Snake)
host.add_game(MadLib)
# Add run the GameHost.
host.run(TOKEN)
```
### More Features
## Testing and Issues
We welcome any new insights and issues with this framework.
To report an issue, head over to the issues page of our repository -> https://github.com/GrandMoff100/DiscordGame and open a new issue.
We look forward to fixing any bugs or issues that we might have missed.
## Contribution
We'd love for you to contribute! New features and optimizations are welcome!
Just fork the repository, make your changes, and then open a pull request with your improvements.
If you make enough improvements consistently, we'll add you as a contributor.
| PypiClean |
/Kr0nOs-3.4.1.tar.gz/Kr0nOs-3.4.1/kronbot/core/cog_manager.py | import contextlib
import pkgutil
from importlib import import_module, invalidate_caches
from importlib.machinery import ModuleSpec
from pathlib import Path
from typing import List, Optional, Union
import discord
import kronbot.cogs
from kronbot.core.utils import deduplicate_iterables
from . import checks, commands
from .config import Config
from .data_manager import cog_data_path
from .i18n import Translator, cog_i18n
from .utils.chat_formatting import box, pagify
__all__ = ["CogManager"]
class NoSuchCog(ImportError):
"""Thrown when a cog is missing.
Different from ImportError because some ImportErrors can happen inside cogs.
"""
class CogManager:
"""Directory manager for Kron's cogs.
This module allows you to load cogs from multiple directories and even from
outside the bot directory. You may also set a directory for downloader to
install new cogs to, the default being the :code:`cogs/` folder in the root
bot directory.
"""
CORE_PATH = Path(kronbot.cogs.__path__[0])
def __init__(self):
self.conf = Config.get_conf(self, 2938473984732, True)
tmp_cog_install_path = cog_data_path(self) / "cogs"
tmp_cog_install_path.mkdir(parents=True, exist_ok=True)
self.conf.register_global(paths=[], install_path=str(tmp_cog_install_path))
async def paths(self) -> List[Path]:
"""Get all currently valid path directories, in order of priority
Returns
-------
List[pathlib.Path]
A list of paths where cog packages can be found. The
install path is highest priority, followed by the
user-defined paths, and the core path has the lowest
priority.
"""
return deduplicate_iterables(
[await self.install_path()], await self.user_defined_paths(), [self.CORE_PATH]
)
async def install_path(self) -> Path:
"""Get the install path for 3rd party cogs.
Returns
-------
pathlib.Path
The path to the directory where 3rd party cogs are stored.
"""
return Path(await self.conf.install_path()).resolve()
async def user_defined_paths(self) -> List[Path]:
"""Get a list of user-defined cog paths.
All paths will be absolute and unique, in order of priority.
Returns
-------
List[pathlib.Path]
A list of user-defined paths.
"""
return list(map(Path, deduplicate_iterables(await self.conf.paths())))
async def set_install_path(self, path: Path) -> Path:
"""Set the install path for 3rd party cogs.
Note
----
The bot will not remember your old cog install path which means
that **all previously installed cogs** will no longer be found.
Parameters
----------
path : pathlib.Path
The new directory for cog installs.
Returns
-------
pathlib.Path
Absolute path to the new install directory.
Raises
------
ValueError
If :code:`path` is not an existing directory.
"""
if not path.is_dir():
raise ValueError("The install path must be an existing directory.")
resolved = path.resolve()
await self.conf.install_path.set(str(resolved))
return resolved
@staticmethod
def _ensure_path_obj(path: Union[Path, str]) -> Path:
"""Guarantee an object will be a path object.
Parameters
----------
path : `pathlib.Path` or `str`
Returns
-------
pathlib.Path
"""
try:
path.exists()
except AttributeError:
path = Path(path)
return path
async def add_path(self, path: Union[Path, str]) -> None:
"""Add a cog path to current list.
This will ignore duplicates.
Parameters
----------
path : `pathlib.Path` or `str`
Path to add.
Raises
------
ValueError
If :code:`path` does not resolve to an existing directory.
"""
path = self._ensure_path_obj(path)
# This makes the path absolute, will break if a bot install
# changes OS/Computer?
path = path.resolve()
if not path.is_dir():
raise ValueError("'{}' is not a valid directory.".format(path))
if path == await self.install_path():
raise ValueError("Cannot add the install path as an additional path.")
if path == self.CORE_PATH:
raise ValueError("Cannot add the core path as an additional path.")
current_paths = await self.user_defined_paths()
if path not in current_paths:
current_paths.append(path)
await self.set_paths(current_paths)
async def remove_path(self, path: Union[Path, str]) -> None:
"""Remove a path from the current paths list.
Parameters
----------
path : `pathlib.Path` or `str`
Path to remove.
"""
path = self._ensure_path_obj(path).resolve()
paths = await self.user_defined_paths()
paths.remove(path)
await self.set_paths(paths)
async def set_paths(self, paths_: List[Path]):
"""Set the current paths list.
Parameters
----------
paths_ : `list` of `pathlib.Path`
List of paths to set.
"""
str_paths = list(map(str, paths_))
await self.conf.paths.set(str_paths)
async def _find_ext_cog(self, name: str) -> ModuleSpec:
"""
Attempts to find a spec for a third party installed cog.
Parameters
----------
name : str
Name of the cog package to look for.
Returns
-------
importlib.machinery.ModuleSpec
Module spec to be used for cog loading.
Raises
------
NoSuchCog
When no cog with the requested name was found.
"""
real_paths = list(map(str, [await self.install_path()] + await self.user_defined_paths()))
for finder, module_name, _ in pkgutil.iter_modules(real_paths):
if name == module_name:
spec = finder.find_spec(name)
if spec:
return spec
raise NoSuchCog(
"No 3rd party module by the name of '{}' was found in any available path.".format(
name
),
name=name,
)
@staticmethod
async def _find_core_cog(name: str) -> ModuleSpec:
"""
Attempts to find a spec for a core cog.
Parameters
----------
name : str
Returns
-------
importlib.machinery.ModuleSpec
Raises
------
RuntimeError
When no matching spec can be found.
"""
real_name = ".{}".format(name)
package = "kronbot.cogs"
try:
mod = import_module(real_name, package=package)
except ImportError as e:
if e.name == package + real_name:
raise NoSuchCog(
"No core cog by the name of '{}' could be found.".format(name),
path=e.path,
name=e.name,
) from e
raise
return mod.__spec__
# noinspection PyUnreachableCode
async def find_cog(self, name: str) -> Optional[ModuleSpec]:
"""Find a cog in the list of available paths.
Parameters
----------
name : str
Name of the cog to find.
Returns
-------
Optional[importlib.machinery.ModuleSpec]
A module spec to be used for specialized cog loading, if found.
"""
with contextlib.suppress(NoSuchCog):
return await self._find_ext_cog(name)
with contextlib.suppress(NoSuchCog):
return await self._find_core_cog(name)
async def available_modules(self) -> List[str]:
"""Finds the names of all available modules to load."""
paths = list(map(str, await self.paths()))
ret = []
for finder, module_name, _ in pkgutil.iter_modules(paths):
ret.append(module_name)
return ret
@staticmethod
def invalidate_caches():
"""Re-evaluate modules in the py cache.
This is an alias for an importlib internal and should be called
any time that a new module has been installed to a cog directory.
"""
invalidate_caches()
_ = Translator("CogManagerUI", __file__)
@cog_i18n(_)
class CogManagerUI(commands.Cog):
"""Commands to interface with Kron's cog manager."""
@commands.command()
@checks.is_owner()
async def paths(self, ctx: commands.Context):
"""
Lists current cog paths in order of priority.
"""
cog_mgr = ctx.bot._cog_mgr
install_path = await cog_mgr.install_path()
core_path = cog_mgr.CORE_PATH
cog_paths = await cog_mgr.user_defined_paths()
msg = _("Install Path: {install_path}\nCore Path: {core_path}\n\n").format(
install_path=install_path, core_path=core_path
)
partial = []
for i, p in enumerate(cog_paths, start=1):
partial.append("{}. {}".format(i, p))
msg += "\n".join(partial)
await ctx.send(box(msg))
@commands.command()
@checks.is_owner()
async def addpath(self, ctx: commands.Context, path: Path):
"""
Add a path to the list of available cog paths.
"""
if not path.is_dir():
await ctx.send(_("That path does not exist or does not point to a valid directory."))
return
try:
await ctx.bot._cog_mgr.add_path(path)
except ValueError as e:
await ctx.send(str(e))
else:
await ctx.send(_("Path successfully added."))
@commands.command()
@checks.is_owner()
async def removepath(self, ctx: commands.Context, path_number: int):
"""
Removes a path from the available cog paths given the `path_number` from `[p]paths`.
"""
path_number -= 1
if path_number < 0:
await ctx.send(_("Path numbers must be positive."))
return
cog_paths = await ctx.bot._cog_mgr.user_defined_paths()
try:
to_remove = cog_paths.pop(path_number)
except IndexError:
await ctx.send(_("That is an invalid path number."))
return
await ctx.bot._cog_mgr.remove_path(to_remove)
await ctx.send(_("Path successfully removed."))
@commands.command()
@checks.is_owner()
async def reorderpath(self, ctx: commands.Context, from_: int, to: int):
"""
Reorders paths internally to allow discovery of different cogs.
"""
# Doing this because in the paths command they're 1 indexed
from_ -= 1
to -= 1
if from_ < 0 or to < 0:
await ctx.send(_("Path numbers must be positive."))
return
all_paths = await ctx.bot._cog_mgr.user_defined_paths()
try:
to_move = all_paths.pop(from_)
except IndexError:
await ctx.send(_("Invalid 'from' index."))
return
try:
all_paths.insert(to, to_move)
except IndexError:
await ctx.send(_("Invalid 'to' index."))
return
await ctx.bot._cog_mgr.set_paths(all_paths)
await ctx.send(_("Paths reordered."))
@commands.command()
@checks.is_owner()
async def installpath(self, ctx: commands.Context, path: Path = None):
"""
Returns the current install path or sets it if one is provided.
The provided path must be absolute or relative to the bot's
directory and it must already exist.
No installed cogs will be transferred in the process.
"""
if path:
if not path.is_absolute():
path = (ctx.bot._main_dir / path).resolve()
try:
await ctx.bot._cog_mgr.set_install_path(path)
except ValueError:
await ctx.send(_("That path does not exist."))
return
install_path = await ctx.bot._cog_mgr.install_path()
await ctx.send(
_("The bot will install new cogs to the `{}` directory.").format(install_path)
)
@commands.command()
@checks.is_owner()
async def cogs(self, ctx: commands.Context):
"""
Lists all loaded and available cogs.
"""
loaded = set(ctx.bot.extensions.keys())
all_cogs = set(await ctx.bot._cog_mgr.available_modules())
unloaded = all_cogs - loaded
loaded = sorted(list(loaded), key=str.lower)
unloaded = sorted(list(unloaded), key=str.lower)
if await ctx.embed_requested():
loaded = _("**{} loaded:**\n").format(len(loaded)) + ", ".join(loaded)
unloaded = _("**{} unloaded:**\n").format(len(unloaded)) + ", ".join(unloaded)
for page in pagify(loaded, delims=[", ", "\n"], page_length=1800):
e = discord.Embed(description=page, colour=discord.Colour.dark_green())
await ctx.send(embed=e)
for page in pagify(unloaded, delims=[", ", "\n"], page_length=1800):
e = discord.Embed(description=page, colour=discord.Colour.dark_red())
await ctx.send(embed=e)
else:
loaded_count = _("**{} loaded:**\n").format(len(loaded))
loaded = ", ".join(loaded)
unloaded_count = _("**{} unloaded:**\n").format(len(unloaded))
unloaded = ", ".join(unloaded)
loaded_count_sent = False
unloaded_count_sent = False
for page in pagify(loaded, delims=[", ", "\n"], page_length=1800):
if page.startswith(", "):
page = page[2:]
if not loaded_count_sent:
await ctx.send(loaded_count + box(page, lang="css"))
loaded_count_sent = True
else:
await ctx.send(box(page, lang="css"))
for page in pagify(unloaded, delims=[", ", "\n"], page_length=1800):
if page.startswith(", "):
page = page[2:]
if not unloaded_count_sent:
await ctx.send(unloaded_count + box(page, lang="ldif"))
unloaded_count_sent = True
else:
await ctx.send(box(page, lang="ldif")) | PypiClean |
/CaseRecommender-1.1.1.tar.gz/CaseRecommender-1.1.1/caserec/recommenders/item_recommendation/itemknn.py | # © 2019. Case Recommender (MIT License)
from collections import defaultdict
import numpy as np
from caserec.recommenders.item_recommendation.base_item_recommendation import BaseItemRecommendation
from caserec.utils.extra_functions import timed
__author__ = 'Arthur Fortes <[email protected]>'
class ItemKNN(BaseItemRecommendation):
def __init__(self, train_file=None, test_file=None, output_file=None, similarity_metric="cosine", k_neighbors=None,
rank_length=10, as_binary=False, as_similar_first=True, sep='\t', output_sep='\t'):
"""
Item KNN for Item Recommendation
        This algorithm predicts a rank for each user based on the similar items that they consumed.
Usage::
>> ItemKNN(train, test, as_similar_first=True).compute()
>> ItemKNN(train, test, ranking_file, as_binary=True).compute()
:param train_file: File which contains the train set. This file needs to have at least 3 columns
(user item feedback_value).
:type train_file: str
:param test_file: File which contains the test set. This file needs to have at least 3 columns
(user item feedback_value).
:type test_file: str, default None
:param output_file: File with dir to write the final predictions
:type output_file: str, default None
:param similarity_metric: Pairwise metric to compute the similarity between the items. Reference about
distances: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.pdist.html
:type similarity_metric: str, default cosine
:param k_neighbors: Number of neighbors to use. If None, k_neighbor = int(sqrt(n_items))
:type k_neighbors: int, default None
:param rank_length: Size of the rank that must be generated by the predictions of the recommender algorithm
:type rank_length: int, default 10
:param as_binary: If True, the explicit feedback will be transform to binary
:type as_binary: bool, default False
        :param as_similar_first: If True, for each unknown item to be predicted, we first look for its k
        most similar items and then take the intersection with the items that the user
        has seen.
:type as_similar_first: bool, default True
:param sep: Delimiter for input files
:type sep: str, default '\t'
:param output_sep: Delimiter for output file
:type output_sep: str, default '\t'
"""
super(ItemKNN, self).__init__(train_file=train_file, test_file=test_file, output_file=output_file,
as_binary=as_binary, rank_length=rank_length, similarity_metric=similarity_metric,
sep=sep, output_sep=output_sep)
self.recommender_name = 'ItemKNN Algorithm'
self.as_similar_first = as_similar_first
self.k_neighbors = k_neighbors
# internal vars
self.si_matrix = None
self.similar_items = None
def init_model(self):
"""
Method to initialize the model. Create and calculate a similarity matrix
"""
self.similar_items = defaultdict(list)
# Set the value for k
if self.k_neighbors is None:
self.k_neighbors = int(np.sqrt(len(self.items)))
self.create_matrix()
self.si_matrix = self.compute_similarity(transpose=True)
for i_id, item in enumerate(self.items):
self.similar_items[i_id] = sorted(range(len(self.si_matrix[i_id])),
key=lambda k: -self.si_matrix[i_id][k])[1:self.k_neighbors + 1]
def predict(self):
"""
This method predict a rank for a specific user.
"""
for u_id, user in enumerate(self.users):
if len(self.train_set['feedback'].get(user, [])) != 0:
if self.as_similar_first:
self.ranking += self.predict_similar_first_scores(user, u_id)
else:
self.ranking += self.predict_scores(user, u_id)
else:
# Implement cold start user
pass
def predict_scores(self, user, user_id):
partial_predictions = []
# Selects items that user has not interacted with.
u_list = list(np.flatnonzero(self.matrix[user_id] == 0))
seen_items_id = np.flatnonzero(self.matrix[user_id])
# predict score for item_i
for i_id in u_list:
sim_sum = sorted(np.take(self.si_matrix[i_id], seen_items_id), key=lambda x: -x)
partial_predictions.append((user, self.items[i_id], sum(sim_sum[:self.k_neighbors])))
return sorted(partial_predictions, key=lambda x: -x[2])[:self.rank_length]
def predict_similar_first_scores(self, user, user_id):
"""
In this implementation, for each unknown item, which will be
predicted, we first look for its k most similar items and then take the intersection with the seen items of
        the user. Finally, the score of the unknown item is the sum of the similarities of the k items most similar
        to it, taking into account only the items that the user has seen.
"""
predictions = []
# Selects items that user has not interacted with.
u_list = list(np.flatnonzero(self.matrix[user_id] == 0))
seen_items_id = np.flatnonzero(self.matrix[user_id])
# predict score for item_i
for i_id in u_list:
# s_id = list(filter(set(self.similar_items[i]).__contains__, seen_items_id))
s_id = list(set(self.similar_items[i_id]).intersection(seen_items_id))
sim_sum = np.take(self.si_matrix[i_id], s_id)
predictions.append((user, self.items[i_id], sum(sim_sum)))
return sorted(predictions, key=lambda x: -x[2])[:self.rank_length]
def compute(self, verbose=True, metrics=None, verbose_evaluation=True, as_table=False, table_sep='\t', n_ranks=None):
"""
Extends compute method from BaseItemRecommendation. Method to run recommender algorithm
:param verbose: Print recommender and database information
:type verbose: bool, default True
:param metrics: List of evaluation metrics
:type metrics: list, default None
:param verbose_evaluation: Print the evaluation results
:type verbose_evaluation: bool, default True
:param as_table: Print the evaluation results as table
:type as_table: bool, default False
:param table_sep: Delimiter for print results (only work with verbose=True and as_table=True)
:type table_sep: str, default '\t'
:param n_ranks: List of positions to evaluate the ranking
:type n_ranks: list, None
"""
super(ItemKNN, self).compute(verbose=verbose)
if verbose:
print("training_time:: %4f sec" % timed(self.init_model))
if self.extra_info_header is not None:
print(self.extra_info_header)
print("prediction_time:: %4f sec" % timed(self.predict))
print('\n')
else:
self.init_model()
self.predict()
self.write_ranking()
if self.test_file is not None:
self.evaluate(metrics, verbose_evaluation, as_table=as_table, table_sep=table_sep, n_ranks=n_ranks) | PypiClean |
/CheckM2-1.0.1.tar.gz/CheckM2-1.0.1/checkm2/prodigal.py | import os
import sys
import stat
import subprocess
import logging
import numpy as np
import shutil
import gzip
import tempfile
from checkm2 import sequenceClasses
from checkm2 import fileManager
'''Prodigal module taken from CheckM1.'''
#TODO dont use meta mode for prodigal if a translation table was provided
#TODO: take provided translation table
class ProdigalRunner():
"""Wrapper for running prodigal."""
def __init__(self, out_dir, bin_file):
self.file_basename = os.path.splitext(os.path.basename(bin_file))[0]
# make sure prodigal is installed
self.checkForProdigal()
self.faa_directory = out_dir
self.aaGeneFile = os.path.join(out_dir, "{}{}".format(self.file_basename, '.faa'))
self.ntGeneFile = os.path.join(out_dir, "{}{}".format(self.file_basename, '.fna'))
self.gffFile = os.path.join(out_dir, "{}{}".format(self.file_basename, '.gff'))
    def __calculate_N50(self, list_of_lengths):
        """Return the N50 of the given contig lengths (the length-weighted median length)."""
        tmp = []
for tmp_number in set(list_of_lengths):
tmp += [tmp_number] * list_of_lengths.count(tmp_number) * tmp_number
tmp.sort()
if (len(tmp) % 2) == 0:
median = (tmp[int(len(tmp) / 2) - 1] + tmp[int(len(tmp) / 2)]) / 2
else:
median = tmp[int(len(tmp) / 2)]
return median
def run(self, query, supplied_coding_table=None):
bNucORFs = True
prodigal_input = query
# decompress archive input files
if prodigal_input.endswith('.gz'):
tmp_dir = tempfile.mkdtemp()
prodigal_input = os.path.join(tmp_dir, os.path.basename(prodigal_input[0:-3]) + '.fna')
with gzip.open(query, 'rb') as f_in:
with open(prodigal_input, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
seqs = sequenceClasses.SeqReader().read_nucleotide_sequences(prodigal_input)
totalBases = 0
contig_lengths = []
GC = 0
AT = 0
for seqId, seq in seqs.items():
totalBases += len(seq)
contig_lengths.append(len(seq))
GC += sum(seq.upper().count(x) for x in ("G", "C"))
AT += sum(seq.upper().count(x) for x in ("A", "T"))
GC = float(GC/(AT + GC + 1))
# call ORFs with different translation tables and select the one with the highest coding density
tableCodingDensity = {}
if supplied_coding_table is None:
ttables_to_check = [4, 11]
else:
ttables_to_check = [supplied_coding_table]
for translationTable in ttables_to_check:
aaGeneFile = self.aaGeneFile + '.' + str(translationTable)
ntGeneFile = self.ntGeneFile + '.' + str(translationTable)
gffFile = self.gffFile + '.' + str(translationTable)
# check if there is sufficient bases to calculate prodigal parameters
if totalBases < 100000:
procedureStr = 'meta' # use best precalculated parameters
else:
procedureStr = 'single' # estimate parameters from data
if bNucORFs:
cmd = ('prodigal -p %s -q -m -f gff -g %d -a %s -d %s -i %s > %s' % (procedureStr,
translationTable,
aaGeneFile,
ntGeneFile,
prodigal_input,
gffFile))
else:
cmd = ('prodigal -p %s -q -m -f gff -g %d -a %s -i %s > %s' % (procedureStr,
translationTable,
aaGeneFile,
prodigal_input,
gffFile))
os.system(cmd)
if not self.__areORFsCalled(aaGeneFile) and procedureStr == 'single':
# prodigal will fail to learn a model if the input genome has a large number of N's
# so try gene prediction with 'meta'
cmd = cmd.replace('-p single', '-p meta')
try:
os.system(cmd)
except Exception as e:
logging.error('An error occured while running prodigal: {}'.format(e))
sys.exit(1)
# determine coding density
prodigalParser = ProdigalGeneFeatureParser(gffFile)
codingBases = 0
for seqId, seq in seqs.items():
codingBases += prodigalParser.codingBases(seqId)
if totalBases != 0:
codingDensity = float(codingBases) / totalBases
else:
codingDensity = 0
tableCodingDensity[translationTable] = codingDensity
# determine best translation table
if supplied_coding_table is not None:
bestTranslationTable = supplied_coding_table
else:
bestTranslationTable = 11
if (tableCodingDensity[4] - tableCodingDensity[11] > 0.05) and tableCodingDensity[4] > 0.7:
bestTranslationTable = 4
shutil.copyfile(self.aaGeneFile + '.' + str(bestTranslationTable), self.aaGeneFile)
shutil.copyfile(self.gffFile + '.' + str(bestTranslationTable), self.gffFile)
if bNucORFs:
shutil.copyfile(self.ntGeneFile + '.' + str(bestTranslationTable), self.ntGeneFile)
# clean up redundant prodigal results
for translationTable in ttables_to_check:
os.remove(self.aaGeneFile + '.' + str(translationTable))
os.remove(self.gffFile + '.' + str(translationTable))
if bNucORFs:
os.remove(self.ntGeneFile + '.' + str(translationTable))
os.remove(self.ntGeneFile)
os.remove(self.gffFile)
gene_lengths = []
cds_count = 0
aa_seqs = sequenceClasses.SeqReader().read_nucleotide_sequences(self.aaGeneFile)
for seqId, seq in aa_seqs.items():
gene_lengths.append(len(seq))
cds_count += 1
# if prodigal_input.endswith('.gz'):
# shutil.rmtree(tmp_dir)
return self.file_basename, bestTranslationTable, tableCodingDensity[bestTranslationTable], \
self.__calculate_N50(contig_lengths), np.array(gene_lengths).mean(), totalBases,\
cds_count, GC
def __areORFsCalled(self, aaGeneFile):
return os.path.exists(aaGeneFile) and os.stat(aaGeneFile)[stat.ST_SIZE] != 0
def areORFsCalled(self, bNucORFs):
# if requested, check if nucleotide gene sequences have been generated
if bNucORFs:
return os.path.exists(self.ntGeneFile) and os.stat(self.ntGeneFile)[stat.ST_SIZE] != 0
# otherwise, only the amino acid gene sequences are required
return os.path.exists(self.aaGeneFile) and os.stat(self.aaGeneFile)[stat.ST_SIZE] != 0
def checkForProdigal(self):
"""Check to see if Prodigal is on the system before we try to run it."""
# Assume that a successful prodigal -h returns 0 and anything
# else returns something non-zero
try:
subprocess.call(['prodigal', '-h'], stdout=open(os.devnull, 'w'), stderr=subprocess.STDOUT)
except:
logging.error("Make sure prodigal is on your system path.")
sys.exit(1)
class ProdigalFastaParser():
"""Parses prodigal FASTA output."""
def __init__(self):
pass
def genePositions(self, filename):
fileManager.check_if_file_exists(filename)
gp = {}
for line in open(filename):
if line[0] == '>':
lineSplit = line[1:].split()
geneId = lineSplit[0]
startPos = int(lineSplit[2])
endPos = int(lineSplit[4])
gp[geneId] = [startPos, endPos]
return gp
class ProdigalGeneFeatureParser():
"""Parses prodigal FASTA output."""
def __init__(self, filename):
fileManager.check_if_file_exists(filename)
self.genes = {}
self.lastCodingBase = {}
self.__parseGFF(filename)
self.codingBaseMasks = {}
for seqId in self.genes:
self.codingBaseMasks[seqId] = self.__buildCodingBaseMask(seqId)
def __parseGFF(self, filename):
"""Parse genes from GFF file."""
self.translationTable = None
for line in open(filename):
if line.startswith('# Model Data') and not self.translationTable:
lineSplit = line.split(';')
for token in lineSplit:
if 'transl_table' in token:
self.translationTable = int(token[token.find('=') + 1:])
if line[0] == '#' or line.strip() == '"':
# work around for Prodigal having lines with just a
# quotation on it when FASTA files have Windows style
# line endings
continue
lineSplit = line.split('\t')
seqId = lineSplit[0]
if seqId not in self.genes:
geneCounter = 0
self.genes[seqId] = {}
self.lastCodingBase[seqId] = 0
geneId = seqId + '_' + str(geneCounter)
geneCounter += 1
start = int(lineSplit[3])
end = int(lineSplit[4])
self.genes[seqId][geneId] = [start, end]
self.lastCodingBase[seqId] = max(self.lastCodingBase[seqId], end)
def __buildCodingBaseMask(self, seqId):
"""Build mask indicating which bases in a sequences are coding."""
# safe way to calculate coding bases as it accounts
# for the potential of overlapping genes
codingBaseMask = np.zeros(self.lastCodingBase[seqId])
for pos in self.genes[seqId].values():
codingBaseMask[pos[0]:pos[1] + 1] = 1
return codingBaseMask
def codingBases(self, seqId, start=0, end=None):
"""Calculate number of coding bases in sequence between [start, end)."""
# check if sequence has any genes
if seqId not in self.genes:
return 0
# set end to last coding base if not specified
if end == None:
end = self.lastCodingBase[seqId]
return np.sum(self.codingBaseMasks[seqId][start:end]) | PypiClean |
/BigJob2-0.54.post73.tar.gz/BigJob2-0.54.post73/util/bigjob_usage.ipynb | # Generating BigJob Usage Statistics out of Redis entries
Read `cus` and `pilots` from Redis
```
import pandas as pd
import matplotlib.pyplot as plt
import os, sys
import archive
import datetime
import ast
# Attempt to restore old data frame
cus_df = None
pilot_df = None
if os.path.exists("cus.df") and os.path.exists("pilot.df"):
cus_df = pd.load("cus.df") #pd.read_csv("cus.csv", index_col=0, parse_dates=False, date_parser=)
pilot_df = pd.load("pilot.df") #pd.read_csv("pilot.csv", index_col=0, parse_dates=False, date_parser=), dat
max_cus_date = cus_df.index.max()
max_pilots_date = pilot_df.index.max()
print "Restored data frames until %s"%max_cus_date
# Download new data
# Redis Service to connect to:
# redis://[email protected]:6379
# redis://localhost
rd = archive.RedisDownloader("redis://[email protected]:6379")
#rd = archive.RedisDownloader("redis://localhost:6379")
pilots = rd.get_pilots()
cus = rd.get_cus()
```
## Compute Units Executed per Day
```
# make sure only new entries are loaded into data frame
max_cus_date = None
try:
max_cus_date = cus_df.index.max()
except:
pass
timestamp_index = []
cus_new = []
for i in cus:
if max_cus_date == None or datetime.datetime.utcfromtimestamp(float(i["start_time"]))>max_cus_date:
# print "add " + str(datetime.datetime.utcfromtimestamp(float(i["start_time"])))
timestamp_index.append(datetime.datetime.utcfromtimestamp(float(i["start_time"])))
cus_new.append(i)
#print cus_new
if len(cus_new) > 0:
cus_df_new = pd.DataFrame(cus_new, index=timestamp_index, columns=['Executable', 'NumberOfProcesses', "SPMDVariation", "start_time", "end_queue_time", "start_staging_time", "end_time"])
try:
cus_df = pd.concat([cus_df, cus_df_new])
except:
cus_df = cus_df_new
cus_df_h = cus_df["Executable"].resample("D", how="count")
cus_df_h.plot(color='k', alpha=0.7)
plt.ylabel("Number of CUs Executed")
plt.xlabel("Day")
plt.savefig("number_cus_per_day.pdf", format="pdf", bbox_inches='tight', pad_inches=0.1)
```
## Compute Unit Types
How many sequential versus parallel (MPI) CUs are executed?
```
spmd = cus_df["SPMDVariation"].astype("object")
spmd[spmd.isnull()]="single"
spmd.value_counts().plot(kind="bar", color='k', alpha=0.7)
plt.ylabel("Number of CUs")
plt.ylabel("CU SPMD Variation")
plt.savefig("cu_type.pdf", format="pdf", bbox_inches='tight', pad_inches=0.1)
cus_df["Executable"].value_counts().plot(kind="bar", color='k', alpha=0.7)
plt.ylabel("Number CUs")
plt.xlabel("CU Executable")
plt.savefig("cu_executable.pdf", format="pdf", bbox_inches='tight', pad_inches=0.1)
```
## CU Runtime Distribution
```
runtimes = cus_df.apply(lambda row: float(row["end_time"]) - float(row["end_queue_time"]), axis=1)
runtimes.hist(bins=50)
plt.ylabel("Number of Events")
plt.xlabel("CU Runtime (in sec)")
plt.savefig("cu_runtime.pdf", format="pdf", bbox_inches='tight', pad_inches=0.1)
runtimes.describe()
```
## Pilots Executed per Day
Extract pilot descriptions out of Redis entries
```
print "Number of Pilots: %d Number CUs: %d Executed since: %s"%(len(pilots), len(cus), str(cus_df.index.min()))
pilots = [i for i in pilots if i.has_key("start_time")]
max_pilot_date = None
try:
    max_pilot_date = pilot_df.index.max()
except:
pass
timestamp_index = []
pilot_new = []
for i in pilots:
if max_pilot_date == None or datetime.datetime.utcfromtimestamp(float(i["start_time"]))>max_pilot_date:
timestamp_index.append(datetime.datetime.utcfromtimestamp(float(i["start_time"])))
pilot_new.append(ast.literal_eval(i["description"]))
#print cus_new
if len(pilot_new) > 0:
pilot_df_new = pd.DataFrame(pilot_new, index=timestamp_index, columns=['service_url', "number_of_processes"])
try:
pilot_df = pd.concat([pilot_df, pilot_df_new])
except:
pilot_df = pilot_df_new
pilot_df_h = pilot_df['service_url'].resample("D", how="count")
pilot_df_h.plot(kind="line", color='k', alpha=0.7)
plt.ylabel("Number of Pilots")
plt.xlabel("Day")
plt.savefig("number_pilots.pdf", format="pdf", bbox_inches='tight', pad_inches=0.1)
```
## Store Dataframes for later usage
```
cus_df.save("cus.df")
pilot_df.save("pilot.df")
date_string = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
cus_df.to_csv("cus-"+date_string+".csv", index_label="Date")
pilot_df.to_csv("pilot-"+date_string+".csv", index_label="Date")
```
| PypiClean |
/MTGProxyPrinter-0.25.0.tar.gz/MTGProxyPrinter-0.25.0/mtg_proxy_printer/settings.py |
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import configparser
import logging
import pathlib
import re
import typing
from PyQt5.QtCore import QStandardPaths
import mtg_proxy_printer.app_dirs
import mtg_proxy_printer.meta_data
import mtg_proxy_printer.natsort
from mtg_proxy_printer.units_and_sizes import CardSizes
__all__ = [
"settings",
"DEFAULT_SETTINGS",
"read_settings_from_file",
"write_settings_to_file",
"validate_settings",
"update_stored_version_string",
]
config_file_path = mtg_proxy_printer.app_dirs.data_directories.user_config_path / "MTGProxyPrinter.ini"
settings = configparser.ConfigParser()
DEFAULT_SETTINGS = configparser.ConfigParser()
# Support three-valued boolean logic by adding values that parse to None, instead of True/False.
# This will be used to store “unset” boolean settings.
configparser.ConfigParser.BOOLEAN_STATES.update({
"-1": None,
"unknown": None,
"none": None,
})
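# For illustration: with the extended mapping above, a lookup such as
# settings["application"].getboolean("check-for-application-updates") returns None when the stored
# string is "none", "unknown" or "-1", and a regular bool for the standard true/false spellings.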
VERSION_CHECK_RE = re.compile(
# sourced from https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)"
r"(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][\da-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][\da-zA-Z-]*))*))?"
r"(?:\+(?P<buildmetadata>[\da-zA-Z-]+(?:\.[\da-zA-Z-]+)*))?$"
)
# Below are the default application settings. How to define new ones:
# - Add a key-value pair (String keys and values only) to a section or add a new section
# - If adding a new section, also add a validator function for that section.
# - Add the new key to the validator of the section it’s in. The validator has to check that the value can be properly
# cast into the expected type and perform a value range check.
# - Add the option to the Settings window UI
# - Wire up save and load functionality for the new key in the Settings UI
# - The Settings GUI class has to also do a value range check.
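# A hedged illustration of the first three steps (hypothetical names, not a real setting):
#
#     DEFAULT_SETTINGS["example-section"] = {
#         "example-option": "True",
#     }
#
#     def _validate_example_section(settings: configparser.ConfigParser, section_name: str = "example-section"):
#         section = settings[section_name]
#         defaults = DEFAULT_SETTINGS[section_name]
#         _validate_boolean(section, defaults, "example-option")
#
# The new validator would then be called from validate_settings(), and the Settings window would
# load, save and range-check the option as described above.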
DEFAULT_SETTINGS["images"] = {
"preferred-language": "en",
"automatically-add-opposing-faces": "True",
}
DEFAULT_SETTINGS["card-filter"] = {
"hide-cards-depicting-racism": "True",
"hide-cards-without-images": "True",
"hide-oversized-cards": "False",
"hide-banned-in-brawl": "False",
"hide-banned-in-commander": "False",
"hide-banned-in-historic": "False",
"hide-banned-in-legacy": "False",
"hide-banned-in-modern": "False",
"hide-banned-in-pauper": "False",
"hide-banned-in-penny": "False",
"hide-banned-in-pioneer": "False",
"hide-banned-in-standard": "False",
"hide-banned-in-vintage": "False",
"hide-white-bordered": "False",
"hide-gold-bordered": "False",
"hide-borderless": "False",
"hide-funny-cards": "False",
"hide-token": "False",
"hide-digital-cards": "True",
"hide-reversible-cards": "False",
}
DEFAULT_SETTINGS["documents"] = {
"paper-height-mm": "297",
"paper-width-mm": "210",
"margin-top-mm": "10",
"margin-bottom-mm": "10",
"margin-left-mm": "7",
"margin-right-mm": "7",
"image-spacing-horizontal-mm": "0",
"image-spacing-vertical-mm": "0",
"print-cut-marker": "False",
"pdf-page-count-limit": "0",
"print-sharp-corners": "False",
"print-page-numbers": "False",
"default-document-name": "",
}
DEFAULT_SETTINGS["default-filesystem-paths"] = {
"document-save-path": QStandardPaths.locate(QStandardPaths.DocumentsLocation, "", QStandardPaths.LocateDirectory),
"pdf-export-path": QStandardPaths.locate(QStandardPaths.DocumentsLocation, "", QStandardPaths.LocateDirectory),
"deck-list-search-path": QStandardPaths.locate(QStandardPaths.DownloadLocation, "", QStandardPaths.LocateDirectory),
}
DEFAULT_SETTINGS["gui"] = {
"central-widget-layout": "columnar",
"show-toolbar": "True",
}
VALID_SEARCH_WIDGET_LAYOUTS = {"horizontal", "columnar", "tabbed"}
DEFAULT_SETTINGS["debug"] = {
"cutelog-integration": "False",
"write-log-file": "True",
"log-level": "INFO"
}
VALID_LOG_LEVELS = set(map(logging.getLevelName, range(10, 60, 10)))
DEFAULT_SETTINGS["decklist-import"] = {
"enable-print-guessing-by-default": "True",
"prefer-already-downloaded-images": "True",
"always-translate-deck-lists": "False",
"remove-basic-wastes": "False",
"remove-snow-basics": "False",
}
DEFAULT_SETTINGS["application"] = {
"last-used-version": mtg_proxy_printer.meta_data.__version__,
"check-for-application-updates": "None",
"check-for-card-data-updates": "None",
}
MAX_DOCUMENT_NAME_LENGTH = 200
def read_settings_from_file():
global settings, DEFAULT_SETTINGS
settings.clear()
if not config_file_path.exists():
settings.read_dict(DEFAULT_SETTINGS)
else:
settings.read(config_file_path)
migrate_settings(settings)
read_sections = set(settings.sections())
known_sections = set(DEFAULT_SETTINGS.sections())
# Synchronize sections
for outdated in read_sections - known_sections:
settings.remove_section(outdated)
for new in sorted(known_sections - read_sections):
settings.add_section(new)
# Synchronize individual options
for section in known_sections:
read_options = set(settings[section].keys())
known_options = set(DEFAULT_SETTINGS[section].keys())
for outdated in read_options - known_options:
del settings[section][outdated]
for new in sorted(known_options - read_options):
settings[section][new] = DEFAULT_SETTINGS[section][new]
validate_settings(settings)
def write_settings_to_file():
global settings
if not config_file_path.parent.exists():
config_file_path.parent.mkdir(parents=True)
with config_file_path.open("w") as config_file:
settings.write(config_file)
def update_stored_version_string():
"""Sets the version string stored in the configuration file to the version of the currently running instance."""
settings["application"]["last-used-version"] = DEFAULT_SETTINGS["application"]["last-used-version"]
def was_application_updated() -> bool:
"""
Returns True, if the application was updated since last start, i.e. if the internal version number
is greater than the version string stored in the configuration file. Returns False otherwise.
"""
return mtg_proxy_printer.natsort.str_less_than(
settings["application"]["last-used-version"],
mtg_proxy_printer.meta_data.__version__
)
def validate_settings(read_settings: configparser.ConfigParser):
"""
Called after reading the settings from disk. Ensures that all settings contain valid values and expected types.
I.e. checks that settings that should contain booleans do contain valid booleans, options that should contain
non-negative integers do so, etc. If an option contains an invalid value, the default value is restored.
"""
_validate_card_filter_section(read_settings)
_validate_images_section(read_settings)
_validate_documents_section(read_settings)
_validate_application_section(read_settings)
_validate_gui_section(read_settings)
_validate_debug_section(read_settings)
_validate_decklist_import_section(read_settings)
_validate_default_filesystem_paths_section(read_settings)
def _validate_card_filter_section(settings: configparser.ConfigParser, section_name: str = "card-filter"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
for key in section.keys():
_validate_boolean(section, defaults, key)
def _validate_images_section(settings: configparser.ConfigParser, section_name: str = "images"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
for key in ("automatically-add-opposing-faces",):
_validate_boolean(section, defaults, key)
language = section["preferred-language"]
if not re.fullmatch(r"[a-z]{2}", language):
# Only syntactic validation: Language contains a string of exactly two lower case ascii letters
_restore_default(section, defaults, "preferred-language")
def _validate_documents_section(settings: configparser.ConfigParser, section_name: str = "documents"):
card_size = mtg_proxy_printer.units_and_sizes.CardSizes.OVERSIZED
card_height = card_size.as_mm(card_size.height)
card_width = card_size.as_mm(card_size.width)
section = settings[section_name]
if (document_name := section["default-document-name"]) and len(document_name) > MAX_DOCUMENT_NAME_LENGTH:
section["default-document-name"] = document_name[:MAX_DOCUMENT_NAME_LENGTH-1] + "…"
defaults = DEFAULT_SETTINGS[section_name]
boolean_settings = {"print-cut-marker", "print-sharp-corners", "print-page-numbers", }
string_settings = {"default-document-name", }
# Check syntax
for key in section.keys():
if key in boolean_settings:
_validate_boolean(section, defaults, key)
elif key in string_settings:
pass
else:
_validate_non_negative_int(section, defaults, key)
# Check some semantic properties
available_height = section.getint("paper-height-mm") - \
(section.getint("margin-top-mm") + section.getint("margin-bottom-mm"))
available_width = section.getint("paper-width-mm") - \
(section.getint("margin-left-mm") + section.getint("margin-right-mm"))
if available_height < card_height:
# Can not fit a single card on a page
section["paper-height-mm"] = defaults["paper-height-mm"]
section["margin-top-mm"] = defaults["margin-top-mm"]
section["margin-bottom-mm"] = defaults["margin-bottom-mm"]
if available_width < card_width:
# Can not fit a single card on a page
section["paper-width-mm"] = defaults["paper-width-mm"]
section["margin-left-mm"] = defaults["margin-left-mm"]
section["margin-right-mm"] = defaults["margin-right-mm"]
# Re-calculate, if width or height was reset
available_height = section.getint("paper-height-mm") - \
(section.getint("margin-top-mm") + section.getint("margin-bottom-mm"))
available_width = section.getint("paper-width-mm") - \
(section.getint("margin-left-mm") + section.getint("margin-right-mm"))
if section.getint("image-spacing-vertical-mm") > (available_spacing_vertical := available_height - card_height):
# Prevent vertical spacing from overlapping with bottom margin
section["image-spacing-vertical-mm"] = str(available_spacing_vertical)
if section.getint("image-spacing-horizontal-mm") > (available_spacing_horizontal := available_width - card_width):
# Prevent horizontal spacing from overlapping with right margin
section["image-spacing-horizontal-mm"] = str(available_spacing_horizontal)
def _validate_application_section(settings: configparser.ConfigParser, section_name: str = "application"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
if not VERSION_CHECK_RE.fullmatch(section["last-used-version"]):
section["last-used-version"] = defaults["last-used-version"]
for option in ("check-for-application-updates", "check-for-card-data-updates"):
_validate_three_valued_boolean(section, defaults, option)
def _validate_gui_section(settings: configparser.ConfigParser, section_name: str = "gui"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
_validate_string_is_in_set(section, defaults, VALID_SEARCH_WIDGET_LAYOUTS, "central-widget-layout")
_validate_boolean(section, defaults, "show-toolbar")
def _validate_debug_section(settings: configparser.ConfigParser, section_name: str = "debug"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
_validate_boolean(section, defaults, "cutelog-integration")
_validate_boolean(section, defaults, "write-log-file")
_validate_string_is_in_set(section, defaults, VALID_LOG_LEVELS, "log-level")
def _validate_decklist_import_section(settings: configparser.ConfigParser, section_name: str = "decklist-import"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
for key in section.keys():
_validate_boolean(section, defaults, key)
def _validate_default_filesystem_paths_section(
settings: configparser.ConfigParser, section_name: str = "default-filesystem-paths"):
section = settings[section_name]
defaults = DEFAULT_SETTINGS[section_name]
for key in section.keys():
_validate_path_to_directory(section, defaults, key)
def _validate_path_to_directory(section: configparser.SectionProxy, defaults: configparser.SectionProxy, key: str):
try:
if not pathlib.Path(section[key]).resolve().is_dir():
raise ValueError
except Exception:
_restore_default(section, defaults, key)
def _validate_boolean(section: configparser.SectionProxy, defaults: configparser.SectionProxy, key: str):
try:
if section.getboolean(key) is None:
raise ValueError
except ValueError:
_restore_default(section, defaults, key)
def _validate_three_valued_boolean(section: configparser.SectionProxy, defaults: configparser.SectionProxy, key: str):
try:
section.getboolean(key)
except ValueError:
_restore_default(section, defaults, key)
def _validate_non_negative_int(section: configparser.SectionProxy, defaults: configparser.SectionProxy, key: str):
try:
if section.getint(key) < 0:
raise ValueError
except ValueError:
_restore_default(section, defaults, key)
def _validate_string_is_in_set(
section: configparser.SectionProxy, defaults: configparser.SectionProxy,
valid_options: typing.Set[str], key: str):
"""Checks if the value of the option is one of the allowed values, as determined by the given set of strings."""
if section[key] not in valid_options:
_restore_default(section, defaults, key)
def _restore_default(section: configparser.SectionProxy, defaults: configparser.SectionProxy, key: str):
section[key] = defaults[key]
def migrate_settings(settings: configparser.ConfigParser):
_migrate_layout_setting(settings)
_migrate_download_settings(settings)
_migrate_default_save_paths_settings(settings)
def _migrate_layout_setting(settings: configparser.ConfigParser):
try:
gui_section = settings["gui"]
layout = gui_section["search-widget-layout"]
except KeyError:
return
else:
if layout == "vertical":
layout = "columnar"
gui_section["central-widget-layout"] = layout
def _migrate_download_settings(settings: configparser.ConfigParser):
target_section_name = "card-filter"
if settings.has_section(target_section_name) or not settings.has_section("downloads"):
return
download_section = settings["downloads"]
settings.add_section(target_section_name)
filter_section = settings[target_section_name]
for source_setting in settings["downloads"].keys():
target_setting = source_setting.replace("download-", "hide-")
try:
new_value = not download_section.getboolean(source_setting)
except ValueError:
pass
else:
filter_section[target_setting] = str(new_value)
def _migrate_default_save_paths_settings(settings: configparser.ConfigParser):
source_section_name = "default-save-paths"
target_section_name = "default-filesystem-paths"
if settings.has_section(target_section_name) or not settings.has_section(source_section_name):
return
settings.add_section(target_section_name)
settings[target_section_name].update(settings[source_section_name])
def _migrate_print_guessing_settings(settings: configparser.ConfigParser):
source_section_name = "print-guessing"
target_section_name = "decklist-import"
if settings.has_section(target_section_name) or not settings.has_section(source_section_name):
return
settings.add_section(target_section_name)
target = settings[target_section_name]
source = settings[source_section_name]
# Force-overwrite with the new default when migrating. Having this disabled has negative UX impact, so should not
# be disabled by default.
target["enable-print-guessing-by-default"] = "True"
target["prefer-already-downloaded-images"] = source["prefer-already-downloaded"]
target["always-translate-deck-lists"] = source["always-translate-deck-lists"]
# Read the settings from file during module import
# This has to be performed before any modules containing GUI classes are imported.
read_settings_from_file() | PypiClean |