id | text | dataset_id
---|---|---
/Euphorie-15.0.2.tar.gz/Euphorie-15.0.2/docs/manuals/creation-guide.rst | ==========================================
A Guide to creating a Risk Assessment tool
==========================================
1. Introduction
===============
Your goal is to create the content of the OiRA tool for enterprises in your sector, and to offer this sector-specific tool to them.
The OiRA tool promotes a stepwise approach to risk assessment and is made up of 5 steps:
* **Preparation** > the sector introduces the end-users (enterprises) to the risk assessment
* **Identification** > the end-user goes through the hazards/problems and answers YES or NO
* **Evaluation** > the end-user evaluates the risks for each problem/hazard spotted
* **Action plan** > the end-user fills in an action plan with measures to tackle all stated risks
* **Report** > the action plan becomes a report to be downloaded and printed
1.1 Keep in mind your end-user
------------------------------
It is important to **keep in mind your end-user: the micro and small-sized enterprise (employer and worker(s))**. The structure of the risk assessment tool should be as relevant as possible to the daily activities of these enterprises; the end-user thinks and acts in terms of his own business processes.
Often, the expert’s way of thinking differs from the practice of the end-user. The latter thinks in terms of his own work processes and uses his own language. Some examples:
* the expert thinks of physical workload; *the end-user of physical work*
* the expert thinks of the thermal environment; *the end-user of working in the heat/in the cold*
* the expert thinks of safety and creates a module containing everything in that area; *the end-user may think of opening and closing a shop, for example, and what that involves, or dealing with aggressive customers and what to do about them.*
1.2 Use easy language
---------------------
**Structuring the content of the risk assessment tool so that it is in line with the way the average end-user thinks and acts** makes the content recognisable, and it makes it easier to carry out an action plan to tackle the risks with feasible measures.
Another decisive aspect is the language used. The **language** should be easy to understand with no need for interpretation, referring to things by names that are familiar and usual to enterprises.
Short sentences (ideally no longer than ten words) and clear everyday language that can easily be read by the layman will prevent the end-user from developing an aversion, and will enable him to draw up an assessment and use the OiRA tool properly.
At the beginning of the tool you will be given the chance to write a short introductory text sending a positive **and encouraging message** regarding:
* the **importance** of risk assessment
* the fact that risk assessment is **not necessarily complicated** (the idea is to help demystify risk assessment)
* the fact that the tool has especially been conceived to **meet the needs of the enterprises** in the sector
It is important that the text is not too long; otherwise it could discourage the end-user from using the tool.
2. Team
======
Although it is important to keep the project team manageable in terms of size, it should preferably consist of:
* representative(s) of the trade association(s)
* representative(s) of the trade union(s)
* the OiRA developer
* an expert in occupational safety and health (with knowledge of and affinity with the sector)
* end-users (e.g. management or staff from companies, trade union officials, etc.)
3. Structure
============
3.1 Structure the content hierarchically
----------------------------------------
Before you start creating an OiRA tool, we recommend considering the number of matters which you want to address. Thorough consideration of the structure will pay dividends later on, so classify the subjects in a way that is relevant to end-users.
The system offers a way to group topics, subtopics and types of risks together. The main goal of this grouping is to make it easier/more logical for the end-user to complete the risk assessment tool. Your risk assessment tool will therefore consist of:
.. image:: images/creation/module.png
:align: left
:height: 32 px
**MODULES** = subjects (locations, activities, …)
*Example*:
Module 1: *Hair Shampooing* (hairdresser sector)
.. image:: images/creation/submodule.png
:align: left
:height: 32 px
**SUB-MODULES** (not compulsory) = sub-subjects
*Example*:
Sub-module 1: *Working posture*
Sub-module 2: *Contact with water and cosmetic products*
.. image:: images/creation/risk.png
:align: left
:height: 32 px
**RISKS** = statements about a situation which is in order
*Example*:
*1.1 The shampoo station is adjustable*
*2.1 Suitable protective equipment, such as disposable safety gloves, is purchased*
.. image:: images/creation/solution.png
:align: left
:height: 32 px
**SOLUTIONS** = preventive measures recommended by the expert to solve the problem
*Example*:
*1.1 Taking regular breaks to be able to recover from physical work*
*2.1 Using dust-free products*
The system also offers the possibility to:
* skip one/a whole set of modules in case the content does not apply to the company activity
* repeat some modules in the case of enterprises having multiple locations.
3.2 Think about the risk as an affirmative statement
--------------------------------------------------------------
Once you have decided about the main structure of the risk assessment tool you can start to identify and explain the various risks.
The system works with **affirmative statements**; that is, it states **whether a situation is ‘in order’ (the goal to be attained) or ‘not in order’**.
.. note::
Example: Good lighting is present.
The end-user's answer is either a clear ‘yes’ or ‘no’. If the end-user answers NO (= the situation is not in order), the problem is automatically included in the action plan step and the end-user will have to propose a measure to tackle the risk.
3.3 Consider the different types of risks
-----------------------------------------
You can choose from 3 types of risks:
* priority risk: refers to a risk considered by the sector to be among the high risks in the sector.
.. note::
Example: Working at height in the construction sector: the scaffold is erected on a firm foundation
* risk: refers to existing risks at the workplace or linked to the work carried out.
.. note::
Example: All office chairs are adjustable
To identify and evaluate the above two types of risk it is often necessary to examine the workplace (to walk around the workplace and look at what could cause harm; consult workers, …).
* policy: refers to agreements, procedures, and management decisions regarding OSH issues.
.. note::
Example: Manufacturers are regularly asked about alternative safe products
These policy statements can be answered from behind a desk (no need to examine the workplace).
3.4 Pre-set evaluation for the risk
-----------------------------------
For each “risk” type you can choose from 2 evaluation methods:
* **Estimated**: by selecting from **high, medium** or **low**.
* **Calculated**: by evaluating the **probability, frequency** and **severity** separately. The OiRA tool will then automatically calculate the priority.
End-users will not have to evaluate the following risks in the “Evaluation” step:
* Priority risks (considered by default as "high priority" and displayed as “high” in the action plan)
* Policy (strictly speaking this is not a risk).
3.5 Propose solutions
---------------------
The sector is generally well-informed of the risks that are most likely to lead to occupational accidents and diseases. In order to help the end-user to find solutions to these risks, you can include the solutions recommended by the sector/experts. While working on the action plan, the end-user will have the possibility to select the solutions and rework them (modify the text) according to the situation that prevails in their enterprise.
.. note::
All the necessary documents are available on the OiRA community site http://www.oiraproject.eu/documentation
| PypiClean |
/AstroKundli-2.0.0.tar.gz/AstroKundli-2.0.0/FlatlibAstroSidereal/angle.py | import math
# === Angular utilities === #
def norm(angle):
""" Normalizes an angle between 0 and 360. """
return angle % 360
def znorm(angle):
""" Normalizes an angle between -180 and 180. """
angle = angle % 360
return angle if angle <= 180 else angle - 360
def distance(angle1, angle2):
""" Angular distance from angle1 to angle2 (ccw). """
return norm(angle2 - angle1)
def closestdistance(angle1, angle2):
""" Closest distance from angle1 to angle2 (ccw is positive). """
return znorm(angle2 - angle1)
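# Illustrative examples (not part of the original module): distance() measures
# counter-clockwise and wraps at 360 degrees, while closestdistance() returns a
# signed value in (-180, 180].
#
#   norm(-30)                 # -> 330
#   znorm(270)                # -> -90
#   distance(350, 10)         # -> 20
#   closestdistance(10, 350)  # -> -20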
# === Signed Lists utilities === #
def _fixSlist(slist):
""" Guarantees that a signed list has exactly four elements. """
slist.extend([0] * (4-len(slist)))
return slist[:4]
def _roundSlist(slist):
""" Rounds a signed list over the last element and removes it. """
slist[-1] = 60 if slist[-1] >= 30 else 0
for i in range(len(slist)-1, 1, -1):
if slist[i] == 60:
slist[i] = 0
slist[i-1] += 1
return slist[:-1]
# === Base conversions === #
def strSlist(string):
""" Converts angle string to signed list. """
sign = '-' if string[0] == '-' else '+'
values = [abs(int(x)) for x in string.split(':')]
return _fixSlist(list(sign) + values)
def slistStr(slist):
""" Converts signed list to angle string. """
slist = _fixSlist(slist)
string = ':'.join(['%02d' % x for x in slist[1:]])
return slist[0] + string
def slistFloat(slist):
""" Converts signed list to float. """
values = [v / 60**(i) for (i,v) in enumerate(slist[1:])]
value = sum(values)
return -value if slist[0] == '-' else value
def floatSlist(value):
""" Converts float to signed list. """
slist = ['+', 0, 0, 0, 0]
if value < 0:
slist[0] = '-'
value = abs(value)
for i in range(1,5):
slist[i] = math.floor(value)
value = (value - slist[i]) * 60
return _roundSlist(slist)
def strFloat(string):
""" Converts angle string to float. """
slist = strSlist(string)
return slistFloat(slist)
def floatStr(value):
""" Converts angle float to string. """
slist = floatSlist(value)
return slistStr(slist)
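# Illustrative round trips through the conversions above (not part of the original
# module); a signed list has the form [sign, degrees, minutes, seconds]:
#
#   floatSlist(10.5)        # -> ['+', 10, 30, 0]
#   floatStr(10.5)          # -> '+10:30:00'
#   strFloat('-00:45:00')   # -> -0.75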
# === Direct conversions === #
def toFloat(value):
""" Converts string or signed list to float. """
if isinstance(value, str):
return strFloat(value)
elif isinstance(value, list):
return slistFloat(value)
else:
return value
def toList(value):
""" Converts angle float to signed list. """
return floatSlist(value)
def toString(value):
""" Converts angle float to string. """
return floatStr(value) | PypiClean |
/AppiumRunner-0.0.1-py3-none-any.whl/appiumrunner/excel_reader.py | from xlrd import open_workbook
from appiumrunner.step_model import StepModel as model
class ExcelReader():
@staticmethod
def read_excel(excel_path):
reader = open_workbook(excel_path)
names = reader.sheet_names()
# 1. Read the steps and store them as lists: {"login": [step1, step2]}
step_dict = {}
for name in names:
if name == 'data':
continue
step_dict[name] = []
case_xls = reader.sheet_by_name(name)
for i in range(case_xls.nrows):
if i == 0: # skip the header row
continue
smart_list = [] # each list holds one step
for j in range(case_xls.ncols):
smart_list.append(case_xls.cell(i, j).value)
mode = model()
mode.sort = smart_list[0]
mode.desc = smart_list[1]
mode.action = smart_list[2]
mode.searchType = smart_list[3]
mode.searchvalue = smart_list[4]
mode.searchindex = smart_list[5]
mode.validateSource = smart_list[6]
mode.validateAttr = smart_list[7]
mode.validateType = smart_list[8]
mode.validateData = smart_list[9]
step_dict[name].append(mode) # [mode1, mode2, mode3]
# 2. Read the data and store it as lists: {"login": [data1, data2]}
data_dict = {}
data_xls = reader.sheet_by_name("data")
for i in range(data_xls.nrows):
name = data_xls.cell(i, 0).value
data_dict[name] = []
for j in range(data_xls.ncols):
value = data_xls.cell(i, j).value.strip()
if (j == 0) or (value == ""):
continue
data_dict[name].append(eval(value))
# 3. Convert the format to [{name, desc, examples, steps}]
result = []
for case_name in list(step_dict.keys()):
if data_dict[case_name]:
data_list = data_dict[case_name]
num = 0
for data in data_list:
result.append({
"name": case_name,
"steps": step_dict[case_name],
"examples": data,
"desc": "{}_{}".format(case_name, num)
})
num += 1
else:
result.append({
"name": case_name,
"steps": step_dict[case_name],
"examples": {},
"desc": "{}_0".format(case_name)
})
return result | PypiClean |
/Gemtography-0.0.2-py3-none-any.whl/Gemtography-0.0.2.dist-info/LICENSE.md | Copyright 2022 Vlad Usatii
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| PypiClean |
/nosedjango-1.1.0.tar.gz/nosedjango-1.1.0/nosedjango/plugins/cherrypy_plugin.py | import os
import time
from django.core.handlers.wsgi import WSGIHandler
from nosedjango.plugins.base_plugin import Plugin
# Next 3 plugins taken from django-sane-testing:
# http://github.com/Almad/django-sane-testing
# By: Lukas "Almad" Linhart http://almad.net/
#####
# It was a nice try with Django server being threaded.
# It still sucks for some cases (did I mentioned urllib2?),
# so provide cherrypy as working alternative.
# Do imports in method to avoid CP as dependency
# Code originally written by Mikeal Rogers under Apache License.
#####
DEFAULT_LIVE_SERVER_ADDRESS = '0.0.0.0'
DEFAULT_LIVE_SERVER_PORT = '8000'
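# The address and port above can be overridden from the test project's Django
# settings (illustrative values; the defaults are used when the settings are absent):
#
#   LIVE_SERVER_ADDRESS = '127.0.0.1'
#   LIVE_SERVER_PORT = '8001'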
class CherryPyLiveServerPlugin(Plugin):
name = 'cherrypyliveserver'
activation_parameter = '--with-cherrypyliveserver'
nosedjango = True
def __init__(self):
Plugin.__init__(self)
self.server_started = False
self.server_thread = None
def options(self, parser, env=os.environ):
Plugin.options(self, parser, env)
def configure(self, options, config):
Plugin.configure(self, options, config)
def startTest(self, test):
from django.conf import settings
if not self.server_started and \
getattr(test, 'start_live_server', False):
self.start_server(
address=getattr(
settings,
"LIVE_SERVER_ADDRESS",
DEFAULT_LIVE_SERVER_ADDRESS,
),
port=int(getattr(
settings,
"LIVE_SERVER_PORT",
DEFAULT_LIVE_SERVER_PORT,
))
)
self.server_started = True
def finalize(self, result):
self.stop_test_server()
def start_server(self, address='0.0.0.0', port=8000):
from django.contrib.staticfiles.handlers import StaticFilesHandler
_application = StaticFilesHandler(WSGIHandler())
def application(environ, start_response):
environ['PATH_INFO'] = environ['SCRIPT_NAME'] + environ['PATH_INFO'] # noqa
return _application(environ, start_response)
from cherrypy.wsgiserver import CherryPyWSGIServer
from threading import Thread
self.httpd = CherryPyWSGIServer(
(address, port),
application,
server_name='django-test-http',
)
self.httpd_thread = Thread(target=self.httpd.start)
self.httpd_thread.start()
# FIXME: This could be avoided by passing self to thread class starting
# django and waiting for Event lock
time.sleep(.5)
def stop_test_server(self):
if self.server_started:
self.httpd.stop()
self.server_started = False | PypiClean |
/BlueWhale3-ImageAnalytics-0.6.1.tar.gz/BlueWhale3-ImageAnalytics-0.6.1/doc/widgets/imageviewer.md | Image Viewer
============
Displays images that come with a data set.
**Inputs**
- Data: A data set with images.
**Outputs**
- Data: Images that come with the data.
- Selected images: Images selected in the widget.
The **Image Viewer** widget can display images from a data set, which are
stored locally or on the internet. The widget will look for an attribute with *type=image* in the third header row. It can be used for image comparison, while looking for similarities or discrepancies between selected data instances (e.g. bacterial growth or bitmap representations of handwriting).
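For illustration, a minimal tab-separated data file could declare the image column as in the hypothetical sketch below: the first header row gives the column names, the second the value types, and the third the *type=image* flag that tells the widget where to find the image paths or URLs (file names and URLs here are made up).
```
name        image
string      string
meta        meta type=image
sample_1    /images/sample_1.png
sample_2    http://example.com/sample_2.png
```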

1. Information on the data set
2. Select the column with image data (links).
3. Select the column with image titles.
4. Zoom in or out.
5. Saves the visualization in a file.
6. Tick the box on the left to commit changes automatically.
Alternatively, click *Send*.
Examples
--------
A very simple way to use this widget is to connect the **File** widget with **Image Viewer** and see all the images that come with your data set. You can also visualize images from [Import Images](importimages.md).

Alternatively, you can visualize only selected instances, as shown in the example below.

| PypiClean |
/EARL-pytorch-0.5.1.tar.gz/EARL-pytorch-0.5.1/earl_pytorch/util/util.py | import numpy as np
from torch import nn
boost_locations = [
(0.0, -4240.0, 70.0),
(-1792.0, -4184.0, 70.0),
(1792.0, -4184.0, 70.0),
(-3072.0, -4096.0, 73.0),
(3072.0, -4096.0, 73.0),
(- 940.0, -3308.0, 70.0),
(940.0, -3308.0, 70.0),
(0.0, -2816.0, 70.0),
(-3584.0, -2484.0, 70.0),
(3584.0, -2484.0, 70.0),
(-1788.0, -2300.0, 70.0),
(1788.0, -2300.0, 70.0),
(-2048.0, -1036.0, 70.0),
(0.0, -1024.0, 70.0),
(2048.0, -1036.0, 70.0),
(-3584.0, 0.0, 73.0),
(-1024.0, 0.0, 70.0),
(1024.0, 0.0, 70.0),
(3584.0, 0.0, 73.0),
(-2048.0, 1036.0, 70.0),
(0.0, 1024.0, 70.0),
(2048.0, 1036.0, 70.0),
(-1788.0, 2300.0, 70.0),
(1788.0, 2300.0, 70.0),
(-3584.0, 2484.0, 70.0),
(3584.0, 2484.0, 70.0),
(0.0, 2816.0, 70.0),
(- 940.0, 3310.0, 70.0),
(940.0, 3308.0, 70.0),
(-3072.0, 4096.0, 73.0),
(3072.0, 4096.0, 73.0),
(-1792.0, 4184.0, 70.0),
(1792.0, 4184.0, 70.0),
(0.0, 4240.0, 70.0),
]
def rotator_to_matrix(yaw, pitch, roll):
cr = np.cos(roll)
sr = np.sin(roll)
cp = np.cos(pitch)
sp = np.sin(pitch)
cy = np.cos(yaw)
sy = np.sin(yaw)
forward = [cp * cy, cp * sy, sp]
left = [cy * sp * sr - cr * sy, sy * sp * sr + cr * cy, -cp * sr]
up = [-cr * cy * sp - sr * sy, -cr * sy * sp + sr * cy, cp * cr]
# forward = [cp * cy, cy * sp * sr - cr * sy, -cr * cy * sp - sr * sy]
# right = [cp * sy, sy * sp * sr + cr * cy, -cr * sy * sp + sr * cy]
# up = [sp, -cp * sr, cp * cr]
return forward, up
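# Illustrative sanity check (not part of the original module): with zero yaw,
# pitch and roll the rotation is the identity, so the forward vector points
# along +x and the up vector along +z.
#
#   forward, up = rotator_to_matrix(0.0, 0.0, 0.0)
#   # forward == [1.0, 0.0, 0.0], up == [0.0, 0.0, 1.0]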
class NGPModel(nn.Module):
def __init__(self, earl):
super().__init__()
self.earl = earl
self.score = nn.Linear(earl.n_dims, 2)
def forward(self, *args, **kwargs):
o = self.earl(*args, **kwargs)
return self.score(o[:, 0, :])
def mlp(in_features, hidden_features, hidden_layers, out_features=None, activation=nn.ReLU):
layers = [nn.Linear(in_features, hidden_features), activation()]
for _ in range(hidden_layers):
layers.extend([
nn.Linear(hidden_features, hidden_features),
activation()
])
if out_features is not None:
layers.append(nn.Linear(hidden_features, out_features))
return nn.Sequential(*layers) | PypiClean |
/EnvComparison-0.1.5.tar.gz/EnvComparison-0.1.5/compare.py |
from EnvComparison import connection, ssh_hosts, server, differ
import sys
import tornado.httpserver
import tornado.ioloop
import tornado.web
import os
def compare_servers(opt_1, opt_2, host_list, ssh_config):
connection_pool = [
connection.Connection(ssh_config, host_list[int(opt_1)]),
connection.Connection(ssh_config, host_list[int(opt_2)])
]
for conn in connection_pool:
conn.get_platform_details()
conn.get_platform_family()
conn.get_system_packages()
conn.get_system_arch()
conn.get_fqdn()
conn.get_php_packages()
conn.get_ruby_packages()
conn.get_pip_packages()
global server1_dict
global server2_dict
server1_dict, server2_dict = connection_pool[0].system, connection_pool[1].system
global samekeysandvalues
global samekeysdiffvalues
global missingkeys
global extrakeys
samekeysandvalues, samekeysdiffvalues, missingkeys, extrakeys = differ.diffdict(connection_pool[0].system, connection_pool[1].system)
def main():
if len(sys.argv) != 2:
print "Please provide the location of your ssh config file"
sys.exit()
ssh_config = sys.argv[1]
host_list = ssh_hosts.get_host_list(ssh_config)
for key, value in host_list.items():
print "[",key,"] ",value
opt_1 = raw_input('Please select the first server:')
opt_2 = raw_input('Please select the second server:')
compare_servers(int(opt_1), int(opt_2), host_list, ssh_config)
class MainHandler(tornado.web.RequestHandler):
def get(self):
samekeysandvalues
self.render("main.html",
server1_dict=server1_dict,
server2_dict=server2_dict,
samekeysandvalues=samekeysandvalues,
samekeysdiffvalues=samekeysdiffvalues,
missingkeys=missingkeys,
extrakeys=extrakeys,
)
class Application(tornado.web.Application):
def __init__(self):
handlers = [
(r"/", MainHandler),
]
settings = dict(
template_path=os.path.join(os.path.dirname(__file__), "templates"),
)
tornado.web.Application.__init__(self, handlers, **settings)
def server():
http_server = tornado.httpserver.HTTPServer(Application())
http_server.listen(8888)
print "Please browse to http://127.0.0.1:8888/"
tornado.ioloop.IOLoop.instance().start()
if __name__ == "__main__":
main()
server() | PypiClean |
/MatchZoo-2.2.0.tar.gz/MatchZoo-2.2.0/matchzoo/datasets/wiki_qa/load_data.py |
import typing
import csv
from pathlib import Path
import keras
import pandas as pd
import matchzoo
_url = "https://download.microsoft.com/download/E/5/F/" \
"E5FCFCEE-7005-4814-853D-DAA7C66507E0/WikiQACorpus.zip"
def load_data(
stage: str = 'train',
task: str = 'ranking',
filtered: bool = False,
return_classes: bool = False
) -> typing.Union[matchzoo.DataPack, tuple]:
"""
Load WikiQA data.
:param stage: One of `train`, `dev`, and `test`.
:param task: Could be one of `ranking`, `classification` or a
:class:`matchzoo.engine.BaseTask` instance.
:param filtered: Whether remove the questions without correct answers.
:param return_classes: `True` to return classes for classification task,
`False` otherwise.
:return: A DataPack unless `task` is `classification` and `return_classes`
is `True`: a tuple of `(DataPack, classes)` in that case.
"""
if stage not in ('train', 'dev', 'test'):
raise ValueError(f"{stage} is not a valid stage."
f"Must be one of `train`, `dev`, and `test`.")
data_root = _download_data()
file_path = data_root.joinpath(f'WikiQA-{stage}.tsv')
data_pack = _read_data(file_path)
if filtered and stage in ('dev', 'test'):
ref_path = data_root.joinpath(f'WikiQA-{stage}.ref')
filter_ref_path = data_root.joinpath(f'WikiQA-{stage}-filtered.ref')
with open(filter_ref_path, mode='r') as f:
filtered_ids = set([line.split()[0] for line in f])
filtered_lines = []
with open(ref_path, mode='r') as f:
for idx, line in enumerate(f.readlines()):
if line.split()[0] in filtered_ids:
filtered_lines.append(idx)
data_pack = data_pack[filtered_lines]
if task == 'ranking':
task = matchzoo.tasks.Ranking()
if task == 'classification':
task = matchzoo.tasks.Classification()
if isinstance(task, matchzoo.tasks.Ranking):
return data_pack
elif isinstance(task, matchzoo.tasks.Classification):
data_pack.one_hot_encode_label(task.num_classes, inplace=True)
if return_classes:
return data_pack, [False, True]
else:
return data_pack
else:
raise ValueError(f"{task} is not a valid task."
f"Must be one of `Ranking` and `Classification`.")
def _download_data():
ref_path = keras.utils.data_utils.get_file(
'wikiqa', _url, extract=True,
cache_dir=matchzoo.USER_DATA_DIR,
cache_subdir='wiki_qa'
)
return Path(ref_path).parent.joinpath('WikiQACorpus')
def _read_data(path):
table = pd.read_csv(path, sep='\t', header=0, quoting=csv.QUOTE_NONE)
df = pd.DataFrame({
'text_left': table['Question'],
'text_right': table['Sentence'],
'id_left': table['QuestionID'],
'id_right': table['SentenceID'],
'label': table['Label']
})
return matchzoo.pack(df) | PypiClean |
/ERP-0.27.tar.gz/ERP-0.27/erp/base/storage/views.py | from django.shortcuts import get_object_or_404, HttpResponse
from django.views.generic import ListView, TemplateView, DetailView, FormView
from django.forms.models import modelform_factory, inlineformset_factory
from django.contrib.contenttypes.forms import generic_inlineformset_factory
from django.contrib.contenttypes.models import ContentType
from . import models
from erp.extras.views import AjaxFormMixin
class Index(TemplateView):
template_name = 'storage/index.html'
class NomenclatureList(ListView):
model = models.Nomenclature
template_name = 'storage/products_list.html'
context_object_name = 'nomenclature'
paginate_by = 30
class NomenclatureView(DetailView):
template_name = 'storage/product_info.html'
context_object_name = 'nomenclature'
def get_object(self, queryset=None):
nom = get_object_or_404(models.Category, slug=self.kwargs.get('slug'))
return nom
def get_context_data(self, **kwargs):
context = super(NomenclatureView, self).get_context_data(**kwargs)
context['items'] = self.object.st_items
return context
class AddNomenclature(AjaxFormMixin):
template_name = 'storage/form_info.html'
form_class = modelform_factory(models.Nomenclature, fields=['title', 'cat'])
class AddItem(AjaxFormMixin):
template_name = 'storage/form_info.html'
def get_form_class(self):
if self.request.GET.get('type') == 'static':
form = modelform_factory(models.StaticItem, fields=['title', 'slug', 'pos', 'invent_no'])
elif self.request.GET.get('type') == 'dynamic':
form = modelform_factory(models.DynamicItem, fields=['title', 'slug', 'pos'])
return form
class AddLog(AjaxFormMixin):
def get_form_class(self):
if self.request.GET.get('type') == 'adoption':
form = generic_inlineformset_factory(models.StorageItemAdoptionLog,
fields=['waybill', 'comment', 'content_type', 'object_id'])
elif self.request.GET.get('type') == 'shipment':
form = generic_inlineformset_factory(models.StorageItemShipmentLog,
fields=['shipped_to', 'comment', 'content_type', 'object_id'])
else:
return HttpResponse('Wrong type!')
return form
def form_valid(self, form):
if self.request.GET.get('item') == 'dynamic':
content_type = ContentType.objects.get_for_model(models.DynamicItem)
elif self.request.GET.get('item') == 'static':
content_type = ContentType.objects.get_for_model(models.StaticItem)
object_id = self.request.GET.get('id')
if self.request.GET.get('type') == 'adoption':
for f in form:
fields = f.cleaned_data
if not bool(fields):
continue
fields['content_type'] = content_type
fields['object_id'] = object_id
log = models.StorageItemAdoptionLog()
log.save_from_ajax(fields)
elif self.request.GET.get('type') == 'shipment':
for f in form:
fields = f.cleaned_data
if not bool(fields):
continue
fields['content_type'] = content_type
fields['object_id'] = object_id
log = models.StorageItemShipmentLog()
log.save_from_ajax(fields)
else:
return HttpResponse('Wrong type!')
return HttpResponse('OK')
class AddCategory(AjaxFormMixin):
def get_form_class(self):
form = modelform_factory(models.Category, fields=['title', 'slug'])
return form
class ItemView(DetailView, FormView):
def get_form_class(self):
if isinstance(self.object, models.StaticItem):
form = inlineformset_factory(models.StaticItem, models.StorageItemAdoptionLog)
return form
#
# class CategoryCreateView(AjaxFormMixin):
#
# form_class = modelform_factory(models.Category)
#
#
# class CategoryUpdateView(AjaxFormMixin):
#
# form_class = modelform_factory(models.Category)
#
#
# class GoodsCreateView(AjaxFormMixin):
#
# form_class = modelform_factory(models.StorageItem)
#
#
# class GoodsUpdateView(AjaxFormMixin):
#
# form_class = modelform_factory(models.StorageItem) | PypiClean |
/MolScribe-1.1.1.tar.gz/MolScribe-1.1.1/molscribe/tokenizer.py | import os
import json
import random
import numpy as np
from SmilesPE.pretokenizer import atomwise_tokenizer
PAD = '<pad>'
SOS = '<sos>'
EOS = '<eos>'
UNK = '<unk>'
MASK = '<mask>'
PAD_ID = 0
SOS_ID = 1
EOS_ID = 2
UNK_ID = 3
MASK_ID = 4
class Tokenizer(object):
def __init__(self, path=None):
self.stoi = {}
self.itos = {}
if path:
self.load(path)
def __len__(self):
return len(self.stoi)
@property
def output_constraint(self):
return False
def save(self, path):
with open(path, 'w') as f:
json.dump(self.stoi, f)
def load(self, path):
with open(path) as f:
self.stoi = json.load(f)
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def fit_on_texts(self, texts):
vocab = set()
for text in texts:
vocab.update(text.split(' '))
vocab = [PAD, SOS, EOS, UNK] + list(vocab)
for i, s in enumerate(vocab):
self.stoi[s] = i
self.itos = {item[1]: item[0] for item in self.stoi.items()}
assert self.stoi[PAD] == PAD_ID
assert self.stoi[SOS] == SOS_ID
assert self.stoi[EOS] == EOS_ID
assert self.stoi[UNK] == UNK_ID
def text_to_sequence(self, text, tokenized=True):
sequence = []
sequence.append(self.stoi['<sos>'])
if tokenized:
tokens = text.split(' ')
else:
tokens = atomwise_tokenizer(text)
for s in tokens:
if s not in self.stoi:
s = '<unk>'
sequence.append(self.stoi[s])
sequence.append(self.stoi['<eos>'])
return sequence
def texts_to_sequences(self, texts):
sequences = []
for text in texts:
sequence = self.text_to_sequence(text)
sequences.append(sequence)
return sequences
def sequence_to_text(self, sequence):
return ''.join(list(map(lambda i: self.itos[i], sequence)))
def sequences_to_texts(self, sequences):
texts = []
for sequence in sequences:
text = self.sequence_to_text(sequence)
texts.append(text)
return texts
def predict_caption(self, sequence):
caption = ''
for i in sequence:
if i == self.stoi['<eos>'] or i == self.stoi['<pad>']:
break
caption += self.itos[i]
return caption
def predict_captions(self, sequences):
captions = []
for sequence in sequences:
caption = self.predict_caption(sequence)
captions.append(caption)
return captions
def sequence_to_smiles(self, sequence):
return {'smiles': self.predict_caption(sequence)}
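# Illustrative usage of the basic Tokenizer above (hypothetical call; assumes a
# vocabulary file in which the listed tokens occur, otherwise they map to <unk>):
#
#   tokenizer = Tokenizer('vocab/vocab_uspto.json')
#   ids = tokenizer.text_to_sequence('C C O', tokenized=True)   # <sos> C C O <eos>
#   tokenizer.predict_caption(ids[1:])                          # -> 'CCO'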
class NodeTokenizer(Tokenizer):
def __init__(self, input_size=100, path=None, sep_xy=False, continuous_coords=False, debug=False):
super().__init__(path)
self.maxx = input_size # height
self.maxy = input_size # width
self.sep_xy = sep_xy
self.special_tokens = [PAD, SOS, EOS, UNK, MASK]
self.continuous_coords = continuous_coords
self.debug = debug
def __len__(self):
if self.sep_xy:
return self.offset + self.maxx + self.maxy
else:
return self.offset + max(self.maxx, self.maxy)
@property
def offset(self):
return len(self.stoi)
@property
def output_constraint(self):
return not self.continuous_coords
def len_symbols(self):
return len(self.stoi)
def fit_atom_symbols(self, atoms):
vocab = self.special_tokens + list(set(atoms))
for i, s in enumerate(vocab):
self.stoi[s] = i
assert self.stoi[PAD] == PAD_ID
assert self.stoi[SOS] == SOS_ID
assert self.stoi[EOS] == EOS_ID
assert self.stoi[UNK] == UNK_ID
assert self.stoi[MASK] == MASK_ID
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def is_x(self, x):
return self.offset <= x < self.offset + self.maxx
def is_y(self, y):
if self.sep_xy:
return self.offset + self.maxx <= y
return self.offset <= y
def is_symbol(self, s):
return len(self.special_tokens) <= s < self.offset or s == UNK_ID
def is_atom(self, id):
if self.is_symbol(id):
return self.is_atom_token(self.itos[id])
return False
def is_atom_token(self, token):
return token.isalpha() or token.startswith("[") or token == '*' or token == UNK
def x_to_id(self, x):
return self.offset + round(x * (self.maxx - 1))
def y_to_id(self, y):
if self.sep_xy:
return self.offset + self.maxx + round(y * (self.maxy - 1))
return self.offset + round(y * (self.maxy - 1))
def id_to_x(self, id):
return (id - self.offset) / (self.maxx - 1)
def id_to_y(self, id):
if self.sep_xy:
return (id - self.offset - self.maxx) / (self.maxy - 1)
return (id - self.offset) / (self.maxy - 1)
def get_output_mask(self, id):
mask = [False] * len(self)
if self.continuous_coords:
return mask
if self.is_atom(id):
return [True] * self.offset + [False] * self.maxx + [True] * self.maxy
if self.is_x(id):
return [True] * (self.offset + self.maxx) + [False] * self.maxy
if self.is_y(id):
return [False] * self.offset + [True] * (self.maxx + self.maxy)
return mask
def symbol_to_id(self, symbol):
if symbol not in self.stoi:
return UNK_ID
return self.stoi[symbol]
def symbols_to_labels(self, symbols):
labels = []
for symbol in symbols:
labels.append(self.symbol_to_id(symbol))
return labels
def labels_to_symbols(self, labels):
symbols = []
for label in labels:
symbols.append(self.itos[label])
return symbols
def nodes_to_grid(self, nodes):
coords, symbols = nodes['coords'], nodes['symbols']
grid = np.zeros((self.maxx, self.maxy), dtype=int)
for [x, y], symbol in zip(coords, symbols):
x = round(x * (self.maxx - 1))
y = round(y * (self.maxy - 1))
grid[x][y] = self.symbol_to_id(symbol)
return grid
def grid_to_nodes(self, grid):
coords, symbols, indices = [], [], []
for i in range(self.maxx):
for j in range(self.maxy):
if grid[i][j] != 0:
x = i / (self.maxx - 1)
y = j / (self.maxy - 1)
coords.append([x, y])
symbols.append(self.itos[grid[i][j]])
indices.append([i, j])
return {'coords': coords, 'symbols': symbols, 'indices': indices}
def nodes_to_sequence(self, nodes):
coords, symbols = nodes['coords'], nodes['symbols']
labels = [SOS_ID]
for (x, y), symbol in zip(coords, symbols):
assert 0 <= x <= 1
assert 0 <= y <= 1
labels.append(self.x_to_id(x))
labels.append(self.y_to_id(y))
labels.append(self.symbol_to_id(symbol))
labels.append(EOS_ID)
return labels
def sequence_to_nodes(self, sequence):
coords, symbols = [], []
i = 0
if sequence[0] == SOS_ID:
i += 1
while i + 2 < len(sequence):
if sequence[i] == EOS_ID:
break
if self.is_x(sequence[i]) and self.is_y(sequence[i+1]) and self.is_symbol(sequence[i+2]):
x = self.id_to_x(sequence[i])
y = self.id_to_y(sequence[i+1])
symbol = self.itos[sequence[i+2]]
coords.append([x, y])
symbols.append(symbol)
i += 3
return {'coords': coords, 'symbols': symbols}
def smiles_to_sequence(self, smiles, coords=None, mask_ratio=0, atom_only=False):
tokens = atomwise_tokenizer(smiles)
labels = [SOS_ID]
indices = []
atom_idx = -1
for token in tokens:
if atom_only and not self.is_atom_token(token):
continue
if token in self.stoi:
labels.append(self.stoi[token])
else:
if self.debug:
print(f'{token} not in vocab')
labels.append(UNK_ID)
if self.is_atom_token(token):
atom_idx += 1
if not self.continuous_coords:
if mask_ratio > 0 and random.random() < mask_ratio:
labels.append(MASK_ID)
labels.append(MASK_ID)
elif coords is not None:
if atom_idx < len(coords):
x, y = coords[atom_idx]
assert 0 <= x <= 1
assert 0 <= y <= 1
else:
x = random.random()
y = random.random()
labels.append(self.x_to_id(x))
labels.append(self.y_to_id(y))
indices.append(len(labels) - 1)
labels.append(EOS_ID)
return labels, indices
def sequence_to_smiles(self, sequence):
has_coords = not self.continuous_coords
smiles = ''
coords, symbols, indices = [], [], []
for i, label in enumerate(sequence):
if label == EOS_ID or label == PAD_ID:
break
if self.is_x(label) or self.is_y(label):
continue
token = self.itos[label]
smiles += token
if self.is_atom_token(token):
if has_coords:
if i+3 < len(sequence) and self.is_x(sequence[i+1]) and self.is_y(sequence[i+2]):
x = self.id_to_x(sequence[i+1])
y = self.id_to_y(sequence[i+2])
coords.append([x, y])
symbols.append(token)
indices.append(i+3)
else:
if i+1 < len(sequence):
symbols.append(token)
indices.append(i+1)
results = {'smiles': smiles, 'symbols': symbols, 'indices': indices}
if has_coords:
results['coords'] = coords
return results
class CharTokenizer(NodeTokenizer):
def __init__(self, input_size=100, path=None, sep_xy=False, continuous_coords=False, debug=False):
super().__init__(input_size, path, sep_xy, continuous_coords, debug)
def fit_on_texts(self, texts):
vocab = set()
for text in texts:
vocab.update(list(text))
if ' ' in vocab:
vocab.remove(' ')
vocab = [PAD, SOS, EOS, UNK] + list(vocab)
for i, s in enumerate(vocab):
self.stoi[s] = i
self.itos = {item[1]: item[0] for item in self.stoi.items()}
assert self.stoi[PAD] == PAD_ID
assert self.stoi[SOS] == SOS_ID
assert self.stoi[EOS] == EOS_ID
assert self.stoi[UNK] == UNK_ID
def text_to_sequence(self, text, tokenized=True):
sequence = []
sequence.append(self.stoi['<sos>'])
if tokenized:
tokens = text.split(' ')
assert all(len(s) == 1 for s in tokens)
else:
tokens = list(text)
for s in tokens:
if s not in self.stoi:
s = '<unk>'
sequence.append(self.stoi[s])
sequence.append(self.stoi['<eos>'])
return sequence
def fit_atom_symbols(self, atoms):
atoms = list(set(atoms))
chars = []
for atom in atoms:
chars.extend(list(atom))
vocab = self.special_tokens + chars
for i, s in enumerate(vocab):
self.stoi[s] = i
assert self.stoi[PAD] == PAD_ID
assert self.stoi[SOS] == SOS_ID
assert self.stoi[EOS] == EOS_ID
assert self.stoi[UNK] == UNK_ID
assert self.stoi[MASK] == MASK_ID
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def get_output_mask(self, id):
''' TO FIX '''
mask = [False] * len(self)
if self.continuous_coords:
return mask
if self.is_x(id):
return [True] * (self.offset + self.maxx) + [False] * self.maxy
if self.is_y(id):
return [False] * self.offset + [True] * (self.maxx + self.maxy)
return mask
def nodes_to_sequence(self, nodes):
coords, symbols = nodes['coords'], nodes['symbols']
labels = [SOS_ID]
for (x, y), symbol in zip(coords, symbols):
assert 0 <= x <= 1
assert 0 <= y <= 1
labels.append(self.x_to_id(x))
labels.append(self.y_to_id(y))
for char in symbol:
labels.append(self.symbol_to_id(char))
labels.append(EOS_ID)
return labels
def sequence_to_nodes(self, sequence):
coords, symbols = [], []
i = 0
if sequence[0] == SOS_ID:
i += 1
while i < len(sequence):
if sequence[i] == EOS_ID:
break
if i+2 < len(sequence) and self.is_x(sequence[i]) and self.is_y(sequence[i+1]) and self.is_symbol(sequence[i+2]):
x = self.id_to_x(sequence[i])
y = self.id_to_y(sequence[i+1])
for j in range(i+2, len(sequence)):
if not self.is_symbol(sequence[j]):
break
symbol = ''.join(self.itos[sequence[k]] for k in range(i+2, j))
coords.append([x, y])
symbols.append(symbol)
i = j
else:
i += 1
return {'coords': coords, 'symbols': symbols}
def smiles_to_sequence(self, smiles, coords=None, mask_ratio=0, atom_only=False):
tokens = atomwise_tokenizer(smiles)
labels = [SOS_ID]
indices = []
atom_idx = -1
for token in tokens:
if atom_only and not self.is_atom_token(token):
continue
for c in token:
if c in self.stoi:
labels.append(self.stoi[c])
else:
if self.debug:
print(f'{c} not in vocab')
labels.append(UNK_ID)
if self.is_atom_token(token):
atom_idx += 1
if not self.continuous_coords:
if mask_ratio > 0 and random.random() < mask_ratio:
labels.append(MASK_ID)
labels.append(MASK_ID)
elif coords is not None:
if atom_idx < len(coords):
x, y = coords[atom_idx]
assert 0 <= x <= 1
assert 0 <= y <= 1
else:
x = random.random()
y = random.random()
labels.append(self.x_to_id(x))
labels.append(self.y_to_id(y))
indices.append(len(labels) - 1)
labels.append(EOS_ID)
return labels, indices
def sequence_to_smiles(self, sequence):
has_coords = not self.continuous_coords
smiles = ''
coords, symbols, indices = [], [], []
i = 0
while i < len(sequence):
label = sequence[i]
if label == EOS_ID or label == PAD_ID:
break
if self.is_x(label) or self.is_y(label):
i += 1
continue
if not self.is_atom(label):
smiles += self.itos[label]
i += 1
continue
if self.itos[label] == '[':
j = i + 1
while j < len(sequence):
if not self.is_symbol(sequence[j]):
break
if self.itos[sequence[j]] == ']':
j += 1
break
j += 1
else:
if i+1 < len(sequence) and (self.itos[label] == 'C' and self.is_symbol(sequence[i+1]) and self.itos[sequence[i+1]] == 'l' \
or self.itos[label] == 'B' and self.is_symbol(sequence[i+1]) and self.itos[sequence[i+1]] == 'r'):
j = i+2
else:
j = i+1
token = ''.join(self.itos[sequence[k]] for k in range(i, j))
smiles += token
if has_coords:
if j+2 < len(sequence) and self.is_x(sequence[j]) and self.is_y(sequence[j+1]):
x = self.id_to_x(sequence[j])
y = self.id_to_y(sequence[j+1])
coords.append([x, y])
symbols.append(token)
indices.append(j+2)
i = j+2
else:
i = j
else:
if j < len(sequence):
symbols.append(token)
indices.append(j)
i = j
results = {'smiles': smiles, 'symbols': symbols, 'indices': indices}
if has_coords:
results['coords'] = coords
return results
def get_tokenizer(args):
tokenizer = {}
for format_ in args.formats:
if format_ == 'atomtok':
if args.vocab_file is None:
args.vocab_file = os.path.join(os.path.dirname(__file__), 'vocab/vocab_uspto.json')
tokenizer['atomtok'] = Tokenizer(args.vocab_file)
elif format_ == "atomtok_coords":
if args.vocab_file is None:
args.vocab_file = os.path.join(os.path.dirname(__file__), 'vocab/vocab_uspto.json')
tokenizer["atomtok_coords"] = NodeTokenizer(args.coord_bins, args.vocab_file, args.sep_xy,
continuous_coords=args.continuous_coords)
elif format_ == "chartok_coords":
if args.vocab_file is None:
args.vocab_file = os.path.join(os.path.dirname(__file__), 'vocab/vocab_chars.json')
tokenizer["chartok_coords"] = CharTokenizer(args.coord_bins, args.vocab_file, args.sep_xy,
continuous_coords=args.continuous_coords)
return tokenizer | PypiClean |
/ALS.Milo-0.18.1.tar.gz/ALS.Milo-0.18.1/als/milo/version.py |
__version__ = None # This will be assigned later; see below
__date__ = None # This will be assigned later; see below
__credits__ = None # This will be assigned later; see below
try:
from als.milo._version import git_pieces_from_vcs as _git_pieces_from_vcs
from als.milo._version import run_command, register_vcs_handler
from als.milo._version import render as _render
from als.milo._version import render_pep440_auto
from als.milo._version import render_pep440_micro, render_pep440_develop
from als.milo._version import get_versions as _get_versions
from als.milo._version import get_config, get_keywords
from als.milo._version import git_versions_from_keywords
from als.milo._version import versions_from_parentdir
from als.milo._version import NotThisMethod
except ImportError:
# Assumption is that _version.py was generated by 'versioneer.py'
# for tarball distribution, which contains only static JSON version data
from als.milo._version import get_versions
# from als.milo._version import get_versions as _get_versions
#
# def get_versions():
# """Get version information or return default if unable to do so.
#
# Extension to ._version.get_versions()
#
# Additional functionality:
# Returns list of authors found in `git`
# """
# default_keys_values = {
# "version": "0+unknown",
# "full-revisionid": None,
# "dirty": None,
# "error": "unable to compute version",
# "date": None,
# "authors": [],
# }
#
# return_key_values = _get_versions()
# return_key_values = dict(
# default_keys_values.items() + return_key_values.items()
# )
# return return_key_values
else:
import os
import sys
import numpy as np
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(
tag_prefix, root, verbose, run_command=run_command):
"""Get version information from 'git' in the root of the source tree.
Extension to ._version.git_pieces_from_vcs()
Additional functionality:
Extracts all commit authors, sorts unique authors chronologically,
then adds them to `pieces["authors"]`, where `pieces` is the object
that was returned by ._version.git_pieces_from_vcs()
"""
pieces = _git_pieces_from_vcs(
tag_prefix, root, verbose, run_command=run_command)
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
##################################################
# Added to retrieve list of authors
(authors_raw, rc) = run_command(
GITS, ["log", "--pretty=%an"], cwd=root)
authors = [author.strip() for author in authors_raw.split('\n')]
(authors_unique, authors_indices) = np.unique(
authors, return_index=True)
pieces["authors"] = list(reversed(np.array(authors)[authors_indices]))
return pieces
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None,
"authors": None,
}
if not style or style == "default":
style = "pep440-auto" # the default
if style == "pep440-micro":
rendered = render_pep440_micro(pieces)
elif style == "pep440-develop":
rendered = render_pep440_develop(pieces)
elif style == "pep440-auto":
rendered = render_pep440_auto(pieces)
else:
return_key_values = _render(pieces, style)
return_key_values["authors"] = pieces["authors"]
return return_key_values
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date"), "authors": pieces["authors"]}
def get_versions():
"""Get version information or return default if unable to do so.
Extension to ._version.get_versions()
Additional functionality:
Returns list of authors found in `git`
"""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE.
# If we have __file__, we can work backwards from there to the root.
# Some py2exe/bbfreeze/non-CPython implementations don't do __file__,
# in which case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
default_keys_values = {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to compute version",
"date": None,
"authors": [],
}
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the
# source tree (where the .git directory might live) to this file.
# Invert this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return default_keys_values
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(
cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return_key_values = _get_versions()
return_key_values = dict(
list( default_keys_values.items() )
+ list( return_key_values.items() )
)
return return_key_values
__version__ = get_versions()["version"]
__date__ = get_versions()["date"]
__credits__ = get_versions()["authors"]
del get_versions | PypiClean |
/FamcyDev-0.3.71-py3-none-any.whl/Famcy/_style_/VideoStreamStyle/VideoStreamStyle.py | import Famcy
from flask import request, Response
import time
try:
import cv2
except ImportError:
print("pip install opencv-python")
import base64
class VideoCamera(object):
def __init__(self, rtsp_address, timeout=15, delay=0.5):
# Open the real-time video stream with OpenCV
self.cv_module = cv2
self.video = self.cv_module.VideoCapture(rtsp_address)
self.start_time = time.time()
self.stop_time = self.start_time + int(timeout)
self.is_decoded = False
self.timeout = int(timeout)
self.delay = delay
def __del__(self):
self.video.release()
@classmethod
def create_camera_response(cls, rtsp_address, timeout, delay):
return cls.gen(cls(rtsp_address, timeout, delay), timeout, delay)
@classmethod
def gen(cls, camera, timeout, delay):
camera.start_time = time.time()
camera.stop_time = camera.start_time + int(timeout)
while True:
time.sleep(delay)
frame, is_decoded = camera.get_frame()
# Output the video stream with a generator; each response chunk has content type image/jpeg
if is_decoded:
break
yield (b'--frame\r\n'
b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')
def get_frame(self):
success, image = self.video.read()
# Frames read by OpenCV are not JPEG-encoded, so for motion JPEG they must first be encoded as JPEG images
ret, jpeg = self.cv_module.imencode('.jpg', image)
is_decoded = (time.time() >= self.stop_time)
return jpeg.tobytes(), (self.is_decoded or is_decoded)
class VideoCameraSnap(object):
def __init__(self,rtsp_address):
# Open the real-time video stream with OpenCV
self.cv_module = cv2
self.video = self.cv_module.VideoCapture(rtsp_address)
def return_frame(self):
success, image = self.video.read()
if success:
ret, jpeg = self.cv_module.imencode('.jpg', image)
return (b'--frame\r\n'
b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n\r\n')
else:
return False
class VideoStreamStyle(Famcy.FamcyStyle):
def __init__(self, delay=0.5, snap=False):
# self.path = path
self.video_camera = VideoCamera
self.is_decoded = False
self.delay = delay
self.snap = snap
super(VideoStreamStyle, self).__init__()
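# Note (illustrative): render() below takes the stream parameters from the request's
# query string, e.g. a page request ending in "?address=rtsp://host/stream&timeout=15";
# the actual route is defined by the surrounding Famcy page, not by this style class.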
def render(self, _script, _html, background_flag=False, **kwargs):
address = request.args.get('address')
timeout = request.args.get('timeout')
if self.snap:
self.video_camera = VideoCameraSnap(address)
res = self.video_camera.return_frame()
return Response(res, mimetype='multipart/x-mixed-replace; boundary=frame')
else:
return Response(self.video_camera.create_camera_response(address, timeout, self.delay), mimetype='multipart/x-mixed-replace; boundary=frame') | PypiClean |
/CONEstrip-0.1.1.tar.gz/CONEstrip-0.1.1/src/conestrip/optimization.py |
import random
from itertools import chain, combinations
from typing import Any, List, Tuple
from more_itertools import collapse
from more_itertools.recipes import flatten
from z3 import *
from conestrip.cones import GeneralCone, Gamble, print_gamble, print_general_cone, print_cone_generator
from conestrip.global_settings import GlobalSettings
from conestrip.utility import product, sum_rows, random_rationals_summing_to_one
AtomicEvent = int
Event = List[AtomicEvent]
PossibilitySpace = List[AtomicEvent]
MassFunction = List[Fraction]
LowerPrevisionFunction = List[Tuple[Gamble, Fraction]]
LowerPrevisionAssessment = List[Gamble]
ConditionalLowerPrevisionFunction = List[Tuple[Gamble, Event, Fraction]]
ConditionalLowerPrevisionAssessment = List[Tuple[Gamble, Event]]
def powerset(iterable):
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))
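# Illustrative example (not part of the original module): powerset enumerates all
# subsets of the input, starting with the empty set.
#
#   list(powerset([1, 2]))   # -> [(), (1,), (2,), (1, 2)]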
def print_lower_prevision_function(P: LowerPrevisionFunction, pretty=False) -> str:
if pretty:
items = [f'({print_gamble(g, pretty)}, {float(c)})' for (g, c) in P]
return '[{}]'.format(', '.join(items))
else:
items = [f'({print_gamble(g)}, {c})' for (g, c) in P]
return ', '.join(items)
def make_one_omega(i: int, N: int) -> Gamble:
result = [Fraction(0)] * N
result[i] = Fraction(1)
return result
def make_one_Omega(N: int) -> Gamble:
return [Fraction(1)] * N
def make_minus_one_Omega(N: int) -> Gamble:
return [-Fraction(1)] * N
def make_zero(N: int) -> Gamble:
return [Fraction(0)] * N
def make_one(B: Event, N: int) -> Gamble:
return [Fraction(i in B) for i in range(N)]
def make_minus_one(B: Event, N: int) -> Gamble:
return [-Fraction(i in B) for i in range(N)]
def is_unit_gamble(g: Gamble) -> bool:
return g.count(Fraction(1)) == 1 and g.count(Fraction(0)) == len(g) - 1
def generate_mass_function(Omega: PossibilitySpace, number_of_zeroes: int = 0, decimals: int = 2) -> MassFunction:
if number_of_zeroes == 0:
N = len(Omega)
return random_rationals_summing_to_one(N, 10 ** decimals)
else:
N = len(Omega)
values = random_rationals_summing_to_one(N - number_of_zeroes, 10 ** decimals)
zero_positions = random.sample(range(N), number_of_zeroes)
result = [Fraction(0)] * N
index = 0
for i in range(N):
if i not in zero_positions:
result[i] = values[index]
index += 1
return result
# Generates a mass function on the subsets of Omega.
# The mass of the empty set is always 0.
# The subsets are assumed to be ordered according to powerset, which means
# that the empty set is always at the front.
def generate_subset_mass_function(Omega: PossibilitySpace, decimals: int = 2) -> MassFunction:
N = 2 ** len(Omega)
return [Fraction(0)] + random_rationals_summing_to_one(N - 1, 10 ** decimals)
# generates 1 mass function with 1 non-zero, 2 with 2 non-zeroes, etc.
def generate_mass_functions(Omega: PossibilitySpace, decimals: int = 2) -> List[MassFunction]:
result = []
N = len(Omega)
for number_of_zeroes in range(N - 1, -1, -1):
for _ in range(N - number_of_zeroes):
result.append(generate_mass_function(Omega, number_of_zeroes, decimals))
return result
def is_mass_function(p: MassFunction) -> bool:
return all(x >= 0 for x in p) and sum(p) == 1
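# Illustrative example: a mass function must have non-negative entries that sum to one.
#
#   is_mass_function([Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)])   # -> True
#   is_mass_function([Fraction(3, 4), Fraction(1, 2)])                   # -> False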
def optimize_constraints(R: GeneralCone, f: List[Any], B: List[Tuple[Any, Any]], Omega: PossibilitySpace, variables: Any) -> Tuple[List[Any], List[Any]]:
# variables
mu = variables
# constants
g = [[[RealVal(R[d][i][j]) for j in range(len(R[d][i]))] for i in range(len(R[d]))] for d in range(len(R))]
# if f contains elements of type ArithRef, then they are already in Z3 format
if not isinstance(f[0], ArithRef):
f = [RealVal(f[j]) for j in range(len(f))]
# intermediate expressions
h = sum_rows(list(sum_rows([product(mu[d][i], g[d][i]) for i in range(len(R[d]))]) for d in range(len(R))))
# 0 <= mu && exists mu_D: (x != 0 for x in mu_D)
mu_constraints = [0 <= x for x in collapse(mu)] + [Or([And([x != 0 for x in mu_D]) for mu_D in mu])]
constraints_1 = [h[omega] == f[omega] for omega in Omega]
constraints_2 = []
for b, c in B:
h_j = sum_rows(list(sum_rows([product(mu[d][i], b[d][i]) for i in range(len(R[d]))]) for d in range(len(R))))
h_j_constraints = [h_j[omega] == f[omega] for omega in Omega]
constraints_2.extend(h_j_constraints)
if GlobalSettings.print_smt:
print('--- variables ---')
print(mu)
print('--- constants ---')
print('g =', g)
print('f =', f)
print('--- intermediate expressions ---')
print('h =', h)
print('--- constraints ---')
print(mu_constraints)
print(constraints_1)
return mu_constraints, constraints_1 + constraints_2
def optimize_find(R: GeneralCone, f: Gamble, B: List[Tuple[Any, Any]], Omega: List[int]) -> Any:
# variables
mu = [[Real(f'mu{d}_{i}') for i in range(len(R[d]))] for d in range(len(R))]
constraints = list(flatten(optimize_constraints(R, f, B, Omega, mu)))
solver = Solver()
solver.add(constraints)
if GlobalSettings.verbose:
print('=== optimize_find ===')
print_constraints('constraints:\n', constraints)
if solver.check() == sat:
model = solver.model()
mu_solution = [[model.evaluate(mu[d][i]) for i in range(len(R[d]))] for d in range(len(R))]
if GlobalSettings.verbose:
print('SAT')
print('mu =', mu_solution)
return mu_solution
else:
if GlobalSettings.verbose:
print('UNSAT')
return None
def print_constraints(msg: str, constraints: List[Any]) -> None:
print(msg)
for constraint in constraints:
print(constraint)
print('')
def optimize_maximize_full(R: GeneralCone, f: Gamble, a: List[List[Fraction]], B: List[Tuple[Any, Any]], Omega: List[int]) -> Tuple[Any, Fraction]:
# variables
mu = [[Real(f'mu{d}_{i}') for i in range(len(R[d]))] for d in range(len(R))]
constraints = list(flatten(optimize_constraints(R, f, B, Omega, mu)))
goal = simplify(sum(sum(mu[d][g] * a[d][g] for g in range(len(mu[d]))) for d in range(len(mu))))
optimizer = Optimize()
optimizer.add(constraints)
optimizer.maximize(goal)
if GlobalSettings.verbose:
print('=== optimize_maximize ===')
print('goal:', goal)
print_constraints('constraints:\n', constraints)
if optimizer.check() == sat:
model = optimizer.model()
mu_solution = [[model.evaluate(mu[d][i]) for i in range(len(R[d]))] for d in range(len(R))]
goal_solution = model.evaluate(goal)
if GlobalSettings.verbose:
print('SAT')
print('mu =', mu_solution)
print('goal =', model.evaluate(goal))
return mu_solution, goal_solution
else:
if GlobalSettings.verbose:
print('UNSAT')
return None, Fraction(0)
def optimize_maximize(R: GeneralCone, f: Gamble, a: List[List[Fraction]], B: List[Tuple[Any, Any]], Omega: List[int]):
mu, goal = optimize_maximize_full(R, f, a, B, Omega)
return mu
def optimize_maximize_value(R: GeneralCone, f: Gamble, a: List[List[Fraction]], B: List[Tuple[Any, Any]], Omega: List[int]):
mu, goal = optimize_maximize_full(R, f, a, B, Omega)
return goal
def minus_constant(f: Gamble, c: Fraction) -> Gamble:
return [x - c for x in f]
def linear_lower_prevision_function(p: MassFunction, K: List[Gamble]) -> LowerPrevisionFunction:
def value(f: Gamble) -> Fraction:
assert len(f) == len(p)
return sum(p_i * f_i for (p_i, f_i) in zip(p, f))
return [(f, value(f)) for f in K]
def linear_vacuous_lower_prevision_function(p: MassFunction, K: List[Gamble], delta: Fraction) -> LowerPrevisionFunction:
def value(f: Gamble) -> Fraction:
assert len(f) == len(p)
return (1 - delta) * sum(p_i * f_i for (p_i, f_i) in zip(p, f)) + delta * min(f)
return [(f, value(f)) for f in K]
# We assume that the subsets of Omega are ordered according to the powerset function
def belief_lower_prevision_function(p: MassFunction, K: List[Gamble]) -> LowerPrevisionFunction:
def value(f: Gamble) -> Fraction:
assert len(p) == 2 ** len(f)
return sum((p[i] * min(S)) for (i, S) in enumerate(powerset(f)) if i > 0) # N.B. skip the empty set!
return [(f, value(f)) for f in K]
def lower_prevision_assessment(P: LowerPrevisionFunction) -> LowerPrevisionAssessment:
return [minus_constant(h, c) for (h, c) in P]
def conditional_lower_prevision_assessment(P: ConditionalLowerPrevisionFunction, Omega: PossibilitySpace) -> ConditionalLowerPrevisionAssessment:
def hadamard(f: Gamble, g: Gamble) -> Gamble:
return [x * y for (x, y) in zip(f, g)]
N = len(Omega)
return [(hadamard(minus_constant(h, c), make_one(B, N)), B) for (h, B, c) in P]
def sure_loss_cone(A: LowerPrevisionAssessment, Omega: PossibilitySpace) -> GeneralCone:
N = len(Omega)
zero = make_zero(N)
D = [make_one_omega(i, N) for i in range(N)] + [a for a in A if not a == zero and not is_unit_gamble(a)]
R = [D]
return R
def partial_loss_cone(B: ConditionalLowerPrevisionAssessment, Omega: PossibilitySpace) -> GeneralCone:
N = len(Omega)
zero = make_zero(N)
R1 = [[g, make_one(B1, N)] for (g, B1) in B if g != zero]
R2 = [[make_one_omega(i, N)] for i in range(N)]
R = R1 + R2 # TODO: remove duplicates
return R
def natural_extension_cone(A: LowerPrevisionAssessment, Omega: PossibilitySpace) -> GeneralCone:
N = len(Omega)
R1 = [[g] for g in A]
R2 = [[make_one_Omega(N)], [make_minus_one_Omega(N)], [make_zero(N)]]
R3 = [[make_one_omega(i, N)] for i in range(N)]
R = R1 + R2 + R3 # TODO: remove duplicates
return R
def natural_extension_objective(R: GeneralCone, Omega: PossibilitySpace) -> List[List[Fraction]]:
N = len(Omega)
one_Omega = make_one_Omega(N)
minus_one_Omega = make_minus_one_Omega(N)
def a_value(D, g):
if D == [one_Omega] and g == one_Omega:
return Fraction(1)
elif D == [minus_one_Omega] and g == minus_one_Omega:
return Fraction(-1)
return Fraction(0)
a = [[a_value(D, g) for g in D] for D in R]
return a
def incurs_sure_loss_cone(R: GeneralCone, Omega: PossibilitySpace) -> bool:
N = len(Omega)
zero = make_zero(N)
mu = optimize_find(R, zero, [], Omega)
return mu is not None
def incurs_sure_loss(P: LowerPrevisionFunction, Omega: PossibilitySpace, pretty=False) -> bool:
A = lower_prevision_assessment(P)
R = sure_loss_cone(A, Omega)
if GlobalSettings.verbose:
print(f'incurs_sure_loss: R =\n{print_general_cone(R, pretty)}\n')
return incurs_sure_loss_cone(R, Omega)
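# Illustrative usage sketch (data layout inferred from the functions above: a
# lower prevision function is a list of (gamble, value) pairs, where a gamble is
# a list of Fractions indexed by the outcomes in Omega).  Assigning a lower
# prevision of 3/5 to both indicator gambles of a two-outcome partition should
# incur sure loss, since no probability mass function can dominate both:
#
#   Omega = [0, 1]
#   P = [([Fraction(1), Fraction(0)], Fraction(3, 5)),
#        ([Fraction(0), Fraction(1)], Fraction(3, 5))]
#   incurs_sure_loss(P, Omega)  # expected: True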
def natural_extension(A: List[Gamble], f: Gamble, Omega: PossibilitySpace, pretty=False) -> Fraction:
R = natural_extension_cone(A, Omega)
a = natural_extension_objective(R, Omega)
if GlobalSettings.verbose:
print(f'natural_extension: R =\n{print_general_cone(R, pretty)}\n')
print(f'natural_extension: a =\n{print_cone_generator(a, pretty)}\n')
return optimize_maximize_value(R, f, a, [], Omega)
def is_coherent(P: LowerPrevisionFunction, Omega: PossibilitySpace, pretty=False) -> bool:
A = lower_prevision_assessment(P)
for (f, P_f) in P:
n_f = natural_extension(A, f, Omega, pretty)
if not P_f == n_f:
if GlobalSettings.verbose:
print(f'the is_coherent check failed for: f=({print_gamble(f, pretty)}) P(f)={P_f} natural_extension(f) = {n_f}')
return False
return True
# return all(P_f == natural_extension(A, f, Omega, pretty) for (f, P_f) in P)
def incurs_partial_loss(P: ConditionalLowerPrevisionFunction, Omega: PossibilitySpace, pretty=False) -> bool:
N = len(Omega)
zero = make_zero(N)
B = conditional_lower_prevision_assessment(P, Omega)
R = partial_loss_cone(B, Omega)
if GlobalSettings.verbose:
print(f'incurs_partial_loss: R = {print_general_cone(R, pretty)}\n')
mu = optimize_find(R, zero, [], Omega)
return mu is not None
def conditional_natural_extension_cone(B: ConditionalLowerPrevisionAssessment, C: Event, Omega: PossibilitySpace) -> GeneralCone:
N = len(Omega)
zero = make_zero(N)
one_C = make_one(C, N)
minus_one_C = make_minus_one(C, N)
R1 = [[g, make_one(B1, N)] for (g, B1) in B]
R2 = [[one_C], [minus_one_C], [zero]]
R3 = [[make_one_omega(i, N) for i in range(N)]]
R = R1 + R2 + R3 # TODO: remove duplicates
return R
def conditional_natural_extension(B: ConditionalLowerPrevisionAssessment, f: Gamble, C: Event, Omega: PossibilitySpace, pretty=False) -> Fraction:
def hadamard(f: Gamble, g: Gamble) -> Gamble:
return [x * y for (x, y) in zip(f, g)]
N = len(Omega)
R = conditional_natural_extension_cone(B, C, Omega)
a = natural_extension_objective(R, Omega)
if GlobalSettings.verbose:
print(f'conditional_natural_extension: R = {print_general_cone(R, pretty)}\n')
return optimize_maximize_value(R, hadamard(f, make_one(C, N)), a, [], Omega)
def generate_lower_prevision_perturbation(K: List[Gamble], epsilon: Fraction) -> LowerPrevisionFunction:
result = []
for f in K:
delta = Fraction(random.uniform(0, 1))
value = random.choice([-epsilon, epsilon]) * delta * (max(f) - min(f))
result.append((f, value))
return result
def scale_lower_prevision_function(P: LowerPrevisionFunction, c: Fraction) -> LowerPrevisionFunction:
return [(f, c * value) for (f, value) in P]
def lower_prevision_sum(P: LowerPrevisionFunction, Q: LowerPrevisionFunction) -> LowerPrevisionFunction:
def same_domain(P, Q):
return len(P) == len(Q) and all(p[0] == q[0] for (p, q) in zip(P, Q))
assert same_domain(P, Q)
return [(p[0], p[1] + q[1]) for (p, q) in zip(P, Q)]
def clamp(num, min_value, max_value):
return max(min(num, max_value), min_value)
def lower_prevision_clamped_sum(P: LowerPrevisionFunction, Q: LowerPrevisionFunction) -> LowerPrevisionFunction:
def same_domain(P, Q):
return len(P) == len(Q) and all(p[0] == q[0] for (p, q) in zip(P, Q))
assert same_domain(P, Q)
return [(f, clamp(value_f + value_g, min(f), max(f))) for ((f, value_f), (g, value_g)) in zip(P, Q)] | PypiClean |
/Flask-Track-Usage-2.0.0.tar.gz/Flask-Track-Usage-2.0.0/src/flask_track_usage/storage/mongo.py | import datetime
import inspect
from flask_track_usage.storage import Storage
class _MongoStorage(Storage):
"""
Parent storage class for Mongo storage.
"""
def store(self, data):
"""
Executed on "function call".
:Parameters:
- `data`: Data to store.
.. versionchanged:: 1.1.0
xforwardedfor item added directly after remote_addr
"""
ua_dict = {
'browser': data['user_agent'].browser,
'language': data['user_agent'].language,
'platform': data['user_agent'].platform,
'version': data['user_agent'].version,
}
data['date'] = datetime.datetime.fromtimestamp(data['date'])
data['user_agent'] = ua_dict
print(self.collection.insert(data))
def _get_usage(self, start_date=None, end_date=None, limit=500, page=1):
"""
Implements the simple usage information by criteria in a standard form.
:Parameters:
- `start_date`: datetime.datetime representation of starting date
- `end_date`: datetime.datetime representation of ending date
- `limit`: The max amount of results to return
- `page`: Result page number limited by `limit` number in a page
.. versionchanged:: 1.1.0
xforwardedfor item added directly after remote_addr
"""
criteria = {}
# Set up date based criteria
if start_date or end_date:
criteria['date'] = {}
if start_date:
criteria['date']['$gte'] = start_date
if end_date:
criteria['date']['$lte'] = end_date
cursor = []
if limit:
cursor = self.collection.find(criteria).skip(
limit * (page - 1)).limit(limit)
else:
cursor = self.collection.find(criteria)
return [x for x in cursor]
class MongoPiggybackStorage(_MongoStorage):
"""
Uses a pymongo collection to store data.
"""
def set_up(self, collection, hooks=None):
"""
Sets the collection.
:Parameters:
- `collection`: A pymongo collection (not database or connection).
"""
self.collection = collection
class MongoStorage(_MongoStorage):
"""
Creates its own connection for storage.
"""
def set_up(
self, database, collection, host='127.0.0.1',
port=27017, username=None, password=None, hooks=None):
"""
Sets the collection.
:Parameters:
- `database`: Name of the database to use.
- `collection`: Name of the collection to use.
- `host`: Host to connect to. Default: 127.0.0.1
- `port`: Port to connect to. Default: 27017
- `username`: Optional username to authenticate with.
- `password`: Optional password to authenticate with.
"""
import pymongo
self.connection = pymongo.MongoClient(host, port)
self.db = getattr(self.connection, database)
if username and password:
self.db.authenticate(username, password)
self.collection = getattr(self.db, collection)
class MongoEngineStorage(_MongoStorage):
"""
Uses MongoEngine library to store data in MongoDB.
The resulting collection is named `usageTracking`.
Should you need to access the actual Document class that this storage uses,
you can pull it from the `collection` *instance* attribute. For example: ::
trackerDoc = MongoEngineStorage().collection
"""
def set_up(self, doc=None, website=None, apache_log=False, hooks=None):
import mongoengine as db
"""
Sets the general settings.
:Parameters:
- `doc`: optional alternate MongoEngine document class.
- 'website': name for the website. Defaults to 'default'. Useful
when multiple websites are saving data to the same collection.
- 'apache_log': if set to True, then an attribute called
'apache_combined_log' is set that mimics a line from a traditional
apache webserver web log text file.
.. versionchanged:: 2.0.0
"""
class UserAgent(db.EmbeddedDocument):
browser = db.StringField()
language = db.StringField()
platform = db.StringField()
version = db.StringField()
string = db.StringField()
class UsageTracker(db.Document):
date = db.DateTimeField(
required=True,
default=datetime.datetime.utcnow
)
website = db.StringField(required=True, default="default")
server_name = db.StringField(default="self")
blueprint = db.StringField(default=None)
view_args = db.DictField()
ip_info = db.StringField()
xforwardedfor = db.StringField()
path = db.StringField()
speed = db.FloatField()
remote_addr = db.StringField()
url = db.StringField()
status = db.IntField()
authorization = db.BooleanField()
content_length = db.IntField()
url_args = db.DictField()
username = db.StringField()
user_agent = db.EmbeddedDocumentField(UserAgent)
track_var = db.DictField()
apache_combined_log = db.StringField()
meta = {
'collection': "usageTracking"
}
self.collection = doc or UsageTracker
# self.user_agent = UserAgent
self.website = website or 'default'
self.apache_log = apache_log
def store(self, data):
doc = self.collection()
doc.date = datetime.datetime.fromtimestamp(data['date'])
doc.website = self.website
doc.server_name = data['server_name']
doc.blueprint = data['blueprint']
doc.view_args = data['view_args']
doc.ip_info = data['ip_info']
doc.xforwardedfor = data['xforwardedfor']
doc.path = data['path']
doc.speed = data['speed']
doc.remote_addr = data['remote_addr']
doc.url = data['url']
doc.status = data['status']
doc.authorization = data['authorization']
doc.content_length = data['content_length']
doc.url_args = data['url_args']
doc.username = data['username']
doc.track_var = data['track_var']
# the following is VERY MUCH A HACK to allow a passed 'doc' on set_up
ua = doc._fields['user_agent'].document_type_obj()
ua.browser = data['user_agent'].browser
if data['user_agent'].language:
ua.language = data['user_agent'].language
ua.platform = data['user_agent'].platform
if data['user_agent'].version:
ua.version = str(data['user_agent'].version)
ua.string = data['user_agent'].string
doc.user_agent = ua
if self.apache_log:
t = '{h} - {u} [{t}] "{r}" {s} {b} "{ref}" "{ua}"'.format(
h=data['remote_addr'],
u=data["username"] or '-',
t=doc.date.strftime("%d/%b/%Y:%H:%M:%S %z"),
r=data.get("request", '?'),
s=data['status'],
b=data['content_length'],
ref=data['url'],
ua=str(data['user_agent'])
)
doc.apache_combined_log = t
doc.save()
data['mongoengine_document'] = doc
return data
def _get_usage(self, start_date=None, end_date=None, limit=500, page=1):
"""
Implements the simple usage information by criteria in a standard form.
:Parameters:
- `start_date`: datetime.datetime representation of starting date
- `end_date`: datetime.datetime representation of ending date
- `limit`: The max amount of results to return
- `page`: Result page number limited by `limit` number in a page
.. versionchanged:: 2.0.0
"""
query = {}
if start_date:
query["date__gte"] = start_date
if end_date:
query["date__lte"] = end_date
if limit:
first = limit * (page - 1)
last = limit * page
logs = self.collection.objects(
**query
).order_by('-date')[first:last]
else:
logs = self.collection.objects(**query).order_by('-date')
result = [log.to_mongo().to_dict() for log in logs]
return result
def get_sum(
self,
hook,
start_date=None,
end_date=None,
limit=500,
page=1,
target=None
):
"""
Queries a subtending hook for summarization data.
:Parameters:
- 'hook': the hook 'class' or it's name as a string
- `start_date`: datetime.datetime representation of starting date
- `end_date`: datetime.datetime representation of ending date
- `limit`: The max amount of results to return
- `page`: Result page number limited by `limit` number in a page
- 'target': search string to limit results; meaning depend on hook
.. versionchanged:: 2.0.0
"""
if inspect.isclass(hook):
hook_name = hook.__name__
else:
hook_name = str(hook)
for h in self._post_storage_hooks:
if h.__class__.__name__ == hook_name:
return h.get_sum(
start_date=start_date,
end_date=end_date,
limit=limit,
page=page,
target=target,
_parent_class_name=self.__class__.__name__,
_parent_self=self
)
raise NotImplementedError(
'Cannot find hook named "{}"'.format(hook_name)
) | PypiClean |
/NeuroTorch-0.0.1b2.tar.gz/NeuroTorch-0.0.1b2/src/neurotorch/rl/agent.py | import json
import logging
from copy import deepcopy
from typing import Sequence, Union, Optional, Dict, Any, List
import numpy as np
import torch
import gym
from ..callbacks.checkpoints_manager import CheckpointManager, LoadCheckpointMode
from ..transforms.base import to_numpy, to_tensor
from ..modules.base import BaseModel
from ..modules.sequential import Sequential
from ..utils import maybe_apply_softmax, unpack_out_hh
try:
from ..modules.layers import Linear
except ImportError:
from .utils import Linear
from .utils import (
obs_sequence_to_batch,
space_to_spec,
obs_batch_to_sequence,
space_to_continuous_shape,
get_single_observation_space,
get_single_action_space,
sample_action_space,
continuous_actions_distribution,
)
class Agent(torch.nn.Module):
@staticmethod
def copy_from_agent(agent: "Agent", requires_grad: Optional[bool] = None) -> "Agent":
"""
Copy the agent.
:param agent: The agent to copy.
:type agent: Agent
:param requires_grad: Whether to require gradients.
:type requires_grad: Optional[bool]
:return: The copied agent.
:rtype: Agent
"""
return Agent(
env=agent.env,
observation_space=agent.observation_space,
action_space=agent.action_space,
behavior_name=agent.behavior_name,
policy=agent.copy_policy(requires_grad=requires_grad),
**agent.kwargs
)
def __init__(
self,
*,
env: Optional[gym.Env] = None,
observation_space: Optional[gym.spaces.Space] = None,
action_space: Optional[gym.spaces.Space] = None,
behavior_name: Optional[str] = None,
policy: Optional[BaseModel] = None,
policy_predict_method: str = "__call__",
policy_kwargs: Optional[Dict[str, Any]] = None,
critic: Optional[BaseModel] = None,
critic_predict_method: str = "__call__",
critic_kwargs: Optional[Dict[str, Any]] = None,
**kwargs
):
"""
Constructor for BaseAgent class.
:param env: The environment.
:type env: Optional[gym.Env]
:param observation_space: The observation space. Must be a single space not batched. Must be provided if
`env` is not provided. If `env` is provided, then this will be ignored.
:type observation_space: Optional[gym.spaces.Space]
:param action_space: The action space. Must be a single space not batched. Must be provided if
`env` is not provided. If `env` is provided, then this will be ignored.
:type action_space: Optional[gym.spaces.Space]
:param behavior_name: The name of the behavior.
:type behavior_name: Optional[str]
:param policy: The model to use.
:type policy: BaseModel
:param policy_kwargs: The keyword arguments to pass to the policy if it is created by default.
The keywords are:
- `default_hidden_units` (List[int]): The default number of hidden units. Defaults to [256].
- `default_activation` (str): The default activation function. Defaults to "ReLu".
- `default_output_activation` (str): The default output activation function. Defaults to "Identity".
- `default_dropout` (float): The default dropout rate. Defaults to 0.1.
- all other keywords are passed to the `Sequential` constructor.
:type policy_kwargs: Optional[Dict[str, Any]]
:param critic: The value model to use.
:type critic: BaseModel
:param critic_kwargs: The keyword arguments to pass to the critic if it is created by default.
The keywords are:
- `default_hidden_units` (List[int]): The default number of hidden units. Defaults to [256].
- `default_activation` (str): The default activation function. Defaults to "ReLu".
- `default_output_activation` (str): The default output activation function. Defaults to "Identity".
- `default_n_values` (int): The default number of values to output. Defaults to 1.
- `default_dropout` (float): The default dropout rate. Defaults to 0.1.
- all other keywords are passed to the `Sequential` constructor.
:type critic_kwargs: Optional[Dict[str, Any]]
:param kwargs: Other keyword arguments.
"""
super().__init__()
self.kwargs = kwargs
self.policy_kwargs = policy_kwargs if policy_kwargs is not None else {}
self.set_default_policy_kwargs()
self.critic_kwargs = critic_kwargs if critic_kwargs is not None else {}
self.set_default_critic_kwargs()
self.env = env
if env:
self.observation_space = get_single_observation_space(env)
self.action_space = get_single_action_space(env)
else:
self.observation_space = observation_space
self.action_space = action_space
if behavior_name:
self.behavior_name = behavior_name
elif env is not None and env.spec:
self.behavior_name = env.spec.id
else:
self.behavior_name = "default"
self.policy = policy
if self.policy is None:
self.policy = self._create_default_policy()
self.policy_predict_method_name = policy_predict_method
assert hasattr(self.policy, self.policy_predict_method_name), \
f"Policy does not have method '{self.policy_predict_method_name}'"
self.policy_predict_method = getattr(self.policy, self.policy_predict_method_name)
assert callable(self.policy_predict_method), \
f"Policy method '{self.policy_predict_method_name}' is not callable"
self.critic = critic
if self.critic is None:
self.critic = self._create_default_critic()
self.critic_predict_method_name = critic_predict_method
assert hasattr(self.critic, self.critic_predict_method_name), \
f"Critic does not have method '{self.critic_predict_method_name}'"
self.critic_predict_method = getattr(self.critic, self.critic_predict_method_name)
assert callable(self.critic_predict_method), \
f"Critic method '{self.critic_predict_method_name}' is not callable"
self.checkpoint_folder = kwargs.get("checkpoint_folder", ".")
@property
def observation_spec(self) -> Dict[str, Any]:
return space_to_spec(self.observation_space)
@property
def action_spec(self) -> Dict[str, Any]:
return space_to_spec(self.action_space)
@property
def discrete_actions(self) -> List[str]:
return [k for k, v in self.action_spec.items() if isinstance(v, gym.spaces.Discrete)]
@property
def continuous_actions(self) -> List[str]:
return [k for k, v in self.action_spec.items() if not isinstance(v, gym.spaces.Discrete)]
@property
def device(self) -> torch.device:
"""
The device of the agent.
:return: The device of the agent.
:rtype: torch.device
"""
return next(self.parameters()).device
@device.setter
def device(self, device: torch.device):
"""
Set the device of the agent.
:param device: The device to set.
:type device: torch.device
"""
self.policy.to(device)
if self.critic is not None:
self.critic.to(device)
def set_default_policy_kwargs(self):
self.policy_kwargs.setdefault("default_hidden_units", [256])
if isinstance(self.policy_kwargs["default_hidden_units"], int):
self.policy_kwargs["default_hidden_units"] = [self.policy_kwargs["default_hidden_units"]]
assert len(self.policy_kwargs["default_hidden_units"]) > 0, "Must have at least one hidden unit."
self.policy_kwargs.setdefault("default_activation", "ReLu")
self.policy_kwargs.setdefault("default_output_activation", "Identity")
self.policy_kwargs.setdefault("default_dropout", 0.1)
def set_default_critic_kwargs(self):
self.critic_kwargs.setdefault("default_hidden_units", [256])
if isinstance(self.critic_kwargs["default_hidden_units"], int):
self.critic_kwargs["default_hidden_units"] = [self.critic_kwargs["default_hidden_units"]]
assert len(self.critic_kwargs["default_hidden_units"]) > 0, "Must have at least one hidden unit."
self.critic_kwargs.setdefault("default_activation", "ReLu")
self.critic_kwargs.setdefault("default_output_activation", "Identity")
self.critic_kwargs.setdefault("default_n_values", 1)
self.critic_kwargs.setdefault("default_dropout", 0.1)
def _create_default_policy(self) -> BaseModel:
"""
Create the default policy.
:return: The default policy.
:rtype: BaseModel
"""
hidden_block = [torch.nn.Dropout(p=self.policy_kwargs["default_dropout"])]
for i in range(len(self.policy_kwargs["default_hidden_units"]) - 1):
hidden_block.extend([
Linear(
input_size=self.policy_kwargs["default_hidden_units"][i],
output_size=self.policy_kwargs["default_hidden_units"][i + 1],
activation=self.policy_kwargs["default_activation"]
),
# torch.nn.Linear(
# in_features=self.policy_kwargs["default_hidden_units"][i],
# out_features=self.policy_kwargs["default_hidden_units"][i + 1]
# ),
# torch.nn.PReLU(), # TODO: for Debugging
torch.nn.Dropout(p=self.policy_kwargs["default_dropout"]),
])
default_policy = Sequential(layers=[
{
f"in_{k}": Linear(
input_size=int(space_to_continuous_shape(v, flatten_spaces=True)[0]),
output_size=self.policy_kwargs["default_hidden_units"][0],
activation=self.policy_kwargs["default_activation"]
)
# k: torch.nn.Linear(
# in_features=int(space_to_continuous_shape(v, flatten_spaces=True)[0]),
# out_features=self.policy_kwargs["default_hidden_units"][0]
# )
for k, v in self.observation_spec.items()
},
*hidden_block,
{
f"out_{k}": Linear(
input_size=self.policy_kwargs["default_hidden_units"][-1],
output_size=int(space_to_continuous_shape(v, flatten_spaces=True)[0]),
activation=self.policy_kwargs["default_output_activation"]
)
for k, v in self.action_spec.items()
}
],
**self.policy_kwargs
).build()
return default_policy
def _create_default_critic(self) -> BaseModel:
"""
Create the default critic.
:return: The default critic.
:rtype: BaseModel
"""
hidden_block = [torch.nn.Dropout(p=self.critic_kwargs["default_dropout"])]
for i in range(len(self.critic_kwargs["default_hidden_units"]) - 1):
hidden_block.extend([
Linear(
input_size=self.critic_kwargs["default_hidden_units"][i],
output_size=self.critic_kwargs["default_hidden_units"][i + 1],
activation=self.critic_kwargs["default_activation"]
),
torch.nn.Dropout(p=self.critic_kwargs["default_dropout"]),
])
default_critic = Sequential(layers=[
{
f"in_{k}": Linear(
input_size=int(space_to_continuous_shape(v, flatten_spaces=True)[0]),
output_size=self.critic_kwargs["default_hidden_units"][0],
activation=self.critic_kwargs["default_activation"]
)
for k, v in self.observation_spec.items()
},
*hidden_block,
Linear(
input_size=self.critic_kwargs["default_hidden_units"][-1],
output_size=self.critic_kwargs["default_n_values"],
activation=self.critic_kwargs["default_output_activation"]
)
],
**self.critic_kwargs
).build()
return default_critic
def forward(self, *args, **kwargs):
"""
Call the agent.
:return: The output of the agent.
"""
return self.policy_predict_method(*args, **kwargs)
def get_actions(
self,
obs: Union[np.ndarray, torch.Tensor, Dict[str, Union[np.ndarray, torch.Tensor]]],
**kwargs
) -> Any:
"""
Get the actions for the given observations.
:param obs: The observations. The observations must be batched.
:type obs: Union[np.ndarray, torch.Tensor, Dict[str, Union[np.ndarray, torch.Tensor]]]
:param kwargs: Keywords arguments.
:keyword str re_format: The format to reformat the discrete actions to. Default is "index" which
will return the index of the action. For other options see :mth:`format_batch_discrete_actions`.
:keyword bool as_numpy: Whether to return the actions as numpy arrays. Default is True.
:return: The actions.
"""
self.env = kwargs.get("env", self.env)
re_as_dict = kwargs.get("re_as_dict", isinstance(obs, dict) or isinstance(obs[0], dict))
re_formats = kwargs.get("re_format", "index").split(",")
as_numpy = kwargs.get("as_numpy", True)
obs_as_tensor = to_tensor(obs)
out_actions, _ = unpack_out_hh(self.policy_predict_method(obs_as_tensor, **kwargs))
re_actions_list = [
self.format_batch_discrete_actions(out_actions, re_format=re_format)
for re_format in re_formats
]
if as_numpy:
re_actions_list = [to_numpy(re_actions) for re_actions in re_actions_list]
for i, re_actions in enumerate(re_actions_list):
if not re_as_dict and isinstance(re_actions, dict):
if not len(re_actions) == 1:
raise ValueError("Cannot unpack actions from dict because it has not a length of 1.")
re_actions_list[i] = re_actions[list(re_actions.keys())[0]]
elif re_as_dict and not isinstance(re_actions, dict):
keys = self.discrete_actions + self.continuous_actions
re_actions_list[i] = {k: re_actions for k in keys}
if len(re_actions_list) == 1:
return re_actions_list[0]
return re_actions_list
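# Illustrative usage sketch for get_actions (the observation is assumed to be a
# batched dict or tensor, as described in the docstring above):
#   actions = agent.get_actions(obs, re_format="index")           # argmax indices
#   probs, idx = agent.get_actions(obs, re_format="probs,index")  # one result per comma-separated format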
def format_batch_discrete_actions(
self,
actions: Union[torch.Tensor, Dict[str, torch.Tensor]],
re_format: str = "logits",
**kwargs
) -> Union[torch.Tensor, Dict[str, torch.Tensor]]:
"""
Format the batch of actions. If actions is a dict, then it is assumed that the keys are the action names and the
values are the actions. In this case, all values whose keys are in `self.discrete_actions` will be
formatted. If actions is a tensor, then the actions will be formatted if `self.discrete_actions` is not empty.
TODO: fragment this method into smaller methods.
:param actions: The actions.
:param re_format: The format to reformat the actions to. Can be "logits", "probs", "index", or "one_hot".
:param kwargs: Keywords arguments.
:return: The formatted actions.
"""
discrete_actions = kwargs.get("discrete_actions", self.discrete_actions)
continuous_actions = kwargs.get("continuous_actions", self.continuous_actions)
actions = to_tensor(actions)
if re_format.lower() in ["logits", "raw"]:
return actions
elif re_format.lower() == "probs":
if isinstance(actions, torch.Tensor):
return torch.softmax(actions, dim=-1) if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {k: (
torch.softmax(v, dim=-1) if (k in discrete_actions or len(continuous_actions) == 0) else v
) for k, v in actions.items()}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "log_probs":
if isinstance(actions, torch.Tensor):
return torch.log_softmax(actions, dim=-1) if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {k: (
torch.log_softmax(v, dim=-1)
if (k in discrete_actions or len(continuous_actions) == 0) else v
) for k, v in actions.items()}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() in ["index", "indices", "argmax", "imax", "amax"]:
if isinstance(actions, torch.Tensor):
return torch.argmax(actions, dim=-1).long() if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {k: (
torch.argmax(v, dim=-1).long()
if (k in discrete_actions or len(continuous_actions) == 0) else v
) for k, v in actions.items()}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "one_hot":
if isinstance(actions, torch.Tensor):
return (
torch.nn.functional.one_hot(torch.argmax(actions, dim=-1), num_classes=actions.shape[-1])
if len(discrete_actions) >= 1 else actions
)
elif isinstance(actions, dict):
return {
k: (
torch.nn.functional.one_hot(torch.argmax(v, dim=-1), num_classes=v.shape[-1])
if (k in discrete_actions or len(continuous_actions) == 0) else v
)
for k, v in actions.items()
}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "max":
if isinstance(actions, torch.Tensor):
return torch.max(actions, dim=-1).values if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {k: (
torch.max(v, dim=-1).values
if (k in discrete_actions or len(continuous_actions) == 0) else v
) for k, v in actions.items()}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "smax":
if isinstance(actions, torch.Tensor):
return torch.softmax(actions, dim=-1).max(dim=-1).values if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {
k: (
torch.softmax(v, dim=-1).max(dim=-1).values
if (k in discrete_actions or len(continuous_actions) == 0) else v
)
for k, v in actions.items()
}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "log_smax":
if isinstance(actions, torch.Tensor):
return torch.log_softmax(actions, dim=-1).max(dim=-1).values if len(discrete_actions) >= 1 else actions
elif isinstance(actions, dict):
return {
k: (
torch.log_softmax(v, dim=-1).max(dim=-1).values
if (k in discrete_actions or len(continuous_actions) == 0) else v
)
for k, v in actions.items()
}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
elif re_format.lower() == "sample":
if isinstance(actions, torch.Tensor):
if len(discrete_actions) >= 1:
probs = maybe_apply_softmax(actions, dim=-1)
return torch.distributions.Categorical(probs=probs).sample()
else:
return continuous_actions_distribution(actions).sample()
elif isinstance(actions, dict):
return {
k: (
torch.distributions.Categorical(probs=maybe_apply_softmax(v, dim=-1)).sample()
if (k in discrete_actions or len(continuous_actions) == 0)
else continuous_actions_distribution(v).sample()
)
for k, v in actions.items()
}
else:
raise ValueError(f"Cannot format actions of type {type(actions)}.")
else:
raise ValueError(f"Unknown re-formatting option {re_format}.")
def get_random_actions(self, n_samples: int = 1, **kwargs) -> Any:
as_batch = kwargs.get("as_batch", False)
as_sequence = kwargs.get("as_sequence", False)
re_formats = kwargs.get("re_format", "index").split(",")
as_numpy = kwargs.get("as_numpy", True)
assert not (as_batch and as_sequence), "Cannot return actions as both batch and sequence."
as_single = not (as_batch or as_sequence)
self.env = kwargs.get("env", self.env)
if self.env:
action_space = self.env.action_space
else:
action_space = self.action_space
if as_single and n_samples == 1:
out_actions = sample_action_space(action_space, re_format="one_hot")
else:
out_actions = [sample_action_space(action_space, re_format="one_hot") for _ in range(n_samples)]
if as_batch:
out_actions = obs_sequence_to_batch(out_actions)
re_actions_list = [
self.format_batch_discrete_actions(out_actions, re_format=re_format)
for re_format in re_formats
]
if as_numpy:
re_actions_list = [to_numpy(re_actions) for re_actions in re_actions_list]
if len(re_actions_list) == 1:
return re_actions_list[0]
return re_actions_list
def get_values(self, obs: torch.Tensor, **kwargs) -> Any:
"""
Get the values for the given observations.
:param obs: The batched observations.
:param kwargs: Keywords arguments.
:return: The values.
"""
self.env = kwargs.get("env", self.env)
re_as_dict = kwargs.get("re_as_dict", isinstance(obs, dict) or isinstance(obs[0], dict))
as_numpy = kwargs.get("as_numpy", True)
obs_as_tensor = to_tensor(obs)
values, _ = unpack_out_hh(self.critic_predict_method(obs_as_tensor))
if as_numpy:
values = to_numpy(values)
if not re_as_dict and isinstance(values, dict):
values = values[list(values.keys())[0]]
return values
def __str__(self):
n_tab = 2
policy_repr = str(self.policy)
tab_policy_repr = "\t" + policy_repr.replace("\n", "\n"+("\t"*n_tab))
critic_repr = str(self.critic)
tab_critic_repr = "\t" + critic_repr.replace("\n", "\n"+("\t"*n_tab))
agent_repr = f"Agent<{self.behavior_name}>:\n\t[Policy](\n{tab_policy_repr}\t\n)\n"
if self.critic:
agent_repr += f"\t[Critic](\n{tab_critic_repr}\t\n)\n"
return agent_repr
def soft_update(self, policy, tau):
self.policy.soft_update(policy, tau)
def hard_update(self, policy):
self.policy.hard_update(policy)
def copy_policy(self, requires_grad: Optional[bool] = None) -> BaseModel:
"""
Copy the policy to a new instance.
:return: The copied policy.
"""
policy_copy = deepcopy(self.policy)
if requires_grad is not None:
for param in policy_copy.parameters():
param.requires_grad = requires_grad
return policy_copy
def load_checkpoint(
self,
checkpoints_meta_path: Optional[str] = None,
load_checkpoint_mode: LoadCheckpointMode = LoadCheckpointMode.BEST_ITR,
verbose: bool = True
) -> dict:
"""
Load the checkpoint from the checkpoints_meta_path. If the checkpoints_meta_path is None, the default
checkpoints_meta_path is used.
:param checkpoints_meta_path: The path to the checkpoints meta file.
:type checkpoints_meta_path: Optional[str]
:param load_checkpoint_mode: The mode to use when loading the checkpoint.
:type load_checkpoint_mode: LoadCheckpointMode
:param verbose: Whether to print the loaded checkpoint information.
:type verbose: bool
:return: The loaded checkpoint information.
:rtype: dict
"""
if checkpoints_meta_path is None:
checkpoints_meta_path = self.checkpoints_meta_path
with open(checkpoints_meta_path, "r+") as jsonFile:
info: dict = json.load(jsonFile)
save_name = CheckpointManager.get_save_name_from_checkpoints(info, load_checkpoint_mode)
checkpoint_path = f"{self.checkpoint_folder}/{save_name}"
if verbose:
logging.info(f"Loading checkpoint from {checkpoint_path}")
checkpoint = torch.load(checkpoint_path, map_location=self.device)
self.load_state_dict(checkpoint[CheckpointManager.CHECKPOINT_STATE_DICT_KEY], strict=True)
return checkpoint | PypiClean |
/JaqalPaq-extras-1.2.0a1.tar.gz/JaqalPaq-extras-1.2.0a1/README.md | # JaqalPaq-Extras
JaqalPaq-Extras contains extensions to the
[JaqalPaq](https://gitlab.com/jaqal/jaqalpaq/) python package, which itself is
used to parse, manipulate, emulate, and generate quantum assembly code written
in
[Jaqal](https://qscout.sandia.gov/jaqal) (Just another quantum assembly
language). The purpose of JaqalPaq-Extras is to facilitate the conversion of
programs written in other quantum assembly languages into Jaqal circuit objects
in JaqalPaq. JaqalPaq-Extras is supported on a "best effort" basis, and
quality cannot be guaranteed.
Because some other quantum assembly languages do not support explicit
scheduling like Jaqal does, JaqalPaq-Extras also contains some basic quantum
circuit scheduling routines. Furthermore, to facilitate execution on the
[QSCOUT](https://qscout.sandia.gov/) (Quantum Scientific Computing Open User
Testbed) platform, JaqalPaq-Extras also includes extensions for third-party
quantum software toolchains that support the QSCOUT hardware model (including
its native gate set and scheduling constraints). In summary, JaqalPaq-Extras
contains the following functionalities:
* Conversion of quantum assembly data structures into JaqalPaq circuit objects
from:
* IBM's [Qiskit](https://github.com/Qiskit)
* Rigetti's [pyquil](https://github.com/rigetti/pyquil)
* Google's [Cirq](https://github.com/quantumlib/Cirq)
* ETH Zurich's [ProjectQ](https://github.com/ProjectQ-Framework/ProjectQ)
* CQC's [pytket](https://github.com/CQCL/pytket)
* Basic routines for scheduling unscheduled quantum assembly programs.
* Extensions to these packages above, as needed, to support to the QSCOUT
hardware model.
## Installation
JaqalPaq-Extras is available on
[GitLab](https://gitlab.com/jaqal/jaqalpaq-extras). It requires JaqalPaq, which
is also available on [GitLab](https://gitlab.com/jaqal/jaqalpaq), to be
installed first.
Both JaqalPaq and its extensions can be installed with
[pip](https://pip.pypa.io/en/stable/):
```bash
pip install jaqalpaq
pip install jaqalpaq-extras
```
If only the scheduler will be used, there are no other dependencies.
However, to make use of the transpiler subpackages, one or more of the other
software toolchains must be installed. As of this writing, all five supported
toolchains can be installed via pip as follows, with the supported versions of
these packages indicated:
```bash
pip install qiskit>=0.27.0,<0.28.0
pip install pyquil>=2.21.0,<3.0.0
pip install cirq>=0.11.0,<0.12.0
pip install projectq>=0.5.1,<0.7.0
pip install pytket>=0.5.6,<0.13.0
```
Additionally, a gate-set specification is required for all of the transpiler
subpackages.
Currently, we provide the QSCOUT native gate models, which are also available on
[GitLab](https://gitlab.com/jaqal/qscout-gatemodels/) and can be installed via
[pip](https://pip.pypa.io/en/stable/):
```bash
pip install qscout-gatemodels
```
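As a minimal illustration of the transpiler workflow, a circuit built in one of
the supported toolchains can be converted into a JaqalPaq circuit object and then
serialized to Jaqal text. The module path and function names below are
assumptions made for illustration only; consult the documentation linked below
for the exact API, and note that the circuit may first need to be expressed in
QSCOUT's native gate set.

```python
from qiskit import QuantumCircuit

# Assumed entry points -- verify the exact names in the JaqalPaq-Extras docs.
from jaqalpaq.transpilers.qiskit import jaqal_circuit_from_qiskit_circuit
from jaqalpaq.generator import generate_jaqal_program

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

jaqal_circuit = jaqal_circuit_from_qiskit_circuit(qc)  # JaqalPaq circuit object
print(generate_jaqal_program(jaqal_circuit))           # Jaqal assembly text
```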
## Documentation
Online documentation is hosted on [Read the
Docs](https://jaqalpaq.readthedocs.io).
## License
[Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
## Questions?
For help and support, please contact
[[email protected]](mailto:[email protected]).
| PypiClean |
/Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/lib/scons-2.3.2/SCons/Scanner/Prog.py |
__revision__ = "src/engine/SCons/Scanner/Prog.py 2014/07/05 09:42:21 garyo"
import SCons.Node
import SCons.Node.FS
import SCons.Scanner
import SCons.Util
# global, set by --debug=findlibs
print_find_libs = None
def ProgramScanner(**kw):
"""Return a prototype Scanner instance for scanning executable
files for static-lib dependencies"""
kw['path_function'] = SCons.Scanner.FindPathDirs('LIBPATH')
ps = SCons.Scanner.Base(scan, "ProgramScanner", **kw)
return ps
def scan(node, env, libpath = ()):
"""
This scanner scans program files for static-library
dependencies. It will search the LIBPATH environment variable
for libraries specified in the LIBS variable, returning any
files it finds as dependencies.
"""
try:
libs = env['LIBS']
except KeyError:
# There are no LIBS in this environment, so just return a null list:
return []
if SCons.Util.is_String(libs):
libs = libs.split()
else:
libs = SCons.Util.flatten(libs)
try:
prefix = env['LIBPREFIXES']
if not SCons.Util.is_List(prefix):
prefix = [ prefix ]
except KeyError:
prefix = [ '' ]
try:
suffix = env['LIBSUFFIXES']
if not SCons.Util.is_List(suffix):
suffix = [ suffix ]
except KeyError:
suffix = [ '' ]
pairs = []
for suf in map(env.subst, suffix):
for pref in map(env.subst, prefix):
pairs.append((pref, suf))
result = []
if callable(libpath):
libpath = libpath()
find_file = SCons.Node.FS.find_file
adjustixes = SCons.Util.adjustixes
for lib in libs:
if SCons.Util.is_String(lib):
lib = env.subst(lib)
for pref, suf in pairs:
l = adjustixes(lib, pref, suf)
l = find_file(l, libpath, verbose=print_find_libs)
if l:
result.append(l)
else:
result.append(lib)
return result
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | PypiClean |
/FlexGet-3.9.6-py3-none-any.whl/flexget/components/tmdb/api.py | from flask import jsonify
from flask_restx import inputs
from flexget import plugin
from flexget.api import APIResource, api
from flexget.api.app import BadRequest, NotFoundError, etag
tmdb_api = api.namespace('tmdb', description='TMDB lookup endpoint')
class ObjectsContainer:
poster_object = {
'type': 'object',
'properties': {
'id': {'type': ['integer', 'null']},
'movie_id': {'type': ['integer', 'null']},
'urls': {'type': 'object'},
'file_path': {'type': 'string'},
'width': {'type': 'integer'},
'height': {'type': 'integer'},
'aspect_ratio': {'type': 'number'},
'vote_average': {'type': 'number'},
'vote_count': {'type': 'integer'},
'language_code': {'type': ['string', 'null']},
},
'required': [
'id',
'movie_id',
'urls',
'file_path',
'width',
'height',
'aspect_ratio',
'vote_average',
'vote_count',
'language_code',
],
'additionalProperties': False,
}
movie_object = {
'type': 'object',
'properties': {
'id': {'type': 'integer'},
'imdb_id': {'type': 'string'},
'name': {'type': 'string'},
'original_name': {'type': ['string', 'null']},
'alternative_name': {'type': ['string', 'null']},
'year': {'type': 'integer'},
'runtime': {'type': 'integer'},
'language': {'type': 'string'},
'overview': {'type': 'string'},
'tagline': {'type': 'string'},
'rating': {'type': ['number', 'null']},
'votes': {'type': ['integer', 'null']},
'popularity': {'type': ['number', 'null']},
'adult': {'type': 'boolean'},
'budget': {'type': ['integer', 'null']},
'revenue': {'type': ['integer', 'null']},
'homepage': {'type': ['string', 'null'], 'format': 'uri'},
'posters': {'type': 'array', 'items': poster_object},
'backdrops': {'type': 'array', 'items': poster_object},
'genres': {'type': 'array', 'items': {'type': 'string'}},
'updated': {'type': 'string', 'format': 'date-time'},
'lookup_language': {'type': ['string', 'null']},
},
'required': [
'id',
'name',
'year',
'original_name',
'alternative_name',
'runtime',
'language',
'overview',
'tagline',
'rating',
'votes',
'popularity',
'adult',
'budget',
'revenue',
'homepage',
'genres',
'updated',
],
'additionalProperties': False,
}
description = 'Either title, TMDB ID or IMDB ID are required for a lookup'
return_schema = api.schema_model('tmdb_search_schema', ObjectsContainer.movie_object)
tmdb_parser = api.parser()
tmdb_parser.add_argument('title', help='Movie title')
tmdb_parser.add_argument('tmdb_id', help='TMDB ID')
tmdb_parser.add_argument('imdb_id', help='IMDB ID')
tmdb_parser.add_argument('language', help='ISO 639-1 language code')
tmdb_parser.add_argument('year', type=int, help='Movie year')
tmdb_parser.add_argument('only_cached', type=int, help='Return only cached results')
tmdb_parser.add_argument(
'include_posters', type=inputs.boolean, default=False, help='Include posters in response'
)
tmdb_parser.add_argument(
'include_backdrops', type=inputs.boolean, default=False, help='Include backdrops in response'
)
@tmdb_api.route('/movies/')
@api.doc(description=description)
class TMDBMoviesAPI(APIResource):
@etag(cache_age=3600)
@api.response(200, model=return_schema)
@api.response(NotFoundError)
@api.response(BadRequest)
@api.doc(expect=[tmdb_parser])
def get(self, session=None):
"""Get TMDB movie data"""
args = tmdb_parser.parse_args()
title = args.get('title')
tmdb_id = args.get('tmdb_id')
imdb_id = args.get('imdb_id')
posters = args.pop('include_posters', False)
backdrops = args.pop('include_backdrops', False)
if not (title or tmdb_id or imdb_id):
raise BadRequest(description)
lookup = plugin.get('api_tmdb', 'tmdb.api').lookup
try:
movie = lookup(session=session, **args)
except LookupError as e:
raise NotFoundError(e.args[0])
return_movie = movie.to_dict()
if posters:
return_movie['posters'] = [p.to_dict() for p in movie.posters]
if backdrops:
return_movie['backdrops'] = [p.to_dict() for p in movie.backdrops]
return jsonify(return_movie) | PypiClean |
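# Illustrative request sketch (host and port are hypothetical; the query
# parameters come from tmdb_parser above):
#   GET /tmdb/movies/?title=The%20Matrix&year=1999&include_posters=true
# responds with a JSON movie object matching ObjectsContainer.movie_object.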
/Electrum-Zcash-Random-Fork-3.1.3b5.tar.gz/Electrum-Zcash-Random-Fork-3.1.3b5/plugins/cosigner_pool/qt.py |
import time
from xmlrpc.client import ServerProxy
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import QPushButton
from electrum_zcash import bitcoin, util
from electrum_zcash import transaction
from electrum_zcash.plugins import BasePlugin, hook
from electrum_zcash.i18n import _
from electrum_zcash.wallet import Multisig_Wallet
from electrum_zcash.util import bh2u, bfh
from electrum_zcash_gui.qt.transaction_dialog import show_transaction
import sys
import traceback
server = ServerProxy('https://cosigner.electrum.org/', allow_none=True)
class Listener(util.DaemonThread):
def __init__(self, parent):
util.DaemonThread.__init__(self)
self.daemon = True
self.parent = parent
self.received = set()
self.keyhashes = []
def set_keyhashes(self, keyhashes):
self.keyhashes = keyhashes
def clear(self, keyhash):
server.delete(keyhash)
self.received.remove(keyhash)
def run(self):
while self.running:
if not self.keyhashes:
time.sleep(2)
continue
for keyhash in self.keyhashes:
if keyhash in self.received:
continue
try:
message = server.get(keyhash)
except Exception as e:
self.print_error("cannot contact cosigner pool")
time.sleep(30)
continue
if message:
self.received.add(keyhash)
self.print_error("received message for", keyhash)
self.parent.obj.cosigner_receive_signal.emit(
keyhash, message)
# poll every 30 seconds
time.sleep(30)
class QReceiveSignalObject(QObject):
cosigner_receive_signal = pyqtSignal(object, object)
class Plugin(BasePlugin):
def __init__(self, parent, config, name):
BasePlugin.__init__(self, parent, config, name)
self.listener = None
self.obj = QReceiveSignalObject()
self.obj.cosigner_receive_signal.connect(self.on_receive)
self.keys = []
self.cosigner_list = []
@hook
def init_qt(self, gui):
for window in gui.windows:
self.on_new_window(window)
@hook
def on_new_window(self, window):
self.update(window)
@hook
def on_close_window(self, window):
self.update(window)
def is_available(self):
return True
def update(self, window):
wallet = window.wallet
if type(wallet) != Multisig_Wallet:
return
if self.listener is None:
self.print_error("starting listener")
self.listener = Listener(self)
self.listener.start()
elif self.listener:
self.print_error("shutting down listener")
self.listener.stop()
self.listener = None
self.keys = []
self.cosigner_list = []
for key, keystore in wallet.keystores.items():
xpub = keystore.get_master_public_key()
K = bitcoin.deserialize_xpub(xpub)[-1]
_hash = bh2u(bitcoin.Hash(K))
if not keystore.is_watching_only():
self.keys.append((key, _hash, window))
else:
self.cosigner_list.append((window, xpub, K, _hash))
if self.listener:
self.listener.set_keyhashes([t[1] for t in self.keys])
@hook
def transaction_dialog(self, d):
d.cosigner_send_button = b = QPushButton(_("Send to cosigner"))
b.clicked.connect(lambda: self.do_send(d.tx))
d.buttons.insert(0, b)
self.transaction_dialog_update(d)
@hook
def transaction_dialog_update(self, d):
if d.tx.is_complete() or d.wallet.can_sign(d.tx):
d.cosigner_send_button.hide()
return
for window, xpub, K, _hash in self.cosigner_list:
if window.wallet == d.wallet and self.cosigner_can_sign(d.tx, xpub):
d.cosigner_send_button.show()
break
else:
d.cosigner_send_button.hide()
def cosigner_can_sign(self, tx, cosigner_xpub):
from electrum_zcash.keystore import is_xpubkey, parse_xpubkey
xpub_set = set([])
for txin in tx.inputs():
for x_pubkey in txin['x_pubkeys']:
if is_xpubkey(x_pubkey):
xpub, s = parse_xpubkey(x_pubkey)
xpub_set.add(xpub)
return cosigner_xpub in xpub_set
def do_send(self, tx):
for window, xpub, K, _hash in self.cosigner_list:
if not self.cosigner_can_sign(tx, xpub):
continue
raw_tx_bytes = bfh(str(tx))
message = bitcoin.encrypt_message(raw_tx_bytes, bh2u(K)).decode('ascii')
try:
server.put(_hash, message)
except Exception as e:
traceback.print_exc(file=sys.stdout)
window.show_message("Failed to send transaction to cosigning pool.")
return
window.show_message("Your transaction was sent to the cosigning pool.\nOpen your cosigner wallet to retrieve it.")
def on_receive(self, keyhash, message):
self.print_error("signal arrived for", keyhash)
for key, _hash, window in self.keys:
if _hash == keyhash:
break
else:
self.print_error("keyhash not found")
return
wallet = window.wallet
if wallet.has_keystore_encryption():
password = window.password_dialog('An encrypted transaction was retrieved from cosigning pool.\nPlease enter your password to decrypt it.')
if not password:
return
else:
password = None
if not window.question(_("An encrypted transaction was retrieved from cosigning pool.\nDo you want to open it now?")):
return
xprv = wallet.keystore.get_master_private_key(password)
if not xprv:
return
try:
k = bh2u(bitcoin.deserialize_xprv(xprv)[-1])
EC = bitcoin.EC_KEY(bfh(k))
message = bh2u(EC.decrypt_message(message))
except Exception as e:
traceback.print_exc(file=sys.stdout)
window.show_message(str(e))
return
self.listener.clear(keyhash)
tx = transaction.Transaction(message)
show_transaction(tx, window, prompt_if_unsaved=True) | PypiClean |
/GASSBI_distributions-0.1.tar.gz/GASSBI_distributions-0.1/distributions/Gaussiandistribution.py | import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | PypiClean |
/DS_Store_Cleaner-0.2.tar.gz/DS_Store_Cleaner-0.2/README.md | # DS_Store_Cleaner
`DS_Store_Cleaner`是可以删除当前目录或指定目录下所有的 `.DS_Store` 文件的工具。 / `DS_Store_Cleaner` can delete all `.DS_Store` files in the current directory or in the specified directory.
## 什么是 .DS_Store? / What is a .DS_Store?
`.DS_Store` 是 `macOS` 用来保存如何展示文件/文件夹的数据文件,如果开发/设计人员将 `.DS_Store` 文件上传或部署到线上环境,可能造成文件目录结构泄露,特别是备份文件、源代码文件。如果不设置 `.gitignore` 文件,`.DS_Store` 很容易被上传到 `Github` 之上,可以使用该工具删除已经上传的文件。
[ds_store_exp](https://github.com/lijiejie/ds_store_exp) 就是这样的一个 `.DS_Store` 文件泄露利用脚本。
`.DS_Store` is the data file used by `macOS` to describe files/folders. If the developer/designer uploads or deploys the `.DS_Store` file to the production environment, the file directory structure may be leaked, especially the source code file.If you don't set the `.gitignore` file, `.DS_Store` can easily be uploaded to `Github`. Then you can use `DS_Store_Cleaner` tool to delete these files that have already been uploaded.
[ds_store_exp](https://github.com/lijiejie/ds_store_exp) is such a `.DS_Store` file leak exploit script.
## 安装 / Install
```bash
pip install DS_Store_Cleaner
```
## 使用方法 / Usage
* 清除当前路径下所有 `.DS_Store` 文件 / Clear all `.DS_Store` files in the current directory:
```python
dsclean
```
* 清除指定路径下所有 `.DS_Store` 文件 / Clear all `.DS_Store` files in the specified directory:
```python
dsclean ~/Desktop
```
## Licnese
[MIT License](https://github.com/VXenomac/DS_Store_Cleaner/blob/master/LICENSE) | PypiClean |
/DAQBrokerServer-0.0.2-py3-none-any.whl/daqbrokerServer.py | from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
#import gevent.monkey
# gevent.monkey.patch_all()
import time
import sys
import json
import traceback
import logging
import multiprocessing
import ntplib
import socket
import psutil
import struct
import shutil
import uuid
import platform
import os
import math
import signal
import sqlite3
import pyAesCrypt
import snowflake
import simplejson
import re
import ctypes
import requests
import concurrent.futures
import daqbrokerSettings
import monitorServer
import backupServer
import commServer
import logServer
import webbrowser
import zipfile
import io
from asteval import Interpreter
from concurrent_log_handler import ConcurrentRotatingFileHandler
from subprocess import call
from subprocess import check_output
#from bcrypt import gensalt
from functools import reduce
from sqlalchemy import create_engine
from sqlalchemy import text
from sqlalchemy import bindparam
from sqlalchemy import func
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session
from logging.handlers import RotatingFileHandler
from sqlalchemy_utils.functions import database_exists
from sqlalchemy_utils.functions import drop_database
from sqlalchemy_utils.functions import create_database
from flask import Flask
from flask import Markup
from flask import request
from flask import render_template
from flask import redirect
from flask import send_from_directory
from flask import url_for
from flask import session
from flask import flash
from flask import jsonify
from flask import request_tearing_down
#from gevent.pywsgi import WSGIServer
#from sympy import *
from numpy import asarray, linspace
from scipy.interpolate import interp1d
from numbers import Number
import app
from bpApp import multiprocesses
class daqbrokerServer:
"""
Main server application class. This class can be used to start the DAQBroker server environment and contains the following members
:ivar localSettings: (string) name of the local settings database file (defaults to `localSettings`)
:ivar appPort: (integer) network port for the DAQBroker server REST API (defaults to `7000`)
:ivar logFileName: (string) name of the logging file (defaults to `logFIle`)
"""
def __init__(self, localSettings='localSettings', appPort=7000, logFileName='logFile.txt'):
self.logFile = logFileName
self.appPort = appPort
self.localSettings = localSettings
#print(self.logFile, self.appPort, self.localSettings)
def start(self, detached=False):
"""
Starts the DAQBroker server environment.
.. warning::
This is a long-running process that blocks execution of the main task; it should therefore be called in a separate process.
:param detached: Unusable in the current version. Meant to launch a background (daemon-like) environment so that the same Python session can continue to be used
"""
startServer(localSettings=self.localSettings, appPort=self.appPort, logFilename=self.logFile)
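# Illustrative usage sketch: start() blocks the calling task, so it is normally
# run as the main job of a dedicated process.
#
#   if __name__ == "__main__":
#       server = daqbrokerServer(localSettings="localSettings", appPort=7000)
#       server.start()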
alphabets = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
             'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
VERSION = "0.1"
timeStart = time.time()
strings = []
def dict_factory(cursor, row):
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
base_dir = '.'
if getattr(sys, 'frozen', False):
base_dir = os.path.join(sys._MEIPASS)
def setupLocalSettings(localSettings='localSettings'):
try:
#print(os.path.dirname(os.path.realpath(__file__)))
#print(os.path.realpath(__file__))
#print(sys._MEIPASS)
if not os.path.isdir(os.path.join(base_dir, 'static')):
print("Server files not found on this directory. Setting up required files . . .")
canUseLocal = False
useLocal = False
if os.path.isfile(os.path.join(base_dir, 'server.zip')):
canUseLocal = True
if canUseLocal:
useLocal = False
choice = input("Server files found in local compressed file, use these? (Could be out of date)\n\t1. Yes\n\t2. No\nMake a choice[1]:")
if choice == '1':
useLocal = True
if useLocal:
z = zipfile.ZipFile(os.path.join(base_dir, 'server.zip'))
z.extractall(path=base_dir)
print("done")
else:
zipFiles = requests.get("https://daqbroker.com/downloads/server.zip")
if zipFiles.ok:
z = zipfile.ZipFile(io.BytesIO(zipFiles.content))
z.extractall(path=base_dir)
print("done")
else:
sys.exit("Files not found on remote server. Make sure you have internet connection before trying again.")
if os.path.isfile(localSettings): # Must create new local settings
isNewDB = False
else: # Already there, let's hope with no problems
isNewDB = True
databases = []
daqbrokerSettings.setupLocalVars(localSettings)
scoped = daqbrokerSettings.getScoped()
session = scoped()
daqbrokerSettings.daqbroker_settings_local.metadata.create_all(
daqbrokerSettings.localEngine)
#id = snowflake.make_snowflake(snowflake_file='snowflake')
if isNewDB:
newGlobal = daqbrokerSettings.Global(
clock=time.time(),
version=VERSION,
backupfolder="backups",
importfolder="import",
tempfolder="temp",
ntp="NONE",
logport=9092,
commport=9090,
remarks="{}")
session.add(newGlobal)
newFolder = daqbrokerSettings.folder(
clock=time.time(), path="backups", type="0", remarks="{}")
session.add(newFolder)
newFolder = daqbrokerSettings.folder(
clock=time.time(), path="imports", type="0", remarks="{}")
session.add(newFolder)
newFolder = daqbrokerSettings.folder(
clock=time.time(), path="temp", type="0", remarks="{}")
session.add(newFolder)
newNode = daqbrokerSettings.nodes(
node=monitorServer.globalID,
name="localhost",
address="127.0.0.1",
port=9091,
local="127.0.0.1",
active=True,
lastActive=time.time(),
tsyncauto=False,
remarks="{}")
session.add(newNode)
globals = {
'clock': time.time(),
'version': VERSION,
'backupfolder': 'backups',
'importfolder': 'import',
'tempfolder': 'temp',
'ntp': None,
'remarks': {},
'commport': 9090,
'logport': 9092,
'isDefault': True} # Default values, should I use this?
else:
maxGlobal = session.query(
daqbrokerSettings.Global).filter_by(
clock=session.query(
func.max(
daqbrokerSettings.Global.clock))).first()
if maxGlobal:
globals = {}
for field in maxGlobal.__dict__:
if not field.startswith('_'):
globals[field] = getattr(maxGlobal, field)
else:
pass # Something very wrong happened with the local settings, this should be handled with a GUI
session.commit()
return globals
except Exception as e:
traceback.print_exc()
session.rollback()
sys.exit('Could not set up local settings, make sure you have the correct access rights for this folder and restart the application!')
def startServer(localSettings='localSettings', appPort=7000, logFilename="logFile.log"):
global theApp
bufferSize = 64 * 1024
password = str(snowflake.make_snowflake(snowflake_file=os.path.join(base_dir, 'snowflake')))
manager = multiprocessing.Manager()
servers = manager.list()
workers = manager.list()
backupInfo = manager.dict()
for i in range(0, 10000):
workers.append(-1)
if os.path.isfile(os.path.join(base_dir, 'secretEnc')):
pyAesCrypt.decryptFile(
os.path.join(base_dir, "secretEnc"),
os.path.join(base_dir, "secretPlain"),
password,
bufferSize)
file = open(os.path.join(base_dir, "secretPlain"), 'r')
aList = json.load(file)
for server in aList:
servers.append(server)
file.close()
os.remove(os.path.join(base_dir, "secretPlain"))
if os.path.isabs(localSettings):
setFile = localSettings
else:
setFile = os.path.join(base_dir, localSettings)
globals = setupLocalSettings(setFile)
theApp = app.createApp(theServers=servers, theWorkers=workers)
p = multiprocessing.Process(
target=backupServer.startBackup, args=(
os.path.join(base_dir, 'static', 'rsync'), backupInfo, setFile))
p.start()
multiprocesses.append(
{'name': 'Backup', 'pid': p.pid, 'description': 'DAQBroker backup process'})
time.sleep(1)
p = multiprocessing.Process(
target=logServer.logServer, args=(
globals["logport"], base_dir), kwargs={
'logFilename': logFilename})
p.start()
multiprocesses.append(
{'name': 'Logger', 'pid': p.pid, 'description': 'DAQBroker log process'})
time.sleep(1)
p = multiprocessing.Process(
target=commServer.collector,
args=(
servers,
globals["commport"],
globals["logport"],
backupInfo,
setFile))
p.start()
multiprocesses.append({'name': 'Collector', 'pid': p.pid,
'description': 'DAQBroker message collector process'})
time.sleep(1)
p = multiprocessing.Process(
target=monitorServer.producer,
args=(
servers,
globals["commport"],
globals["logport"],
False,
backupInfo,
workers,
setFile
))
p.start()
multiprocesses.append({'name': 'Producer', 'pid': p.pid,
'description': 'DAQBroker broadcasting server process'})
time.sleep(1)
http_server = HTTPServer(WSGIContainer(theApp))
http_server.listen(appPort)
webbrowser.open('http://localhost:'+str(appPort)+"/daqbroker")
IOLoop.instance().start()
if __name__ == "__main__":
multiprocessing.freeze_support()
theArguments = ['localSettings', 'appPort', 'logFileName']
obj = {}
if len(sys.argv) < 5:
for i, val in enumerate(sys.argv):
if i == len(theArguments) + 1:
break
if i < 1:
continue
obj[theArguments[i - 1]] = val
else:
sys.exit(
"Usage:\n\tdaqbrokerServer localSettings apiPort logFile\nOr:\n\tdaqbrokerServer localSettings apiPort\nOr:\n\tdaqbrokerServer localSettings\nOr:\n\tdaqbroker")
if os.path.isfile(os.path.join(base_dir, 'pid')):
if 'appPort' in obj:
appPort = int(obj['appPort'])
else:
appPort = 7000
with open(os.path.join(base_dir, 'pid'), 'r') as f:
            existingPID = f.read().strip()
processExists = False
if existingPID:
if psutil.pid_exists(int(existingPID)):
processExists = True
if not processExists:
with open(os.path.join(base_dir, 'pid'), 'w') as f:
f.write(str(os.getpid()))
f.flush()
newServer = daqbrokerServer(**obj)
newServer.start()
else:
webbrowser.open('http://localhost:' + str(appPort) + "/daqbroker")
else:
with open(os.path.join(base_dir, 'pid'), 'w') as f:
f.write(str(os.getpid()))
f.flush()
newServer = daqbrokerServer(**obj)
        newServer.start()
# /AFQ-Browser-0.3.tar.gz/AFQ-Browser-0.3/doc/sphinxext/numpydoc.py
from __future__ import division, absolute_import, print_function
import sys
import re
import pydoc
import sphinx
import inspect
import collections
if sphinx.__version__ < '1.0.1':
raise RuntimeError("Sphinx 1.0.1 or newer is required")
from docscrape_sphinx import get_doc_object, SphinxDocString
from sphinx.util.compat import Directive
if sys.version_info[0] >= 3:
sixu = lambda s: s
else:
sixu = lambda s: unicode(s, 'unicode_escape')
def mangle_docstrings(app, what, name, obj, options, lines,
reference_offset=[0]):
cfg = {'use_plots': app.config.numpydoc_use_plots,
'show_class_members': app.config.numpydoc_show_class_members,
'show_inherited_class_members':
app.config.numpydoc_show_inherited_class_members,
'class_members_toctree': app.config.numpydoc_class_members_toctree}
u_NL = sixu('\n')
if what == 'module':
# Strip top title
pattern = '^\\s*[#*=]{4,}\\n[a-z0-9 -]+\\n[#*=]{4,}\\s*'
title_re = re.compile(sixu(pattern), re.I | re.S)
lines[:] = title_re.sub(sixu(''), u_NL.join(lines)).split(u_NL)
else:
doc = get_doc_object(obj, what, u_NL.join(lines), config=cfg)
if sys.version_info[0] >= 3:
doc = str(doc)
else:
doc = unicode(doc)
lines[:] = doc.split(u_NL)
if (app.config.numpydoc_edit_link and hasattr(obj, '__name__')
and obj.__name__):
if hasattr(obj, '__module__'):
v = dict(full_name=sixu("%s.%s") % (obj.__module__, obj.__name__))
else:
v = dict(full_name=obj.__name__)
lines += [sixu(''), sixu('.. htmlonly::'), sixu('')]
lines += [sixu(' %s') % x for x in
(app.config.numpydoc_edit_link % v).split("\n")]
# replace reference numbers so that there are no duplicates
references = []
for line in lines:
line = line.strip()
m = re.match(sixu('^.. \\[([a-z0-9_.-])\\]'), line, re.I)
if m:
references.append(m.group(1))
# start renaming from the longest string, to avoid overwriting parts
references.sort(key=lambda x: -len(x))
if references:
for i, line in enumerate(lines):
for r in references:
if re.match(sixu('^\\d+$'), r):
new_r = sixu("R%d") % (reference_offset[0] + int(r))
else:
new_r = sixu("%s%d") % (r, reference_offset[0])
lines[i] = lines[i].replace(sixu('[%s]_') % r,
sixu('[%s]_') % new_r)
lines[i] = lines[i].replace(sixu('.. [%s]') % r,
sixu('.. [%s]') % new_r)
reference_offset[0] += len(references)
def mangle_signature(app, what, name, obj, options, sig, retann):
# Do not try to inspect classes that don't define `__init__`
if (inspect.isclass(obj)
and (not hasattr(obj, '__init__')
or 'initializes x; see ' in pydoc.getdoc(obj.__init__))):
return '', ''
if not (isinstance(obj, collections.Callable)
or hasattr(obj, '__argspec_is_invalid_')):
return
if not hasattr(obj, '__doc__'):
return
doc = SphinxDocString(pydoc.getdoc(obj))
if doc['Signature']:
sig = re.sub(sixu("^[^(]*"), sixu(""), doc['Signature'])
return sig, sixu('')
def setup(app, get_doc_object_=get_doc_object):
if not hasattr(app, 'add_config_value'):
return # probably called by nose, better bail out
global get_doc_object
get_doc_object = get_doc_object_
app.connect('autodoc-process-docstring', mangle_docstrings)
app.connect('autodoc-process-signature', mangle_signature)
app.add_config_value('numpydoc_edit_link', None, False)
app.add_config_value('numpydoc_use_plots', None, False)
app.add_config_value('numpydoc_show_class_members', True, True)
app.add_config_value('numpydoc_show_inherited_class_members', True, True)
app.add_config_value('numpydoc_class_members_toctree', True, True)
# Extra mangling domains
app.add_domain(NumpyPythonDomain)
app.add_domain(NumpyCDomain)
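# Illustrative sketch (not part of this module): the extension is activated from a Sphinx
# conf.py by listing it next to autodoc and, optionally, tuning the config values registered
# in setup() above, e.g.
#     extensions = ['sphinx.ext.autodoc', 'numpydoc']
#     numpydoc_show_class_members = False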
# ------------------------------------------------------------------------------
# Docstring-mangling domains
# ------------------------------------------------------------------------------
from docutils.statemachine import ViewList
from sphinx.domains.c import CDomain
from sphinx.domains.python import PythonDomain
class ManglingDomainBase(object):
directive_mangling_map = {}
def __init__(self, *a, **kw):
super(ManglingDomainBase, self).__init__(*a, **kw)
self.wrap_mangling_directives()
def wrap_mangling_directives(self):
for name, objtype in list(self.directive_mangling_map.items()):
self.directives[name] = wrap_mangling_directive(
self.directives[name], objtype)
class NumpyPythonDomain(ManglingDomainBase, PythonDomain):
name = 'np'
directive_mangling_map = {
'function': 'function',
'class': 'class',
'exception': 'class',
'method': 'function',
'classmethod': 'function',
'staticmethod': 'function',
'attribute': 'attribute',
}
indices = []
class NumpyCDomain(ManglingDomainBase, CDomain):
name = 'np-c'
directive_mangling_map = {
'function': 'function',
'member': 'attribute',
'macro': 'function',
'type': 'class',
'var': 'object',
}
def wrap_mangling_directive(base_directive, objtype):
class directive(base_directive):
def run(self):
env = self.state.document.settings.env
name = None
if self.arguments:
m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0])
name = m.group(2).strip()
if not name:
name = self.arguments[0]
lines = list(self.content)
mangle_docstrings(env.app, objtype, name, None, None, lines)
self.content = ViewList(lines, self.content.parent)
return base_directive.run(self)
    return directive
# /Aitomatic-Contrib-23.8.10.3.tar.gz/Aitomatic-Contrib-23.8.10.3/src/aito/iot_mgmt/data/scripts/profile_equipment_data_fields.py
from pandas._libs.missing import NA  # pylint: disable=no-name-in-module
from tqdm import tqdm
from aito.pmfp.data_mgmt import EquipmentParquetDataSet
from aito.util.data_proc import ParquetDataset
from aito.iot_mgmt.api import (EquipmentUniqueTypeGroup,
EquipmentUniqueTypeGroupDataFieldProfile)
MAX_N_DISTINCT_VALUES_TO_PROFILE: int = 30
def run(general_type: str, unique_type_group: str):
"""Run this script to profile Equipment Unique Type Group's data fields."""
# get Equipment Unique Type Group and corresponding Parquet Data Set
eq_unq_tp_grp: EquipmentUniqueTypeGroup = \
EquipmentUniqueTypeGroup.objects.get(name=unique_type_group)
eq_unq_tp_grp_parquet_data_set: EquipmentParquetDataSet = \
EquipmentParquetDataSet(general_type=general_type,
unique_type_group=unique_type_group)
eq_unq_tp_grp_parquet_ds: ParquetDataset = \
eq_unq_tp_grp_parquet_data_set.load()
# delete previously stored Data Field profiles
EquipmentUniqueTypeGroupDataFieldProfile.objects.filter(
equipment_unique_type_group=eq_unq_tp_grp).delete()
# profile Data Fields and save profiles into DB
for equipment_data_field in tqdm(eq_unq_tp_grp.equipment_data_fields.all()): # noqa: E501
eq_data_field_name: str = equipment_data_field.name
if eq_data_field_name in eq_unq_tp_grp_parquet_ds.possibleFeatureCols:
# pylint: disable=protected-access
if eq_unq_tp_grp_parquet_ds.typeIsNum(eq_data_field_name):
eq_unq_tp_grp_parquet_ds._nulls[eq_data_field_name] = \
equipment_data_field.lower_numeric_null, \
equipment_data_field.upper_numeric_null
_distinct_values_proportions: dict = {
(str(NA) if k is NA else k): v
for k, v in
eq_unq_tp_grp_parquet_ds.distinct(eq_data_field_name).items()}
_n_distinct_values: int = len(_distinct_values_proportions)
eq_unq_tp_grp_data_field_profile: \
EquipmentUniqueTypeGroupDataFieldProfile = \
EquipmentUniqueTypeGroupDataFieldProfile.objects.create(
equipment_unique_type_group=eq_unq_tp_grp,
equipment_data_field=equipment_data_field,
valid_proportion=(eq_unq_tp_grp_parquet_ds
.nonNullProportion(eq_data_field_name)),
n_distinct_values=_n_distinct_values)
if _n_distinct_values <= MAX_N_DISTINCT_VALUES_TO_PROFILE:
eq_unq_tp_grp_data_field_profile.distinct_values = \
_distinct_values_proportions
if eq_unq_tp_grp_parquet_ds.typeIsNum(eq_data_field_name):
quartiles: dict = (eq_unq_tp_grp_parquet_ds
.reprSample[eq_data_field_name]
.describe(percentiles=(.25, .5, .75))
.drop(index='count',
level=None,
inplace=False,
errors='raise')
.to_dict())
eq_unq_tp_grp_data_field_profile.sample_min = \
quartiles['min']
eq_unq_tp_grp_data_field_profile.outlier_rst_min = \
eq_unq_tp_grp_parquet_ds.outlierRstMin(eq_data_field_name)
eq_unq_tp_grp_data_field_profile.sample_quartile = \
quartiles['25%']
eq_unq_tp_grp_data_field_profile.sample_median = \
quartiles['50%']
eq_unq_tp_grp_data_field_profile.sample_3rd_quartile = \
quartiles['75%']
eq_unq_tp_grp_data_field_profile.outlier_rst_max = \
eq_unq_tp_grp_parquet_ds.outlierRstMax(eq_data_field_name)
eq_unq_tp_grp_data_field_profile.sample_max = \
quartiles['max']
        eq_unq_tp_grp_data_field_profile.save()
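# Illustrative invocation sketch (the argument values below are assumptions, not real group names):
#     run(general_type="refrigeration", unique_type_group="model-x-compressors")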
# /Findex_GUI-0.2.18-py3-none-any.whl/findex_gui/controllers/auth/permissions.py
from flask import current_app
from findex_gui.controllers.auth.auth import get_current_user_data, not_logged_in
def has_permission(role, resource, action):
"""Function to check if a user has the specified permission."""
role = current_app.auth.load_role(role)
return role.has_permission(resource, action) if role else False
def permission_required(resource, action, callback=None):
"""
Decorator for views that require a certain permission of the logged in
user.
"""
def wrap(func):
def decorator(*args, **kwargs):
user_data = get_current_user_data()
if user_data is None:
return not_logged_in(callback, *args, **kwargs)
if not has_permission(user_data.get('role'), resource, action):
if callback is None:
return current_app.auth.not_permitted_callback(*args, **kwargs)
else:
                    return callback(*args, **kwargs)
return func(*args, **kwargs)
return decorator
return wrap
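# Illustrative usage sketch (the route, view name and resource/action strings are assumptions):
#     @app.route("/tickets/new")
#     @permission_required("ticket", "create")
#     def create_ticket():
#         ...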
class Permission(object):
"""
Permission object, representing actions that can be taken on a resource.
Attributes:
- resource: A resource is a component on which actions can be performed.
Examples: post, user, ticket, product, but also post.comment, user.role,
etc.
- action: Any action that can be performed on a resource. Names of actions
should be short and clear. Examples: create, read, update, delete, download,
list, etc.
"""
def __init__(self, resource, action):
self.resource = resource
self.action = action
def __eq__(self, other):
return self.resource == other.resource and self.action == other.action
class Role(object):
"""
Role object to group users and permissions.
Attributes:
- name: The name of the role.
- permissions: A list of permissions.
"""
def __init__(self, name, permissions):
self.name = name
self.permissions = permissions
def has_permission(self, resource, action):
return any([resource == perm.resource and action == perm.action\
                    for perm in self.permissions])
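# Illustrative sketch of the two classes above (role name and permissions are assumptions):
#     editor = Role("editor", [Permission("post", "create"), Permission("post", "update")])
#     editor.has_permission("post", "create")   # -> True
#     editor.has_permission("post", "delete")   # -> False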
# /AltAnalyze-2.1.3.15.tar.gz/AltAnalyze-2.1.3.15/altanalyze/stats_scripts/mpmath/libmp/libintmath.py
import math
from bisect import bisect
from .backend import xrange
from .backend import BACKEND, gmpy, sage, sage_utils, MPZ, MPZ_ONE, MPZ_ZERO
def giant_steps(start, target, n=2):
"""
Return a list of integers ~=
[start, n*start, ..., target/n^2, target/n, target]
but conservatively rounded so that the quotient between two
successive elements is actually slightly less than n.
With n = 2, this describes suitable precision steps for a
quadratically convergent algorithm such as Newton's method;
with n = 3 steps for cubic convergence (Halley's method), etc.
>>> giant_steps(50,1000)
[66, 128, 253, 502, 1000]
>>> giant_steps(50,1000,4)
[65, 252, 1000]
"""
L = [target]
while L[-1] > start*n:
L = L + [L[-1]//n + 2]
return L[::-1]
def rshift(x, n):
"""For an integer x, calculate x >> n with the fastest (floor)
rounding. Unlike the plain Python expression (x >> n), n is
allowed to be negative, in which case a left shift is performed."""
if n >= 0: return x >> n
else: return x << (-n)
def lshift(x, n):
"""For an integer x, calculate x << n. Unlike the plain Python
expression (x << n), n is allowed to be negative, in which case a
right shift with default (floor) rounding is performed."""
if n >= 0: return x << n
else: return x >> (-n)
if BACKEND == 'sage':
import operator
rshift = operator.rshift
lshift = operator.lshift
def python_trailing(n):
"""Count the number of trailing zero bits in abs(n)."""
if not n:
return 0
t = 0
while not n & 1:
n >>= 1
t += 1
return t
if BACKEND == 'gmpy':
if gmpy.version() >= '2':
def gmpy_trailing(n):
"""Count the number of trailing zero bits in abs(n) using gmpy."""
if n: return MPZ(n).bit_scan1()
else: return 0
else:
def gmpy_trailing(n):
"""Count the number of trailing zero bits in abs(n) using gmpy."""
if n: return MPZ(n).scan1()
else: return 0
# Small powers of 2
powers = [1<<_ for _ in range(300)]
def python_bitcount(n):
"""Calculate bit size of the nonnegative integer n."""
bc = bisect(powers, n)
if bc != 300:
return bc
bc = int(math.log(n, 2)) - 4
return bc + bctable[n>>bc]
def gmpy_bitcount(n):
"""Calculate bit size of the nonnegative integer n."""
if n: return MPZ(n).numdigits(2)
else: return 0
#def sage_bitcount(n):
# if n: return MPZ(n).nbits()
# else: return 0
def sage_trailing(n):
return MPZ(n).trailing_zero_bits()
if BACKEND == 'gmpy':
bitcount = gmpy_bitcount
trailing = gmpy_trailing
elif BACKEND == 'sage':
sage_bitcount = sage_utils.bitcount
bitcount = sage_bitcount
trailing = sage_trailing
else:
bitcount = python_bitcount
trailing = python_trailing
if BACKEND == 'gmpy' and 'bit_length' in dir(gmpy):
bitcount = gmpy.bit_length
# Used to avoid slow function calls as far as possible
trailtable = [trailing(n) for n in range(256)]
bctable = [bitcount(n) for n in range(1024)]
# TODO: speed up for bases 2, 4, 8, 16, ...
def bin_to_radix(x, xbits, base, bdigits):
"""Changes radix of a fixed-point number; i.e., converts
x * 2**xbits to floor(x * 10**bdigits)."""
return x * (MPZ(base)**bdigits) >> xbits
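# Worked example (sketch): the fixed-point value 3 * 2**10, i.e. 3.0 with xbits=10, rendered
# with 3 decimal digits gives floor(3.0 * 10**3):
#     bin_to_radix(3 << 10, 10, 10, 3)   # -> 3000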
stddigits = '0123456789abcdefghijklmnopqrstuvwxyz'
def small_numeral(n, base=10, digits=stddigits):
"""Return the string numeral of a positive integer in an arbitrary
base. Most efficient for small input."""
if base == 10:
return str(n)
digs = []
while n:
n, digit = divmod(n, base)
digs.append(digits[digit])
return "".join(digs[::-1])
def numeral_python(n, base=10, size=0, digits=stddigits):
"""Represent the integer n as a string of digits in the given base.
Recursive division is used to make this function about 3x faster
than Python's str() for converting integers to decimal strings.
The 'size' parameters specifies the number of digits in n; this
number is only used to determine splitting points and need not be
exact."""
if n <= 0:
if not n:
return "0"
return "-" + numeral(-n, base, size, digits)
# Fast enough to do directly
if size < 250:
return small_numeral(n, base, digits)
# Divide in half
half = (size // 2) + (size & 1)
A, B = divmod(n, base**half)
ad = numeral(A, base, half, digits)
bd = numeral(B, base, half, digits).rjust(half, "0")
return ad + bd
def numeral_gmpy(n, base=10, size=0, digits=stddigits):
"""Represent the integer n as a string of digits in the given base.
Recursive division is used to make this function about 3x faster
than Python's str() for converting integers to decimal strings.
The 'size' parameters specifies the number of digits in n; this
number is only used to determine splitting points and need not be
exact."""
if n < 0:
return "-" + numeral(-n, base, size, digits)
# gmpy.digits() may cause a segmentation fault when trying to convert
# extremely large values to a string. The size limit may need to be
# adjusted on some platforms, but 1500000 works on Windows and Linux.
if size < 1500000:
return gmpy.digits(n, base)
# Divide in half
half = (size // 2) + (size & 1)
A, B = divmod(n, MPZ(base)**half)
ad = numeral(A, base, half, digits)
bd = numeral(B, base, half, digits).rjust(half, "0")
return ad + bd
if BACKEND == "gmpy":
numeral = numeral_gmpy
else:
numeral = numeral_python
_1_800 = 1<<800
_1_600 = 1<<600
_1_400 = 1<<400
_1_200 = 1<<200
_1_100 = 1<<100
_1_50 = 1<<50
def isqrt_small_python(x):
"""
Correctly (floor) rounded integer square root, using
division. Fast up to ~200 digits.
"""
if not x:
return x
if x < _1_800:
# Exact with IEEE double precision arithmetic
if x < _1_50:
return int(x**0.5)
# Initial estimate can be any integer >= the true root; round up
r = int(x**0.5 * 1.00000000000001) + 1
else:
bc = bitcount(x)
n = bc//2
r = int((x>>(2*n-100))**0.5+2)<<(n-50) # +2 is to round up
# The following iteration now precisely computes floor(sqrt(x))
# See e.g. Crandall & Pomerance, "Prime Numbers: A Computational
# Perspective"
while 1:
y = (r+x//r)>>1
if y >= r:
return r
r = y
def isqrt_fast_python(x):
"""
Fast approximate integer square root, computed using division-free
Newton iteration for large x. For random integers the result is almost
always correct (floor(sqrt(x))), but is 1 ulp too small with a roughly
0.1% probability. If x is very close to an exact square, the answer is
1 ulp wrong with high probability.
With 0 guard bits, the largest error over a set of 10^5 random
inputs of size 1-10^5 bits was 3 ulp. The use of 10 guard bits
almost certainly guarantees a max 1 ulp error.
"""
# Use direct division-based iteration if sqrt(x) < 2^400
# Assume floating-point square root accurate to within 1 ulp, then:
# 0 Newton iterations good to 52 bits
# 1 Newton iterations good to 104 bits
# 2 Newton iterations good to 208 bits
# 3 Newton iterations good to 416 bits
if x < _1_800:
y = int(x**0.5)
if x >= _1_100:
y = (y + x//y) >> 1
if x >= _1_200:
y = (y + x//y) >> 1
if x >= _1_400:
y = (y + x//y) >> 1
return y
bc = bitcount(x)
guard_bits = 10
x <<= 2*guard_bits
bc += 2*guard_bits
bc += (bc&1)
hbc = bc//2
startprec = min(50, hbc)
# Newton iteration for 1/sqrt(x), with floating-point starting value
r = int(2.0**(2*startprec) * (x >> (bc-2*startprec)) ** -0.5)
pp = startprec
for p in giant_steps(startprec, hbc):
# r**2, scaled from real size 2**(-bc) to 2**p
r2 = (r*r) >> (2*pp - p)
# x*r**2, scaled from real size ~1.0 to 2**p
xr2 = ((x >> (bc-p)) * r2) >> p
# New value of r, scaled from real size 2**(-bc/2) to 2**p
r = (r * ((3<<p) - xr2)) >> (pp+1)
pp = p
# (1/sqrt(x))*x = sqrt(x)
return (r*(x>>hbc)) >> (p+guard_bits)
def sqrtrem_python(x):
"""Correctly rounded integer (floor) square root with remainder."""
# to check cutoff:
# plot(lambda x: timing(isqrt, 2**int(x)), [0,2000])
if x < _1_600:
y = isqrt_small_python(x)
return y, x - y*y
y = isqrt_fast_python(x) + 1
rem = x - y*y
# Correct remainder
while rem < 0:
y -= 1
rem += (1+2*y)
else:
if rem:
while rem > 2*(1+y):
y += 1
rem -= (1+2*y)
return y, rem
def isqrt_python(x):
"""Integer square root with correct (floor) rounding."""
return sqrtrem_python(x)[0]
def sqrt_fixed(x, prec):
return isqrt_fast(x<<prec)
sqrt_fixed2 = sqrt_fixed
if BACKEND == 'gmpy':
isqrt_small = isqrt_fast = isqrt = gmpy.sqrt
sqrtrem = gmpy.sqrtrem
elif BACKEND == 'sage':
isqrt_small = isqrt_fast = isqrt = \
getattr(sage_utils, "isqrt", lambda n: MPZ(n).isqrt())
sqrtrem = lambda n: MPZ(n).sqrtrem()
else:
isqrt_small = isqrt_small_python
isqrt_fast = isqrt_fast_python
isqrt = isqrt_python
sqrtrem = sqrtrem_python
def ifib(n, _cache={}):
"""Computes the nth Fibonacci number as an integer, for
integer n."""
if n < 0:
return (-1)**(-n+1) * ifib(-n)
if n in _cache:
return _cache[n]
m = n
# Use Dijkstra's logarithmic algorithm
# The following implementation is basically equivalent to
# http://en.literateprograms.org/Fibonacci_numbers_(Scheme)
a, b, p, q = MPZ_ONE, MPZ_ZERO, MPZ_ZERO, MPZ_ONE
while n:
if n & 1:
aq = a*q
a, b = b*q+aq+a*p, b*p+aq
n -= 1
else:
qq = q*q
p, q = p*p+qq, qq+2*p*q
n >>= 1
if m < 250:
_cache[m] = b
return b
MAX_FACTORIAL_CACHE = 1000
def ifac(n, memo={0:1, 1:1}):
"""Return n factorial (for integers n >= 0 only)."""
f = memo.get(n)
if f:
return f
k = len(memo)
p = memo[k-1]
MAX = MAX_FACTORIAL_CACHE
while k <= n:
p *= k
if k <= MAX:
memo[k] = p
k += 1
return p
def ifac2(n, memo_pair=[{0:1}, {1:1}]):
"""Return n!! (double factorial), integers n >= 0 only."""
memo = memo_pair[n&1]
f = memo.get(n)
if f:
return f
k = max(memo)
p = memo[k]
MAX = MAX_FACTORIAL_CACHE
while k < n:
k += 2
p *= k
if k <= MAX:
memo[k] = p
return p
if BACKEND == 'gmpy':
ifac = gmpy.fac
elif BACKEND == 'sage':
ifac = lambda n: int(sage.factorial(n))
ifib = sage.fibonacci
def list_primes(n):
n = n + 1
sieve = list(xrange(n))
sieve[:2] = [0, 0]
for i in xrange(2, int(n**0.5)+1):
if sieve[i]:
for j in xrange(i**2, n, i):
sieve[j] = 0
return [p for p in sieve if p]
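# Example (sketch): list_primes(20) -> [2, 3, 5, 7, 11, 13, 17, 19]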
if BACKEND == 'sage':
# Note: it is *VERY* important for performance that we convert
# the list to Python ints.
def list_primes(n):
return [int(_) for _ in sage.primes(n+1)]
small_odd_primes = (3,5,7,11,13,17,19,23,29,31,37,41,43,47)
small_odd_primes_set = set(small_odd_primes)
def isprime(n):
"""
Determines whether n is a prime number. A probabilistic test is
performed if n is very large. No special trick is used for detecting
perfect powers.
>>> sum(list_primes(100000))
454396537
>>> sum(n*isprime(n) for n in range(100000))
454396537
"""
n = int(n)
if not n & 1:
return n == 2
if n < 50:
return n in small_odd_primes_set
for p in small_odd_primes:
if not n % p:
return False
m = n-1
s = trailing(m)
d = m >> s
def test(a):
x = pow(a,d,n)
if x == 1 or x == m:
return True
for r in xrange(1,s):
x = x**2 % n
if x == m:
return True
return False
# See http://primes.utm.edu/prove/prove2_3.html
if n < 1373653:
witnesses = [2,3]
elif n < 341550071728321:
witnesses = [2,3,5,7,11,13,17]
else:
witnesses = small_odd_primes
for a in witnesses:
if not test(a):
return False
return True
def moebius(n):
"""
Evaluates the Moebius function which is `mu(n) = (-1)^k` if `n`
is a product of `k` distinct primes and `mu(n) = 0` otherwise.
TODO: speed up using factorization
"""
n = abs(int(n))
if n < 2:
return n
factors = []
for p in xrange(2, n+1):
if not (n % p):
if not (n % p**2):
return 0
if not sum(p % f for f in factors):
factors.append(p)
return (-1)**len(factors)
def gcd(*args):
a = 0
for b in args:
if a:
while b:
a, b = b, a % b
else:
a = b
return a
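# Example (sketch): gcd(12, 18, 30) -> 6; with no arguments the function returns 0.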
# Comment by Juan Arias de Reyna:
#
# I learn this method to compute EulerE[2n] from van de Lune.
#
# We apply the formula EulerE[2n] = (-1)^n 2**(-2n) sum_{j=0}^n a(2n,2j+1)
#
# where the numbers a(n,j) vanish for j > n+1 or j <= -1 and satisfies
#
# a(0,-1) = a(0,0) = 0; a(0,1)= 1; a(0,2) = a(0,3) = 0
#
# a(n,j) = a(n-1,j) when n+j is even
# a(n,j) = (j-1) a(n-1,j-1) + (j+1) a(n-1,j+1) when n+j is odd
#
#
# But we can use only one array unidimensional a(j) since to compute
# a(n,j) we only need to know a(n-1,k) where k and j are of different parity
# and we have not to conserve the used values.
#
# We cached up the values of Euler numbers to sufficiently high order.
#
# Important Observation: If we pretend to use the numbers
# EulerE[1], EulerE[2], ... , EulerE[n]
# it is convenient to compute first EulerE[n], since the algorithm
# computes first all
# the previous ones, and keeps them in the CACHE
MAX_EULER_CACHE = 500
def eulernum(m, _cache={0:MPZ_ONE}):
r"""
Computes the Euler numbers `E(n)`, which can be defined as
coefficients of the Taylor expansion of `1/cosh x`:
.. math ::
\frac{1}{\cosh x} = \sum_{n=0}^\infty \frac{E_n}{n!} x^n
Example::
>>> [int(eulernum(n)) for n in range(11)]
[1, 0, -1, 0, 5, 0, -61, 0, 1385, 0, -50521]
>>> [int(eulernum(n)) for n in range(11)] # test cache
[1, 0, -1, 0, 5, 0, -61, 0, 1385, 0, -50521]
"""
# for odd m > 1, the Euler numbers are zero
if m & 1:
return MPZ_ZERO
f = _cache.get(m)
if f:
return f
MAX = MAX_EULER_CACHE
n = m
a = [MPZ(_) for _ in [0,0,1,0,0,0]]
for n in range(1, m+1):
for j in range(n+1, -1, -2):
a[j+1] = (j-1)*a[j] + (j+1)*a[j+2]
a.append(0)
suma = 0
for k in range(n+1, -1, -2):
suma += a[k+1]
if n <= MAX:
_cache[n] = ((-1)**(n//2))*(suma // 2**n)
if n == m:
            return ((-1)**(n//2))*suma // 2**n
# /CheeseFramework-1.4.95-py3-none-any.whl/Cheese/mockManager.py
from Cheese.testError import MockError
class MockManager:
mocks = {}
@staticmethod
def setMock(mock):
MockManager.mocks[mock.repoName.upper()] = mock
@staticmethod
def returnMock(repositoryName, methodName, kwargs):
"""
Mocks repository method
"""
if (repositoryName.upper() in MockManager.mocks.keys()):
mock = MockManager.mocks[repositoryName.upper()]
if (methodName in mock.whenReturns.keys()): # try to find whenReturn
method = mock.whenReturns[methodName]
for ret in method:
if (kwargs == ret["KWARGS"]):
return MockManager.prepareResponse(ret["RESPONSE"])
if (methodName in mock.catch.keys()): # try to find catch
method = mock.catch[methodName]
for catch in method:
for key in catch["KWARGS"].keys(): # runs through all condition arguments
if (key not in kwargs.keys()):
raise MockError(repositoryName, methodName, key)
if (catch["ARG_NAME"] not in kwargs): # if cached argument is not in kwargs of real method
raise MockError(repositoryName, methodName, catch["ARG_NAME"])
pointer = catch["POINTER"]
pointer.setValue(kwargs[catch["ARG_NAME"]])
return None
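    # Shape of the mock data consumed above (inferred from this method, not a documented schema):
    # mock.whenReturns[methodName] is a list of {"KWARGS": {...}, "RESPONSE": ...} entries, and
    # mock.catch[methodName] is a list of {"KWARGS": {...}, "ARG_NAME": "...", "POINTER": <Pointer>} entries.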
def prepareResponse(response):
"""
        Recursively prepares a mocked response: lists, tuples and dicts are
        processed element by element, and Pointer objects are replaced by
        their stored value.
"""
newResponse = response
if (type(response) == list): #list
newResponse = []
for res in response:
newResponse.append(MockManager.prepareResponse(res))
        elif (type(response) == tuple): # tuple is immutable, so build a new one
            newResponse = tuple(MockManager.prepareResponse(res) for res in response)
elif (type(response) == dict): # dictionary
for key in response.keys():
newResponse[key] = MockManager.prepareResponse(response[key])
elif (response.__class__.__name__ == "Pointer"):
newResponse = MockManager.prepareResponse(response.getValue())
        return newResponse
# /Dejavu-1.5.0.zip/Dejavu-1.5.0/dejavu/test/zoo_fixture.py
import datetime
import os
thisdir = os.path.dirname(__file__)
logname = os.path.join(thisdir, "djvtest.log")
try:
import pythoncom
except ImportError:
pythoncom = None
try:
set
except NameError:
from sets import Set as set
import sys
import threading
import time
import traceback
import unittest
import warnings
try:
# Builtin in Python 2.5?
decimal
except NameError:
try:
# Module in Python 2.3, 2.4
import decimal
except ImportError:
decimal = None
try:
import fixedpoint
except ImportError:
fixedpoint = None
__all__ = ['Animal', 'Exhibit', 'Lecture', 'Vet', 'Visit', 'Zoo',
# Don't export the ZooTests class--it will break e.g. test_dejavu.
'arena', 'init', 'run', 'setup', 'teardown']
import dejavu
from dejavu import errors, logic, storage
from dejavu import Unit, UnitProperty, ToOne, ToMany, UnitSequencerInteger, UnitAssociation
from dejavu.test import tools
from dejavu import engines
class EscapeProperty(UnitProperty):
def __set__(self, unit, value):
UnitProperty.__set__(self, unit, value)
# Zoo is a ToOne association, so it will return a unit or None.
z = unit.Zoo()
if z:
z.LastEscape = unit.LastEscape
class Animal(Unit):
Species = UnitProperty(hints={'bytes': 100})
ZooID = UnitProperty(int, index=True)
Legs = UnitProperty(int, default=4)
PreviousZoos = UnitProperty(list, hints={'bytes': 8000})
LastEscape = EscapeProperty(datetime.datetime)
Lifespan = UnitProperty(float, hints={'precision': 4})
Age = UnitProperty(float, hints={'precision': 4}, default=1)
MotherID = UnitProperty(int)
PreferredFoodID = UnitProperty(int)
AlternateFoodID = UnitProperty(int)
Animal.many_to_one('ID', Animal, 'MotherID')
class Zoo(Unit):
Name = UnitProperty()
Founded = UnitProperty(datetime.date)
Opens = UnitProperty(datetime.time)
LastEscape = UnitProperty(datetime.datetime)
if fixedpoint:
# Explicitly set precision and scale so test_storemsaccess
# can test CURRENCY type
Admission = UnitProperty(fixedpoint.FixedPoint,
hints={'precision': 4, 'scale': 2})
else:
Admission = UnitProperty(float)
Zoo.one_to_many('ID', Animal, 'ZooID')
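# The one_to_many call above generates Zoo.Animal (a ToMany association keyed on Zoo.ID) and the
# reverse Animal.Zoo (a ToOne keyed on Animal.ZooID); test_1_model below asserts exactly this.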
class AlternateFoodAssociation(UnitAssociation):
to_many = False
register = False
def related(self, unit, expr=None):
food = unit.sandbox.unit(Food, ID=unit.AlternateFoodID)
return food
class Food(Unit):
"""A food item."""
Name = UnitProperty()
NutritionValue = UnitProperty(int)
Food.one_to_many('ID', Animal, 'PreferredFoodID')
descriptor = AlternateFoodAssociation('AlternateFoodID', Food, 'ID')
descriptor.nearClass = Animal
Animal._associations['Alternate Food'] = descriptor
Animal.AlternateFood = descriptor
del descriptor
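# The block above hand-wires a second, non-registered ToOne association onto Animal under the
# key 'Alternate Food', so that, e.g. (sketch, see test_8_CustomAssociations):
#     tiger.AlternateFood()   # -> the Food unit whose ID equals tiger.AlternateFoodID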
class Vet(Unit):
"""A Veterinarian."""
Name = UnitProperty()
ZooID = UnitProperty(int, index=True)
City = UnitProperty()
sequencer = UnitSequencerInteger(initial=200)
Vet.many_to_one('ZooID', Zoo, 'ID')
class Visit(Unit):
"""Work done by a Veterinarian on an Animal."""
VetID = UnitProperty(int, index=True)
ZooID = UnitProperty(int, index=True)
AnimalID = UnitProperty(int, index=True)
Date = UnitProperty(datetime.date)
Vet.one_to_many('ID', Visit, 'VetID')
Animal.one_to_many('ID', Visit, 'AnimalID')
class Lecture(Visit):
"""A Visit by a Vet to train staff (rather than visit an Animal)."""
AnimalID = None
Topic = UnitProperty()
class Exhibit(Unit):
# Make this a string to help test vs unicode.
Name = UnitProperty(str)
ZooID = UnitProperty(int)
Animals = UnitProperty(list)
PettingAllowed = UnitProperty(bool)
Creators = UnitProperty(tuple)
if decimal:
Acreage = UnitProperty(decimal.Decimal)
else:
Acreage = UnitProperty(float)
# Remove the ID property (inherited from Unit) from the Exhibit class.
ID = None
sequencer = dejavu.UnitSequencer()
identifiers = ("ZooID", Name)
Zoo.one_to_many('ID', Exhibit, 'ZooID')
class NothingToDoWithZoos(Unit):
ALong = UnitProperty(long, hints={'precision': 1})
AFloat = UnitProperty(float, hints={'precision': 1})
if decimal:
ADecimal = UnitProperty(decimal.Decimal,
hints={'precision': 1, 'scale': 1})
if fixedpoint:
AFixed = UnitProperty(fixedpoint.FixedPoint,
hints={'precision': 1, 'scale': 1})
Jan_1_2001 = datetime.date(2001, 1, 1)
every13days = [Jan_1_2001 + datetime.timedelta(x * 13) for x in range(20)]
every17days = [Jan_1_2001 + datetime.timedelta(x * 17) for x in range(20)]
del x
class ZooTests(unittest.TestCase):
def test_1_model(self):
self.assertEqual(Zoo.Animal.__class__, dejavu.ToMany)
self.assertEqual(Zoo.Animal.nearClass, Zoo)
self.assertEqual(Zoo.Animal.nearKey, 'ID')
self.assertEqual(Zoo.Animal.farClass, Animal)
self.assertEqual(Zoo.Animal.farKey, 'ZooID')
self.assertEqual(Animal.Zoo.__class__, dejavu.ToOne)
self.assertEqual(Animal.Zoo.nearClass, Animal)
self.assertEqual(Animal.Zoo.nearKey, 'ZooID')
self.assertEqual(Animal.Zoo.farClass, Zoo)
self.assertEqual(Animal.Zoo.farKey, 'ID')
def test_2_populate(self):
box = arena.new_sandbox()
try:
# Notice this also tests that: a Unit which is only
# dirtied via __init__ is still saved.
WAP = Zoo(Name = 'Wild Animal Park',
Founded = datetime.date(2000, 1, 1),
# 59 can give rounding errors with divmod, which
# AdapterFromADO needs to correct.
Opens = datetime.time(8, 15, 59),
LastEscape = datetime.datetime(2004, 7, 29, 5, 6, 7),
Admission = "4.95",
)
box.memorize(WAP)
# The object should get an ID automatically.
self.assertNotEqual(WAP.ID, None)
SDZ = Zoo(Name = 'San Diego Zoo',
# This early date should play havoc with a number
# of implementations.
Founded = datetime.date(1835, 9, 13),
Opens = datetime.time(9, 0, 0),
Admission = "0",
)
box.memorize(SDZ)
# The object should get an ID automatically.
self.assertNotEqual(SDZ.ID, None)
Biodome = Zoo(Name = u'Montr\xe9al Biod\xf4me',
Founded = datetime.date(1992, 6, 19),
Opens = datetime.time(9, 0, 0),
Admission = "11.75",
)
box.memorize(Biodome)
seaworld = Zoo(Name = 'Sea_World', Admission = "60")
box.memorize(seaworld)
##
## mostly_empty = Zoo(Name = 'The Mostly Empty Zoo' + (" " * 255))
## box.memorize(mostly_empty)
# Animals
leopard = Animal(Species='Leopard', Lifespan=73.5)
self.assertEqual(leopard.PreviousZoos, None)
box.memorize(leopard)
self.assertEqual(leopard.ID, 1)
leopard.add(WAP)
leopard.LastEscape = datetime.datetime(2004, 12, 21, 8, 15, 0, 999907)
lion = Animal(Species='Lion', ZooID=WAP.ID)
box.memorize(lion)
box.memorize(Animal(Species='Slug', Legs=1, Lifespan=.75,
# Test our 8000-byte limit
PreviousZoos=["f" * (8000 - 14)]))
tiger = Animal(Species='Tiger', PreviousZoos=['animal\\universe'])
box.memorize(tiger)
# Override Legs.default with itself just to make sure it works.
box.memorize(Animal(Species='Bear', Legs=4))
# Notice that ostrich.PreviousZoos is [], whereas leopard is None.
box.memorize(Animal(Species='Ostrich', Legs=2, PreviousZoos=[],
Lifespan=103.2))
box.memorize(Animal(Species='Centipede', Legs=100))
emp = Animal(Species='Emperor Penguin', Legs=2)
box.memorize(emp)
adelie = Animal(Species='Adelie Penguin', Legs=2)
box.memorize(adelie)
seaworld.add(emp, adelie)
millipede = Animal(Species='Millipede', Legs=1000000)
millipede.PreviousZoos = [WAP.Name]
box.memorize(millipede)
SDZ.add(tiger, millipede)
# Add a mother and child to test relationships
bai_yun = Animal(Species='Ape', Legs=2)
box.memorize(bai_yun) # ID = 11
self.assertEqual(bai_yun.ID, 11)
hua_mei = Animal(Species='Ape', Legs=2, MotherID=bai_yun.ID)
box.memorize(hua_mei) # ID = 12
self.assertEqual(hua_mei.ID, 12)
# Exhibits
pe = Exhibit(Name = 'The Penguin Encounter',
ZooID = seaworld.ID,
Animals = [emp.ID, adelie.ID],
PettingAllowed = True,
Acreage = "3.1",
# See ticket #45
Creators = (u'Richard F\xfcrst', u'Sonja Martin'),
)
box.memorize(pe)
tr = Exhibit(Name = 'Tiger River',
ZooID = SDZ.ID,
Animals = [tiger.ID],
PettingAllowed = False,
Acreage = "4",
)
box.memorize(tr)
# Vets
cs = Vet(Name = 'Charles Schroeder', ZooID = SDZ.ID)
box.memorize(cs)
self.assertEqual(cs.ID, Vet.sequencer.initial)
jm = Vet(Name = 'Jim McBain', ZooID = seaworld.ID)
box.memorize(jm)
# Visits
for d in every13days:
box.memorize(Visit(VetID=cs.ID, AnimalID=tiger.ID, Date=d))
for d in every17days:
box.memorize(Visit(VetID=jm.ID, AnimalID=emp.ID, Date=d))
# Foods
dead_fish = Food(Name="Dead Fish", Nutrition=5)
live_fish = Food(Name="Live Fish", Nutrition=10)
bunnies = Food(Name="Live Bunny Wabbit", Nutrition=10)
steak = Food(Name="T-Bone", Nutrition=7)
for food in [dead_fish, live_fish, bunnies, steak]:
box.memorize(food)
# Foods --> add preferred foods
lion.add(steak)
tiger.add(bunnies)
emp.add(live_fish)
adelie.add(live_fish)
# Foods --> add alternate foods
lion.AlternateFoodID = bunnies.ID
tiger.AlternateFoodID = steak.ID
emp.AlternateFoodID = dead_fish.ID
adelie.AlternateFoodID = dead_fish.ID
finally:
box.flush_all()
def test_3_Properties(self):
box = arena.new_sandbox()
try:
# Zoos
WAP = box.unit(Zoo, Name='Wild Animal Park')
self.assertNotEqual(WAP, None)
self.assertEqual(WAP.Founded, datetime.date(2000, 1, 1))
self.assertEqual(WAP.Opens, datetime.time(8, 15, 59))
# This should have been updated when leopard.LastEscape was set.
## self.assertEqual(WAP.LastEscape,
## datetime.datetime(2004, 12, 21, 8, 15, 0, 999907))
self.assertEqual(WAP.Admission, Zoo.Admission.coerce(WAP, "4.95"))
SDZ = box.unit(Zoo, Founded=datetime.date(1835, 9, 13))
self.assertNotEqual(SDZ, None)
self.assertEqual(SDZ.Founded, datetime.date(1835, 9, 13))
self.assertEqual(SDZ.Opens, datetime.time(9, 0, 0))
self.assertEqual(SDZ.LastEscape, None)
self.assertEqual(float(SDZ.Admission), 0)
# Try a magic Sandbox recaller method
Biodome = box.Zoo(Name = u'Montr\xe9al Biod\xf4me')
self.assertNotEqual(Biodome, None)
self.assertEqual(Biodome.Name, u'Montr\xe9al Biod\xf4me')
self.assertEqual(Biodome.Founded, datetime.date(1992, 6, 19))
self.assertEqual(Biodome.Opens, datetime.time(9, 0, 0))
self.assertEqual(Biodome.LastEscape, None)
self.assertEqual(float(Biodome.Admission), 11.75)
if fixedpoint:
seaworld = box.unit(Zoo, Admission = fixedpoint.FixedPoint(60))
else:
seaworld = box.unit(Zoo, Admission = float(60))
self.assertNotEqual(seaworld, None)
self.assertEqual(seaworld.Name, u'Sea_World')
# Animals
leopard = box.unit(Animal, Species='Leopard')
self.assertEqual(leopard.Species, 'Leopard')
self.assertEqual(leopard.Legs, 4)
self.assertEqual(leopard.Lifespan, 73.5)
self.assertEqual(leopard.ZooID, WAP.ID)
self.assertEqual(leopard.PreviousZoos, None)
## self.assertEqual(leopard.LastEscape,
## datetime.datetime(2004, 12, 21, 8, 15, 0, 999907))
ostrich = box.unit(Animal, Species='Ostrich')
self.assertEqual(ostrich.Species, 'Ostrich')
self.assertEqual(ostrich.Legs, 2)
self.assertEqual(ostrich.ZooID, None)
self.assertEqual(ostrich.PreviousZoos, [])
self.assertEqual(ostrich.LastEscape, None)
millipede = box.unit(Animal, Legs=1000000)
self.assertEqual(millipede.Species, 'Millipede')
self.assertEqual(millipede.Legs, 1000000)
self.assertEqual(millipede.ZooID, SDZ.ID)
self.assertEqual(millipede.PreviousZoos, [WAP.Name])
self.assertEqual(millipede.LastEscape, None)
# Test that strings in a list get decoded correctly.
# See http://projects.amor.org/dejavu/ticket/50
tiger = box.unit(Animal, Species='Tiger')
self.assertEqual(tiger.PreviousZoos, ["animal\\universe"])
# Test our 8000-byte limit.
# len(pickle.dumps(["f" * (8000 - 14)]) == 8000
slug = box.unit(Animal, Species='Slug')
self.assertEqual(len(slug.PreviousZoos[0]), 8000 - 14)
# Exhibits
exes = box.recall(Exhibit)
self.assertEqual(len(exes), 2)
if exes[0].Name == 'The Penguin Encounter':
pe = exes[0]
tr = exes[1]
else:
pe = exes[1]
tr = exes[0]
self.assertEqual(pe.ZooID, seaworld.ID)
self.assertEqual(len(pe.Animals), 2)
self.assertEqual(float(pe.Acreage), 3.1)
self.assertEqual(pe.PettingAllowed, True)
self.assertEqual(pe.Creators, (u'Richard F\xfcrst', u'Sonja Martin'))
self.assertEqual(tr.ZooID, SDZ.ID)
self.assertEqual(len(tr.Animals), 1)
self.assertEqual(float(tr.Acreage), 4)
self.assertEqual(tr.PettingAllowed, False)
finally:
box.flush_all()
def test_4_Expressions(self):
box = arena.new_sandbox()
try:
def matches(lam, cls=Animal):
# We flush_all to ensure a DB hit each time.
box.flush_all()
return len(box.recall(cls, (lam)))
zoos = box.recall(Zoo)
self.assertEqual(zoos[0].dirty(), False)
self.assertEqual(len(zoos), 4)
self.assertEqual(matches(lambda x: True), 12)
self.assertEqual(matches(lambda x: x.Legs == 4), 4)
self.assertEqual(matches(lambda x: x.Legs == 2), 5)
self.assertEqual(matches(lambda x: x.Legs >= 2 and x.Legs < 20), 9)
self.assertEqual(matches(lambda x: x.Legs > 10), 2)
self.assertEqual(matches(lambda x: x.Lifespan > 70), 2)
self.assertEqual(matches(lambda x: x.Species.startswith('L')), 2)
self.assertEqual(matches(lambda x: x.Species.endswith('pede')), 2)
self.assertEqual(matches(lambda x: x.LastEscape != None), 1)
self.assertEqual(matches(lambda x: x.LastEscape is not None), 1)
self.assertEqual(matches(lambda x: None == x.LastEscape), 11)
# In operator (containedby)
self.assertEqual(matches(lambda x: 'pede' in x.Species), 2)
self.assertEqual(matches(lambda x: x.Species in ('Lion', 'Tiger', 'Bear')), 3)
# Try In with cell references
class thing(object): pass
pet, pet2 = thing(), thing()
pet.Name, pet2.Name = 'Slug', 'Ostrich'
self.assertEqual(matches(lambda x: x.Species in (pet.Name, pet2.Name)), 2)
# logic and other functions
self.assertEqual(matches(lambda x: dejavu.ieq(x.Species, 'slug')), 1)
self.assertEqual(matches(lambda x: dejavu.icontains(x.Species, 'PEDE')), 2)
self.assertEqual(matches(lambda x: dejavu.icontains(('Lion', 'Banana'), x.Species)), 1)
f = lambda x: dejavu.icontainedby(x.Species, ('Lion', 'Bear', 'Leopard'))
self.assertEqual(matches(f), 3)
name = 'Lion'
self.assertEqual(matches(lambda x: len(x.Species) == len(name)), 3)
# This broke sometime in 2004. Rev 32 seems to have fixed it.
self.assertEqual(matches(lambda x: 'i' in x.Species), 7)
# Test now(), today(), year(), month(), day()
self.assertEqual(matches(lambda x: x.Founded != None
and x.Founded < dejavu.today(), Zoo), 3)
self.assertEqual(matches(lambda x: x.LastEscape == dejavu.now()), 0)
self.assertEqual(matches(lambda x: dejavu.year(x.LastEscape) == 2004), 1)
self.assertEqual(matches(lambda x: dejavu.month(x.LastEscape) == 12), 1)
self.assertEqual(matches(lambda x: dejavu.day(x.LastEscape) == 21), 1)
# Test AND, OR with cannot_represent.
# Notice that we reference a method ('count') which no
# known SM handles, so it will default back to Expr.eval().
self.assertEqual(matches(lambda x: 'p' in x.Species
and x.Species.count('e') > 1), 3)
# This broke in MSAccess (storeado) in April 2005, due to a bug in
# db.SQLDecompiler.visit_CALL_FUNCTION (append TOS, not replace!).
box.flush_all()
e = logic.Expression(lambda x, **kw: x.LastEscape != None
and x.LastEscape >= datetime.datetime(kw['Year'], 12, 1)
and x.LastEscape < datetime.datetime(kw['Year'], 12, 31)
)
e.bind_args(Year=2004)
units = box.recall(Animal, e)
self.assertEqual(len(units), 1)
# Test wildcards in LIKE. This fails with SQLite <= 3.0.8,
# so make sure it's always at the end of this method so
# it doesn't preclude running the other tests.
box.flush_all()
units = box.recall(Zoo, lambda x: "_" in x.Name)
self.assertEqual(len(units), 1)
finally:
box.flush_all()
def test_5_Aggregates(self):
box = arena.new_sandbox()
try:
# views
legs = [x[0] for x in box.view(Animal, ['Legs'])]
legs.sort()
self.assertEqual(legs, [1, 2, 2, 2, 2, 2, 4, 4, 4, 4, 100, 1000000])
expected = {'Leopard': 73.5,
'Slug': .75,
'Tiger': None,
'Lion': None,
'Bear': None,
'Ostrich': 103.2,
'Centipede': None,
'Emperor Penguin': None,
'Adelie Penguin': None,
'Millipede': None,
'Ape': None,
}
for species, lifespan in box.view(Animal, ['Species', 'Lifespan']):
if expected[species] is None:
self.assertEqual(lifespan, None)
else:
self.assertAlmostEqual(expected[species], lifespan, places=5)
expected = [u'Montr\xe9al Biod\xf4me', 'Wild Animal Park']
e = (lambda x: x.Founded != None
and x.Founded <= dejavu.today()
and x.Founded >= datetime.date(1990, 1, 1))
values = [val[0] for val in box.view(Zoo, ['Name'], e)]
for name in expected:
self.assert_(name in values)
# distinct
legs = box.distinct(Animal, ['Legs'])
legs.sort()
self.assertEqual(legs, [1, 2, 4, 100, 1000000])
# This may raise a warning on some DB's.
f = (lambda x: x.Species == 'Lion')
escapees = box.distinct(Animal, ['Legs'], f)
self.assertEqual(escapees, [4])
# range should return a sorted list
legs = box.range(Animal, 'Legs', lambda x: x.Legs <= 100)
self.assertEqual(legs, range(1, 101))
topics = box.range(Exhibit, 'Name')
self.assertEqual(topics, ['The Penguin Encounter', 'Tiger River'])
vets = box.range(Vet, 'Name')
self.assertEqual(vets, ['Charles Schroeder', 'Jim McBain'])
finally:
box.flush_all()
def test_6_Editing(self):
# Edit
box = arena.new_sandbox()
try:
SDZ = box.unit(Zoo, Name='San Diego Zoo')
SDZ.Name = 'The San Diego Zoo'
SDZ.Founded = datetime.date(1900, 1, 1)
SDZ.Opens = datetime.time(7, 30, 0)
SDZ.Admission = "35.00"
finally:
box.flush_all()
# Test edits
box = arena.new_sandbox()
try:
SDZ = box.unit(Zoo, Name='The San Diego Zoo')
self.assertEqual(SDZ.Name, 'The San Diego Zoo')
self.assertEqual(SDZ.Founded, datetime.date(1900, 1, 1))
self.assertEqual(SDZ.Opens, datetime.time(7, 30, 0))
if fixedpoint:
self.assertEqual(SDZ.Admission, fixedpoint.FixedPoint(35, 2))
else:
self.assertEqual(SDZ.Admission, 35.0)
finally:
box.flush_all()
# Change it back
box = arena.new_sandbox()
try:
SDZ = box.unit(Zoo, Name='The San Diego Zoo')
SDZ.Name = 'San Diego Zoo'
SDZ.Founded = datetime.date(1835, 9, 13)
SDZ.Opens = datetime.time(9, 0, 0)
SDZ.Admission = "0"
finally:
box.flush_all()
# Test re-edits
box = arena.new_sandbox()
try:
SDZ = box.unit(Zoo, Name='San Diego Zoo')
self.assertEqual(SDZ.Name, 'San Diego Zoo')
self.assertEqual(SDZ.Founded, datetime.date(1835, 9, 13))
self.assertEqual(SDZ.Opens, datetime.time(9, 0, 0))
if fixedpoint:
self.assertEqual(SDZ.Admission, fixedpoint.FixedPoint(0, 2))
else:
self.assertEqual(SDZ.Admission, 0.0)
finally:
box.flush_all()
def test_7_Multirecall(self):
box = arena.new_sandbox()
try:
f = (lambda z, a: z.Name == 'San Diego Zoo')
zooed_animals = box.recall(Zoo & Animal, f)
self.assertEqual(len(zooed_animals), 2)
SDZ = box.unit(Zoo, Name='San Diego Zoo')
aid = 0
for z, a in zooed_animals:
self.assertEqual(id(z), id(SDZ))
self.assertNotEqual(id(a), aid)
aid = id(a)
# Assert that multirecalls with no matching related units returns
# no matches for the initial class, since all joins are INNER.
# We're also going to test that you can combine a one-arg expr
# with a two-arg expr.
sdexpr = logic.filter(Name='San Diego Zoo')
leo = lambda z, a: a.Species == 'Leopard'
zooed_animals = box.recall(Zoo & Animal, sdexpr + leo)
self.assertEqual(len(zooed_animals), 0)
# Now try the same expr with INNER, LEFT, and RIGHT JOINs.
zooed_animals = box.recall(Zoo & Animal)
self.assertEqual(len(zooed_animals), 6)
self.assertEqual(set([(z.Name, a.Species) for z, a in zooed_animals]),
set([("Wild Animal Park", "Leopard"),
("Wild Animal Park", "Lion"),
("San Diego Zoo", "Tiger"),
("San Diego Zoo", "Millipede"),
("Sea_World", "Emperor Penguin"),
("Sea_World", "Adelie Penguin")]))
zooed_animals = box.recall(Zoo >> Animal)
self.assertEqual(len(zooed_animals), 12)
self.assertEqual(set([(z.Name, a.Species) for z, a in zooed_animals]),
set([("Wild Animal Park", "Leopard"),
("Wild Animal Park", "Lion"),
("San Diego Zoo", "Tiger"),
("San Diego Zoo", "Millipede"),
("Sea_World", "Emperor Penguin"),
("Sea_World", "Adelie Penguin"),
(None, "Slug"),
(None, "Bear"),
(None, "Ostrich"),
(None, "Centipede"),
(None, "Ape"),
(None, "Ape"),
]))
zooed_animals = box.recall(Zoo << Animal)
self.assertEqual(len(zooed_animals), 7)
self.assertEqual(set([(z.Name, a.Species) for z, a in zooed_animals]),
set([("Wild Animal Park", "Leopard"),
("Wild Animal Park", "Lion"),
("San Diego Zoo", "Tiger"),
("San Diego Zoo", "Millipede"),
("Sea_World", "Emperor Penguin"),
("Sea_World", "Adelie Penguin"),
(u'Montr\xe9al Biod\xf4me', None),
]))
# Try a multiple-arg expression
f = (lambda a, z: a.Legs >= 4 and z.Admission < 10)
animal_zoos = box.recall(Animal & Zoo, f)
self.assertEqual(len(animal_zoos), 4)
names = [a.Species for a, z in animal_zoos]
names.sort()
self.assertEqual(names, ['Leopard', 'Lion', 'Millipede', 'Tiger'])
# Let's try three joined classes just for the sadistic fun of it.
tree = (Animal >> Zoo) >> Vet
f = (lambda a, z, v: z.Name == 'Sea_World')
self.assertEqual(len(box.recall(tree, f)), 2)
# MSAccess can't handle an INNER JOIN nested in an OUTER JOIN.
# Test that this fails for MSAccess, but works for other SM's.
trees = []
def make_tree():
trees.append( (Animal & Zoo) >> Vet )
warnings.filterwarnings("ignore", category=errors.StorageWarning)
try:
make_tree()
finally:
warnings.filters.pop(0)
azv = []
def set_azv():
f = (lambda a, z, v: z.Name == 'Sea_World')
azv.append(box.recall(trees[0], f))
smname = arena.stores['testSM'].__class__.__name__
if smname in ("StorageManagerADO_MSAccess",):
self.assertRaises(pythoncom.com_error, set_azv)
else:
set_azv()
self.assertEqual(len(azv[0]), 2)
# Try mentioning the same class twice.
tree = (Animal << Animal)
f = (lambda anim, mother: mother.ID != None)
animals = [mother.ID for anim, mother in box.recall(tree, f)]
self.assertEqual(animals, [11])
finally:
box.flush_all()
def test_8_CustomAssociations(self):
box = arena.new_sandbox()
try:
# Try different association paths
std_expected = ['Live Bunny Wabbit', 'Live Fish', 'Live Fish', 'T-Bone']
cus_expected = ['Dead Fish', 'Dead Fish', 'Live Bunny Wabbit', 'T-Bone']
uj = Animal & Food
for path, expected in [# standard path
(None, std_expected),
# custom path
('Alternate Food', cus_expected)]:
uj.path = path
foods = [food for animal, food in box.recall(uj)]
foods.sort(dejavu.sort('Name'))
self.assertEqual([f.Name for f in foods], expected)
# Test the magic association methods
tiger = box.unit(Animal, Species='Tiger')
self.assertEqual(tiger.Food().Name, 'Live Bunny Wabbit')
self.assertEqual(tiger.AlternateFood().Name, 'T-Bone')
finally:
box.flush_all()
def test_Iteration(self):
box = arena.new_sandbox()
try:
# Test box.unit inside of xrecall
for visit in box.xrecall(Visit, VetID=1):
firstvisit = box.unit(Visit, VetID=1, Date=Jan_1_2001)
self.assertEqual(firstvisit.VetID, 1)
self.assertEqual(visit.VetID, 1)
# Test recall inside of xrecall
for visit in box.xrecall(Visit, VetID=1):
f = (lambda x: x.VetID == 1 and x.ID != visit.ID)
othervisits = box.recall(Visit, f)
self.assertEqual(len(othervisits), len(every13days) - 1)
# Test far associations inside of xrecall
for visit in box.xrecall(Visit, VetID=1):
# visit.Vet is a ToOne association, so will return a unit or None.
vet = visit.Vet()
self.assertEqual(vet.ID, 1)
finally:
box.flush_all()
def test_Engines(self):
box = arena.new_sandbox()
try:
quadrupeds = box.recall(Animal, Legs=4)
self.assertEqual(len(quadrupeds), 4)
eng = engines.UnitEngine()
box.memorize(eng)
eng.add_rule('CREATE', 1, "Animal")
eng.add_rule('FILTER', 1, logic.filter(Legs=4))
self.assertEqual(eng.FinalClassName, "Animal")
qcoll = eng.take_snapshot()
self.assertEqual(len(qcoll), 4)
self.assertEqual(qcoll.EngineID, eng.ID)
eng.add_rule('TRANSFORM', 1, "Zoo")
self.assertEqual(eng.FinalClassName, "Zoo")
# Sleep for a second so the Timestamps are different.
time.sleep(1)
qcoll = eng.take_snapshot()
self.assertEqual(len(qcoll), 2)
zoos = qcoll.units()
zoos.sort(dejavu.sort('Name'))
SDZ = box.unit(Zoo, Name='San Diego Zoo')
WAP = box.unit(Zoo, Name='Wild Animal Park')
self.assertEqual(zoos, [SDZ, WAP])
# Flush and start over
box.flush_all()
box = arena.new_sandbox()
# Use the Sandbox magic recaller method
eng = box.UnitEngine(1)
self.assertEqual(len(eng.rules()), 3)
snaps = eng.snapshots()
self.assertEqual(len(snaps), 2)
self.assertEqual(snaps[0].Type, "Animal")
self.assertEqual(len(snaps[0]), 4)
self.assertEqual(snaps[1].Type, "Zoo")
self.assertEqual(len(snaps[1]), 2)
self.assertEqual(eng.last_snapshot(), snaps[1])
# Remove the last TRANSFORM rule to see if finalclass reverts.
self.assertEqual(eng.FinalClassName, "Zoo")
eng.rules()[-1].forget()
self.assertEqual(eng.FinalClassName, "Animal")
finally:
box.flush_all()
def test_Subclassing(self):
box = arena.new_sandbox()
try:
box.memorize(Visit(VetID=21, ZooID=1, AnimalID=1))
box.memorize(Visit(VetID=21, ZooID=1, AnimalID=2))
box.memorize(Visit(VetID=32, ZooID=1, AnimalID=3))
box.memorize(Lecture(VetID=21, ZooID=1, Topic='Cage Cleaning'))
box.memorize(Lecture(VetID=21, ZooID=1, Topic='Ape Mating Habits'))
box.memorize(Lecture(VetID=32, ZooID=3, Topic='Your Tiger and Steroids'))
visits = box.recall(Visit, inherit=True, ZooID=1)
self.assertEqual(len(visits), 5)
box.flush_all()
box = arena.new_sandbox()
visits = box.recall(Visit, inherit=True, VetID=21)
self.assertEqual(len(visits), 4)
cc = [x for x in visits
if getattr(x, "Topic", None) == "Cage Cleaning"]
self.assertEqual(len(cc), 1)
# Checking for non-existent attributes in/from subclasses
# isn't supported yet.
## f = logic.filter(AnimalID=2)
## self.assertEqual(len(box.recall(Visit, f)), 1)
## self.assertEqual(len(box.recall(Lecture, f)), 0)
finally:
box.flush_all()
def test_DB_Introspection(self):
s = arena.stores.values()[0]
if not hasattr(s, "db"):
print "not a db (skipped) ",
return
zootable = s.db['Zoo']
cols = zootable
self.assertEqual(len(cols), 6)
idcol = cols['ID']
self.assertEqual(s.db.python_type(idcol.dbtype), int)
for prop in Zoo.properties:
self.assertEqual(cols[prop].key,
prop in Zoo.identifiers)
def test_zzz_Schema_Upgrade(self):
# Must run last.
zs = ZooSchema(arena)
# In this first upgrade, we simulate the case where the code was
# upgraded, and the database schema upgrade performed afterward.
# The Schema.latest property is set, and upgrade() is called with
# no argument (which should upgrade us to "latest").
Animal.set_property("ExhibitID")
# Test numeric default (see hack in storeado for MS Access).
prop = Animal.set_property("Stomachs", int)
prop.default = 1
zs.latest = 2
zs.upgrade()
# In this example, we simulate the developer who wants to put
# model changes inline with database changes (see upgrade_to_3).
# We do not set latest, but instead supply an arg to upgrade().
zs.upgrade(3)
# Test that Animals have a new "Family" property, and an ExhibitID.
box = arena.new_sandbox()
try:
emp = box.unit(Animal, Family='Emperor Penguin')
self.assertEqual(emp.ExhibitID, 'The Penguin Encounter')
finally:
box.flush_all()
def test_numbers(self):
## print "skipped ",
## return
float_prec = 53
box = arena.new_sandbox()
try:
print "precision:",
# PostgreSQL should be able to go up to 1000 decimal digits (~= 2 ** 10),
# but SQL constants don't actually overflow until 2 ** 15. Meh.
db = getattr(arena.stores['testSM'], "db", None)
if db:
import math
maxprec = db.typeadapter.numeric_max_precision
if maxprec == 0:
# SQLite, for example, must always use TEXT.
# So we might as well try... oh... how about 3?
overflow_prec = 3
else:
overflow_prec = int(math.log(maxprec, 2)) + 1
else:
overflow_prec = 8
dc = decimal.getcontext()
for prec in xrange(overflow_prec + 1):
p = 2 ** prec
print p,
if p > dc.prec:
dc.prec = p
# We don't need to test <type long> at different 'scales'.
long_done = False
# Test scales at both extremes and the median
for s in (0, int(prec/2), max(prec-1, 0)):
s = 2 ** s
# Modify the model and storage
if not long_done:
arena.drop_property(NothingToDoWithZoos, 'ALong')
NothingToDoWithZoos.ALong.hints['bytes'] = p
arena.add_property(NothingToDoWithZoos, 'ALong')
if p <= float_prec:
arena.drop_property(NothingToDoWithZoos, 'AFloat')
NothingToDoWithZoos.AFloat.hints['precision'] = p
arena.add_property(NothingToDoWithZoos, 'AFloat')
if decimal:
arena.drop_property(NothingToDoWithZoos, 'ADecimal')
NothingToDoWithZoos.ADecimal.hints['precision'] = p
NothingToDoWithZoos.ADecimal.hints['scale'] = s
arena.add_property(NothingToDoWithZoos, 'ADecimal')
if fixedpoint:
arena.drop_property(NothingToDoWithZoos, 'AFixed')
NothingToDoWithZoos.AFixed.hints['precision'] = p
NothingToDoWithZoos.AFixed.hints['scale'] = s
arena.add_property(NothingToDoWithZoos, 'AFixed')
# Create an instance and set the specified precision/scale
nothing = NothingToDoWithZoos()
if not long_done:
Lval = (16 ** p) - 1
setattr(nothing, 'ALong', Lval)
if p <= float_prec:
fval = float(((2 ** p) - 1) / (2 ** s))
setattr(nothing, 'AFloat', fval)
nval = "1" * p
nval = nval[:-s] + "." + nval[-s:]
if decimal:
dval = decimal.Decimal(nval)
setattr(nothing, 'ADecimal', dval)
if fixedpoint:
# fixedpoint uses "precision" where we use "scale";
# that is, number of digits after the decimal point.
fpval = fixedpoint.FixedPoint(nval, s)
setattr(nothing, 'AFixed', fpval)
box.memorize(nothing)
# Flush and retrieve the object. Use comparisons to test
# decompilation of imperfect_type when using large numbers.
if not long_done:
box.flush_all()
nothing = box.unit(NothingToDoWithZoos, ALong=Lval)
if nothing is None:
self.fail("Unit not found by long property. "
"prec=%s scale=%s" % (p, s))
if p <= float_prec:
box.flush_all()
nothing = box.unit(NothingToDoWithZoos, AFloat=fval)
if nothing is None:
self.fail("Unit not found by float property. "
"prec=%s scale=%s" % (p, s))
if decimal:
box.flush_all()
nothing = box.unit(NothingToDoWithZoos, ADecimal=dval)
if nothing is None:
self.fail("Unit not found by decimal property. "
"prec=%s scale=%s" % (p, s))
if fixedpoint:
box.flush_all()
nothing = box.unit(NothingToDoWithZoos, AFixed=fpval)
if nothing is None:
self.fail("Unit not found by fixedpoint property. "
"prec=%s scale=%s" % (p, s))
# Test retrieved values.
if not long_done:
if nothing.ALong != Lval:
self.fail("%s != %s prec=%s scale=%s" %
(`nothing.ALong`, `Lval`, p, s))
if p <= float_prec:
if nothing.AFloat != fval:
self.fail("%s != %s prec=%s scale=%s" %
(`nothing.AFloat`, `fval`, p, s))
if decimal:
if nothing.ADecimal != dval:
self.fail("%s != %s prec=%s scale=%s" %
(`nothing.ADecimal`, `dval`, p, s))
if fixedpoint:
if nothing.AFixed != fpval:
self.fail("%s != %s prec=%s scale=%s" %
(`nothing.AFixed`, `fpval`, p, s))
nothing.forget()
box.flush_all()
long_done = True
finally:
box.flush_all()
class IsolationTests(unittest.TestCase):
verbose = False
_boxid = 0
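    # These tests run two sandboxes with distinct transaction keys (boxid 1
    # and 2) against the same store and try to provoke the classic anomalies
    # (dirty read, nonrepeatable read, phantom) at every isolation level the
    # backend supports, warning whenever the observed behaviour disagrees with
    # what the level claims to forbid.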
def setUp(self):
s = arena.stores.values()[0]
if hasattr(s, "db"):
self.db = s.db
else:
self.db = None
try:
self.old_implicit = s.db.implicit_trans
s.db.implicit_trans = False
self.old_tkey = s.db.transaction_key
# Use an explicit 'boxid' for the transaction key
s.db.transaction_key = lambda: self.boxid
except AttributeError:
self.old_implicit = None
def tearDown(self):
if self.db and self.old_implicit is not None:
self.db.implicit_trans = self.old_implicit
self.db.transaction_key = self.old_tkey
def restore(self):
self.boxid = 0
box = arena.new_sandbox()
box.start()
jim = box.unit(Vet, Name = 'Jim McBain')
jim.City = None
box.flush_all()
def cleanup_boxes(self):
try:
self.boxid = 1
self.box1.rollback()
except: pass
try:
self.boxid = 2
self.box2.rollback()
except: pass
# Destroy refs so the conns can go back in the pool.
del self.box1, self.box2
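    # attempt() drives one anomaly scenario at a given isolation level: an
    # AssertionError from testfunc means the anomaly occurred (warn if the
    # level claims to forbid it), a backend lock error means the anomaly was
    # blocked by locking (warn if the level does not forbid it), and a clean
    # run means the anomaly was prevented outright.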
def attempt(self, testfunc, anomaly_name, level):
self.restore()
self.boxid = 1
self.box1 = arena.new_sandbox()
self.box1.start(level)
self.boxid = 2
self.box2 = arena.new_sandbox()
self.box2.start(level)
try:
testfunc(level)
except AssertionError:
self.cleanup_boxes()
if level.forbids(anomaly_name):
warnings.warn("%r allowed anomaly %r." %
(level, anomaly_name))
except:
if self.db.is_lock_error(sys.exc_info()[1]):
self.cleanup_boxes()
if not level.forbids(anomaly_name):
warnings.warn("%r prevented anomaly %r with an error." %
(level, anomaly_name))
else:
self.cleanup_boxes()
raise
else:
self.cleanup_boxes()
if not level.forbids(anomaly_name):
warnings.warn("%r prevented anomaly %r." %
(level, anomaly_name))
def _get_boxid(self):
return self._boxid
def _set_boxid(self, val):
if self.verbose:
print val,
self._boxid = val
boxid = property(_get_boxid, _set_boxid)
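    # Dirty-read scenario: box1 updates jim.City inside its own open
    # transaction, then box2 reads the same Vet. If the active isolation level
    # permits dirty reads, box2 sees the uncommitted value and the final
    # assert fails, which attempt() records against the level.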
def test_dirty_read(self):
def dirty_read(level):
# Write City 1
self.boxid = 1
jim1 = self.box1.unit(Vet, Name = 'Jim McBain')
jim1.City = "Addis Ababa"
self.box1.repress(jim1)
# Read City 2.
self.boxid = 2
jim2 = self.box2.unit(Vet, Name = 'Jim McBain')
# If READ UNCOMMITTED or lower, this should fail
assert jim2.City is None
for level in storage.isolation.levels:
if self.verbose:
print
print level,
if level.name in self.db.isolation_levels:
self.attempt(dirty_read, "Dirty Read", level)
def test_nonrepeatable_read(self):
def nonrepeatable_read(level):
# Read City 1
self.boxid = 1
jim1 = self.box1.unit(Vet, Name = 'Jim McBain')
val1 = jim1.City
assert val1 is None
self.box1.repress(jim1)
# Write City 2.
self.boxid = 2
jim2 = self.box2.unit(Vet, Name = 'Jim McBain')
jim2.City = "Tehachapi"
self.box2.flush_all()
# Re-read City 1
self.boxid = 1
jim1 = self.box1.unit(Vet, Name = 'Jim McBain')
# If READ COMMITTED or lower, this should fail
assert jim1.City == val1
for level in storage.isolation.levels:
if self.verbose:
print
print level,
if level.name in self.db.isolation_levels:
self.attempt(nonrepeatable_read, "Nonrepeatable Read", level)
def test_phantom(self):
def phantom(level):
# Read City 1
self.boxid = 1
pvets = self.box1.recall(Vet, City = 'Poughkeepsie')
assert len(pvets) == 0
# Write City 2.
self.boxid = 2
jim2 = self.box2.unit(Vet, Name = 'Jim McBain')
jim2.City = "Poughkeepsie"
self.box2.flush_all()
# Re-read City 1
self.boxid = 1
pvets = self.box1.recall(Vet, City = 'Poughkeepsie')
# If REPEATABLE READ or lower, this should fail
assert len(pvets) == 0
for level in storage.isolation.levels:
if self.verbose:
print
print level,
if level.name in self.db.isolation_levels:
self.attempt(phantom, "Phantom", level)
class ConcurrencyTests(unittest.TestCase):
def test_Multithreading(self):
## print "skipped ",
## return
s = arena.stores.values()[0]
# Test threads overlapping on separate sandboxes
f = (lambda x: x.Legs == 4)
def box_per_thread():
# Notice that, although we write changes in each thread,
# we only assert the unchanged data, since the order of
# thread execution can not be guaranteed.
box = arena.new_sandbox()
try:
quadrupeds = box.recall(Animal, f)
self.assertEqual(len(quadrupeds), 4)
quadrupeds[0].Age += 1.0
finally:
box.flush_all()
ts = []
# PostgreSQL, for example, has a default max_connections of 100.
for x in range(99):
t = threading.Thread(target=box_per_thread)
t.start()
ts.append(t)
for t in ts:
t.join()
def test_Implicit_Transactions(self):
zoostore = arena.storage(Zoo)
if not hasattr(zoostore, "db"):
print "not a db (skipped) ",
return
old_implicit = zoostore.db.implicit_trans
try:
def commit_test():
"""Test transaction commit."""
now = datetime.time(8, 18, 28)
box = arena.new_sandbox()
try:
WAP = box.unit(Zoo, Name='Wild Animal Park')
WAP.Opens = now
box.flush_all()
WAP = box.unit(Zoo, Name='Wild Animal Park')
self.assertEqual(WAP.Opens, now)
finally:
box.flush_all()
def rollback_test():
"""Test transaction rollback."""
box = arena.new_sandbox()
try:
SDZ = box.unit(Zoo, Name='San Diego Zoo')
SDZ.Name = 'The One and Only San Diego Zoo'
SDZ.Founded = datetime.date(2039, 9, 13)
box.rollback()
SDZ = box.unit(Zoo, Name='San Diego Zoo')
self.assertEqual(SDZ.Name, 'San Diego Zoo')
self.assertEqual(SDZ.Founded, datetime.date(1835, 9, 13))
finally:
box.flush_all()
zoostore.db.implicit_trans = True
commit_test()
if zoostore.rollback:
rollback_test()
zoostore.db.implicit_trans = False
zoostore.start()
commit_test()
if zoostore.rollback:
zoostore.start()
rollback_test()
finally:
zoostore.db.implicit_trans = old_implicit
def test_ContextManagement(self):
# Test context management using Python 2.5 'with ... as'
try:
from dejavu.test import test_context
except SyntaxError:
print "'with ... as' not supported (skipped) ",
else:
test_context.test_with_context(arena)
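# DiscoveryTests clear the store's table mappings, re-discover them from the
# live database, and check that the Modeler can regenerate Unit classes and
# class source code consistent with the hand-written Zoo/Animal/Exhibit models.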
class DiscoveryTests(unittest.TestCase):
def assertIn(self, first, second, msg=None):
"""Fail if 'second not in first'."""
if not second.lower() in first.lower():
raise self.failureException, (msg or '%r not in %r' % (second, first))
def setUp(self):
self.modeler = None
s = arena.stores.values()[0]
if not hasattr(s, "db"):
return
# Clear out all mappings and re-discover
dict.clear(s.db)
s.db.discover_all()
from dejavu.storage import db
self.modeler = db.Modeler(s.db)
def test_make_classes(self):
if not self.modeler:
print "not a db (skipped) ",
return
for cls in (Zoo, Animal):
tkey = self.modeler.db.table_name(cls.__name__)
uc = self.modeler.make_class(tkey, cls.__name__)
self.assert_(not issubclass(uc, cls))
self.assertEqual(uc.__name__, cls.__name__)
# Both Zoo and Animal should have autoincrementing ID's
# (but MySQL uses all lowercase identifiers).
self.assertEqual(set([x.lower() for x in uc.identifiers]),
set([x.lower() for x in cls.identifiers]))
self.assert_(isinstance(uc.sequencer, UnitSequencerInteger),
"sequencer is of type %r (expected %r)"
% (type(uc.sequencer), UnitSequencerInteger))
for pname in cls.properties:
cname = self.modeler.db.column_name(tkey, pname)
copy = getattr(uc, cname)
orig = getattr(cls, pname)
self.assertEqual(copy.key, cname)
# self.assertEqual(copy.type, orig.type)
self.assertEqual(copy.default, orig.default,
"%s.%s default %s != copy %s"
% (cls.__name__, pname,
`orig.default`, `copy.default`))
for k, v in orig.hints.iteritems():
if isinstance(v, (int, long)):
v2 = copy.hints.get(k)
if v2 != 0 and v2 < v:
self.fail("%s.%s hint[%s] %s not >= %s" %
(cls.__name__, pname, k, v2, v))
else:
self.assertEqual(copy.hints[k], v)
def test_make_source(self):
if not self.modeler:
print "not a db (skipped) ",
return
tkey = self.modeler.db.table_name('Exhibit')
source = self.modeler.make_source(tkey, 'Exhibit')
classline = "class Exhibit(Unit):"
if not source.lower().startswith(classline.lower()):
self.fail("%r does not start with %r" % (source, classline))
clsname = self.modeler.db.__class__.__name__
if "SQLite" in clsname:
# SQLite's internal types are teh suck.
self.assertIn(source, " Name = UnitProperty(")
self.assertIn(source, " ZooID = UnitProperty(")
self.assertIn(source, " PettingAllowed = UnitProperty(")
self.assertIn(source, " Acreage = UnitProperty(")
self.assertIn(source, " sequencer = UnitSequencer")
else:
try:
self.assertIn(source, " Name = UnitProperty(unicode")
except AssertionError:
self.assertIn(source, " Name = UnitProperty(str")
self.assertIn(source, " ZooID = UnitProperty(int")
if "Firebird" in self.modeler.db.__class__.__name__:
# Firebird doesn't have a bool datatype
self.assertIn(source, " PettingAllowed = UnitProperty(int")
else:
self.assertIn(source, " PettingAllowed = UnitProperty(bool")
if decimal:
self.assertIn(source, " Acreage = UnitProperty(decimal.Decimal")
else:
self.assertIn(source, " Acreage = UnitProperty(float")
self.assertIn(source, " sequencer = UnitSequencer()")
if " ID = UnitProperty" in source:
self.fail("Exhibit incorrectly possesses an ID property.")
# ID = None should remove the existing ID property
self.assertIn(source, " ID = None")
for items in ["'zooid', 'name'", "'name', 'zooid'",
"u'zooid', u'name'", "u'name', u'zooid'"]:
if (" identifiers = (%s)" % items) in source.lower():
break
else:
self.fail("%r not found in %r" %
(" identifiers = ('ZooID', 'Name')", source))
arena = dejavu.Arena()
def _djvlog(message):
"""Dejavu logger (writes to error.log)."""
if isinstance(message, unicode):
message = message.encode('utf8')
s = "%s %s" % (datetime.datetime.now().isoformat(), message)
f = open(logname, 'ab')
f.write(s + '\n')
f.close()
def init():
global arena
arena = dejavu.Arena()
arena.log = _djvlog
arena.logflags = (dejavu.logflags.ERROR + dejavu.logflags.SQL +
dejavu.logflags.IO + dejavu.logflags.RECALL)
class ZooSchema(dejavu.Schema):
# We set "latest" to 1 so we can test upgrading manually.
latest = 1
def upgrade_to_2(self):
self.arena.add_property(Animal, "Stomachs")
self.arena.add_property(Animal, "ExhibitID")
box = self.arena.new_sandbox()
for exhibit in box.recall(Exhibit):
for animalID in exhibit.Animals:
# Use the Sandbox magic recaller method.
a = box.Animal(animalID)
if a:
# Exhibits are identified by ZooID and Name
a.ZooID = exhibit.ZooID
a.ExhibitID = exhibit.Name
box.flush_all()
def upgrade_to_3(self):
Animal.remove_property("Species")
Animal.set_property("Family")
# Note that we drop this column in a separate step from step 2.
# If we had mixed model properties and SM properties in step 2,
# we could have done this all in one step. But this is a better
# demonstration of the possibilities. ;)
Exhibit.remove_property("Animals")
self.arena.drop_property(Exhibit, "Animals")
self.arena.rename_property(Animal, "Species", "Family")
def setup(SM_class, opts):
"""setup(SM_class, opts). Set up storage for Zoo classes."""
global arena
sm = arena.add_store('testSM', SM_class, opts)
v = getattr(sm, "version", None)
if v:
print v()
sm.create_database()
arena.register_all(globals())
engines.register_classes(arena)
zs = ZooSchema(arena)
zs.upgrade()
zs.assert_storage()
def teardown():
"""Tear down storage for Zoo classes."""
# Manually drop each table just to test that code.
# Call map_all first in case our discovery tests screwed up the keys.
arena.map_all()
for cls in arena._registered_classes:
arena.drop_storage(cls)
for store in arena.stores.values():
try:
store.drop_database()
except (AttributeError, NotImplementedError):
pass
arena.stores = {}
arena.shutdown()
def run(SM_class, opts):
"""Run the zoo fixture."""
try:
try:
setup(SM_class, opts)
loader = unittest.TestLoader().loadTestsFromTestCase
# Run the ZooTests and time it.
zoocase = loader(ZooTests)
startTime = datetime.datetime.now()
tools.djvTestRunner.run(zoocase)
print "Ran zoo cases in:", datetime.datetime.now() - startTime
# Run the other cases.
tools.djvTestRunner.run(loader(ConcurrencyTests))
s = arena.stores.values()[0]
if hasattr(s, "db"):
tools.djvTestRunner.run(loader(IsolationTests))
tools.djvTestRunner.run(loader(DiscoveryTests))
except:
traceback.print_exc()
finally:
        teardown()
# File: Macrocomplex_Builder-1.2.data/scripts/macrocomplex_functions.py
import Bio.PDB
import sys
import string
import os
import argparse
import timeit
import logging
import re
def Key_atom_retriever(chain):
"""This function retrieves the key atom, CA in case of proteins and C4' in case of nucleic acids, to do the superimposition and also returns a
    variable indicating the kind of molecule the chain is: either DNA, RNA or PROTEIN
Arguments:
chain (Bio.PDB.Chain.Chain): an instance of class chain
Returns:
atoms (list): contains all key atoms (CA/C4') instances
molecule (str): contains type of molecule of the chain
"""
### Declaring and creating new variables ###
nucleic_acids = ['DA','DT','DC','DG','DI','A','U','C','G','I'] #creating a list with all possible nucleic acids letters
RNA = ['A','U','C','G','I'] #creating a list with all possible RNA letters
DNA = ['DA','DT','DC','DG','DI'] #creating a list with all possible DNA letters
atoms = []
### Loops through all residues of the chain ###
for res in chain:
res_name = res.get_resname()[0:3].strip() #get the name of the residue (with no spaces)
## Appends the CA atoms and sets the molecule type of protein ##
if res.get_id()[0] == " " and res_name not in nucleic_acids: #checks whether the residue is not a HETATM or nucleic acid
if 'CA' not in res: #checks whether the residue has CA atoms
logging.warning("This protein residue %d %s does not have CA atom" % (res.get_id()[1], res_name))
else:
atoms.append(res['CA']) #append CA atoms to the list of sample atoms
molecule = 'PROTEIN' #set the molecule type to protein
## Append the C4 atoms and sets the molecule type of DNA or RNA ##
elif res.get_id()[0] == " " and res_name in nucleic_acids: #checks whether the residue is a nucleic acid and not HETATM
if res_name in DNA: #checks whether the residue is a DNA nucleotide
molecule = 'DNA' #set the molecule type to DNA
elif res_name in RNA: #checks whether the residue is a RNA nucleotide
molecule = 'RNA' #set the molecule type to RNA
atoms.append(res['C4\'']) #append C4' atoms to the list of atoms
return(atoms, molecule) # Return all key atoms list and the type of molecule to which they belong
def ID_creator(IDs, ID):
"""This function returns an ID for the new chain to be added to the complex. It generates a single character ID for the first
62 IDs, being all the uppercase, lowercase letters and digits, i.e., 26 + 26 + 10 = 62. Then, it generates two-character IDs,
by combining all the uppercase letters, generating up to 26**2 IDs, which is 676 new IDs. This is a total of 738 chain IDs. It
also needs a list with all the chain IDs, so it does not return an ID already present in the list of chain IDs already in the complex
Arguments:
IDs (list): a list containing the IDs of all chains present in the building complex
ID (string): the ID that the chain has by default, i.e., the ID it has on the PDB file
Returns:
ID (string): the new ID that is not present in the list of IDs
"""
UP = list(string.ascii_uppercase)
LOW = list(string.ascii_lowercase)
DIG = list(string.digits)
alphabet = UP + LOW + DIG #creates an alphabet containing all the possible characters that can be used as chain IDs
    if len(IDs) < 62: #checks if the length of the list containing the IDs of all chains in the complex is smaller than 62
if ID not in IDs: #checks if the ID by default of the chain is present on the IDs list
return ID
elif ID in IDs: #checks if the ID by default is indeed on the list
for i in range(0, len(alphabet)): #loops through all the characters on the alphabet
if alphabet[i] not in IDs: #checks if that character is not already taken as an ID
return alphabet[i]
else: #if it is already an ID, keeps looping through the alphabet
continue
    elif len(IDs) >= 62: #checks if the length of the list containing the IDs of all chains in the complex is 62 or greater
for char1 in alphabet:
for char2 in alphabet:
ID = char1 + char2 #creates a 2 character ID by combining two uppercase letters
if ID not in IDs: #checks if new ID is not on the list of IDs
return ID
                else: #if it is indeed on the list, keeps looping
continue
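# Illustrative ID_creator behaviour: with IDs = ['A', 'B'] and a default ID of
# 'A', the first free single character ('C') is returned; once all 62
# single-character IDs are taken, two-character IDs ('AA', 'AB', ...) are
# generated instead.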
def superimposition(ref_structure, sample_structure, rmsd_threshold):
"""This function, given a reference and a sample structure does the superimposition of every combination of pairs of chains and calculates the RMSD.
It returns a dictionary with a tuple of the reference and sample chain as a tuple and the superimposer instance resulting from those two chains, as
well as two variables, the ID of the chain with the smallest RMSD when superimposing with the reference structure and the RMSD itself
Arguments:
ref_structure (Bio.PDB.Structure): is the structure on which the macrocomplex is gonna get build on every iteration of the function
sample_structure (Bio.PDB.Structure): is the structure that is gonna be added on every iteration of the function
Returns:
        all_superimpositions: pairs of chain-identifier tuples and Superimposer instances; when at least one superimposition succeeded, this is returned as a list of (ids, Superimposer) items sorted by increasing RMSD
superimposed_chains (boolean): set to True if there has been at least one superimposition, otherwise is False.
best_RMSD (float): RMSD of the best superimposition (the lowest RMSD value)
"""
### Saving arguments passed on to the function ###
ref_model = ref_structure[0] #retrieves the first and only available model of reference structure
sample_model = sample_structure[0] #retrieves the first and only available model of the sample structure
### Initializing and declaring variables ###
best_sample_chain_ID = best_ref_chain_ID = ""
best_RMSD = 0 #variable for the lowest RMSD
prev_RMSD = True #variable to know we are in the first combination of pairs of chains
superimposed_chains = False #variable that indicates the presence of a superimposed chain (True if there is superimposed chain)
all_superimpositions = {} #start the dictionary that will contain all superimposition instances
### Superimposition of every combination of pairs of chains between the reference and the sample structures ###
## loops through all chains in the reference model ##
for ref_chain in ref_model:
logging.info("Processing reference chain %s", ref_chain.id)
ref_atoms, ref_molecule = Key_atom_retriever(ref_chain) #Retrieves all key atoms (CA or C4') and molecule type of the sample
## loops through all chains in the sample model ##
for sample_chain in sample_model:
logging.info("Processing sample chain %s", sample_chain.id)
sample_atoms, sample_molecule = Key_atom_retriever(sample_chain) #Retrieves all key atoms (CA or C4') and molecule type of the sample
if ref_molecule != sample_molecule: #checks that the molecular types of ref chain and sample chain are the same
logging.warning("Cannot superimpose. Reference chain %s is %s and sample chain %s is %s" %(ref_chain.get_id(), ref_molecule, sample_chain.get_id(), sample_molecule))
elif len(ref_atoms) != len(sample_atoms): #checks that the length of ref_atoms and sample_atoms is the same
logging.warning("Cannot superimpose. The number of atoms of the reference chain %s is %d and the number of atoms of the sample chain %s is %d", ref_chain.get_id(), len(ref_atoms), sample_chain.get_id(), len(sample_atoms))
## Make the superimposition between reference and sample chain ##
else: #everything is fine, same type of molecule, same length of atom lists
super_imposer = Bio.PDB.Superimposer() #creates superimposer instance
super_imposer.set_atoms(ref_atoms, sample_atoms) #creates ROTATION and TRANSLATION matrices from lists of atoms to align
RMSD = super_imposer.rms #retrieves RMSD
if RMSD > rmsd_threshold:
logging.info("The RMSD between chain %s of the reference and chain %s of the sample is %f", ref_chain.id, sample_chain.id, RMSD)
continue
if prev_RMSD is True or RMSD < prev_RMSD: #checks that the RMSD of this combination is smaller than the previous one
best_sample_chain_ID = sample_chain.id
best_ref_chain_ID = ref_chain.id #with this condition, the superimposer instance and other important
best_RMSD = RMSD #information pertaining to the superimposition with the smallest
prev_RMSD = RMSD #RMSD will be saved
all_superimpositions[(ref_chain.id, sample_chain.id)] = super_imposer #saving ALL superimposer instances in a dictionary
superimposed_chains = True # The superimposition has been made
logging.info("The RMSD between chain %s of the reference and chain %s of the sample is %f", ref_chain.id, sample_chain.id, RMSD)
### checks that there has been, at least, one superimposition ###
if superimposed_chains is True:
all_superimpositions = sorted(all_superimpositions.items(), key=lambda k:k[1].rms) #sorting by the lowest RMSD and saving to a list
logging.info("The combination of chains with the lowest RMSD is ref chain %s and sample chain %s with an RMSD of %f", best_ref_chain_ID, best_sample_chain_ID, best_RMSD)
return(all_superimpositions, superimposed_chains, best_RMSD)
def MacrocomplexBuilder(ref_structure, files_list, it, not_added, command_arguments):
"""This recursive function superimposes the most similar chain of a binary interaction PDB file with a reference structure and adds the transformed chain to the building complex
Arguments:
ref_structure (Bio.PDB.Structure): is the structure on which the macrocomplex is gonna get build on every iteration of the function
files_list (list): a list containing all the pdb files of binary interactions between the different subunits or chains that form the complex
it (int): this is a counter that keeps track of the interaction of the iterative function
not_added (int): this is a counter that keeps track of the files that have been parsed but no chains were added to the complex
command_arguments(argparse object): is the object containing all the command-line arguments. Contains:
RMSD (float): this is the RMSD threshold. If the RMSD of a superimposition between reference and sample structures is greater than this value, it will be
considered as a wrong superimposition and will not be used to build the complex
clashes (int): this is the clashes or contacts threshold. If the number of contacts between two chains exceeds this value, the superimposition will not be
            taken into account, since the rotated chain is either already present in the complex or clashing with other chains where it should not be. The chain
in question will be dismissed
number_chains (int): this is the numbers of chains that the complex must have in order to stop running. However, if these value is never reached, the program
will stop after a certain number of iterations
indir(str): this is the input directory relative path
outdir(str): this is the output directory relative path
iterations(boolean): this is set True if the user wants a pdb file for each iteration of the complex. Otherwise is False
    It is a recursive function: it calls itself until a certain condition is met, then:
Returns:
ref_structure (Bio.PDB.Structure): pdb structure instance containing all chains of the final macrocomplex.
"""
### Saving arguments passed on to the function ###
i = it #number of iterations
n = not_added #number of files that have been parsed but no chain has been added
nc = command_arguments.number_chains #number of chains
clashes_threshold = command_arguments.clashes #clashes threshold
RMSD_threshold = command_arguments.rmsd_threshold #RMSD threshold
indir = command_arguments.indir #input directory relative path
outdir = command_arguments.outdir #output directory relative path
pdb_iterations = command_arguments.pdb_iterations #if True, each iteration is stored in a pdb file
alphabet = list(string.ascii_uppercase) + list(string.ascii_lowercase) + list(string.digits) #creates an alphabet containing all the possible characters that can be used as chain IDs
    chains = len(ref_structure[0])
### Prints the current iteration and number of chains of the current complex ###
logging.info("This is the iteration #%d of the recursive function" % i )
logging.info("The complex has %d chains at this point" % chains)
    ### Checks if the current macrocomplex has the desired number of chains, or stops once every file has been tried in a row without adding a chain ###
if chains == nc or n > len(files_list):
logging.info("The whole macrocomplex has been successfully build")
logging.info("The final complex has %d chains" % chains)
logging.info("We have arrived to iteration %d" %(i))
return ref_structure #END OF THE RECURSIVE FUNCTION
    ### Selects the file to analyze in this iteration. It is always the first element of the list of files because, once analyzed, it is removed and appended at the end of the list ###
sample = files_list[0] #saves the first file name of the list of files as the sample
logging.info("We are processing the file %s" % (sample))
file_path = indir + "/" + sample #takes the path of the sample file
pdb_parser = Bio.PDB.PDBParser(QUIET = True) #parses the sample PDB file and creates a sample PDBParser object
sample_structure = pdb_parser.get_structure("sample", file_path) #saves the Structure object of the sample PDBParser object
sample_model = sample_structure[0] #obtains the first and only available model of the sample structure
### Calling the superimposition function to obtain the superimposition of every combination of pairs of chains between the reference and sample structures
all_superimpositions, superimposed_chains, best_RMSD = superimposition(ref_structure, sample_structure, RMSD_threshold)
### There are no superimposed chains or RMSD is above the threshold --> Call again the recursive function ###
if superimposed_chains is False or best_RMSD > RMSD_threshold: #if condition is met, there are no superimposed chains, or the RMSD is not small enough to be considered
        file = files_list.pop(0) #removes the current file
files_list.append(file) #and adds it at the end of the list of files
i += 1 #calling again the recursive function to analyze the next file
n += 1
return MacrocomplexBuilder(ref_structure = ref_structure, files_list = files_list, it = i, not_added = n, command_arguments = command_arguments) #call again the iterative function, j does not change
### There are superimposed chains ###
else:
## Loops through the superimposition dictionary, obtaining the superimposition instances and the reference and sample IDs ##
for chains, sup in all_superimpositions:
logging.info("We are processing the superimposition of ref chain %s with sample chain %s with an RMSD of %f" % (chains[0],chains[1], sup.rms))
if sup.rms > RMSD_threshold: #Checks that the superimposition has an RMSD above the threshold
logging.info("This superimposition of ref chain %s with sample chain %s has an RMSD bigger than the threshold, therefore it is skipped" % (chains[0],chains[1]))
continue #if not, skip that superimposition
sup.apply(sample_model.get_atoms()) #applies ROTATION and TRANSLATION matrices to all the atoms in the sample model
## Gets the sample chain that was not superimposed with the reference chain --> putative chain to add ##
chain_to_add = [chain for chain in sample_model.get_chains() if chain.get_id() != chains[0]][0]
present_chain = False #this variable indicates whether the chain to add is present on the building complex or not: False => not present, True => present
sample_atoms, sample_molecule = Key_atom_retriever(chain_to_add) #retrieves all key atoms (CA or C4') and molecule type of chain_to_add
logging.info("Putative chain to add is %s" % chain_to_add.id)
## Loops through all the chains from the reference structure ##
all_atoms = []
for chain in ref_structure[0].get_chains():
ref_atoms, ref_molecule = Key_atom_retriever(chain) #retrieves all key atoms (CA or C4') and molecule type of the reference present chain
## Makes a Neighbor Search to look for clashes between the chain to add and the chains from the reference structure ##
all_atoms.extend(ref_atoms)
Neighbor = Bio.PDB.NeighborSearch(ref_atoms) #creates an instance of class NeighborSearch, given a list of reference atoms
clashes = [] #declares a list that will contain all the atoms that clash between the reference and sample chains
for atom in sample_atoms: #loops through the list of atoms of chain_to_add
atoms_clashed = Neighbor.search(atom.coord,5) #produces a Neighbor search that returns all atoms/residues/chains/models/structures that have at least one atom within radius of center.
if len(atoms_clashed) > 0: #if there are clashes
clashes.extend(atoms_clashed) #adds the atoms list to the list of clashes
if len(clashes) > clashes_threshold: #checks that the number of total clashes is above the threshold
present_chain = True #then, chain_to_add is considered a chain already present in the complex
logging.info("The number of clashes between the chain to add %s and reference chain %s is %d, therefore the chain is the same and it is skipped" % (chain_to_add.id, chain.id,len(clashes)))
break #skips continuing through the loop, as it already clashes with one reference chain
## Checks that the number of total clashes is under the threshold ##
elif len(clashes) <= clashes_threshold:
logging.info("The number of clashes between the chain to add %s and reference chain %s is %d, it is under the threshold" % (chain_to_add.id, chain.id,len(clashes)))
continue #continue the loops, as we must ensure that chain_to_add does not clash with ANY reference chain
## Rotated chain to add is not a chain already in the building macrocomplex structure, then adds it, with its original ID or with a new one ##
if present_chain is False:
logging.info("Chain %s superimposed with chain %s yields rotated chain %s which is not in the complex" %(chains[0],chains[1],chain_to_add.id))
chain_ids = [chain.id for chain in ref_structure[0].get_chains()] #list containing IDs of all chains present in reference structure
ID = ID_creator(chain_ids, chain_to_add.id)
chain_to_add.id = ID
ref_structure[0].add(chain_to_add) #adds chain_to_add to the building macrocomplex structure
logging.info("Added Chain %s" % ID)
## Checks whether the user provided the iterations argument, then save each iteration of the current complex in a PDB file ##
if pdb_iterations:
                    if len(list(ref_structure[0].get_atoms())) > 99999 or len(list(ref_structure[0].get_chains())) > 62: #checks whether the structure exceeds the PDB format limits (99,999 atoms or 62 chains)
io = Bio.PDB.MMCIFIO() #creates the MMCIFIO object, that can contain more than 99,999 atom coordinates
io.set_structure(ref_structure[0]) #sets the reference structure object to be written in a MMCIF file
io.save("macrocomplex_chains_%d.cif" %(ref_structure[0].__len__())) #saves the structure on a file
logging.info("saving macrocomplex_chains_%d.cif in %s" %(ref_structure[0].__len__(),os.path.abspath(outdir)))
                    else: #the structure still fits within the PDB format limits
io = Bio.PDB.PDBIO() #creates the PDBIO object
io.set_structure(ref_structure[0]) #sets the reference structure object to be written in a PDB file
io.save("macrocomplex_chains_%d.pdb" %(ref_structure[0].__len__())) #saves the structure on a file
logging.info("saving macrocomplex_chains_%d.pdb in %s" %(ref_structure[0].__len__(),os.path.abspath(outdir)))
                file = files_list.pop(0) #removes the first file of the files list
files_list.append(file) #adds the file at the end of the files list
i += 1 #adds one to the iteration variable
n = 0
#this is what makes the function recursive, it calls itself on the return, executing the whole function again and again until certain condition is met
return MacrocomplexBuilder(ref_structure = ref_structure, files_list = files_list, it = i, not_added = n, command_arguments = command_arguments)
        ### Once the current file has been analyzed it is removed and appended at the end of the files list ###
        file = files_list.pop(0) #removes the first file of the files list
files_list.append(file) #adds the file at the end of the files list
i += 1 #adds one to the iteration variable
n += 1
#this is what makes the function recursive, it calls itself on the return, executing the whole function again and again until certain condition is met
        return MacrocomplexBuilder(ref_structure = ref_structure, files_list = files_list, it = i, not_added = n, command_arguments = command_arguments)
# File: esmvaltool/diag_scripts/autoassess/stratosphere/strat_metrics_1.py (ESMValTool 2.9.0)
import logging
import os
import iris
import iris.analysis.cartography as iac
import iris.coord_categorisation as icc
import iris.plot as iplt
import matplotlib.cm as mpl_cm
import matplotlib.colors as mcol
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np
from cartopy.mpl.gridliner import LATITUDE_FORMATTER
from esmvaltool.diag_scripts.autoassess.loaddata import load_run_ss
from .plotting import segment2list
logger = logging.getLogger(__name__)
# Candidates for general utility functions
def weight_lat_ave(cube):
"""Routine to calculate weighted latitudinal average."""
grid_areas = iac.area_weights(cube)
return cube.collapsed('latitude', iris.analysis.MEAN, weights=grid_areas)
def weight_cosine(cube):
"""Routine to calculate weighted lat avg when there is no longitude."""
grid_areas = iac.cosine_latitude_weights(cube)
return cube.collapsed('latitude', iris.analysis.MEAN, weights=grid_areas)
def cmap_and_norm(cmap, levels, reverse=False):
"""
Generate interpolated colour map.
Routine to generate interpolated colourmap and normalisation from
given colourmap and level set.
"""
# cmap must be a registered colourmap
tcmap = mpl_cm.get_cmap(cmap)
colourmap = segment2list(tcmap, levels.size, reverse=reverse)
normalisation = mcol.BoundaryNorm(levels, levels.size - 1)
return colourmap, normalisation
def plot_zmean(cube, levels, title, log=False, ax1=None):
"""
Plot zonal means.
Routine to plot zonal mean fields as latitude-pressure contours with given
contour levels.
Option to plot against log(pressure).
"""
(colormap, normalisation) = cmap_and_norm('brewer_RdBu_11', levels)
if ax1 is None:
ax1 = plt.gca()
ax1.set_title(title)
iplt.contourf(cube, levels=levels, cmap=colormap, norm=normalisation)
lwid = 1. * np.ones_like(levels)
cl1 = iplt.contour(cube, colors='k', linewidths=lwid, levels=levels)
plt.clabel(cl1, cl1.levels, inline=1, fontsize=6, fmt='%1.0f')
ax1.set_xlabel('Latitude', fontsize='small')
ax1.set_xlim(-90, 90)
ax1.set_xticks([-90, -60, -30, 0, 30, 60, 90])
ax1.xaxis.set_major_formatter(LATITUDE_FORMATTER)
ax1.set_ylabel('Pressure (Pa)', fontsize='small')
ax1.set_ylim(100000., 10.)
if log:
ax1.set_yscale("log")
def plot_timehgt(cube, levels, title, log=False, ax1=None):
"""
Plot fields as time-pressure.
Routine to plot fields as time-pressure contours with given
contour levels.
Option to plot against log(pressure).
"""
(colormap, normalisation) = cmap_and_norm('brewer_RdBu_11', levels)
if ax1 is None:
ax1 = plt.gca()
ax1.set_title(title)
iplt.contourf(cube, levels=levels, cmap=colormap, norm=normalisation)
lwid = 1. * np.ones_like(levels)
cl1 = iplt.contour(cube, colors='k', linewidths=lwid, levels=levels)
plt.clabel(cl1, cl1.levels, inline=1, fontsize=6, fmt='%1.0f')
ax1.set_xlabel('Year', fontsize='small')
time_coord = cube.coord('time')
new_epoch = time_coord.points[0]
new_unit_str = 'hours since {}'
new_unit = new_unit_str.format(time_coord.units.num2date(new_epoch))
ax1.xaxis.set_label(new_unit)
ax1.xaxis.set_major_locator(mdates.YearLocator(4))
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax1.set_ylabel('Pressure (Pa)', fontsize='small')
ax1.set_ylim(100000., 10.)
if log:
ax1.set_yscale("log")
# Routines specific to stratosphere assessment
def plot_uwind(cube, month, filename):
"""Routine to plot zonal mean zonal wind on log pressure scale."""
levels = np.arange(-120, 121, 10)
title = 'Zonal mean zonal wind ({})'.format(month)
fig = plt.figure()
plot_zmean(cube, levels, title, log=True)
fig.savefig(filename)
plt.close()
def plot_temp(cube, season, filename):
"""Routine to plot zonal mean temperature on log pressure scale."""
levels = np.arange(160, 321, 10)
title = 'Temperature ({})'.format(season)
fig = plt.figure()
plot_zmean(cube, levels, title, log=True)
fig.savefig(filename)
plt.close()
def plot_qbo(cube, filename):
"""Routine to create time-height plot of 5S-5N mean zonal mean U."""
levels = np.arange(-80, 81, 10)
title = 'QBO'
fig = plt.figure(figsize=(12, 6))
# perform a check on iris version
# plot_timehgt will not work correctly
# for iris<2.1
ivlist = iris.__version__.split('.')
if float('.'.join([ivlist[0], ivlist[1]])) >= 2.1:
plot_timehgt(cube, levels, title, log=True)
fig.savefig(filename)
plt.close()
def calc_qbo_index(qbo):
"""
Routine to calculate QBO indices.
The segment of code you include scans the timeseries of U(30hPa) and looks
for the times where this crosses the zero line. Essentially U(30hPa)
oscillates between positive and negative, and we're looking for a period,
defined as the length of time between where U becomes positive and then
negative and then becomes positive again (or negative/positive/negative).
Also, periods less than 12 months are discounted.
"""
ufin = qbo.data
indiciesdown, indiciesup = find_zero_crossings(ufin)
counterup = len(indiciesup)
counterdown = len(indiciesdown)
# Did we start on an upwards or downwards cycle?
if indiciesdown and indiciesup:
if indiciesdown[0] < indiciesup[0]:
(kup, kdown) = (0, 1)
else:
(kup, kdown) = (1, 0)
else:
logger.warning('QBO metric can not be computed; no zero crossings!')
logger.warning(
"This means the model U(30hPa, around tropics) doesn't oscillate"
"between positive and negative"
"with a period<12 months, QBO can't be computed, set to 0."
)
(kup, kdown) = (0, 0)
# Translate upwards and downwards indices into U wind values
periodsmin = counterup - kup
periodsmax = counterdown - kdown
valsup = np.zeros(periodsmax)
valsdown = np.zeros(periodsmin)
for i in range(periodsmin):
valsdown[i] = np.amin(ufin[indiciesdown[i]:indiciesup[i + kup]])
for i in range(periodsmax):
valsup[i] = np.amax(ufin[indiciesup[i]:indiciesdown[i + kdown]])
# Calculate eastward QBO amplitude
counter = 0
totvals = 0
# valsup limit was initially hardcoded to +10.0
for i in range(periodsmax):
if valsup[i] > 0.:
totvals = totvals + valsup[i]
counter = counter + 1
if counter == 0:
ampl_east = 0.
else:
totvals = totvals / counter
ampl_east = totvals
# Calculate westward QBO amplitude
counter = 0
totvals = 0
for i in range(periodsmin):
# valdown limit was initially hardcoded to -20.0
if valsdown[i] < 0.:
totvals = totvals + valsdown[i]
counter = counter + 1
if counter == 0:
ampl_west = 0.
else:
totvals = totvals / counter
ampl_west = -totvals
# Calculate QBO period, set to zero if no full oscillations in data
period1 = 0.0
period2 = 0.0
if counterdown > 1:
period1 = (indiciesdown[counterdown - 1] - indiciesdown[0]) / (
counterdown - 1)
if counterup > 1:
period2 = (indiciesup[counterup - 1] - indiciesup[0]) / (counterup - 1)
# Pick larger oscillation period
if period1 < period2:
period = period2
else:
period = period1
return (period, ampl_west, ampl_east)
def flatten_list(list_):
"""
Flatten list.
Turn list of lists into a list of all elements.
[[1], [2, 3]] -> [1, 2, 3]
"""
return [item for sublist in list_ for item in sublist]
def find_zero_crossings(array):
"""
Find zero crossings in 1D iterable.
Returns two lists with indices, last_pos and last_neg.
If a zero crossing includes zero, zero is used as last positive
or last negative value.
:param array: 1D iterable.
:returns (last_pos, last_neg): Tuples with indices before sign change.
last_pos: indices of positive values with consecutive negative value.
last_neg: indices of negative values with consecutive positive value.
"""
signed_array = np.sign(array) # 1 if positive and -1 if negative
diff = np.diff(signed_array) # difference of one item and the next item
# sum differences in case zero is included in zero crossing
# array: [-1, 0, 1]
# signed: [-1, 0, 1]
# diff: [ 1, 1]
# sum: [ 0, 2]
for i, d in enumerate(diff):
if i < len(diff) - 1: # not last item
if d != 0 and d == diff[i + 1]:
diff[i + 1] = d + diff[i + 1]
diff[i] = 0
last_neg = np.argwhere(diff == 2)
last_pos = np.argwhere(diff == -2)
last_neg = flatten_list(last_neg)
last_pos = flatten_list(last_pos)
return last_pos, last_neg
def pnj_strength(cube, winter=True):
"""
Calculate PNJ.
Calculate PNJ and ENJ strength as max/(-min) of zonal mean U wind
    for nh/sh in winter and sh/nh in summer respectively.
"""
# Extract regions of interest
notrop = iris.Constraint(air_pressure=lambda p: p < 8000.)
nh_cons = iris.Constraint(latitude=lambda l: l > 0)
sh_cons = iris.Constraint(latitude=lambda l: l < 0)
nh_tmp = cube.extract(notrop & nh_cons)
sh_tmp = cube.extract(notrop & sh_cons)
# Calculate max/min depending on season
coords = ['latitude', 'air_pressure']
if winter:
pnj_max = nh_tmp.collapsed(coords, iris.analysis.MAX)
pnj_min = sh_tmp.collapsed(coords, iris.analysis.MIN) * (-1.0)
else:
pnj_max = sh_tmp.collapsed(coords, iris.analysis.MAX)
pnj_min = nh_tmp.collapsed(coords, iris.analysis.MIN) * (-1.0)
return (pnj_max, pnj_min)
def pnj_metrics(run, ucube, metrics):
"""
Calculate PNJ strength.
Routine to calculate PNJ strength metrics from zonal mean U
Also produce diagnostic plots of zonal mean U
"""
# TODO side effect: changes metrics without returning
# Extract U for January and average over years
jancube = ucube.extract(iris.Constraint(month_number=1))
jan_annm = jancube.collapsed('time', iris.analysis.MEAN)
# Extract U for July and average over years
julcube = ucube.extract(iris.Constraint(month_number=7))
jul_annm = julcube.collapsed('time', iris.analysis.MEAN)
# Calculate PNJ and ENJ strengths
(jan_pnj, jan_enj) = pnj_strength(jan_annm, winter=True)
(jul_pnj, jul_enj) = pnj_strength(jul_annm, winter=False)
# Add to metrics dictionary
metrics['Polar night jet: northern hem (January)'] = jan_pnj.data
metrics['Polar night jet: southern hem (July)'] = jul_pnj.data
metrics['Easterly jet: southern hem (January)'] = jan_enj.data
metrics['Easterly jet: northern hem (July)'] = jul_enj.data
# Plot U(Jan) and U(Jul)
plot_uwind(jan_annm, 'January', '{}_u_jan.png'.format(run['runid']))
plot_uwind(jul_annm, 'July', '{}_u_jul.png'.format(run['runid']))
def qbo_metrics(run, ucube, metrics):
"""Routine to calculate QBO metrics from zonal mean U."""
# TODO side effect: changes metrics without returning
# Extract equatorial zonal mean U
tropics = iris.Constraint(latitude=lambda lat: -5 <= lat <= 5)
p30 = iris.Constraint(air_pressure=3000.)
ucube_cds = [cdt.standard_name for cdt in ucube.coords()]
if 'longitude' in ucube_cds:
qbo = weight_lat_ave(ucube.extract(tropics))
else:
qbo = weight_cosine(ucube.extract(tropics))
qbo30 = qbo.extract(p30)
# write results to current working directory
outfile = '{0}_qbo30_{1}.nc'
iris.save(qbo30, outfile.format(run['runid'], run['period']))
# Calculate QBO metrics
(period, amp_west, amp_east) = calc_qbo_index(qbo30)
# Add to metrics dictionary
metrics['QBO period at 30 hPa'] = period
metrics['QBO amplitude at 30 hPa (westward)'] = amp_west
metrics['QBO amplitude at 30 hPa (eastward)'] = amp_east
# Plot QBO and timeseries of QBO at 30hPa
plot_qbo(qbo, '{}_qbo.png'.format(run['runid']))
def tpole_metrics(run, tcube, metrics):
"""
Compute 50hPa polar temp.
Routine to calculate polar 50hPa temperature metrics from zonal mean
temperature.
Also produce diagnostic plots of zonal mean temperature.
"""
# TODO side effect: changes metrics without returning
# Calculate and extract seasonal mean temperature
t_seas_mean = tcube.aggregated_by('clim_season', iris.analysis.MEAN)
t_djf = t_seas_mean.extract(iris.Constraint(clim_season='djf'))
t_mam = t_seas_mean.extract(iris.Constraint(clim_season='mam'))
t_jja = t_seas_mean.extract(iris.Constraint(clim_season='jja'))
t_son = t_seas_mean.extract(iris.Constraint(clim_season='son'))
# Calculate area averages over polar regions at 50hPa
nhpole = iris.Constraint(latitude=lambda la: la >= 60,
air_pressure=5000.0)
shpole = iris.Constraint(latitude=lambda la: la <= -60,
air_pressure=5000.0)
tcube_cds = [cdt.standard_name for cdt in tcube.coords()]
if 'longitude' in tcube_cds:
djf_polave = weight_lat_ave(t_djf.extract(nhpole))
mam_polave = weight_lat_ave(t_mam.extract(nhpole))
jja_polave = weight_lat_ave(t_jja.extract(shpole))
son_polave = weight_lat_ave(t_son.extract(shpole))
else:
djf_polave = weight_cosine(t_djf.extract(nhpole))
mam_polave = weight_cosine(t_mam.extract(nhpole))
jja_polave = weight_cosine(t_jja.extract(shpole))
son_polave = weight_cosine(t_son.extract(shpole))
# Calculate metrics and add to metrics dictionary
# TODO Why take off 180.0?
metrics['50 hPa temperature: 60N-90N (DJF)'] = djf_polave.data - 180.
metrics['50 hPa temperature: 60N-90N (MAM)'] = mam_polave.data - 180.
metrics['50 hPa temperature: 90S-60S (JJA)'] = jja_polave.data - 180.
metrics['50 hPa temperature: 90S-60S (SON)'] = son_polave.data - 180.
# Plot T(DJF) and T(JJA)
plot_temp(t_djf, 'DJF', '{}_t_djf.png'.format(run['runid']))
plot_temp(t_jja, 'JJA', '{}_t_jja.png'.format(run['runid']))
def mean_and_strength(cube):
"""Calculate mean and strength of equatorial temperature season cycle."""
# Calculate mean, max and min values of seasonal timeseries
tmean = cube.collapsed('time', iris.analysis.MEAN)
tmax = cube.collapsed('time', iris.analysis.MAX)
tmin = cube.collapsed('time', iris.analysis.MIN)
tstrength = (tmax - tmin) / 2.
# TODO Why take off 180.0?
return (tmean.data - 180.0, tstrength.data)
def t_mean(cube):
"""Calculate mean equatorial 100hPa temperature."""
tmean = cube.collapsed('time', iris.analysis.MEAN)
return tmean.data
def q_mean(cube):
"""Calculate mean tropical 70hPa water vapour."""
qmean = cube.collapsed('time', iris.analysis.MEAN)
# TODO magic numbers
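    # Conversion from specific humidity (kg/kg) to a volume mixing ratio in
    # ppmv: multiply by the ratio of molar masses (dry air ~29 g/mol over
    # water vapour 18 g/mol) and by 1e6.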
return (1000000. * 29. / 18.) * qmean.data # ppmv
def teq_metrics(run, tcube, metrics):
"""Routine to calculate equatorial 100hPa temperature metrics."""
# Extract equatorial temperature at 100hPa
equator = iris.Constraint(latitude=lambda lat: -2 <= lat <= 2)
p100 = iris.Constraint(air_pressure=10000.)
teq100 = tcube.extract(equator & p100)
# Calculate area-weighted global monthly means from multi-annual data
t_months = teq100.aggregated_by('month', iris.analysis.MEAN)
tcube_cds = [cdt.standard_name for cdt in tcube.coords()]
if 'longitude' in tcube_cds:
t_months = weight_lat_ave(t_months)
else:
t_months = weight_cosine(t_months)
# write results to current working directory
outfile = '{0}_teq100_{1}.nc'
iris.save(t_months, outfile.format(run['runid'], run['period']))
# Calculate metrics
(tmean, tstrength) = mean_and_strength(t_months)
# Add to metrics dictionary
metrics['100 hPa equatorial temp (annual mean)'] = tmean
metrics['100 hPa equatorial temp (annual cycle strength)'] = tstrength
def t_metrics(run, tcube, metrics):
"""Routine to calculate 10S-10N 100hPa temperature metrics."""
# TODO side effect: changes metrics without returning
# Extract 10S-10N temperature at 100hPa
equator = iris.Constraint(latitude=lambda lat: -10 <= lat <= 10)
p100 = iris.Constraint(air_pressure=10000.)
t100 = tcube.extract(equator & p100)
# Calculate area-weighted global monthly means from multi-annual data
t_months = t100.aggregated_by('month', iris.analysis.MEAN)
tcube_cds = [cdt.standard_name for cdt in tcube.coords()]
if 'longitude' in tcube_cds:
t_months = weight_lat_ave(t_months)
else:
t_months = weight_cosine(t_months)
# write results to current working directory
outfile = '{0}_t100_{1}.nc'
iris.save(t_months, outfile.format(run['runid'], run['period']))
# Calculate metrics
(tmean, tstrength) = mean_and_strength(t_months)
# Add to metrics dictionary
metrics['100 hPa 10Sto10N temp (annual mean)'] = tmean
metrics['100 hPa 10Sto10N temp (annual cycle strength)'] = tstrength
def q_metrics(run, qcube, metrics):
"""Routine to calculate 10S-10N 70hPa water vapour metrics."""
# TODO side effect: changes metrics without returning
# Extract 10S-10N humidity at 100hPa
tropics = iris.Constraint(latitude=lambda lat: -10 <= lat <= 10)
p70 = iris.Constraint(air_pressure=7000.)
q70 = qcube.extract(tropics & p70)
# Calculate area-weighted global monthly means from multi-annual data
q_months = q70.aggregated_by('month', iris.analysis.MEAN)
qcube_cds = [cdt.standard_name for cdt in qcube.coords()]
if 'longitude' in qcube_cds:
q_months = weight_lat_ave(q_months)
else:
q_months = weight_cosine(q_months)
# write results to current working directory
outfile = '{0}_q70_{1}.nc'
iris.save(q_months, outfile.format(run['runid'], run['period']))
# Calculate metrics
qmean = q_mean(q_months)
# Add to metrics dictionary
metrics['70 hPa 10Sto10N wv (annual mean)'] = qmean
def summary_metric(metrics):
"""
Compute weighted avg of metrics.
This is a weighted average of all 13 metrics,
giving equal weights to the averages of extratropical U,
extratropical T, QBO, and equatorial T metrics.
"""
# TODO side effect: changes metrics without returning
pnj_metric = metrics['Polar night jet: northern hem (January)'] \
+ metrics['Polar night jet: southern hem (July)'] \
+ metrics['Easterly jet: southern hem (January)'] \
+ metrics['Easterly jet: northern hem (July)']
t50_metric = metrics['50 hPa temperature: 60N-90N (DJF)'] \
+ metrics['50 hPa temperature: 60N-90N (MAM)'] \
+ metrics['50 hPa temperature: 90S-60S (JJA)'] \
+ metrics['50 hPa temperature: 90S-60S (SON)']
qbo_metric = metrics['QBO period at 30 hPa'] \
+ metrics['QBO amplitude at 30 hPa (westward)'] \
+ metrics['QBO amplitude at 30 hPa (eastward)']
teq_metric = metrics['100 hPa equatorial temp (annual mean)'] \
+ metrics['100 hPa equatorial temp (annual cycle strength)']
q_metric = metrics['70 hPa 10Sto10N wv (annual mean)']
# TODO magic numbers
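    # Each group sum is averaged over its member metrics (/4, /4, /3, /2, /1)
    # and the group averages are combined with weights 1, 2.4, 3.1, 8.6 and
    # 18.3, which sum to the 33.4 normalisation, so 'Summary' is a weighted
    # mean of the five group averages.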
summary = (
(pnj_metric / 4.) + (2.4 * t50_metric / 4.) + (3.1 * qbo_metric / 3.) +
(8.6 * teq_metric / 2.) + (18.3 * q_metric)) / 33.4
# Add to metrics dictionary
metrics['Summary'] = summary
def mainfunc(run):
"""Main function in stratospheric assessment code."""
metrics = dict()
# Set up to only run for 10 year period (eventually)
year_cons = dict(from_dt=run['from_monthly'], to_dt=run['to_monthly'])
# Read zonal mean U (lbproc=192) and add month number to metadata
ucube = load_run_ss(
run, 'monthly', 'eastward_wind', lbproc=192, **year_cons)
# Although input data is a zonal mean, iris does not recognise it as such
# and just reads it as having a single longitudinal coordinate. This
# removes longitude as a dimension coordinate and makes it a scalar
# coordinate in line with how a zonal mean would be described.
# Is there a better way of doing this?
ucube_cds = [cdt.standard_name for cdt in ucube.coords()]
if 'longitude' in ucube_cds:
ucube = ucube.collapsed('longitude', iris.analysis.MEAN)
if not ucube.coord('latitude').has_bounds():
ucube.coord('latitude').guess_bounds()
# check for month_number
aux_coord_names = [aux_coord.var_name for aux_coord in ucube.aux_coords]
if 'month_number' not in aux_coord_names:
icc.add_month_number(ucube, 'time', name='month_number')
# Read zonal mean T (lbproc=192) and add clim month and season to metadata
tcube = load_run_ss(
run, 'monthly', 'air_temperature', lbproc=192,
**year_cons) # m01s30i204
# Although input data is a zonal mean, iris does not recognise it as such
# and just reads it as having a single longitudinal coordinate. This
# removes longitude as a dimension coordinate and makes it a scalar
# coordinate in line with how a zonal mean would be described.
# Is there a better way of doing this?
tcube_cds = [cdt.standard_name for cdt in tcube.coords()]
if 'longitude' in tcube_cds:
tcube = tcube.collapsed('longitude', iris.analysis.MEAN)
if not tcube.coord('latitude').has_bounds():
tcube.coord('latitude').guess_bounds()
aux_coord_names = [aux_coord.var_name for aux_coord in tcube.aux_coords]
if 'month' not in aux_coord_names:
icc.add_month(tcube, 'time', name='month')
if 'clim_season' not in aux_coord_names:
icc.add_season(tcube, 'time', name='clim_season')
# Read zonal mean q (lbproc=192) and add clim month and season to metadata
qcube = load_run_ss(
run, 'monthly', 'specific_humidity', lbproc=192,
**year_cons) # m01s30i205
# Although input data is a zonal mean, iris does not recognise it as such
# and just reads it as having a single longitudinal coordinate. This
# removes longitude as a dimension coordinate and makes it a scalar
# coordinate in line with how a zonal mean would be described.
# Is there a better way of doing this?
qcube_cds = [cdt.standard_name for cdt in qcube.coords()]
if 'longitude' in qcube_cds:
qcube = qcube.collapsed('longitude', iris.analysis.MEAN)
if not qcube.coord('latitude').has_bounds():
qcube.coord('latitude').guess_bounds()
aux_coord_names = [aux_coord.var_name for aux_coord in qcube.aux_coords]
if 'month' not in aux_coord_names:
icc.add_month(qcube, 'time', name='month')
if 'clim_season' not in aux_coord_names:
icc.add_season(qcube, 'time', name='clim_season')
# Calculate PNJ metrics
pnj_metrics(run, ucube, metrics)
# Calculate QBO metrics
qbo_metrics(run, ucube, metrics)
# Calculate polar temperature metrics
tpole_metrics(run, tcube, metrics)
# Calculate equatorial temperature metrics
teq_metrics(run, tcube, metrics)
# Calculate tropical temperature metrics
t_metrics(run, tcube, metrics)
# Calculate tropical water vapour metric
q_metrics(run, qcube, metrics)
# Summary metric
summary_metric(metrics)
# Make sure all metrics are of type float
# Need at the moment to populate metrics files
for key, value in metrics.items():
metrics[key] = float(value)
return metrics
def multi_qbo_plot(run):
"""Plot 30hPa QBO (5S to 5N) timeseries on one plot."""
# TODO avoid running mainfunc
# Run mainfunc for each run.
# mainfunc returns metrics and writes results into an *.nc in the current
# working directory.
    # To make this function independent of a previous call to mainfunc, mainfunc
# is run again for each run in this function
#
# This behaviour is due to the convention that only metric_functions can
# return metric values, multi_functions are supposed to
# only produce plots (see __init__.py).
# QBO at 30hPa timeseries plot
# Set up generic input file name
infile = '{0}_qbo30_{1}.nc'
# Create control filename
cntlfile = infile.format(run['suite_id1'], run['period'])
# Create experiment filename
exptfile = infile.format(run['suite_id2'], run['period'])
# If no control data then stop ...
if not os.path.exists(cntlfile):
logger.warning('QBO30 Control absent. skipping ...')
return
# Create plot
fig = plt.figure()
ax1 = plt.gca()
# Plot control
qbo30_cntl = iris.load_cube(cntlfile)
ivlist = iris.__version__.split('.')
if float('.'.join([ivlist[0], ivlist[1]])) >= 2.1:
iplt.plot(qbo30_cntl, label=run['suite_id1'])
# Plot experiments
if os.path.exists(exptfile):
qbo30_expt = iris.load_cube(exptfile)
iplt.plot(qbo30_expt, label=run['suite_id2'])
ax1.set_title('QBO at 30hPa')
ax1.set_xlabel('Time', fontsize='small')
ax1.set_ylabel('U (m/s)', fontsize='small')
ax1.legend(loc='upper left', fontsize='small')
fig.savefig('qbo_30hpa.png')
plt.close()
def multi_teq_plot(run):
"""
Plot temperature.
Plot 100hPa equatorial temperature seasonal cycle comparing
experiments on one plot.
"""
# TODO avoid running mainfunc
# Run mainfunc for each run.
# mainfunc returns metrics and writes results into an *.nc in the current
# working directory.
    # To make this function independent of previous calls to mainfunc, mainfunc
# is run again for each run in this function
#
# This behaviour is due to the convention that only metric_functions can
# return metric values, multi_functions are supposed to
# only produce plots (see __init__.py).
# Set up generic input file name
infile = '{0}_teq100_{1}.nc'
# Create control filename
cntlfile = infile.format(run['suite_id1'], run['period'])
# Create experiment filename
exptfile = infile.format(run['suite_id2'], run['period'])
# If no control data then stop ...
if not os.path.exists(cntlfile):
logger.warning('100hPa Teq for control absent. skipping ...')
return
# Set up generic plot label
plotlabel = '{0}, mean={1:5.2f}, cycle={2:5.2f}'
# Create plot
times = np.arange(12)
fig = plt.figure()
ax1 = plt.gca()
# Plot control
tmon = iris.load_cube(cntlfile)
(tmean, tstrg) = mean_and_strength(tmon)
label = plotlabel.format(run['suite_id1'], float(tmean), float(tstrg))
plt.plot(times, tmon.data, linewidth=2, label=label)
# Plot experiments
if os.path.exists(exptfile):
tmon = iris.load_cube(exptfile)
(tmean, tstrg) = mean_and_strength(tmon)
label = plotlabel.format(run['suite_id2'], float(tmean), float(tstrg))
plt.plot(times, tmon.data, linewidth=2, label=label)
ax1.set_title('Equatorial 100hPa temperature, Multi-annual monthly means')
ax1.set_xlabel('Month', fontsize='small')
ax1.set_xlim(0, 11)
ax1.set_xticks(times)
ax1.set_xticklabels(tmon.coord('month').points, fontsize='small')
ax1.set_ylabel('T (K)', fontsize='small')
ax1.legend(loc='upper left', fontsize='small')
fig.savefig('teq_100hpa.png')
plt.close()
def calc_merra(run):
"""Use MERRA as obs to compare."""
# Load data
merrafile = os.path.join(run['clim_root'], 'ERA-Interim_cubeList.nc')
(t, q) = iris.load_cubes(merrafile,
['air_temperature', 'specific_humidity'])
# Strip out required times
time = iris.Constraint(
time=lambda cell:
run['from_monthly'] <= cell.point <= run['to_monthly']
)
t = t.extract(time)
q = q.extract(time)
# zonal mean
t_cds = [cdt.standard_name for cdt in t.coords()]
if 'longitude' in t_cds:
t = t.collapsed('longitude', iris.analysis.MEAN)
q_cds = [cdt.standard_name for cdt in q.coords()]
if 'longitude' in q_cds:
q = q.collapsed('longitude', iris.analysis.MEAN)
# mean over tropics
equator = iris.Constraint(latitude=lambda lat: -10 <= lat <= 10)
p100 = iris.Constraint(air_pressure=10000.)
t = t.extract(equator & p100)
# Calculate area-weighted global monthly means from multi-annual data
iris.coord_categorisation.add_month(t, 'time', name='month')
t = t.aggregated_by('month', iris.analysis.MEAN)
if 'longitude' in t_cds:
t = weight_lat_ave(t)
else:
t = weight_cosine(t)
    # Extract 10S-10N humidity at 70hPa
tropics = iris.Constraint(latitude=lambda lat: -10 <= lat <= 10)
p70 = iris.Constraint(air_pressure=7000.)
q = q.extract(tropics & p70)
# Calculate area-weighted global monthly means from multi-annual data
iris.coord_categorisation.add_month(q, 'time', name='month')
q = q.aggregated_by('month', iris.analysis.MEAN)
if 'longitude' in q_cds:
q = weight_lat_ave(q)
else:
q = weight_cosine(q)
# Calculate time mean
t = t.collapsed('time', iris.analysis.MEAN)
q = q.collapsed('time', iris.analysis.MEAN)
# Create return values
tmerra = t.data # K
# TODO magic numbers
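    # Assumed interpretation of the constants: 1e6 * (molar mass of dry air,
    # ~29 g/mol, over molar mass of water, ~18 g/mol) converts specific
    # humidity (kg/kg) into an approximate volume mixing ratio in ppmv.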
qmerra = ((1000000. * 29. / 18.) * q.data) # ppmv
return tmerra, qmerra
def calc_erai(run):
"""Use ERA-Interim as obs to compare."""
# Load data
eraifile = os.path.join(run['clim_root'], 'ERA-Interim_cubeList.nc')
(t, q) = iris.load_cubes(eraifile,
['air_temperature', 'specific_humidity'])
# Strip out required times
time = iris.Constraint(
time=lambda cell:
run['from_monthly'] <= cell.point <= run['to_monthly']
)
t = t.extract(time)
q = q.extract(time)
# Calculate time mean
t = t.collapsed('time', iris.analysis.MEAN)
q = q.collapsed('time', iris.analysis.MEAN)
# Create return values
terai = t.data # K
qerai = ((1000000. * 29. / 18.) * q.data) # ppmv
return terai, qerai
def multi_t100_vs_q70_plot(run):
"""Plot mean 100hPa temperature against mean 70hPa humidity."""
# TODO avoid running mainfunc
# Run mainfunc for each run.
# mainfunc returns metrics and writes results into an *.nc in the current
# working directory.
    # To make this function independent of previous calls to mainfunc, mainfunc
# is run again for each run in this function
#
# This behaviour is due to the convention that only metric_functions can
# return metric values, multi_functions are supposed to
# only produce plots (see __init__.py).
# Set up generic input file name
t_file = '{0}_t100_{1}.nc'
q_file = '{0}_q70_{1}.nc'
# Create control filenames
t_cntl = t_file.format(run['suite_id1'], run['period'])
q_cntl = q_file.format(run['suite_id1'], run['period'])
# Create experiment filenames
t_expt = t_file.format(run['suite_id2'], run['period'])
q_expt = q_file.format(run['suite_id2'], run['period'])
# If no control data then stop ...
if not os.path.exists(t_cntl):
logger.warning('100hPa T for control absent. skipping ...')
return
# If no control data then stop ...
if not os.path.exists(q_cntl):
logger.warning('70hPa q for control absent. skipping ...')
return
# Load MERRA data (currently set to pre-calculated values)
(t_merra, q_merra) = calc_merra(run)
# Load ERA-I data (currently set to pre-calculated values)
(t_erai, q_erai) = calc_erai(run)
# Create plot
# Axes
# bottom X: temperature bias wrt MERRA
# left Y : water vapour bias wrt MERRA
# top X : temperature bias wrt ERA-I
# right Y : water vapour bias wrt ERA-I
merra_xmin = -1.0
merra_xmax = 4.0
merra_ymin = -1.0
merra_ymax = 3.0
# erai_xmin = merra_xmin + (t_merra - t_erai)
# erai_xmax = merra_xmax + (t_merra - t_erai)
# erai_ymin = merra_ymin + (q_merra - q_erai)
# erai_ymax = merra_ymax + (q_merra - q_erai)
fig = plt.figure()
# MERRA axes
ax1 = plt.gca()
ax1.set_xlim(merra_xmin, merra_xmax)
ax1.set_ylim(merra_ymin, merra_ymax)
ax1.xaxis.set_tick_params(labelsize='small')
ax1.yaxis.set_tick_params(labelsize='small')
ax1.set_xlabel('T(10S-10N, 100hPa) bias wrt ERA-I (K)', fontsize='large')
ax1.set_ylabel('q(10S-10N, 70hPa) bias wrt ERA-I (ppmv)', fontsize='large')
# ERA-I axes
# ax2 = ax1.twiny() # twiny gives second horizontal axis
# ay2 = ax1.twinx() # twinx gives second vertical axis
# ax2.xaxis.set_tick_params(labelsize='small')
# ay2.yaxis.set_tick_params(labelsize='small')
# ax2.set_xlabel('T(10S-10N, 100hPa) bias wrt ERA-I (K)',
# fontsize='large')
# ay2.set_ylabel('q(10S-10N, 70hPa) bias wrt ERA-I (ppmv)',
# fontsize='large')
# Plot ideal area
# Arbitrary box of acceptability for Met Office model
# development, designed to target warm
# tropopause temperature biases
# (e.g. Hardiman et al (2015) DOI: 10.1175/JCLI-D-15-0075.1.
# Defined as T bias < 2K and q bias < 20% relative to MERRA.
# MERRA is not used in this plot so ranges shifted by
# +0.8 K and +0.1 ppmv to account for
# differences between MERRA and ERA-Interim.
# TODO: Make box symmetric about zero to be relevant
# to models with a cold bias?
# TODO: add this to the final plot
# patch = Rectangle(
# (0.8, 0.1),
# 2.0,
# 0.2 * q_merra,
# fc='lime',
# ec='None',
# zorder=0)
# ax1.add_patch(patch)
# Plot control
tmon = iris.load_cube(t_cntl)
tmean = t_mean(tmon) - t_merra
qmon = iris.load_cube(q_cntl)
qmean = q_mean(qmon) - q_merra
label = run['suite_id1']
ax1.scatter(tmean, qmean, s=100, label=label, marker='^')
# Plot experiment
if os.path.exists(t_expt) and os.path.exists(q_expt):
tmon = iris.load_cube(t_expt)
tmean = t_mean(tmon) - t_merra
qmon = iris.load_cube(q_expt)
qmean = q_mean(qmon) - q_merra
label = run['suite_id2']
ax1.scatter(tmean, qmean, s=100, label=label, marker='v')
ax1.legend(loc='upper right', scatterpoints=1, fontsize='medium')
fig.savefig('t100_vs_q70.png')
plt.close() | PypiClean |
/AltAnalyze-2.1.3.15.tar.gz/AltAnalyze-2.1.3.15/altanalyze/visualization_scripts/umap_learn/spectral.py
import numpy as np
import scipy.sparse
import scipy.sparse.csgraph
from sklearn.manifold import SpectralEmbedding
from sklearn.metrics import pairwise_distances
from warnings import warn
def component_layout(
data, n_components, component_labels, dim, metric="euclidean", metric_kwds={}
):
"""Provide a layout relating the separate connected components. This is done
by taking the centroid of each component and then performing a spectral embedding
of the centroids.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
n_components: int
The number of distinct components to be layed out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
Returns
-------
component_embedding: array of shape (n_components, dim)
The ``dim``-dimensional embedding of the ``n_components``-many
connected components.
"""
component_centroids = np.empty((n_components, data.shape[1]), dtype=np.float64)
for label in range(n_components):
component_centroids[label] = data[component_labels == label].mean(axis=0)
distance_matrix = pairwise_distances(
component_centroids, metric=metric, **metric_kwds
)
affinity_matrix = np.exp(-distance_matrix ** 2)
component_embedding = SpectralEmbedding(
n_components=dim, affinity="precomputed"
).fit_transform(affinity_matrix)
component_embedding /= component_embedding.max()
return component_embedding
def multi_component_layout(
data,
graph,
n_components,
component_labels,
dim,
random_state,
metric="euclidean",
metric_kwds={},
):
"""Specialised layout algorithm for dealing with graphs with many connected components.
    This will first find relative positions for the components by spectrally embedding
    their centroids, then spectrally embed each individual connected component, positioning
them according to the centroid embeddings. This provides a decent embedding of each
component while placing the components in good relative positions to one another.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
graph: sparse matrix
        The adjacency matrix of the graph to be embedded.
n_components: int
        The number of distinct components to be laid out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
Returns
-------
embedding: array of shape (n_samples, dim)
The initial embedding of ``graph``.
"""
result = np.empty((graph.shape[0], dim), dtype=np.float32)
if n_components > 2 * dim:
meta_embedding = component_layout(
data,
n_components,
component_labels,
dim,
metric=metric,
metric_kwds=metric_kwds,
)
else:
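        # Too few components for a meaningful spectral meta-embedding of the
        # centroids; place them on +/- unit axis vectors instead.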
k = int(np.ceil(n_components / 2.0))
base = np.hstack([np.eye(k), np.zeros((k, dim - k))])
meta_embedding = np.vstack([base, -base])[:n_components]
for label in range(n_components):
component_graph = graph.tocsr()[component_labels == label, :].tocsc()
component_graph = component_graph[:, component_labels == label].tocoo()
distances = pairwise_distances([meta_embedding[label]], meta_embedding)
data_range = distances[distances > 0.0].min() / 2.0
if component_graph.shape[0] < 2 * dim:
result[component_labels == label] = (
random_state.uniform(
low=-data_range,
high=data_range,
size=(component_graph.shape[0], dim),
)
+ meta_embedding[label]
)
continue
diag_data = np.asarray(component_graph.sum(axis=0))
# standard Laplacian
# D = scipy.sparse.spdiags(diag_data, 0, graph.shape[0], graph.shape[0])
# L = D - graph
# Normalized Laplacian
I = scipy.sparse.identity(component_graph.shape[0], dtype=np.float64)
D = scipy.sparse.spdiags(
1.0 / np.sqrt(diag_data),
0,
component_graph.shape[0],
component_graph.shape[0],
)
L = I - D * component_graph * D
k = dim + 1
num_lanczos_vectors = max(2 * k + 1, int(np.sqrt(component_graph.shape[0])))
try:
eigenvalues, eigenvectors = scipy.sparse.linalg.eigsh(
L,
k,
which="SM",
ncv=num_lanczos_vectors,
tol=1e-4,
v0=np.ones(L.shape[0]),
maxiter=graph.shape[0] * 5,
)
order = np.argsort(eigenvalues)[1:k]
component_embedding = eigenvectors[:, order]
expansion = data_range / np.max(np.abs(component_embedding))
component_embedding *= expansion
result[component_labels == label] = (
component_embedding + meta_embedding[label]
)
except scipy.sparse.linalg.ArpackError:
warn(
"WARNING: spectral initialisation failed! The eigenvector solver\n"
"failed. This is likely due to too small an eigengap. Consider\n"
"adding some noise or jitter to your data.\n\n"
"Falling back to random initialisation!"
)
result[component_labels == label] = (
random_state.uniform(
low=-data_range,
high=data_range,
size=(component_graph.shape[0], dim),
)
+ meta_embedding[label]
)
return result
def spectral_layout(data, graph, dim, random_state, metric="euclidean", metric_kwds={}):
"""Given a graph compute the spectral embedding of the graph. This is
simply the eigenvectors of the laplacian of the graph. Here we use the
normalized laplacian.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable being used as a numpy random state.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph.
"""
n_samples = graph.shape[0]
n_components, labels = scipy.sparse.csgraph.connected_components(graph)
if n_components > 1:
warn(
"Embedding a total of {} separate connected components using meta-embedding (experimental)".format(
n_components
)
)
return multi_component_layout(
data,
graph,
n_components,
labels,
dim,
random_state,
metric=metric,
metric_kwds=metric_kwds,
)
diag_data = np.asarray(graph.sum(axis=0))
# standard Laplacian
# D = scipy.sparse.spdiags(diag_data, 0, graph.shape[0], graph.shape[0])
# L = D - graph
# Normalized Laplacian
I = scipy.sparse.identity(graph.shape[0], dtype=np.float64)
D = scipy.sparse.spdiags(
1.0 / np.sqrt(diag_data), 0, graph.shape[0], graph.shape[0]
)
L = I - D * graph * D
k = dim + 1
num_lanczos_vectors = max(2 * k + 1, int(np.sqrt(graph.shape[0])))
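    # For graphs below ~2 million vertices use ARPACK (eigsh); otherwise fall
    # back to LOBPCG below, which scales better for very large sparse Laplacians.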
try:
if L.shape[0] < 2000000:
eigenvalues, eigenvectors = scipy.sparse.linalg.eigsh(
L,
k,
which="SM",
ncv=num_lanczos_vectors,
tol=1e-4,
v0=np.ones(L.shape[0]),
maxiter=graph.shape[0] * 5,
)
else:
eigenvalues, eigenvectors = scipy.sparse.linalg.lobpcg(
L,
random_state.normal(size=(L.shape[0], k)),
largest=False,
tol=1e-8
)
order = np.argsort(eigenvalues)[1:k]
return eigenvectors[:, order]
except scipy.sparse.linalg.ArpackError:
warn(
"WARNING: spectral initialisation failed! The eigenvector solver\n"
"failed. This is likely due to too small an eigengap. Consider\n"
"adding some noise or jitter to your data.\n\n"
"Falling back to random initialisation!"
)
return random_state.uniform(low=-10.0, high=10.0, size=(graph.shape[0], dim)) | PypiClean |
/ClueDojo-1.4.3-1.tar.gz/ClueDojo-1.4.3-1/src/cluedojo/static/dojo/cldr/nls/hebrew.js | ({"dateFormatItem-yM":"y-M","dateTimeFormats-appendItem-Second":"{0} ({2}: {1})","dateFormatItem-yQ":"y Q","eraNames":["AM"],"dateFormatItem-MMMEd":"E MMM d","dateTimeFormat-full":"{1} {0}","dateFormatItem-hms":"h:mm:ss a","dateFormatItem-yQQQ":"y QQQ","days-standAlone-wide":["1","2","3","4","5","6","7"],"dateFormatItem-MMM":"LLL","months-standAlone-narrow":["1","2","3","4","5","6","7","8","9","10","11","12","13"],"dateTimeFormats-appendItem-Year":"{0} {1}","dateTimeFormat-short":"{1} {0}","dateTimeFormat-medium":"{1} {0}","quarters-standAlone-abbr":["Q1","Q2","Q3","Q4"],"dateFormatItem-y":"y","timeFormat-full":"HH:mm:ss zzzz","dateTimeFormats-appendItem-Week":"{0} ({2}: {1})","dateTimeFormats-appendItem-Timezone":"{0} {1}","months-standAlone-abbr":["Tishri","Heshvan","Kislev","Tevet","Shevat","Adar I","Adar","Nisan","Iyar","Sivan","Tamuz","Av","Elul"],"dateFormatItem-yMMM":"y MMM","dateTimeFormats-appendItem-Month":"{0} ({2}: {1})","days-standAlone-narrow":["1","2","3","4","5","6","7"],"eraAbbr":["AM"],"dateFormat-long":"y MMMM d","timeFormat-medium":"HH:mm:ss","dateFormatItem-EEEd":"d EEE","dateTimeFormats-appendItem-Minute":"{0} ({2}: {1})","dateFormatItem-Hm":"H:mm","dateFormat-medium":"y MMM d","dateFormatItem-Hms":"H:mm:ss","quarters-standAlone-wide":["Q1","Q2","Q3","Q4"],"dateFormatItem-yMMMM":"y MMMM","dateFormatItem-ms":"mm:ss","quarters-standAlone-narrow":["1","2","3","4"],"dateTimeFormat-long":"{1} {0}","months-standAlone-wide":["Tishri","Heshvan","Kislev","Tevet","Shevat","Adar I","Adar","Nisan","Iyar","Sivan","Tamuz","Av","Elul"],"dateTimeFormats-appendItem-Day":"{0} ({2}: {1})","dateFormatItem-MMMMEd":"E MMMM d","quarters-format-narrow":["1","2","3","4"],"dateFormatItem-MMMd":"MMM d","timeFormat-long":"HH:mm:ss z","months-format-abbr":["Tishri","Heshvan","Kislev","Tevet","Shevat","Adar I","Adar","Nisan","Iyar","Sivan","Tamuz","Av","Elul"],"timeFormat-short":"HH:mm","dateTimeFormats-appendItem-Quarter":"{0} ({2}: {1})","dateFormatItem-MMMMd":"MMMM d","quarters-format-abbr":["Q1","Q2","Q3","Q4"],"days-format-abbr":["1","2","3","4","5","6","7"],"pm":"PM","dateFormatItem-M":"L","days-format-narrow":["1","2","3","4","5","6","7"],"dateTimeFormats-appendItem-Day-Of-Week":"{0} {1}","dateFormatItem-MEd":"E, M-d","months-format-narrow":["1","2","3","4","5","6","7","8","9","10","11","12","13"],"dateFormatItem-hm":"h:mm a","dateTimeFormats-appendItem-Hour":"{0} ({2}: {1})","am":"AM","days-standAlone-abbr":["1","2","3","4","5","6","7"],"dateFormat-short":"yyyy-MM-dd","dateFormatItem-yMMMEd":"EEE, y MMM d","dateFormat-full":"EEEE, y MMMM dd","dateFormatItem-Md":"M-d","dateFormatItem-yMEd":"EEE, y-M-d","months-format-wide":["Tishri","Heshvan","Kislev","Tevet","Shevat","Adar I","Adar","Nisan","Iyar","Sivan","Tamuz","Av","Elul"],"dateTimeFormats-appendItem-Era":"{0} {1}","dateFormatItem-d":"d","quarters-format-wide":["Q1","Q2","Q3","Q4"],"eraNarrow":["AM"],"days-format-wide":["1","2","3","4","5","6","7"]}) | PypiClean |
/DjangoDjangoAppCenter-0.0.11-py3-none-any.whl/DjangoAppCenter/simpleui/static/admin/simpleui-x/elementui/dialog.js
module.exports =
/******/ (function (modules) { // webpackBootstrap
/******/ // The module cache
/******/
var installedModules = {};
/******/
/******/ // The require function
/******/
function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/
if (installedModules[moduleId]) {
/******/
return installedModules[moduleId].exports;
/******/
}
/******/ // Create a new module (and put it into the cache)
/******/
var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/
};
/******/
/******/ // Execute the module function
/******/
modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/
module.l = true;
/******/
/******/ // Return the exports of the module
/******/
return module.exports;
/******/
}
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/
__webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/
__webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/
__webpack_require__.d = function (exports, name, getter) {
/******/
if (!__webpack_require__.o(exports, name)) {
/******/
Object.defineProperty(exports, name, {enumerable: true, get: getter});
/******/
}
/******/
};
/******/
/******/ // define __esModule on exports
/******/
__webpack_require__.r = function (exports) {
/******/
if (typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/
Object.defineProperty(exports, Symbol.toStringTag, {value: 'Module'});
/******/
}
/******/
Object.defineProperty(exports, '__esModule', {value: true});
/******/
};
/******/
/******/ // create a fake namespace object
/******/ // mode & 1: value is a module id, require it
/******/ // mode & 2: merge all properties of value into the ns
/******/ // mode & 4: return value when already ns object
/******/ // mode & 8|1: behave like require
/******/
__webpack_require__.t = function (value, mode) {
/******/
if (mode & 1) value = __webpack_require__(value);
/******/
if (mode & 8) return value;
/******/
if ((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/
var ns = Object.create(null);
/******/
__webpack_require__.r(ns);
/******/
Object.defineProperty(ns, 'default', {enumerable: true, value: value});
/******/
if (mode & 2 && typeof value != 'string') for (var key in value) __webpack_require__.d(ns, key, function (key) {
return value[key];
}.bind(null, key));
/******/
return ns;
/******/
};
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/
__webpack_require__.n = function (module) {
/******/
var getter = module && module.__esModule ?
/******/ function getDefault() {
return module['default'];
} :
/******/ function getModuleExports() {
return module;
};
/******/
__webpack_require__.d(getter, 'a', getter);
/******/
return getter;
/******/
};
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/
__webpack_require__.o = function (object, property) {
return Object.prototype.hasOwnProperty.call(object, property);
};
/******/
/******/ // __webpack_public_path__
/******/
__webpack_require__.p = "/dist/";
/******/
/******/
/******/ // Load entry module and return exports
/******/
return __webpack_require__(__webpack_require__.s = 77);
/******/
})
/************************************************************************/
/******/({
/***/ 0:
/***/ (function (module, __webpack_exports__, __webpack_require__) {
"use strict";
/* harmony export (binding) */
__webpack_require__.d(__webpack_exports__, "a", function () {
return normalizeComponent;
});
/* globals __VUE_SSR_CONTEXT__ */
// IMPORTANT: Do NOT use ES2015 features in this file (except for modules).
// This module is a runtime utility for cleaner component module output and will
// be included in the final webpack user bundle.
function normalizeComponent(
scriptExports,
render,
staticRenderFns,
functionalTemplate,
injectStyles,
scopeId,
moduleIdentifier, /* server only */
shadowMode /* vue-cli only */
) {
// Vue.extend constructor export interop
var options = typeof scriptExports === 'function'
? scriptExports.options
: scriptExports
// render functions
if (render) {
options.render = render
options.staticRenderFns = staticRenderFns
options._compiled = true
}
// functional template
if (functionalTemplate) {
options.functional = true
}
// scopedId
if (scopeId) {
options._scopeId = 'data-v-' + scopeId
}
var hook
if (moduleIdentifier) { // server build
hook = function (context) {
// 2.3 injection
context =
context || // cached call
(this.$vnode && this.$vnode.ssrContext) || // stateful
(this.parent && this.parent.$vnode && this.parent.$vnode.ssrContext) // functional
// 2.2 with runInNewContext: true
if (!context && typeof __VUE_SSR_CONTEXT__ !== 'undefined') {
context = __VUE_SSR_CONTEXT__
}
// inject component styles
if (injectStyles) {
injectStyles.call(this, context)
}
                // register component module identifier for async chunk inference
if (context && context._registeredComponents) {
context._registeredComponents.add(moduleIdentifier)
}
}
// used by ssr in case component is cached and beforeCreate
// never gets called
options._ssrRegister = hook
} else if (injectStyles) {
hook = shadowMode
? function () {
injectStyles.call(this, this.$root.$options.shadowRoot)
}
: injectStyles
}
if (hook) {
if (options.functional) {
// for template-only hot-reload because in that case the render fn doesn't
// go through the normalizer
options._injectStyles = hook
                // register for functional component in vue file
var originalRender = options.render
options.render = function renderWithStyleInjection(h, context) {
hook.call(context)
return originalRender(h, context)
}
} else {
// inject component registration as beforeCreate hook
var existing = options.beforeCreate
options.beforeCreate = existing
? [].concat(existing, hook)
: [hook]
}
}
return {
exports: scriptExports,
options: options
}
}
/***/
}),
/***/ 10:
/***/ (function (module, exports) {
module.exports = require("element-ui/lib/mixins/migrating");
/***/
}),
/***/ 14:
/***/ (function (module, exports) {
module.exports = require("element-ui/lib/utils/popup");
/***/
}),
/***/ 4:
/***/ (function (module, exports) {
module.exports = require("element-ui/lib/mixins/emitter");
/***/
}),
/***/ 77:
/***/ (function (module, __webpack_exports__, __webpack_require__) {
"use strict";
__webpack_require__.r(__webpack_exports__);
// CONCATENATED MODULE: ./node_modules/[email protected]@vue-loader/lib/loaders/templateLoader.js??vue-loader-options!./node_modules/[email protected]@vue-loader/lib??vue-loader-options!./packages/dialog/src/component.vue?vue&type=template&id=60140e62&
var render = function () {
var _vm = this
var _h = _vm.$createElement
var _c = _vm._self._c || _h
return _c(
"transition",
{
attrs: {name: "dialog-fade"},
on: {"after-enter": _vm.afterEnter, "after-leave": _vm.afterLeave}
},
[
_c(
"div",
{
directives: [
{
name: "show",
rawName: "v-show",
value: _vm.visible,
expression: "visible"
}
],
staticClass: "el-dialog__wrapper",
on: {
click: function ($event) {
if ($event.target !== $event.currentTarget) {
return null
}
return _vm.handleWrapperClick($event)
}
}
},
[
_c(
"div",
{
key: _vm.key,
ref: "dialog",
class: [
"el-dialog",
{
"is-fullscreen": _vm.fullscreen,
"el-dialog--center": _vm.center
},
_vm.customClass
],
style: _vm.style,
attrs: {
role: "dialog",
"aria-modal": "true",
"aria-label": _vm.title || "dialog"
}
},
[
_c(
"div",
{staticClass: "el-dialog__header"},
[
_vm._t("title", [
_c("span", {staticClass: "el-dialog__title"}, [
_vm._v(_vm._s(_vm.title))
])
]),
_vm.showClose
? _c(
"button",
{
staticClass: "el-dialog__headerbtn",
attrs: {type: "button", "aria-label": "Close"},
on: {click: _vm.handleClose}
},
[
_c("i", {
staticClass:
"el-dialog__close el-icon el-icon-close"
})
]
)
: _vm._e()
],
2
),
_vm.rendered
? _c(
"div",
{staticClass: "el-dialog__body"},
[_vm._t("default")],
2
)
: _vm._e(),
_vm.$slots.footer
? _c(
"div",
{staticClass: "el-dialog__footer"},
[_vm._t("footer")],
2
)
: _vm._e()
]
)
]
)
]
)
}
var staticRenderFns = []
render._withStripped = true
// CONCATENATED MODULE: ./packages/dialog/src/component.vue?vue&type=template&id=60140e62&
// EXTERNAL MODULE: external "element-ui/lib/utils/popup"
var popup_ = __webpack_require__(14);
var popup_default = /*#__PURE__*/__webpack_require__.n(popup_);
// EXTERNAL MODULE: external "element-ui/lib/mixins/migrating"
var migrating_ = __webpack_require__(10);
var migrating_default = /*#__PURE__*/__webpack_require__.n(migrating_);
// EXTERNAL MODULE: external "element-ui/lib/mixins/emitter"
var emitter_ = __webpack_require__(4);
var emitter_default = /*#__PURE__*/__webpack_require__.n(emitter_);
// CONCATENATED MODULE: ./node_modules/[email protected]@babel-loader/lib!./node_modules/[email protected]@vue-loader/lib??vue-loader-options!./packages/dialog/src/component.vue?vue&type=script&lang=js&
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
/* harmony default export */
var componentvue_type_script_lang_js_ = ({
name: 'ElDialog',
mixins: [popup_default.a, emitter_default.a, migrating_default.a],
props: {
title: {
type: String,
default: ''
},
modal: {
type: Boolean,
default: true
},
modalAppendToBody: {
type: Boolean,
default: true
},
appendToBody: {
type: Boolean,
default: false
},
lockScroll: {
type: Boolean,
default: true
},
closeOnClickModal: {
type: Boolean,
default: true
},
closeOnPressEscape: {
type: Boolean,
default: true
},
showClose: {
type: Boolean,
default: true
},
width: String,
fullscreen: Boolean,
customClass: {
type: String,
default: ''
},
top: {
type: String,
default: '15vh'
},
beforeClose: Function,
center: {
type: Boolean,
default: false
},
destroyOnClose: Boolean
},
data: function data() {
return {
closed: false,
key: 0
};
},
watch: {
visible: function visible(val) {
var _this = this;
if (val) {
this.closed = false;
this.$emit('open');
this.$el.addEventListener('scroll', this.updatePopper);
this.$nextTick(function () {
_this.$refs.dialog.scrollTop = 0;
});
if (this.appendToBody) {
document.body.appendChild(this.$el);
}
} else {
this.$el.removeEventListener('scroll', this.updatePopper);
if (!this.closed) this.$emit('close');
if (this.destroyOnClose) {
this.$nextTick(function () {
_this.key++;
});
}
}
}
},
computed: {
style: function style() {
var style = {};
if (!this.fullscreen) {
style.marginTop = this.top;
if (this.width) {
style.width = this.width;
}
}
return style;
}
},
methods: {
getMigratingConfig: function getMigratingConfig() {
return {
props: {
'size': 'size is removed.'
}
};
},
handleWrapperClick: function handleWrapperClick() {
if (!this.closeOnClickModal) return;
this.handleClose();
},
handleClose: function handleClose() {
if (typeof this.beforeClose === 'function') {
this.beforeClose(this.hide);
} else {
this.hide();
}
},
hide: function hide(cancel) {
if (cancel !== false) {
this.$emit('update:visible', false);
this.$emit('close');
this.closed = true;
}
},
updatePopper: function updatePopper() {
this.broadcast('ElSelectDropdown', 'updatePopper');
this.broadcast('ElDropdownMenu', 'updatePopper');
},
afterEnter: function afterEnter() {
this.$emit('opened');
},
afterLeave: function afterLeave() {
this.$emit('closed');
}
},
mounted: function mounted() {
if (this.visible) {
this.rendered = true;
this.open();
if (this.appendToBody) {
document.body.appendChild(this.$el);
}
}
},
destroyed: function destroyed() {
// if appendToBody is true, remove DOM node after destroy
if (this.appendToBody && this.$el && this.$el.parentNode) {
this.$el.parentNode.removeChild(this.$el);
}
}
});
// CONCATENATED MODULE: ./packages/dialog/src/component.vue?vue&type=script&lang=js&
/* harmony default export */
var src_componentvue_type_script_lang_js_ = (componentvue_type_script_lang_js_);
// EXTERNAL MODULE: ./node_modules/[email protected]@vue-loader/lib/runtime/componentNormalizer.js
var componentNormalizer = __webpack_require__(0);
// CONCATENATED MODULE: ./packages/dialog/src/component.vue
/* normalize component */
var component = Object(componentNormalizer["a" /* default */])(
src_componentvue_type_script_lang_js_,
render,
staticRenderFns,
false,
null,
null,
null
)
/* hot reload */
if (false) {
var api;
}
component.options.__file = "packages/dialog/src/component.vue"
/* harmony default export */
var src_component = (component.exports);
// CONCATENATED MODULE: ./packages/dialog/index.js
/* istanbul ignore next */
src_component.install = function (Vue) {
Vue.component(src_component.name, src_component);
};
/* harmony default export */
var dialog = __webpack_exports__["default"] = (src_component);
/***/
})
/******/
}); | PypiClean |
/Electrum-VTC-2.9.3.3.tar.gz/Electrum-VTC-2.9.3.3/gui/qt/address_list.py
import webbrowser
from util import *
from electrum_vtc.i18n import _
from electrum_vtc.util import block_explorer_URL, format_satoshis, format_time
from electrum_vtc.plugins import run_hook
from electrum_vtc.bitcoin import is_address
class AddressList(MyTreeWidget):
filter_columns = [0, 1, 2] # Address, Label, Balance
def __init__(self, parent=None):
MyTreeWidget.__init__(self, parent, self.create_menu, [ _('Address'), _('Label'), _('Balance'), _('Tx')], 1)
self.setSelectionMode(QAbstractItemView.ExtendedSelection)
def on_update(self):
self.wallet = self.parent.wallet
item = self.currentItem()
current_address = item.data(0, Qt.UserRole).toString() if item else None
self.clear()
receiving_addresses = self.wallet.get_receiving_addresses()
change_addresses = self.wallet.get_change_addresses()
if True:
account_item = self
sequences = [0,1] if change_addresses else [0]
for is_change in sequences:
if len(sequences) > 1:
name = _("Receiving") if not is_change else _("Change")
seq_item = QTreeWidgetItem( [ name, '', '', '', ''] )
account_item.addChild(seq_item)
if not is_change:
seq_item.setExpanded(True)
else:
seq_item = account_item
used_item = QTreeWidgetItem( [ _("Used"), '', '', '', ''] )
used_flag = False
addr_list = change_addresses if is_change else receiving_addresses
for address in addr_list:
num = len(self.wallet.history.get(address,[]))
is_used = self.wallet.is_used(address)
label = self.wallet.labels.get(address,'')
c, u, x = self.wallet.get_addr_balance(address)
balance = self.parent.format_amount(c + u + x)
address_item = QTreeWidgetItem([address, label, balance, "%d"%num])
address_item.setFont(0, QFont(MONOSPACE_FONT))
address_item.setData(0, Qt.UserRole, address)
address_item.setData(0, Qt.UserRole+1, True) # label can be edited
if self.wallet.is_frozen(address):
address_item.setBackgroundColor(0, QColor('lightblue'))
if self.wallet.is_beyond_limit(address, is_change):
address_item.setBackgroundColor(0, QColor('red'))
if is_used:
if not used_flag:
seq_item.insertChild(0, used_item)
used_flag = True
used_item.addChild(address_item)
else:
seq_item.addChild(address_item)
if address == current_address:
self.setCurrentItem(address_item)
def create_menu(self, position):
from electrum_vtc.wallet import Multisig_Wallet
is_multisig = isinstance(self.wallet, Multisig_Wallet)
can_delete = self.wallet.can_delete_address()
selected = self.selectedItems()
multi_select = len(selected) > 1
addrs = [unicode(item.text(0)) for item in selected]
if not addrs:
return
if not multi_select:
item = self.itemAt(position)
col = self.currentColumn()
if not item:
return
addr = addrs[0]
if not is_address(addr):
item.setExpanded(not item.isExpanded())
return
menu = QMenu()
if not multi_select:
column_title = self.headerItem().text(col)
menu.addAction(_("Copy %s")%column_title, lambda: self.parent.app.clipboard().setText(item.text(col)))
menu.addAction(_('Details'), lambda: self.parent.show_address(addr))
if col in self.editable_columns:
menu.addAction(_("Edit %s")%column_title, lambda: self.editItem(item, col))
menu.addAction(_("Request payment"), lambda: self.parent.receive_at(addr))
if self.wallet.can_export():
menu.addAction(_("Private key"), lambda: self.parent.show_private_key(addr))
if not is_multisig and not self.wallet.is_watching_only():
menu.addAction(_("Sign/verify message"), lambda: self.parent.sign_verify_message(addr))
menu.addAction(_("Encrypt/decrypt message"), lambda: self.parent.encrypt_message(addr))
if can_delete:
menu.addAction(_("Remove from wallet"), lambda: self.parent.remove_address(addr))
addr_URL = block_explorer_URL(self.config, 'addr', addr)
if addr_URL:
menu.addAction(_("View on block explorer"), lambda: webbrowser.open(addr_URL))
if not self.wallet.is_frozen(addr):
menu.addAction(_("Freeze"), lambda: self.parent.set_frozen_state([addr], True))
else:
menu.addAction(_("Unfreeze"), lambda: self.parent.set_frozen_state([addr], False))
coins = self.wallet.get_utxos(addrs)
if coins:
menu.addAction(_("Spend from"), lambda: self.parent.spend_coins(coins))
run_hook('receive_menu', menu, addrs, self.wallet)
menu.exec_(self.viewport().mapToGlobal(position)) | PypiClean |
/mynewspaper-4.0.tar.gz/mynewspaper-4.0/misc/dateutil/zoneinfo/__init__.py
from dateutil.tz import tzfile
from tarfile import TarFile
import os
__author__ = "Gustavo Niemeyer <[email protected]>"
__license__ = "PSF License"
__all__ = ["setcachesize", "gettz", "rebuild"]
CACHE = []
CACHESIZE = 10
USE_SYSTEM_ZONEINFO = True # XXX configure at build time
class tzfile(tzfile):
def __reduce__(self):
return (gettz, (self._filename,))
def getzoneinfofile():
filenames = sorted(os.listdir(os.path.join(os.path.dirname(__file__))))
filenames.reverse()
for entry in filenames:
if entry.startswith("zoneinfo") and ".tar." in entry:
return os.path.join(os.path.dirname(__file__), entry)
return None
ZONEINFOFILE = getzoneinfofile() if USE_SYSTEM_ZONEINFO else None
ZONEINFODIR = (os.getenv("TZDIR") or "/usr/share/zoneinfo").rstrip(os.sep)
del getzoneinfofile
def setcachesize(size):
global CACHESIZE, CACHE
CACHESIZE = size
del CACHE[size:]
def gettz(name):
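    """Return a tzinfo object for the named zone (e.g. "Europe/London").

    The zone is looked up first in the system zoneinfo directory and then in
    the tarball bundled with this package; successful lookups are kept in a
    small cache. Returns None if the zone cannot be found.
    """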
for cachedname, tzinfo in CACHE:
if cachedname == name:
return tzinfo
name_parts = name.lstrip('/').split('/')
for part in name_parts:
if part == os.path.pardir or os.path.sep in part:
raise ValueError('Bad path segment: %r' % part)
filename = os.path.join(ZONEINFODIR, *name_parts)
try:
zonefile = open(filename, "rb")
except:
tzinfo = None
else:
tzinfo = tzfile(zonefile)
zonefile.close()
if tzinfo is None and ZONEINFOFILE:
tf = TarFile.open(ZONEINFOFILE)
try:
zonefile = tf.extractfile(name)
except KeyError:
tzinfo = None
else:
tzinfo = tzfile(zonefile)
tf.close()
if tzinfo is not None:
CACHE.insert(0, (name, tzinfo))
del CACHE[CACHESIZE:]
return tzinfo
def rebuild(filename, tag=None, format="gz"):
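    """Rebuild the bundled zoneinfo tarball from an IANA tzdata archive.

    Zone source files are extracted from ``filename``, compiled with the
    external ``zic`` tool, and re-packed as zoneinfo<tag>.tar.<format> next to
    this module, replacing any previously bundled archive.
    """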
import tempfile, shutil
tmpdir = tempfile.mkdtemp()
zonedir = os.path.join(tmpdir, "zoneinfo")
moduledir = os.path.dirname(__file__)
if tag: tag = "-"+tag
targetname = "zoneinfo%s.tar.%s" % (tag, format)
try:
tf = TarFile.open(filename)
for name in tf.getnames():
if not (name.endswith(".sh") or
name.endswith(".tab") or
name == "leapseconds"):
tf.extract(name, tmpdir)
filepath = os.path.join(tmpdir, name)
os.system("zic -d %s %s" % (zonedir, filepath))
tf.close()
target = os.path.join(moduledir, targetname)
for entry in os.listdir(moduledir):
if entry.startswith("zoneinfo") and ".tar." in entry:
os.unlink(os.path.join(moduledir, entry))
tf = TarFile.open(target, "w:%s" % format)
for entry in os.listdir(zonedir):
entrypath = os.path.join(zonedir, entry)
tf.add(entrypath, entry)
tf.close()
finally:
shutil.rmtree(tmpdir) | PypiClean |
/AMLT-learn-0.2.9.tar.gz/AMLT-learn-0.2.9/amltlearn/preprocessing/Discretizer.py
__author__ = 'bejar'
import numpy as np
from sklearn.base import TransformerMixin
#Todo: Add the possibility of using the (weighted) mean value of the interval
class Discretizer(TransformerMixin):
"""
Discretization of the attributes of a dataset (unsupervised)
Parameters:
method: str
* 'equal' equal sized bins
* 'frequency' bins with the same number of examples
bins: int
number of bins
"""
intervals = None
def __init__(self, method='equal', bins=2):
self.method = method
self.bins = bins
def _fit(self, X):
"""
Computes the discretization intervals
:param matrix X:
:return:
"""
if self.method == 'equal':
self._fit_equal(X)
elif self.method == 'frequency':
self._fit_frequency(X)
def _fit_equal(self, X):
"""
Computes the discretization intervals for equal sized discretization
:param X:
:return:
"""
self.intervals = np.zeros((self.bins, X.shape[1]))
for i in range(X.shape[1]):
vmin = np.min(X[:, i])
vmax = np.max(X[:, i])
step = np.abs(vmax - vmin) / float(self.bins)
for j in range(self.bins):
vmin += step
self.intervals[j, i] = vmin
self.intervals[self.bins-1, i] += 0.00000000001
def _fit_frequency(self, X):
"""
Computes the discretization intervals for equal frequency
:param X:
:return:
"""
self.intervals = np.zeros((self.bins, X.shape[1]))
quant = X.shape[0] / float(self.bins)
for i in range(X.shape[1]):
lvals = sorted(X[:, i])
nb = 0
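            # Equal-frequency cut points: each boundary is the value of the
            # last example that falls inside the bin.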
while nb < self.bins:
self.intervals[nb, i] = lvals[int((quant*nb) + quant)-1]
nb += 1
self.intervals[self.bins-1, i] += 0.00000000001
def _transform(self, X, copy=False):
"""
Discretizes the attributes of a dataset
:param matrix X: Data matrix
:return:
"""
if self.intervals is None:
raise Exception('Discretizer: Not fitted')
if copy:
y = X.copy()
else:
y = X
self.__transform(y)
return y
def __discretizer(self, v, at):
"""
        Determines the discretized value for an attribute
:param v:
:return:
"""
        i = 0
        while i < self.intervals.shape[0] and v > self.intervals[i, at]:
i += 1
return i
def __transform(self, X):
"""
Applies the discretization to all the attributes of the data matrix
:param X:
:return:
"""
for i in range(X.shape[1]):
for j in range(X.shape[0]):
X[j, i] = self.__discretizer(X[j, i], i)
def fit(self, X):
"""
Fits a set of discretization intervals using the data in X
:param matrix X: The data matrix
"""
self._fit(X)
def transform(self, X, copy=False):
"""
Applies previously fitted discretization intervals to X
:param matrix X: The data matrix
:param bool copy: Returns a copy of the transformed datamatrix
:return: The transformed datamatrix
"""
return self._transform(X, copy=copy)
def fit_transform(self, X, copy=False):
"""
Fits and transforms the data
:param matrix X: The data matrix
:param bool copy: Returns a copy of the transformed datamatrix
:return:The transformed datamatrix
"""
self._fit(X)
return self._transform(X, copy=copy) | PypiClean |
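# Illustrative usage sketch (names and data below are placeholders, assuming
# the class is imported from this module):
#   import numpy as np
#   from amltlearn.preprocessing.Discretizer import Discretizer
#   X = np.random.rand(100, 3)
#   disc = Discretizer(method='frequency', bins=4)
#   Xd = disc.fit_transform(X, copy=True)  # entries replaced by bin indices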
/LEPL-5.1.3.zip/LEPL-5.1.3/src/lepl/support/_test/graph.py
from unittest import TestCase
from lepl.support.graph import ArgAsAttributeMixin, preorder, postorder, reset, \
ConstructorWalker, Clone, make_proxy, LEAF, leaves
from lepl.support.node import Node
# pylint: disable-msg=C0103, C0111, C0301, W0702, C0324, C0102, C0321, W0141
# (dude this is just a test)
class SimpleNode(ArgAsAttributeMixin):
# pylint: disable-msg=E1101
def __init__(self, label, *nodes):
super(SimpleNode, self).__init__()
self._arg(label=label)
self._args(nodes=nodes)
def __str__(self):
return str(self.label)
def __repr__(self):
args = [str(self.label)]
args.extend(map(repr, self.nodes))
return 'SimpleNode(%s)' % ','.join(args)
def __getitem__(self, index):
return self.nodes[index]
def __len__(self):
return len(self.nodes)
def graph():
return SimpleNode(1,
SimpleNode(11,
SimpleNode(111),
SimpleNode(112)),
SimpleNode(12))
class OrderTest(TestCase):
def test_preorder(self):
result = [node.label for node in preorder(graph(), SimpleNode, exclude=LEAF)]
assert result == [1, 11, 111, 112, 12], result
def test_postorder(self):
result = [node.label for node in postorder(graph(), SimpleNode, exclude=LEAF)]
assert result == [111, 112, 11, 12, 1], result
class ResetTest(TestCase):
def test_reset(self):
nodes = preorder(graph(), SimpleNode, exclude=LEAF)
assert next(nodes).label == 1
assert next(nodes).label == 11
reset(nodes)
assert next(nodes).label == 1
assert next(nodes).label == 11
class CloneTest(TestCase):
def test_simple(self):
g1 = graph()
g2 = ConstructorWalker(g1, SimpleNode)(Clone())
assert repr(g1) == repr(g2)
assert g1 is not g2
def assert_same(self, text1, text2):
assert self.__clean(text1) == self.__clean(text2), self.__clean(text1)
def __clean(self, text):
depth = 0
result = ''
for c in text:
if c == '<':
depth += 1
elif c == '>':
depth -= 1
elif depth == 0:
result += c
return result
def test_loop(self):
(s, n) = make_proxy()
g1 = SimpleNode(1,
SimpleNode(11,
SimpleNode(111),
SimpleNode(112),
n),
SimpleNode(12))
s(g1)
g2 = ConstructorWalker(g1, SimpleNode)(Clone())
self.assert_same(repr(g1), repr(g2))
def test_loops(self):
(s1, n1) = make_proxy()
(s2, n2) = make_proxy()
g1 = SimpleNode(1,
SimpleNode(11,
SimpleNode(111, n2),
SimpleNode(112),
n1),
SimpleNode(12, n1))
s1(g1)
s2(next(iter(g1)))
g2 = ConstructorWalker(g1, SimpleNode)(Clone())
self.assert_same(repr(g1), repr(g2))
def test_loops_with_proxy(self):
(s1, n1) = make_proxy()
(s2, n2) = make_proxy()
g1 = SimpleNode(1,
SimpleNode(11,
SimpleNode(111, n2),
SimpleNode(112),
n1),
SimpleNode(12, n1))
s1(g1)
s2(next(iter(g1)))
g2 = ConstructorWalker(g1, SimpleNode)(Clone())
g3 = ConstructorWalker(g2, SimpleNode)(Clone())
self.assert_same(repr(g1), repr(g3))
# print(repr(g3))
class GenericOrderTest(TestCase):
def test_preorder(self):
g = [1, [11, [111, 112], 12]]
result = [node for node in preorder(g, list) if isinstance(node, int)]
assert result == [1, 11, 111, 112, 12], result
def test_postorder(self):
'''
At first I was surprised about this (compare with SimpleNode results above),
but these are leaf nodes, so postorder doesn't change anything (there's
no difference between "before visiting" and "after visiting" a leaf).
'''
g = [1, [11, [111, 112], 12]]
result = [node for node in postorder(g, list) if isinstance(node, int)]
assert result == [1, 11, 111, 112, 12], result
class LeafTest(TestCase):
def test_order(self):
tree = Node(1, 2, Node(3, Node(4), Node(), 5))
result = list(leaves(tree, Node))
assert result == [1,2,3,4,5], result | PypiClean |
/ACSNI-1.0.6.tar.gz/ACSNI-1.0.6/README.md
# ACSNI
Automatic context-specific network inference
Determining the tissue- and disease-specific circuits of biological pathways remains a fundamental goal of molecular biology.
Many components of these biological pathways remain unknown, hindering the full and accurate characterisation of
biological processes of interest. ACSNI leverages artificial intelligence to reconstruct a biological pathway,
aid the discovery of pathway components and classify the crosstalk between pathways in specific tissues.

This tool is built in Python 3.8 with a TensorFlow backend and the Keras functional API.
# Installation and running the tool
The best way to get ACSNI along with all the dependencies is to install the release from the Python package installer (pip):
```pip install ACSNI```
This will add four command line scripts:
| Script | Context | Usage |
| --- | --- | --- |
| ACSNI-run | Gene set analysis | ```ACSNI-run -h``` |
| ACSNI-derive | Single gene analysis | ```ACSNI-derive -h``` |
| ACSNI-get | Link pathway trait | ```ACSNI-get -h``` |
| ACSNI-split | Split expression data | ```ACSNI-split -h``` |
Utility functions can be imported with the usual Python import system, e.g. ```from ACSNI.dbs import ACSNIResults```
# Input ACSNI-run
Expression Matrix - The expression file (.csv), specified by ```-i```, where columns are samples and rows are genes.
The expression values should be normalised (e.g. TPM, CPM, RSEM). Make sure the column name of the 1st column is "gene".
| gene | Sample1 | Sample2 | Sample3 |
| --- | --- | --- | --- |
| Foxp1 | 123.2 | 274.1 | 852.6 |
| PD1 | 324.2 | 494.1 | 452.6 |
| CD8 | 523.6 | 624.1 | 252.6 |
This input should not be transformed in any way (e.g. log, z-scale)
Gene set matrix - The prior matrix (.csv) file, specified by ```-t```, where rows are genes and the column is a binary
pathway membership indicator, where "1" means that a gene is in the pathway and "0" means that the gene is not known a priori.
The standard prior looks like the example below. Make sure the column name of the 1st column is "gene".
| gene | Pathway |
| --- | --- |
| Foxp1 | 0 |
| PD1 | 0 |
| CD8 | 1 |
You can also supply gene IDs instead of gene symbols.
The tool can handle multiple pathway columns in the ```-t``` file as below.
| gene | Pathway1 | Pathway2 | Pathway3 |
| --- | --- | --- | --- |
| Foxp1 | 0 | 0 | 0 |
| PD1 | 0 | 1 | 0 |
| CD8 | 1 | 0 | 1 |
Note: Each pathway above is analysed independently, and the outputs have no in-built relationship.
The tool is designed to get a granular view of a single pathway at a time.
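As an illustration (file names here are placeholders), a run combining the two inputs above could look like ```ACSNI-run -i expression.csv -t pathway_prior.csv```; the remaining optional parameters are listed by ```ACSNI-run -h```.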
# Output ACSNI-run
Database (.ptl)
| Content | Information |
| --- | --- |
| co | Pathway Code|
| w | Subprocess space |
| n | Interaction scores |
| p | Score classification |
| d | Interaction direction |
| run_info | Run parameters |
| methods | Extractor functions |
Predicted Network (.csv)
| Content | Meaning |
| --- | --- |
| name | Gene |
| sub | Subprocess |
| direction | Direction of interactions with subprocess |
Null (.csv) {Shuffled expression matrix}
# Input ACSNI-derive
Expression Matrix - See the ```-i``` description above.
Note - We recommend removing any undesirable genes (e.g. MT, RPL) from the expression
matrix prior to running ACSNI-derive, as they usually interfere with the initial prior matrix generation steps.
For TCR/BCR genes, counts of alpha, beta and gamma chains can be combined into a single count.
Biotype file (Optional) - The biotype file (.csv) specified by ```-f```, given if the generation of the gene set should be
based on a particular biotype, specified by ```-b```.
| gene | biotype |
| --- | --- |
| Foxp1 | protein_coding |
| PD1 | protein_coding |
| MALAT1 | lncRNA |
| SNHG12 | lncRNA |
| RNU1-114P | snRNA |
Correlation file (Optional) - The correlation file (.csv) specified by ```-u```, given if the user wishes to replace
"some" specific genes with other genes to be used as a prior for the first iteration of ACSNI-run (internally).
| gene | cor |
| --- | --- |
| Foxp1 | 0.9 |
| PD1 | 0.89 |
| MALAT1 | 0.85 |
| SNHG12 | 0.80 |
| RNU1-114P | 0.72 |
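As an illustration (file names and the biotype value are placeholders), a derive run restricted to lncRNAs could look like ```ACSNI-derive -i expression.csv -f biotype.csv -b lncRNA```; the gene of interest and the remaining options are listed by ```ACSNI-derive -h```.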
# Output ACSNI-derive
Database (.ptl)
| Content | Information |
| --- | --- |
| co | Pathway Code|
| n | Interaction scores |
| d | Interaction direction |
| ac | Correlation and T test results |
| fd | Unfiltered prediction data |
| run_info | Run parameters |
| methods | Extractor functions |
Predicted (.csv)
| Content | Meaning |
| --- | --- |
| name | Gene |
| predict | Classification of genes|
Null (.csv) {Shuffled expression matrix}
# Input ACSNI-get
ACSNI database - Output of ACSNI-run (.ptl) specified by ```-r```.
Target phenotype - Biological phenotype file (.csv) to link ACSNI subprocesses, specified by ```-v```.
The sample IDs should match the IDs in the ```-i``` file analysed by ACSNI-run.
Variable type - The type of phenotype, i.e. "numeric" or "character", specified by ```-c```.
Outputs the strength of the associations across the subprocesses (.csv).
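For example (placeholder file names), ```ACSNI-get -r ACSNI_run_output.ptl -v phenotype.csv -c character``` links the saved subprocesses to a categorical phenotype.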
# Input ACSNI-split
Expression Matrix - See the ```-i``` description above.
Number of splits - The number of independent cohorts to generate from ```-i```.
Outputs the data splits in the current working directory.
# Extras
R functions to reproduce the downstream analyses reported in the paper are inside the folder "R".
Example runs are inside the folder "sh".
# Tutorial
An extensive tutorial on how to use ACSNI commands can be found inside the Tutorial folder.
# To clone the source repository
```git clone https://github.com/caanene1/ACSNI```
# Citation
ACSNI: An unsupervised machine-learning tool for prediction of tissue-specific pathway components using gene expression profiles
Chinedu Anthony Anene, Faraz Khan, Findlay Bewicke-Copley, Eleni Maniati and Jun Wang
| PypiClean |
/Firefly%20III%20API%20Python%20Client-1.5.6.post2.tar.gz/Firefly III API Python Client-1.5.6.post2/firefly_iii_client/model/account_type_filter.py
import re  # noqa: F401
import sys # noqa: F401
from firefly_iii_client.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from ..model_utils import OpenApiModel
from firefly_iii_client.exceptions import ApiAttributeError
class AccountTypeFilter(ModelSimple):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
('value',): {
'ALL': "all",
'ASSET': "asset",
'CASH': "cash",
'EXPENSE': "expense",
'REVENUE': "revenue",
'SPECIAL': "special",
'HIDDEN': "hidden",
'LIABILITY': "liability",
'LIABILITIES': "liabilities",
'DEFAULT_ACCOUNT': "Default account",
'CASH_ACCOUNT': "Cash account",
'ASSET_ACCOUNT': "Asset account",
'EXPENSE_ACCOUNT': "Expense account",
'REVENUE_ACCOUNT': "Revenue account",
'INITIAL_BALANCE_ACCOUNT': "Initial balance account",
'BENEFICIARY_ACCOUNT': "Beneficiary account",
'IMPORT_ACCOUNT': "Import account",
'RECONCILIATION_ACCOUNT': "Reconciliation account",
'LOAN': "Loan",
'DEBT': "Debt",
'MORTGAGE': "Mortgage",
},
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'value': (str,),
}
@cached_property
def discriminator():
return None
attribute_map = {}
read_only_vars = set()
_composed_schemas = None
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs):
"""AccountTypeFilter - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["all", "asset", "cash", "expense", "revenue", "special", "hidden", "liability", "liabilities", "Default account", "Cash account", "Asset account", "Expense account", "Revenue account", "Initial balance account", "Beneficiary account", "Import account", "Reconciliation account", "Loan", "Debt", "Mortgage", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["all", "asset", "cash", "expense", "revenue", "special", "hidden", "liability", "liabilities", "Default account", "Cash account", "Asset account", "Expense account", "Revenue account", "Initial balance account", "Beneficiary account", "Import account", "Reconciliation account", "Loan", "Debt", "Mortgage", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs):
"""AccountTypeFilter - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str):, must be one of ["all", "asset", "cash", "expense", "revenue", "special", "hidden", "liability", "liabilities", "Default account", "Cash account", "Asset account", "Expense account", "Revenue account", "Initial balance account", "Beneficiary account", "Import account", "Reconciliation account", "Loan", "Debt", "Mortgage", ] # noqa: E501
Keyword Args:
value (str):, must be one of ["all", "asset", "cash", "expense", "revenue", "special", "hidden", "liability", "liabilities", "Default account", "Cash account", "Asset account", "Expense account", "Revenue account", "Initial balance account", "Beneficiary account", "Import account", "Reconciliation account", "Loan", "Debt", "Mortgage", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
self = super(OpenApiModel, cls).__new__(cls)
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
        return self
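# Usage sketch (editor's addition): the single enum value can be passed either
# positionally or by keyword and must be one of the strings listed in
# `allowed_values` above, for example:
#
#     AccountTypeFilter("asset").value          # -> "asset"
#     AccountTypeFilter(value="expense").value  # -> "expense"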
/mynewspaper-4.0.tar.gz/mynewspaper-4.0/misc/dateutil/easter.py

__author__ = "Gustavo Niemeyer <[email protected]>"
__license__ = "Simplified BSD"
import datetime
__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"]
EASTER_JULIAN = 1
EASTER_ORTHODOX = 2
EASTER_WESTERN = 3
def easter(year, method=EASTER_WESTERN):
"""
This method was ported from the work done by GM Arts,
on top of the algorithm by Claus Tondering, which was
    based in part on the algorithm of Oudin (1940), as
quoted in "Explanatory Supplement to the Astronomical
Almanac", P. Kenneth Seidelmann, editor.
This algorithm implements three different easter
calculation methods:
1 - Original calculation in Julian calendar, valid in
dates after 326 AD
2 - Original method, with date converted to Gregorian
calendar, valid in years 1583 to 4099
3 - Revised method, in Gregorian calendar, valid in
years 1583 to 4099 as well
These methods are represented by the constants:
EASTER_JULIAN = 1
EASTER_ORTHODOX = 2
EASTER_WESTERN = 3
The default method is method 3.
More about the algorithm may be found at:
http://users.chariot.net.au/~gmarts/eastalg.htm
and
http://www.tondering.dk/claus/calendar.html
"""
if not (1 <= method <= 3):
raise ValueError("invalid method")
# g - Golden year - 1
# c - Century
# h - (23 - Epact) mod 30
# i - Number of days from March 21 to Paschal Full Moon
# j - Weekday for PFM (0=Sunday, etc)
# p - Number of days from March 21 to Sunday on or before PFM
# (-6 to 28 methods 1 & 3, to 56 for method 2)
# e - Extra days to add for method 2 (converting Julian
# date to Gregorian date)
y = year
g = y % 19
e = 0
if method < 3:
# Old method
i = (19*g+15)%30
j = (y+y//4+i)%7
if method == 2:
# Extra dates to convert Julian to Gregorian date
e = 10
if y > 1600:
e = e+y//100-16-(y//100-16)//4
else:
# New method
c = y//100
h = (c-c//4-(8*c+13)//25+19*g+15)%30
i = h-(h//28)*(1-(h//28)*(29//(h+1))*((21-g)//11))
j = (y+y//4+i+2-c+c//4)%7
# p can be from -6 to 56 corresponding to dates 22 March to 23 May
# (later dates apply to method 2, although 23 May never actually occurs)
p = i-j+e
d = 1+(p+27+(p+6)//40)%31
m = 3+(p+26)//30
    return datetime.date(int(y), int(m), int(d))
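# Usage sketch (editor's addition, not part of the original module).
if __name__ == "__main__":
    print(easter(2024))                   # 2024-03-31 (Western, the default method)
    print(easter(2024, EASTER_ORTHODOX))  # 2024-05-05
    print(easter(2024, EASTER_JULIAN))    # the same feast reckoned in the Julian calendar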
/EasyFileWatcher-0.0.5.tar.gz/EasyFileWatcher-0.0.5/README.md

<a name="readme-top"></a>
<!-- [![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url] -->
[![MIT License][license-shield]][license-url]
<!-- [![LinkedIn][linkedin-shield]][linkedin-url] -->
<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://github.com/efstratios97/ltep_athena_api">
<img src="https://www.ltep-technologies.com/wp-content/uploads/2022/06/LTEP_LOGO_21-3.png" alt="Logo" width="80" height="80">
</a>
<h3 align="center">EasyFileWatcher</h3>
<p align="center">
The official API
<br />
<a href="https://github.com/efstratios97/EasyFileWatcher/tree/main/docs"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://github.com/efstratios97/EasyFileWatcher">View Demo</a>
-
<a href="https://github.com/efstratios97/EasyFileWatcher/issues">Report Bug</a>
-
<a href="https://github.com/efstratios97/EasyFileWatcher/issues">Request Feature</a>
</p>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgments">Acknowledgments</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
<!-- [![Product Name Screen Shot][product-screenshot]](https://www.ltep-technologies.com/wp-content/uploads/2022/06/ATHINA_LOGO-3.png) -->
This is yet another FileWatcher, developed to run more smoothly, without side effects, and to give the developer more control than common packages for this purpose.
<table>
<tr>
<th>Features</th>
<th>EasyFileWatcher</th>
<th>Others (Watchdog etc.)</th>
</tr>
<tr>
<td><strong>Schedule Start & End Time</strong></td>
<td><img src="https://img.icons8.com/emoji/48/000000/check-mark-emoji.png"/></td>
<td><img src="https://img.icons8.com/external-bearicons-flat-bearicons/46/000000/external-block-essential-collection-bearicons-flat-bearicons.png"/></td>
</tr>
<tr>
<td><strong>Pause & Resume</strong></td>
<td><img src="https://img.icons8.com/emoji/48/000000/check-mark-emoji.png"/></td>
<td><img src="https://img.icons8.com/external-bearicons-flat-bearicons/46/000000/external-block-essential-collection-bearicons-flat-bearicons.png"/></td>
</tr>
<tr>
<td><strong>By default runs in Background</strong></td>
<td><img src="https://img.icons8.com/emoji/48/000000/check-mark-emoji.png"/></td>
<td><img src="https://img.icons8.com/external-bearicons-flat-bearicons/46/000000/external-block-essential-collection-bearicons-flat-bearicons.png"/></td>
</tr>
<tr>
<td><strong>Configurable Polling Time</strong></td>
<td><img src="https://img.icons8.com/emoji/48/000000/check-mark-emoji.png"/></td>
<td><img src="https://img.icons8.com/external-bearicons-flat-bearicons/46/000000/external-block-essential-collection-bearicons-flat-bearicons.png"/></td>
</tr>
<tr>
<td><strong>Persist FileWatcher Tasks</strong></td>
<td><img src="https://img.icons8.com/emoji/48/000000/check-mark-emoji.png"/></td>
<td><img src="https://img.icons8.com/external-bearicons-flat-bearicons/46/000000/external-block-essential-collection-bearicons-flat-bearicons.png"/></td>
</tr>
</table>
<p align="right">(<a href="#readme-top">back to top</a>)</p>
### Built With
[![Python][Python]][Python-url]
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- GETTING STARTED -->
## Getting Started
```python
from easyfilewatcher.EasyFileWatcher import EasyFileWatcher
def print_msg(msg: str):
print(msg)
if __name__ == "__main__":
filewatcher = EasyFileWatcher()
filewatcher.add_directory_to_watch(directory_path="your\\directory",
directory_watcher_id="my_id", callback=print_msg,
callback_param={'msg': 'hi'}, event_on_deletion=False)
while(True):
pass
```
### Prerequisites
Python >=3.8
### Installation
pip install EasyFileWatcher
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- USAGE EXAMPLES -->
## Usage
The Documentation you can find here [docs](https://easyfilewatcher.readthedocs.io/en/latest/index.html)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ROADMAP -->
## Roadmap
- [x] Add Changelog
See the [open issues](https://github.com/efstratios97/EasyFileWatcher/issues) for a full list of proposed features (and known issues).
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
Efstratios Pahis - [@ltepTechnologies](https://ltep-technologies.com) -
Project Link: [https://github.com/efstratios97/EasyFileWatcher](https://github.com/efstratios97/EasyFileWatcher)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
LTEP Technologies UG (haftungsbeschränkt)
www.ltep-technologies.com
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/othneildrew/Best-README-Template.svg?style=for-the-badge
[contributors-url]: https://github.com/othneildrew/Best-README-Template/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/othneildrew/Best-README-Template.svg?style=for-the-badge
[forks-url]: https://github.com/othneildrew/Best-README-Template/network/members
[stars-shield]: https://img.shields.io/github/stars/othneildrew/Best-README-Template.svg?style=for-the-badge
[stars-url]: https://github.com/othneildrew/Best-README-Template/stargazers
[issues-shield]: https://img.shields.io/github/issues/othneildrew/Best-README-Template.svg?style=for-the-badge
[issues-url]: https://github.com/othneildrew/Best-README-Template/issues
[license-shield]: https://img.shields.io/github/license/othneildrew/Best-README-Template.svg?style=for-the-badge
[license-url]: https://github.com/othneildrew/Best-README-Template/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/othneildrew
[product-screenshot]: https://www.ltep-technologies.com/wp-content/uploads/2022/06/ATHINA_LOGO-3.png
[Python]: https://www.python.org/static/community_logos/python-powered-w-100x40.png
[Python-url]: https://www.python.org/
[React.js]: https://img.shields.io/badge/React-20232A?style=for-the-badge&logo=react&logoColor=61DAFB
[React-url]: https://reactjs.org/
[Vue.js]: https://img.shields.io/badge/Vue.js-35495E?style=for-the-badge&logo=vuedotjs&logoColor=4FC08D
[Vue-url]: https://vuejs.org/
[Angular.io]: https://img.shields.io/badge/Angular-DD0031?style=for-the-badge&logo=angular&logoColor=white
[Angular-url]: https://angular.io/
[Svelte.dev]: https://img.shields.io/badge/Svelte-4A4A55?style=for-the-badge&logo=svelte&logoColor=FF3E00
[Svelte-url]: https://svelte.dev/
[Laravel.com]: https://img.shields.io/badge/Laravel-FF2D20?style=for-the-badge&logo=laravel&logoColor=white
[Laravel-url]: https://laravel.com
[Bootstrap.com]: https://img.shields.io/badge/Bootstrap-563D7C?style=for-the-badge&logo=bootstrap&logoColor=white
[Bootstrap-url]: https://getbootstrap.com
[JQuery.com]: https://img.shields.io/badge/jQuery-0769AD?style=for-the-badge&logo=jquery&logoColor=white
[JQuery-url]: https://jquery.com
/Indomielibs-2.0.106.tar.gz/Indomielibs-2.0.106/pyrogram/types/user_and_chats/chat_privileges.py
from pyrogram import raw
from ..object import Object
class ChatPrivileges(Object):
"""Describes privileged actions an administrator is able to take in a chat.
Parameters:
can_manage_chat (``bool``, *optional*):
True, if the administrator can access the chat event log, chat statistics, message statistics in channels,
see channel members, see anonymous administrators in supergroups and ignore slow mode.
Implied by any other administrator privilege.
can_delete_messages (``bool``, *optional*):
True, if the administrator can delete messages of other users.
can_manage_video_chats (``bool``, *optional*):
Groups and supergroups only.
True, if the administrator can manage video chats (also called group calls).
can_restrict_members (``bool``, *optional*):
True, if the administrator can restrict, ban or unban chat members.
can_promote_members (``bool``, *optional*):
True, if the administrator can add new administrators with a subset of his own privileges or demote
administrators that he has promoted, directly or indirectly (promoted by administrators that were appointed
by the user).
can_change_info (``bool``, *optional*):
True, if the user is allowed to change the chat title, photo and other settings.
can_post_messages (``bool``, *optional*):
Channels only.
True, if the administrator can post messages in the channel.
can_edit_messages (``bool``, *optional*):
Channels only.
True, if the administrator can edit messages of other users and can pin messages.
can_invite_users (``bool``, *optional*):
True, if the user is allowed to invite new users to the chat.
can_pin_messages (``bool``, *optional*):
Groups and supergroups only.
True, if the user is allowed to pin messages.
is_anonymous (``bool``, *optional*):
True, if the user's presence in the chat is hidden.
"""
def __init__(
self,
*,
can_manage_chat: bool = True,
can_delete_messages: bool = False,
can_manage_video_chats: bool = False, # Groups and supergroups only
can_restrict_members: bool = False,
can_promote_members: bool = False,
can_change_info: bool = False,
can_post_messages: bool = False, # Channels only
can_edit_messages: bool = False, # Channels only
can_invite_users: bool = False,
can_pin_messages: bool = False, # Groups and supergroups only
is_anonymous: bool = False
):
super().__init__(None)
self.can_manage_chat: bool = can_manage_chat
self.can_delete_messages: bool = can_delete_messages
self.can_manage_video_chats: bool = can_manage_video_chats
self.can_restrict_members: bool = can_restrict_members
self.can_promote_members: bool = can_promote_members
self.can_change_info: bool = can_change_info
self.can_post_messages: bool = can_post_messages
self.can_edit_messages: bool = can_edit_messages
self.can_invite_users: bool = can_invite_users
self.can_pin_messages: bool = can_pin_messages
self.is_anonymous: bool = is_anonymous
@staticmethod
def _parse(admin_rights: "raw.base.ChatAdminRights") -> "ChatPrivileges":
return ChatPrivileges(
can_manage_chat=admin_rights.other,
can_delete_messages=admin_rights.delete_messages,
can_manage_video_chats=admin_rights.manage_call,
can_restrict_members=admin_rights.ban_users,
can_promote_members=admin_rights.add_admins,
can_change_info=admin_rights.change_info,
can_post_messages=admin_rights.post_messages,
can_edit_messages=admin_rights.edit_messages,
can_invite_users=admin_rights.invite_users,
can_pin_messages=admin_rights.pin_messages,
is_anonymous=admin_rights.anonymous
        )
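# Usage sketch (editor's addition): assuming the standard Pyrogram ``Client`` API,
# a ChatPrivileges object is what gets passed to ``promote_chat_member``. The
# helper below is illustrative only and is never called by this module.
async def _example_make_moderator(app, chat_id: int, user_id: int):
    await app.promote_chat_member(
        chat_id,
        user_id,
        privileges=ChatPrivileges(can_delete_messages=True, can_pin_messages=True),
    )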
/Flask-Material-Lite-0.0.1.tar.gz/Flask-Material-Lite-0.0.1/flask_material_lite/__init__.py
__app_version__ = '0.0.1'
__material_version__ = '1.0'
import re
from flask import Blueprint, current_app, url_for
try:
from wtforms.fields import HiddenField
except ImportError:
def is_hidden_field_filter(field):
raise RuntimeError('WTForms is not installed.')
else:
def is_hidden_field_filter(field):
return isinstance(field, HiddenField)
class CDN(object):
"""Base class for CDN objects."""
def get_resource_url(self, filename):
"""Return resource url for filename."""
raise NotImplementedError
class StaticCDN(object):
"""A CDN that serves content from the local application.
:param static_endpoint: Endpoint to use.
:param rev: If ``True``, honor ``MATERIAL_QUERYSTRING_REVVING``.
"""
def __init__(self, static_endpoint='static', rev=False):
self.static_endpoint = static_endpoint
self.rev = rev
def get_resource_url(self, filename):
extra_args = {}
if self.rev and current_app.config['MATERIAL_QUERYSTRING_REVVING']:
            extra_args['material'] = __material_version__  # assumption: rev the query string with the Material asset version
return url_for(self.static_endpoint, filename=filename, **extra_args)
class WebCDN(object):
"""Serves files from the Web.
:param baseurl: The baseurl. Filenames are simply appended to this URL.
"""
def __init__(self, baseurl):
self.baseurl = baseurl
def get_resource_url(self, filename):
return self.baseurl + filename
class ConditionalCDN(object):
"""Serves files from one CDN or another, depending on whether a
configuration value is set.
:param confvar: Configuration variable to use.
:param primary: CDN to use if the configuration variable is ``True``.
:param fallback: CDN to use otherwise.
"""
def __init__(self, confvar, primary, fallback):
self.confvar = confvar
self.primary = primary
self.fallback = fallback
def get_resource_url(self, filename):
if current_app.config[self.confvar]:
return self.primary.get_resource_url(filename)
return self.fallback.get_resource_url(filename)
def material_find_resource(filename, cdn, use_minified=None, local=True):
"""Resource finding function, also available in templates.
Tries to find a resource, will force SSL depending on
``MATERIAL_CDN_FORCE_SSL`` settings.
:param filename: File to find a URL for.
:param cdn: Name of the CDN to use.
    :param use_minified: If set to ``True``/``False``, use/don't use
minified. If ``None``, honors
``MATERIAL_USE_MINIFIED``.
:param local: If ``True``, uses the ``local``-CDN when
``MATERIAL_SERVE_LOCAL`` is enabled. If ``False``, uses
the ``static``-CDN instead.
:return: A URL.
"""
config = current_app.config
if config['MATERIAL_SERVE_LOCAL']:
if 'css/' not in filename and 'js/' not in filename:
filename = 'js/' + filename
    if use_minified is None:
use_minified = config['MATERIAL_USE_MINIFIED']
if use_minified:
filename = '%s.min.%s' % tuple(filename.rsplit('.', 1))
cdns = current_app.extensions['material_lite']['cdns']
resource_url = cdns[cdn].get_resource_url(filename)
if resource_url.startswith('//') and config['MATERIAL_CDN_FORCE_SSL']:
resource_url = 'https:%s' % resource_url
return resource_url
class Material_Lite(object):
def __init__(self, app=None):
if app is not None:
self.init_app(app)
def init_app(self, app):
MATERIAL_VERSION = '1.0'
# JQUERY_VERSION = '1.11.3'
# HTML5SHIV_VERSION = '3.7.2'
# RESPONDJS_VERSION = '1.4.2'
app.config.setdefault('MATERIAL_USE_MINIFIED', True)
app.config.setdefault('MATERIAL_CDN_FORCE_SSL', False)
app.config.setdefault('MATERIAL_QUERYSTRING_REVVING', True)
app.config.setdefault('MATERIAL_SERVE_LOCAL', False)
app.config.setdefault('MATERIAL_LOCAL_SUBDOMAIN', None)
blueprint = Blueprint(
'material_lite',
__name__,
template_folder='templates',
static_folder='static',
static_url_path=app.static_url_path + '/material_lite',
subdomain=app.config['MATERIAL_LOCAL_SUBDOMAIN'])
app.register_blueprint(blueprint)
app.jinja_env.globals['material_is_hidden_field'] =\
is_hidden_field_filter
app.jinja_env.globals['material_find_resource'] =\
material_find_resource
if not hasattr(app, 'extensions'):
app.extensions = {}
local = StaticCDN('material_lite.static', rev=True)
static = StaticCDN()
def lwrap(cdn, primary=static):
return ConditionalCDN('MATERIAL_SERVE_LOCAL', primary, cdn)
material = lwrap(
WebCDN('//storage.googleapis.com/code.getmdl.io/1.0.0/material.min.js'),
local)
icons = lwrap(
WebCDN('//fonts.googleapis.com/icon?family=Material+Icons'),
local)
theme = lwrap(
WebCDN('//storage.googleapis.com/code.getmdl.io/1.0.0/material.indigo-pink.min.css'),
local)
# html5shiv = lwrap(
# WebCDN('//cdnjs.cloudflare.com/ajax/libs/html5shiv/%s/'
# % HTML5SHIV_VERSION))
# respondjs = lwrap(
# WebCDN('//cdnjs.cloudflare.com/ajax/libs/respond.js/%s/'
# % RESPONDJS_VERSION))
app.extensions['material_lite'] = {
'cdns': {
'local': local,
'static': static,
'material': material,
'icons': icons,
'theme': theme
# 'jquery': jquery,
# 'html5shiv': html5shiv,
# 'respond.js': respondjs,
},
        }
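# Usage sketch (editor's addition): wiring the extension into a minimal app.
if __name__ == "__main__":
    from flask import Flask

    app = Flask(__name__)
    Material_Lite(app)  # registers the 'material_lite' blueprint and Jinja globals
    app.run(debug=True)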
/Electrum-CHI-3.3.8.tar.gz/Electrum-CHI-3.3.8/electrum_chi/electrum/gui/qt/console.py
import sys
import os
import re
import traceback
from PyQt5 import QtCore
from PyQt5 import QtGui
from PyQt5 import QtWidgets
from electrum import util
from electrum.i18n import _
from .util import MONOSPACE_FONT
class OverlayLabel(QtWidgets.QLabel):
STYLESHEET = '''
QLabel, QLabel link {
color: rgb(0, 0, 0);
background-color: rgb(248, 240, 200);
border: 1px solid;
border-color: rgb(255, 114, 47);
padding: 2px;
}
'''
def __init__(self, text, parent):
super().__init__(text, parent)
self.setMinimumHeight(150)
self.setGeometry(0, 0, self.width(), self.height())
self.setStyleSheet(self.STYLESHEET)
self.setMargin(0)
parent.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.setWordWrap(True)
def mousePressEvent(self, e):
self.hide()
def on_resize(self, w):
padding = 2 # px, from the stylesheet above
self.setFixedWidth(w - padding)
class Console(QtWidgets.QPlainTextEdit):
def __init__(self, prompt='>> ', startup_message='', parent=None):
QtWidgets.QPlainTextEdit.__init__(self, parent)
self.prompt = prompt
self.history = []
self.namespace = {}
self.construct = []
self.setGeometry(50, 75, 600, 400)
self.setWordWrapMode(QtGui.QTextOption.WrapAnywhere)
self.setUndoRedoEnabled(False)
self.document().setDefaultFont(QtGui.QFont(MONOSPACE_FONT, 10, QtGui.QFont.Normal))
self.showMessage(startup_message)
self.updateNamespace({'run':self.run_script})
self.set_json(False)
warning_text = "<h1>{}</h1><br>{}<br><br>{}".format(
_("Warning!"),
_("Do not paste code here that you don't understand. Executing the wrong code could lead "
"to your coins being irreversibly lost."),
_("Click here to hide this message.")
)
self.messageOverlay = OverlayLabel(warning_text, self)
def resizeEvent(self, e):
super().resizeEvent(e)
vertical_scrollbar_width = self.verticalScrollBar().width() * self.verticalScrollBar().isVisible()
self.messageOverlay.on_resize(self.width() - vertical_scrollbar_width)
def set_json(self, b):
self.is_json = b
def run_script(self, filename):
with open(filename) as f:
script = f.read()
# eval is generally considered bad practice. use it wisely!
result = eval(script, self.namespace, self.namespace)
def updateNamespace(self, namespace):
self.namespace.update(namespace)
def showMessage(self, message):
self.appendPlainText(message)
self.newPrompt()
def clear(self):
self.setPlainText('')
self.newPrompt()
def newPrompt(self):
if self.construct:
prompt = '.' * len(self.prompt)
else:
prompt = self.prompt
self.completions_pos = self.textCursor().position()
self.completions_visible = False
self.appendPlainText(prompt)
self.moveCursor(QtGui.QTextCursor.End)
def getCommand(self):
doc = self.document()
curr_line = doc.findBlockByLineNumber(doc.lineCount() - 1).text()
curr_line = curr_line.rstrip()
curr_line = curr_line[len(self.prompt):]
return curr_line
def setCommand(self, command):
if self.getCommand() == command:
return
doc = self.document()
curr_line = doc.findBlockByLineNumber(doc.lineCount() - 1).text()
self.moveCursor(QtGui.QTextCursor.End)
for i in range(len(curr_line) - len(self.prompt)):
self.moveCursor(QtGui.QTextCursor.Left, QtGui.QTextCursor.KeepAnchor)
self.textCursor().removeSelectedText()
self.textCursor().insertText(command)
self.moveCursor(QtGui.QTextCursor.End)
def show_completions(self, completions):
if self.completions_visible:
self.hide_completions()
c = self.textCursor()
c.setPosition(self.completions_pos)
completions = map(lambda x: x.split('.')[-1], completions)
t = '\n' + ' '.join(completions)
if len(t) > 500:
t = t[:500] + '...'
c.insertText(t)
self.completions_end = c.position()
self.moveCursor(QtGui.QTextCursor.End)
self.completions_visible = True
def hide_completions(self):
if not self.completions_visible:
return
c = self.textCursor()
c.setPosition(self.completions_pos)
l = self.completions_end - self.completions_pos
for x in range(l): c.deleteChar()
self.moveCursor(QtGui.QTextCursor.End)
self.completions_visible = False
def getConstruct(self, command):
if self.construct:
prev_command = self.construct[-1]
self.construct.append(command)
if not prev_command and not command:
ret_val = '\n'.join(self.construct)
self.construct = []
return ret_val
else:
return ''
else:
if command and command[-1] == (':'):
self.construct.append(command)
return ''
else:
return command
def getHistory(self):
return self.history
    def setHistory(self, history):
self.history = history
def addToHistory(self, command):
if command[0:1] == ' ':
return
if command and (not self.history or self.history[-1] != command):
self.history.append(command)
self.history_index = len(self.history)
def getPrevHistoryEntry(self):
if self.history:
self.history_index = max(0, self.history_index - 1)
return self.history[self.history_index]
return ''
def getNextHistoryEntry(self):
if self.history:
hist_len = len(self.history)
self.history_index = min(hist_len, self.history_index + 1)
if self.history_index < hist_len:
return self.history[self.history_index]
return ''
def getCursorPosition(self):
c = self.textCursor()
return c.position() - c.block().position() - len(self.prompt)
def setCursorPosition(self, position):
self.moveCursor(QtGui.QTextCursor.StartOfLine)
for i in range(len(self.prompt) + position):
self.moveCursor(QtGui.QTextCursor.Right)
def register_command(self, c, func):
methods = { c: func}
self.updateNamespace(methods)
def runCommand(self):
command = self.getCommand()
self.addToHistory(command)
command = self.getConstruct(command)
if command:
tmp_stdout = sys.stdout
class stdoutProxy():
def __init__(self, write_func):
self.write_func = write_func
self.skip = False
def flush(self):
pass
def write(self, text):
if not self.skip:
stripped_text = text.rstrip('\n')
self.write_func(stripped_text)
QtCore.QCoreApplication.processEvents()
self.skip = not self.skip
if type(self.namespace.get(command)) == type(lambda:None):
self.appendPlainText("'{}' is a function. Type '{}()' to use it in the Python console."
.format(command, command))
self.newPrompt()
return
sys.stdout = stdoutProxy(self.appendPlainText)
try:
try:
# eval is generally considered bad practice. use it wisely!
result = eval(command, self.namespace, self.namespace)
if result is not None:
if self.is_json:
util.print_msg(util.json_encode(result))
else:
self.appendPlainText(repr(result))
except SyntaxError:
# exec is generally considered bad practice. use it wisely!
exec(command, self.namespace, self.namespace)
except SystemExit:
self.close()
except BaseException:
traceback_lines = traceback.format_exc().split('\n')
# Remove traceback mentioning this file, and a linebreak
for i in (3,2,1,-1):
traceback_lines.pop(i)
self.appendPlainText('\n'.join(traceback_lines))
sys.stdout = tmp_stdout
self.newPrompt()
self.set_json(False)
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Tab:
self.completions()
return
self.hide_completions()
if event.key() in (QtCore.Qt.Key_Enter, QtCore.Qt.Key_Return):
self.runCommand()
return
if event.key() == QtCore.Qt.Key_Home:
self.setCursorPosition(0)
return
if event.key() == QtCore.Qt.Key_PageUp:
return
elif event.key() in (QtCore.Qt.Key_Left, QtCore.Qt.Key_Backspace):
if self.getCursorPosition() == 0:
return
elif event.key() == QtCore.Qt.Key_Up:
self.setCommand(self.getPrevHistoryEntry())
return
elif event.key() == QtCore.Qt.Key_Down:
self.setCommand(self.getNextHistoryEntry())
return
elif event.key() == QtCore.Qt.Key_L and event.modifiers() == QtCore.Qt.ControlModifier:
self.clear()
super(Console, self).keyPressEvent(event)
def completions(self):
cmd = self.getCommand()
# note for regex: new words start after ' ' or '(' or ')'
lastword = re.split(r'[ ()]', cmd)[-1]
beginning = cmd[0:-len(lastword)]
path = lastword.split('.')
prefix = '.'.join(path[:-1])
prefix = (prefix + '.') if prefix else prefix
ns = self.namespace.keys()
if len(path) == 1:
ns = ns
else:
assert len(path) > 1
obj = self.namespace.get(path[0])
try:
for attr in path[1:-1]:
obj = getattr(obj, attr)
except AttributeError:
ns = []
else:
ns = dir(obj)
completions = []
for name in ns:
if name[0] == '_':continue
if name.startswith(path[-1]):
completions.append(prefix+name)
completions.sort()
if not completions:
self.hide_completions()
elif len(completions) == 1:
self.hide_completions()
self.setCommand(beginning + completions[0])
else:
# find common prefix
p = os.path.commonprefix(completions)
if len(p)>len(lastword):
self.hide_completions()
self.setCommand(beginning + p)
else:
self.show_completions(completions)
welcome_message = '''
---------------------------------------------------------------
Welcome to a primitive Python interpreter.
---------------------------------------------------------------
'''
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
console = Console(startup_message=welcome_message)
console.updateNamespace({'myVar1' : app, 'myVar2' : 1234})
console.show()
    sys.exit(app.exec_())
/AltAnalyze-2.1.3.15.tar.gz/AltAnalyze-2.1.3.15/altanalyze/build_scripts/SubGeneViewerExport.py
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is furnished
#to do so, subject to the following conditions:
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
#INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
#PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
#HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
#OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
#SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import sys,string,os
sys.path.insert(1, os.path.join(sys.path[0], '..')) ### import parent dir dependencies
import os.path
import unique
import export
dirfile = unique
############ File Import Functions #############
def filepath(filename):
fn = unique.filepath(filename)
return fn
def read_directory(sub_dir):
dir_list = unique.read_directory(sub_dir)
#add in code to prevent folder names from being included
dir_list2 = []
for entry in dir_list:
if entry[-4:] == ".txt" or entry[-4:] == ".all" or entry[-5:] == ".data" or entry[-3:] == ".fa":
dir_list2.append(entry)
return dir_list2
def returnDirectories(sub_dir):
dir=os.path.dirname(dirfile.__file__)
dir_list = os.listdir(dir + sub_dir)
###Below code used to prevent FILE names from being included
dir_list2 = []
for entry in dir_list:
if "." not in entry: dir_list2.append(entry)
return dir_list2
class GrabFiles:
def setdirectory(self,value): self.data = value
def display(self): print self.data
def searchdirectory(self,search_term):
#self is an instance while self.data is the value of the instance
files = getDirectoryFiles(self.data,search_term)
if len(files)<1: print 'files not found'
return files
def returndirectory(self):
dir_list = getAllDirectoryFiles(self.data)
return dir_list
def getAllDirectoryFiles(import_dir):
all_files = []
dir_list = read_directory(import_dir) #send a sub_directory to a function to identify all files in a directory
for data in dir_list: #loop through each file in the directory to output results
data_dir = import_dir[1:]+'/'+data
all_files.append(data_dir)
return all_files
def getDirectoryFiles(import_dir,search_term):
dir_list = read_directory(import_dir) #send a sub_directory to a function to identify all files in a directory
matches=[]
for data in dir_list: #loop through each file in the directory to output results
data_dir = import_dir[1:]+'/'+data
if search_term in data_dir: matches.append(data_dir)
return matches
def cleanUpLine(line):
line = string.replace(line,'\n','')
line = string.replace(line,'\c','')
data = string.replace(line,'\r','')
data = string.replace(data,'"','')
return data
############### Main Program ###############
def importAnnotationData(filename):
fn=filepath(filename); x=1
global gene_symbol_db; gene_symbol_db={}
for line in open(fn,'rU').xreadlines():
data = cleanUpLine(line)
t = string.split(data,'\t')
if x==0: x=1
else:
gene = t[0]
try: symbol = t[1]
except IndexError: symbol = ''
if len(symbol)>0: gene_symbol_db[gene] = symbol
def importGeneData(filename,data_type):
fn=filepath(filename); x=0; gene_db={}
for line in open(fn,'rU').xreadlines():
data = cleanUpLine(line)
t = string.split(data,'\t')
if x==0:x=1
else:
proceed = 'yes'
if data_type == 'junction': gene, region5, region3 = t; value_str = region5+':'+region3
if data_type == 'feature':
probeset, gene, feature, region = t; value_str = region,feature+':'+region+':'+probeset ###access region data later
#if (gene,region) not in region_db: region_db[gene,region] = feature,probeset ### Needed for processed structure table (see two lines down)
try: region_db[gene,region].append((feature,probeset)) ### Needed for processed structure table (see two lines down)
except KeyError: region_db[gene,region] = [(feature,probeset)]
try: region_count_db[(gene,region)]+=1
except KeyError: region_count_db[(gene,region)]=1
###have to add in when parsing structure probeset values for nulls (equal to 0)
if data_type == 'structure':
gene, exon, type, block, region, const, start, annot = t; region_id = exon
if len(annot)<1: annot = '---'
if (gene,exon) in region_db:
probeset_data = region_db[(gene,exon)]
for (feature,probeset) in probeset_data:
count = str(region_count_db[(gene,exon)]) ###here, feature is the label (reversed below)
value_str = feature+':'+exon+':'+probeset+':'+type+':'+count+':'+const+':'+start+':'+annot
if gene in gene_symbol_db: ###Only incorporate gene data with a gene symbol, since Cytoscape currently requires this
try: gene_db[gene].append(value_str)
except KeyError: gene_db[gene] = [value_str]
proceed = 'no'
else: ### Occurs when no probeset is present: E.g. the imaginary first and last UTR region if doesn't exit
feature = exon ###feature contains the region information, exon is the label used in Cytoscape
exon,null = string.split(exon,'.')
probeset = '0'
count = '1'
null_value_str = exon,exon+':'+feature+':'+probeset ###This is how Alex has it... to display the label without the '.1' first
try: feature_db[gene].append(null_value_str)
except KeyError: feature_db[gene] = [null_value_str]
value_str = exon+':'+feature+':'+probeset+':'+type+':'+count+':'+const+':'+start+':'+annot
if gene in structure_region_db:
order_db = structure_region_db[gene]
order_db[exon] = block
else:
order_db = {}
order_db[exon] = block
structure_region_db[gene] = order_db
if gene in gene_symbol_db and proceed == 'yes': ###Only incorporate gene data with a gene symbol, since Cytoscape currently requires this
try: gene_db[gene].append(value_str)
except KeyError: gene_db[gene] = [value_str]
return gene_db
def exportData(gene_db,data_type,species):
export_file = 'AltDatabase/ensembl/SubGeneViewer/'+species+'/Xport_sgv_'+data_type+'.csv'
if data_type == 'feature': title = 'gene'+'\t'+'symbol'+'\t'+'sgv_feature'+'\n'
if data_type == 'structure': title = 'gene'+'\t'+'symbol'+'\t'+'sgv_structure'+'\n'
if data_type == 'splice': title = 'gene'+'\t'+'symbol'+'\t'+'sgv_splice'+'\n'
data = export.createExportFile(export_file,'AltDatabase/ensembl/SubGeneViewer/'+species)
#fn=filepath(export_file); data = open(fn,'w')
data.write(title)
for gene in gene_db:
try:
symbol = gene_symbol_db[gene]
value_str_list = gene_db[gene]
value_str = string.join(value_str_list,',')
values = string.join([gene,symbol,value_str],'\t')+'\n'; data.write(values)
except KeyError: null = []
data.close()
print "exported to",export_file
def customLSDeepCopy(ls):
ls2=[]
for i in ls: ls2.append(i)
return ls2
def reorganizeData(species):
global region_db; global region_count_db; global structure_region_db; global feature_db
region_db={}; region_count_db={}; structure_region_db={}
import_dir = '/AltDatabase/ensembl/'+species
g = GrabFiles(); g.setdirectory(import_dir)
exon_struct_file = g.searchdirectory('exon-structure')
feature_file = g.searchdirectory('feature-data')
junction_file = g.searchdirectory('junction-data')
annot_file = g.searchdirectory('Ensembl-annotations.')
importAnnotationData(annot_file[0])
### Run the files through the same function which has options for different pieces of data. Feature data is processed a bit differently
### since fake probeset data is supplied for intron and UTR features not probed for
splice_db = importGeneData(junction_file[0],'junction')
feature_db = importGeneData(feature_file[0],'feature')
structure_db = importGeneData(exon_struct_file[0],'structure')
for gene in feature_db:
order_db = structure_region_db[gene]
temp_list0 = []; temp_list = []; rank = 1
for (region,value_str) in feature_db[gene]:
###First, we have to get the existing order... this is important because when we sort, it screw up ranking within an intron with many probesets
temp_list0.append((rank,region,value_str)); rank+=1
for (rank,region,value_str) in temp_list0:
try: block_number = order_db[region]
except KeyError: print gene, region, order_db;kill
temp_list.append((int(block_number),rank,value_str)) ###Combine the original ranking plus the ranking included from taking into account regions not covered by probesets
temp_list.sort()
temp_list2 = []
for (block,rank,value_str) in temp_list:
temp_list2.append(value_str)
feature_db[gene] = temp_list2
exportData(splice_db,'splice',species)
exportData(structure_db,'structure',species)
exportData(feature_db,'feature',species)
if __name__ == '__main__':
dirfile = unique
species = 'Hs'
    reorganizeData(species)
/Djamo-2.67.0-rc2.tar.gz/Djamo-2.67.0-rc2/docs/source/index.rst

.. Djamo documentation master file, created by
sphinx-quickstart on Sun Mar 25 22:02:09 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. sectionauthor:: Sameer Rahmani <[email protected]>
Welcome to Djamo's documentation!
=================================
**Djamo** is another wrapper (ORM-like) package for `PyMongo <http://api.mongodb.org/python/current/>`_ and `MongoDB <http://www.mongodb.org/>`_. This documentation attempts to explain everything you need to know to use **Djamo**.
:doc:`installation`
How to install **Djamo**
:doc:`quickstart`
Learn Djamo basics quickly
:doc:`api/index`
Djamo internal API for developers
:doc:`faq`
Frequently asked questions
:doc:`otherresources`
  Find more resources to learn about **MongoDB**, **PyMongo** and **Djamo**
:doc:`whatnottodo`
  What not to do: **Djamo**'s limitations.
Report Bugs
-----------
You can report bugs and request features in our issue tracker in `github <https://github.com/Yellowen/Djamo/issues>`_.
If you find any mistakes in this document please report them too.
Contribute
----------
Contributions can be as simple as minor tweaks to this documentation. To contribute, fork the project on `github <https://github.com/Yellowen/Djamo>`_ and send a pull request.
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
/CWR-API-0.0.40.tar.gz/CWR-API-0.0.40/cwr/file.py

__author__ = 'Bernardo Martínez Garrido'
__license__ = 'MIT'
__status__ = 'Development'
class CWRFile(object):
"""
Represents a CWR file and all the data contained in it.
This can be divided into two groups: the metadata and the transmission
data.
The first is indicated, according to the standard, by the file name. While
the second is contained inside the file.
Both are to be represented with the classes in this module. FileTag for
the metadata, and Transmission for the file contents.
"""
def __init__(self,
tag,
transmission
):
"""
Constructs a CWRFile.
The tag should be a FileTag, and the transmission a Transmission.
:param tag: the file metadata
:param transmission: the file contents
"""
self._tag = tag
self._transmission = transmission
def __str__(self):
return '%s [%s]' % (
self._tag, self._transmission)
def __repr__(self):
return '<class %s>(tag=%r, transmission=%r)' % (
self.__class__.__name__, self._tag,
self._transmission)
@property
def tag(self):
"""
The file's metadata tag.
This is stored as a FileTag.
:return: the file's metadata
"""
return self._tag
@tag.setter
def tag(self, value):
self._tag = value
@property
def transmission(self):
"""
The file's transmission.
This wraps all the file's data, and is stored as a Transmission class.
:return: the file's transmission
"""
return self._transmission
@transmission.setter
def transmission(self, value):
self._transmission = value
class FileTag(object):
"""
Represents a CWR file metadata, which is tagged on the filename.
This data identifies a concrete file in the file system and, according to
the standard, is indicated in the file name, using the pattern
    CWyynnnnsss_rrr.Vxx, where each section means the following:
CW - Header indicating it is a CWR file.
yy - Year.
nnnn - Sequence. This was originally 2 numbers, later changed to 4.
sss - Sender. 2 or 3 digits.
rrr - Receiver. 2 or 3 digits.
xx - Version of the CWR standard (version x.x).
    So according to this, the files sent between a sender and a receiver each
    year are numbered sequentially. A suffix is then added indicating the
    version of the CWR standard specification used in the file.
"""
def __init__(self,
year,
sequence_n,
sender,
receiver,
version
):
"""
Constructs a FileTag.
:param year: year the file was created
:param sequence_n: sequence number for the file
:param sender: sender ID
        :param receiver: receiver ID
:param version: CWR version of the file
"""
self._year = year
self._sequence_n = sequence_n
self._sender = sender
self._receiver = receiver
self._version = version
def __str__(self):
return 'file number %s, year %s, sent from %s to %s (CWR v%s)' % (
self._sequence_n, self._year, self._sender, self._receiver,
self._version)
def __repr__(self):
return '<class %s>(year=%s, sequence_n=%r, sender=%r, ' \
'receiver=%r, version=%r)' % (
self.__class__.__name__, self._year,
self._sequence_n,
self._sender, self._receiver,
self._version)
@property
def year(self):
"""
The year in which the file has been created. This is a numeric value.
:return: the file's year
"""
return self._year
@year.setter
def year(self, value):
self._year = value
@property
def sequence_n(self):
"""
File sequence number. This is a numeric value.
This value indicates the position of this file among all those sent
from the sender to the receiver.
So if the sequence number is 10 this would be the tenth file sent.
:return: the file sequence number
"""
return self._sequence_n
@sequence_n.setter
def sequence_n(self, value):
self._sequence_n = value
@property
def sender(self):
"""
The file sender ID. This is an alphanumeric code.
:return: the file sender ID
"""
return self._sender
@sender.setter
def sender(self, value):
self._sender = value
@property
def receiver(self):
"""
The file receiver ID. This is an alphanumeric code.
:return: the file receiver ID
"""
return self._receiver
@receiver.setter
def receiver(self, value):
self._receiver = value
@property
def version(self):
"""
The CWR standard specification used to code the file. This is a comma
separated numeric value.
:return: the CWR standard specification version used
"""
return self._version
@version.setter
def version(self, value):
        self._version = value
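# Usage sketch (editor's addition): building the metadata tag that would
# correspond to a file named "CW060123ABC_DEF.V21".
if __name__ == "__main__":
    tag = FileTag(year=2006, sequence_n=123, sender='ABC', receiver='DEF',
                  version=2.1)
    print(tag)  # file number 123, year 2006, sent from ABC to DEF (CWR v2.1)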
/Django-4.2.4.tar.gz/Django-4.2.4/django/db/backends/sqlite3/introspection.py

from collections import namedtuple
import sqlparse
from django.db import DatabaseError
from django.db.backends.base.introspection import BaseDatabaseIntrospection
from django.db.backends.base.introspection import FieldInfo as BaseFieldInfo
from django.db.backends.base.introspection import TableInfo
from django.db.models import Index
from django.utils.regex_helper import _lazy_re_compile
FieldInfo = namedtuple(
"FieldInfo", BaseFieldInfo._fields + ("pk", "has_json_constraint")
)
field_size_re = _lazy_re_compile(r"^\s*(?:var)?char\s*\(\s*(\d+)\s*\)\s*$")
def get_field_size(name):
"""Extract the size number from a "varchar(11)" type name"""
m = field_size_re.search(name)
return int(m[1]) if m else None
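# For example (editor's note): get_field_size("varchar(30)") returns 30, while a
# bare type such as "text" carries no declared size and returns None.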
# This light wrapper "fakes" a dictionary interface, because some SQLite data
# types include variables in them -- e.g. "varchar(30)" -- and can't be matched
# as a simple dictionary lookup.
class FlexibleFieldLookupDict:
# Maps SQL types to Django Field types. Some of the SQL types have multiple
# entries here because SQLite allows for anything and doesn't normalize the
# field type; it uses whatever was given.
base_data_types_reverse = {
"bool": "BooleanField",
"boolean": "BooleanField",
"smallint": "SmallIntegerField",
"smallint unsigned": "PositiveSmallIntegerField",
"smallinteger": "SmallIntegerField",
"int": "IntegerField",
"integer": "IntegerField",
"bigint": "BigIntegerField",
"integer unsigned": "PositiveIntegerField",
"bigint unsigned": "PositiveBigIntegerField",
"decimal": "DecimalField",
"real": "FloatField",
"text": "TextField",
"char": "CharField",
"varchar": "CharField",
"blob": "BinaryField",
"date": "DateField",
"datetime": "DateTimeField",
"time": "TimeField",
}
def __getitem__(self, key):
key = key.lower().split("(", 1)[0].strip()
return self.base_data_types_reverse[key]
class DatabaseIntrospection(BaseDatabaseIntrospection):
data_types_reverse = FlexibleFieldLookupDict()
def get_field_type(self, data_type, description):
field_type = super().get_field_type(data_type, description)
if description.pk and field_type in {
"BigIntegerField",
"IntegerField",
"SmallIntegerField",
}:
# No support for BigAutoField or SmallAutoField as SQLite treats
# all integer primary keys as signed 64-bit integers.
return "AutoField"
if description.has_json_constraint:
return "JSONField"
return field_type
def get_table_list(self, cursor):
"""Return a list of table and view names in the current database."""
# Skip the sqlite_sequence system table used for autoincrement key
# generation.
cursor.execute(
"""
SELECT name, type FROM sqlite_master
WHERE type in ('table', 'view') AND NOT name='sqlite_sequence'
ORDER BY name"""
)
return [TableInfo(row[0], row[1][0]) for row in cursor.fetchall()]
def get_table_description(self, cursor, table_name):
"""
Return a description of the table with the DB-API cursor.description
interface.
"""
cursor.execute(
"PRAGMA table_info(%s)" % self.connection.ops.quote_name(table_name)
)
table_info = cursor.fetchall()
if not table_info:
raise DatabaseError(f"Table {table_name} does not exist (empty pragma).")
collations = self._get_column_collations(cursor, table_name)
json_columns = set()
if self.connection.features.can_introspect_json_field:
for line in table_info:
column = line[1]
json_constraint_sql = '%%json_valid("%s")%%' % column
has_json_constraint = cursor.execute(
"""
SELECT sql
FROM sqlite_master
WHERE
type = 'table' AND
name = %s AND
sql LIKE %s
""",
[table_name, json_constraint_sql],
).fetchone()
if has_json_constraint:
json_columns.add(column)
return [
FieldInfo(
name,
data_type,
get_field_size(data_type),
None,
None,
None,
not notnull,
default,
collations.get(name),
pk == 1,
name in json_columns,
)
for cid, name, data_type, notnull, default, pk in table_info
]
def get_sequences(self, cursor, table_name, table_fields=()):
pk_col = self.get_primary_key_column(cursor, table_name)
return [{"table": table_name, "column": pk_col}]
def get_relations(self, cursor, table_name):
"""
Return a dictionary of {column_name: (ref_column_name, ref_table_name)}
representing all foreign keys in the given table.
"""
cursor.execute(
"PRAGMA foreign_key_list(%s)" % self.connection.ops.quote_name(table_name)
)
return {
column_name: (ref_column_name, ref_table_name)
for (
_,
_,
ref_table_name,
column_name,
ref_column_name,
*_,
) in cursor.fetchall()
}
def get_primary_key_columns(self, cursor, table_name):
cursor.execute(
"PRAGMA table_info(%s)" % self.connection.ops.quote_name(table_name)
)
return [name for _, name, *_, pk in cursor.fetchall() if pk]
def _parse_column_or_constraint_definition(self, tokens, columns):
token = None
is_constraint_definition = None
field_name = None
constraint_name = None
unique = False
unique_columns = []
check = False
check_columns = []
braces_deep = 0
for token in tokens:
if token.match(sqlparse.tokens.Punctuation, "("):
braces_deep += 1
elif token.match(sqlparse.tokens.Punctuation, ")"):
braces_deep -= 1
if braces_deep < 0:
# End of columns and constraints for table definition.
break
elif braces_deep == 0 and token.match(sqlparse.tokens.Punctuation, ","):
# End of current column or constraint definition.
break
# Detect column or constraint definition by first token.
if is_constraint_definition is None:
is_constraint_definition = token.match(
sqlparse.tokens.Keyword, "CONSTRAINT"
)
if is_constraint_definition:
continue
if is_constraint_definition:
# Detect constraint name by second token.
if constraint_name is None:
if token.ttype in (sqlparse.tokens.Name, sqlparse.tokens.Keyword):
constraint_name = token.value
elif token.ttype == sqlparse.tokens.Literal.String.Symbol:
constraint_name = token.value[1:-1]
# Start constraint columns parsing after UNIQUE keyword.
if token.match(sqlparse.tokens.Keyword, "UNIQUE"):
unique = True
unique_braces_deep = braces_deep
elif unique:
if unique_braces_deep == braces_deep:
if unique_columns:
# Stop constraint parsing.
unique = False
continue
if token.ttype in (sqlparse.tokens.Name, sqlparse.tokens.Keyword):
unique_columns.append(token.value)
elif token.ttype == sqlparse.tokens.Literal.String.Symbol:
unique_columns.append(token.value[1:-1])
else:
# Detect field name by first token.
if field_name is None:
if token.ttype in (sqlparse.tokens.Name, sqlparse.tokens.Keyword):
field_name = token.value
elif token.ttype == sqlparse.tokens.Literal.String.Symbol:
field_name = token.value[1:-1]
if token.match(sqlparse.tokens.Keyword, "UNIQUE"):
unique_columns = [field_name]
# Start constraint columns parsing after CHECK keyword.
if token.match(sqlparse.tokens.Keyword, "CHECK"):
check = True
check_braces_deep = braces_deep
elif check:
if check_braces_deep == braces_deep:
if check_columns:
# Stop constraint parsing.
check = False
continue
if token.ttype in (sqlparse.tokens.Name, sqlparse.tokens.Keyword):
if token.value in columns:
check_columns.append(token.value)
elif token.ttype == sqlparse.tokens.Literal.String.Symbol:
if token.value[1:-1] in columns:
check_columns.append(token.value[1:-1])
unique_constraint = (
{
"unique": True,
"columns": unique_columns,
"primary_key": False,
"foreign_key": None,
"check": False,
"index": False,
}
if unique_columns
else None
)
check_constraint = (
{
"check": True,
"columns": check_columns,
"primary_key": False,
"unique": False,
"foreign_key": None,
"index": False,
}
if check_columns
else None
)
return constraint_name, unique_constraint, check_constraint, token
def _parse_table_constraints(self, sql, columns):
# Check constraint parsing is based of SQLite syntax diagram.
# https://www.sqlite.org/syntaxdiagrams.html#table-constraint
statement = sqlparse.parse(sql)[0]
constraints = {}
unnamed_constrains_index = 0
tokens = (token for token in statement.flatten() if not token.is_whitespace)
# Go to columns and constraint definition
for token in tokens:
if token.match(sqlparse.tokens.Punctuation, "("):
break
# Parse columns and constraint definition
while True:
(
constraint_name,
unique,
check,
end_token,
) = self._parse_column_or_constraint_definition(tokens, columns)
if unique:
if constraint_name:
constraints[constraint_name] = unique
else:
unnamed_constrains_index += 1
constraints[
"__unnamed_constraint_%s__" % unnamed_constrains_index
] = unique
if check:
if constraint_name:
constraints[constraint_name] = check
else:
unnamed_constrains_index += 1
constraints[
"__unnamed_constraint_%s__" % unnamed_constrains_index
] = check
if end_token.match(sqlparse.tokens.Punctuation, ")"):
break
return constraints
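    # Illustrative sketch (editor's addition, not part of Django). For
    #   CREATE TABLE "t" ("id" integer PRIMARY KEY,
    #                     "name" varchar(10) UNIQUE,
    #                     CONSTRAINT "positive_id" CHECK ("id" > 0))
    # and columns == {"id", "name"}, the parser yields roughly:
    #   {"__unnamed_constraint_1__": {"unique": True, "columns": ["name"], ...},
    #    "positive_id": {"check": True, "columns": ["id"], ...}}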
def get_constraints(self, cursor, table_name):
"""
Retrieve any constraints or keys (unique, pk, fk, check, index) across
one or more columns.
"""
constraints = {}
# Find inline check constraints.
try:
table_schema = cursor.execute(
"SELECT sql FROM sqlite_master WHERE type='table' and name=%s"
% (self.connection.ops.quote_name(table_name),)
).fetchone()[0]
except TypeError:
# table_name is a view.
pass
else:
columns = {
info.name for info in self.get_table_description(cursor, table_name)
}
constraints.update(self._parse_table_constraints(table_schema, columns))
# Get the index info
cursor.execute(
"PRAGMA index_list(%s)" % self.connection.ops.quote_name(table_name)
)
for row in cursor.fetchall():
# SQLite 3.8.9+ has 5 columns, however older versions only give 3
# columns. Discard last 2 columns if there.
number, index, unique = row[:3]
cursor.execute(
"SELECT sql FROM sqlite_master "
"WHERE type='index' AND name=%s" % self.connection.ops.quote_name(index)
)
# There's at most one row.
(sql,) = cursor.fetchone() or (None,)
# Inline constraints are already detected in
# _parse_table_constraints(). The reasons to avoid fetching inline
# constraints from `PRAGMA index_list` are:
# - Inline constraints can have a different name and information
# than what `PRAGMA index_list` gives.
# - Not all inline constraints may appear in `PRAGMA index_list`.
if not sql:
# An inline constraint
continue
# Get the index info for that index
cursor.execute(
"PRAGMA index_info(%s)" % self.connection.ops.quote_name(index)
)
for index_rank, column_rank, column in cursor.fetchall():
if index not in constraints:
constraints[index] = {
"columns": [],
"primary_key": False,
"unique": bool(unique),
"foreign_key": None,
"check": False,
"index": True,
}
constraints[index]["columns"].append(column)
# Add type and column orders for indexes
if constraints[index]["index"]:
# SQLite doesn't support any index type other than b-tree
constraints[index]["type"] = Index.suffix
orders = self._get_index_columns_orders(sql)
if orders is not None:
constraints[index]["orders"] = orders
# Get the PK
pk_columns = self.get_primary_key_columns(cursor, table_name)
if pk_columns:
# SQLite doesn't actually give a name to the PK constraint,
# so we invent one. This is fine, as the SQLite backend never
# deletes PK constraints by name, as you can't delete constraints
# in SQLite; we remake the table with a new PK instead.
constraints["__primary__"] = {
"columns": pk_columns,
"primary_key": True,
"unique": False, # It's not actually a unique constraint.
"foreign_key": None,
"check": False,
"index": False,
}
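        # SQLite does not report names for foreign key constraints, so invent
        # one per relation ("fk_0", "fk_1", ...) from its position in the
        # relations mapping.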
relations = enumerate(self.get_relations(cursor, table_name).items())
constraints.update(
{
f"fk_{index}": {
"columns": [column_name],
"primary_key": False,
"unique": False,
"foreign_key": (ref_table_name, ref_column_name),
"check": False,
"index": False,
}
for index, (column_name, (ref_column_name, ref_table_name)) in relations
}
)
return constraints
def _get_index_columns_orders(self, sql):
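        # For example, 'CREATE INDEX "idx" ON "tbl" ("a" DESC, "b")' yields
        # ['DESC', 'ASC'] by inspecting the parenthesized column list.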
tokens = sqlparse.parse(sql)[0]
for token in tokens:
if isinstance(token, sqlparse.sql.Parenthesis):
columns = str(token).strip("()").split(", ")
return ["DESC" if info.endswith("DESC") else "ASC" for info in columns]
return None
def _get_column_collations(self, cursor, table_name):
row = cursor.execute(
"""
SELECT sql
FROM sqlite_master
WHERE type = 'table' AND name = %s
""",
[table_name],
).fetchone()
if not row:
return {}
sql = row[0]
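        # The last token of the parsed CREATE TABLE statement is the
        # parenthesized column definition list; split it into per-column chunks.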
columns = str(sqlparse.parse(sql)[0][-1]).strip("()").split(", ")
collations = {}
for column in columns:
tokens = column[1:].split()
column_name = tokens[0].strip('"')
for index, token in enumerate(tokens):
if token == "COLLATE":
collation = tokens[index + 1]
break
else:
collation = None
collations[column_name] = collation
        return collations
/Cantera-3.0.0b1-cp311-cp311-win_amd64.whl/cantera/ck2yaml.py
# This file is part of Cantera. See License.txt in the top-level directory or
# at https://cantera.org/license.txt for license and copyright information.
"""
ck2yaml.py: Convert Chemkin-format mechanisms to Cantera YAML input files
Usage:
ck2yaml [--input=<filename>]
[--thermo=<filename>]
[--transport=<filename>]
[--surface=<filename>]
[--name=<name>]
[--extra=<filename>]
[--output=<filename>]
[--single-intermediate-temperature]
[--permissive]
[--quiet]
[--no-validate]
[-d | --debug]
Example:
ck2yaml --input=chem.inp --thermo=therm.dat --transport=tran.dat
If the output file name is not given, an output file with the same name as the
input file, with the extension changed to '.yaml', is created.
An input file containing only species definitions (which can be referenced from
phase definitions in other input files) can be created by specifying only a
thermo file.
For the case of a surface mechanism, the gas phase input file should be
specified as 'input' and the surface phase input file should be specified as
'surface'.
The '--single-intermediate-temperature' option should be used with thermo data where
only a single break temperature is used and the last value in the first line of each
species thermo entry is the molecular weight instead of the intermediate temperature.
The '--permissive' option allows certain recoverable parsing errors (such as
duplicate transport data) to be ignored. The '--name=<name>' option
is used to override default phase names (that is, 'gas').
The '--extra=<filename>' option takes a YAML file as input. This option can be
used to add to the file description, or to define custom fields that are
included in the YAML output.
"""
import logging
import os.path
import sys
import numpy as np
import re
import getopt
import textwrap
from email.utils import formatdate
try:
from ruamel import yaml
except ImportError:
import ruamel_yaml as yaml
# yaml.version_info is a tuple with the three parts of the version
yaml_version = yaml.version_info
# We choose ruamel.yaml 0.15.34 as the minimum version
# since it is the highest version available in the Ubuntu
# 18.04 repositories and seems to work. Older versions such as
# 0.13.14 on CentOS7 and 0.10.23 on Ubuntu 16.04 raise an exception
# that they are missing the RoundTripRepresenter
yaml_min_version = (0, 15, 34)
if yaml_version < yaml_min_version:
raise RuntimeError(
"The minimum supported version of ruamel.yaml is 0.15.34. If you "
"installed ruamel.yaml from your operating system's package manager, "
"please install an updated version using pip or conda."
)
BlockMap = yaml.comments.CommentedMap
logger = logging.getLogger(__name__)
loghandler = logging.StreamHandler(sys.stdout)
logformatter = logging.Formatter('%(message)s')
loghandler.setFormatter(logformatter)
logger.handlers.clear()
logger.addHandler(loghandler)
logger.setLevel(logging.INFO)
def FlowMap(*args, **kwargs):
m = yaml.comments.CommentedMap(*args, **kwargs)
m.fa.set_flow_style()
return m
def FlowList(*args, **kwargs):
lst = yaml.comments.CommentedSeq(*args, **kwargs)
lst.fa.set_flow_style()
return lst
# Improved float formatting requires Numpy >= 1.14
if hasattr(np, 'format_float_positional'):
def float2string(data):
if data == 0:
return '0.0'
elif 0.01 <= abs(data) < 10000:
return np.format_float_positional(data, trim='0')
else:
return np.format_float_scientific(data, trim='0')
else:
def float2string(data):
return repr(data)
def represent_float(self, data):
# type: (Any) -> Any
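    # NaN is the only float value that is not equal to itself.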
if data != data:
value = '.nan'
elif data == self.inf_value:
value = '.inf'
elif data == -self.inf_value:
value = '-.inf'
else:
value = float2string(data)
return self.represent_scalar(u'tag:yaml.org,2002:float', value)
yaml.RoundTripRepresenter.add_representer(float, represent_float)
QUANTITY_UNITS = {'MOL': 'mol',
'MOLE': 'mol',
'MOLES': 'mol',
'MOLEC': 'molec',
'MOLECULES': 'molec'}
ENERGY_UNITS = {'CAL/': 'cal/mol',
'CAL/MOL': 'cal/mol',
'CAL/MOLE': 'cal/mol',
'EVOL': 'eV',
'EVOLTS': 'eV',
'JOUL': 'J/mol',
'JOULES/MOL': 'J/mol',
'JOULES/MOLE': 'J/mol',
'KCAL': 'kcal/mol',
'KCAL/MOL': 'kcal/mol',
'KCAL/MOLE': 'kcal/mol',
'KELV': 'K',
'KELVIN': 'K',
'KELVINS': 'K',
'KJOU': 'kJ/mol',
'KJOULES/MOL': 'kJ/mol',
'KJOULES/MOLE': 'kJ/mol'}
def strip_nonascii(s):
return s.encode('ascii', 'ignore').decode()
def compatible_quantities(quantity_basis, units):
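    # Check whether a rate-constant units string is consistent with the
    # requested quantity basis ('mol' or 'molec').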
if quantity_basis == 'mol':
return 'molec' not in units
elif quantity_basis == 'molec':
return 'molec' in units or 'mol' not in units
else:
raise ValueError('Unknown quantity basis: "{}"'.format(quantity_basis))
class InputError(Exception):
"""
An exception class for exceptional behavior involving Chemkin-format
mechanism files. Pass a string describing the circumstances that caused
the exceptional behavior.
"""
def __init__(self, message, *args, **kwargs):
message += ("\nPlease check https://cantera.org/tutorials/"
"ck2yaml-tutorial.html#debugging-common-errors-in-ck-files"
"\nfor the correct Chemkin syntax.")
if args or kwargs:
super().__init__(message.format(*args, **kwargs))
else:
super().__init__(message)
class Species:
def __init__(self, label, sites=None):
self.label = label
self.thermo = None
self.transport = None
self.sites = sites
self.composition = None
self.note = None
def __str__(self):
return self.label
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('name', node.label),
('composition', FlowMap(node.composition.items()))])
if node.thermo:
out['thermo'] = node.thermo
if node.transport:
out['transport'] = node.transport
if node.sites:
out['sites'] = node.sites
if node.note:
out['note'] = node.note
return representer.represent_dict(out)
class Nasa7:
"""
Thermodynamic data parameterized as two seven-coefficient NASA
polynomials.
See https://cantera.org/science/science-species.html#the-nasa-7-coefficient-polynomial-parameterization
"""
def __init__(self, *, Tmin, Tmax, Tmid, low_coeffs, high_coeffs, note=''):
self.Tmin = Tmin
self.Tmax = Tmax
self.Tmid = Tmid
self.low_coeffs = low_coeffs
self.high_coeffs = high_coeffs
self.note = note
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('model', 'NASA7')])
if node.Tmid is not None:
out['temperature-ranges'] = FlowList([node.Tmin, node.Tmid, node.Tmax])
out['data'] = [FlowList(node.low_coeffs), FlowList(node.high_coeffs)]
else:
out['temperature-ranges'] = FlowList([node.Tmin, node.Tmax])
out['data'] = [FlowList(node.low_coeffs)]
if node.note:
note = textwrap.dedent(node.note.rstrip())
if '\n' in note:
note = yaml.scalarstring.PreservedScalarString(note)
out['note'] = note
return representer.represent_dict(out)
class Nasa9:
"""
Thermodynamic data parameterized as any number of nine-coefficient NASA
polynomials.
See https://cantera.org/science/science-species.html#the-nasa-9-coefficient-polynomial-parameterization
:param data:
List of polynomials, where each polynomial is written as
```
[(T_low, T_high), [a_0, a_1, ..., a_8]]
```
"""
def __init__(self, *, data, note=''):
self.note = note
self.data = list(sorted(data))
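        # Collect the temperature breakpoints, requiring adjacent polynomials
        # to share a boundary temperature (to within 0.01 K).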
self.Tranges = [self.data[0][0][0]]
for i in range(1, len(data)):
if abs(self.data[i-1][0][1] - self.data[i][0][0]) > 0.01:
raise ValueError('NASA9 polynomials contain non-adjacent temperature ranges')
self.Tranges.append(self.data[i][0][0])
self.Tranges.append(self.data[-1][0][1])
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('model', 'NASA9')])
out['temperature-ranges'] = FlowList(node.Tranges)
out['data'] = [FlowList(poly) for (trange, poly) in node.data]
if node.note:
out['note'] = node.note
return representer.represent_dict(out)
class Reaction:
"""
:param index:
A unique nonnegative integer index
:param reactants:
A list of `(stoichiometry, species name)` tuples
:param products:
A list of `(stoichiometry, species name)` tuples
:param kinetics:
A `KineticsModel` instance which describes the rate constant
:param reversible:
Boolean indicating whether the reaction is reversible
:param duplicate:
Boolean indicating whether the reaction is a known (permitted) duplicate
:param forward_orders:
A dictionary specifying a non-default reaction order (value) for each
specified species (key)
:param third_body:
A string name used for the third-body species written in
pressure-dependent reaction types (usually "M")
"""
def __init__(self, parser, index=-1, reactants=None, products=None,
kinetics=None, reversible=True, duplicate=False,
forward_orders=None, third_body=None):
self.parser = parser
self.index = index
self.reactants = reactants # list of (stoichiometry, species) tuples
self.products = products # list of (stoichiometry, species) tuples
self.kinetics = kinetics
self.reversible = reversible
self.duplicate = duplicate
self.forward_orders = forward_orders or {}
self.third_body = ''
self.comment = ''
def _coeff_string(self, coeffs):
L = []
for stoichiometry, species in coeffs:
if stoichiometry != 1:
L.append('{0} {1}'.format(stoichiometry, species))
else:
L.append(str(species))
expression = ' + '.join(L)
expression += self.kinetics.reaction_string_suffix(self.third_body)
return expression
def __str__(self):
"""
Return a string representation of the reaction, such as 'A + B <=> C + D'.
"""
return '{}{}{}'.format(self._coeff_string(self.reactants),
' <=> ' if self.reversible else ' => ',
self._coeff_string(self.products))
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('equation', str(node))])
out.yaml_add_eol_comment('Reaction {}'.format(node.index), 'equation')
if node.duplicate:
out['duplicate'] = True
node.kinetics.reduce(out)
if node.forward_orders:
out['orders'] = FlowMap(node.forward_orders)
if any((float(x) < 0 for x in node.forward_orders.values())):
out['negative-orders'] = True
node.parser.warn('Negative reaction order for reaction {} ({}).'.format(
node.index, str(node)))
reactant_names = {r[1].label for r in node.reactants}
if any((species not in reactant_names for species in node.forward_orders)):
out['nonreactant-orders'] = True
node.parser.warn('Non-reactant order for reaction {} ({}).'.format(
node.index, str(node)))
if node.comment:
comment = textwrap.dedent(node.comment.rstrip())
if '\n' in comment:
comment = yaml.scalarstring.PreservedScalarString(comment)
out['note'] = comment
return representer.represent_dict(out)
class KineticsModel:
"""
A base class for kinetics models
"""
    pressure_dependent = None  # overridden in derived classes
def __init__(self):
self.efficiencies = {}
def reaction_string_suffix(self, species):
"""
Suffix for reactant and product strings, used for pressure-dependent
reactions
"""
return ''
def reduce(self, output):
"""
Assign data from this object to the YAML mapping ``output``
"""
raise InputError('reduce is not implemented for objects of class {}',
self.__class__.__name__)
class Arrhenius:
"""
Represent a modified Arrhenius rate.
:param A:
The pre-exponential factor, given as a tuple consisting of a floating
point value and a units string
:param b:
The temperature exponent
:param Ea:
The activation energy, given as a tuple consisting of a floating
point value and a units string
"""
def __init__(self, A=0.0, b=0.0, Ea=0.0, *, parser):
self.A = A
self.b = b
self.Ea = Ea
self.parser = parser
def as_yaml(self, extra=()):
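        # Emit bare numbers when the units already match the output defaults;
        # otherwise attach an explicit units string to A or Ea.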
out = FlowMap(extra)
if compatible_quantities(self.parser.output_quantity_units, self.A[1]):
out['A'] = self.A[0]
else:
out['A'] = "{0:e} {1}".format(*self.A)
out['b'] = self.b
if self.Ea[1] == self.parser.output_energy_units:
out['Ea'] = self.Ea[0]
else:
out['Ea'] = "{0} {1}".format(*self.Ea)
return out
class ElementaryRate(KineticsModel):
"""
A reaction rate described by a single Arrhenius expression.
See https://cantera.org/science/kinetics.html#reactions-with-a-pressure-independent-rate
:param rate:
The Arrhenius expression describing this reaction rate.
"""
pressure_dependent = False
def __init__(self, rate, **kwargs):
KineticsModel.__init__(self, **kwargs)
self.rate = rate
def reduce(self, output):
output['rate-constant'] = self.rate.as_yaml()
if self.rate.A[0] < 0:
output['negative-A'] = True
class SurfaceRate(KineticsModel):
"""
An Arrhenius-like reaction occurring on a surface
See https://cantera.org/science/kinetics.html#surface-reactions
:param rate:
The Arrhenius expression describing this reaction rate.
:param coverages:
A list of tuples where each tuple specifies the coverage dependencies
for a species, in the form `(species_name, a_k, m_k, E_k)`
:param is_sticking:
True if the Arrhenius expression is a parameterization of a sticking
coefficient, rather than the rate constant itself.
:param motz_wise:
True if the sticking coefficient should be translated into a rate
coefficient using the correction factor developed by Motz & Wise for
reactions with high (near-unity) sticking coefficients
"""
pressure_dependent = False
def __init__(self, *, rate, coverages, is_sticking, motz_wise, **kwargs):
KineticsModel.__init__(self, **kwargs)
self.rate = rate
self.coverages = coverages
self.is_sticking = is_sticking
self.motz_wise = motz_wise
def reduce(self, output):
if self.is_sticking:
output['sticking-coefficient'] = self.rate.as_yaml()
else:
output['rate-constant'] = self.rate.as_yaml()
if self.motz_wise is not None:
output['Motz-Wise'] = self.motz_wise
if self.coverages:
covdeps = BlockMap()
for species,A,m,E in self.coverages:
# Energy units for coverage modification match energy units for
# base reaction
if self.rate.Ea[1] != self.rate.parser.output_energy_units:
E = '{} {}'.format(E, self.rate.Ea[1])
covdeps[species] = FlowList([A, m, E])
output['coverage-dependencies'] = covdeps
class PDepArrhenius(KineticsModel):
"""
A rate calculated by interpolating between Arrhenius expressions at
various pressures.
See https://cantera.org/science/kinetics.html#pressure-dependent-arrhenius-rate-expressions-p-log
:param pressures:
A list of pressures at which Arrhenius expressions are given.
:param pressure_units:
A string indicating the units used for the pressures
:param arrhenius:
A list of `Arrhenius` objects at each given pressure
"""
pressure_dependent = True
def __init__(self, *, pressures, pressure_units, arrhenius, **kwargs):
KineticsModel.__init__(self, **kwargs)
self.pressures = pressures
self.pressure_units = pressure_units
self.arrhenius = arrhenius or []
def reduce(self, output):
output['type'] = 'pressure-dependent-Arrhenius'
rates = []
for pressure, arrhenius in zip(self.pressures, self.arrhenius):
rates.append(arrhenius.as_yaml(
[('P', '{0} {1}'.format(pressure, self.pressure_units))]))
output['rate-constants'] = rates
class Chebyshev(KineticsModel):
"""
A rate calculated in terms of a bivariate Chebyshev polynomial.
See https://cantera.org/science/kinetics.html#chebyshev-reaction-rate-expressions
:param coeffs:
Matrix of Chebyshev coefficients, dimension N_T by N_P
:param Tmin:
Minimum temperature for which the parameterization is valid
:param Tmax:
Maximum temperature for which the parameterization is valid
:param Pmin:
Minimum pressure for which the parameterization is valid, given as a
`(value, units)` tuple
:param Pmax:
Maximum pressure for which the parameterization is valid, given as a
`(value, units)` tuple
:param quantity_units:
Quantity units for the rate constant
"""
pressure_dependent = True
def __init__(self, coeffs, *, Tmin, Tmax, Pmin, Pmax, quantity_units,
**kwargs):
KineticsModel.__init__(self, **kwargs)
self.Tmin = Tmin
self.Tmax = Tmax
self.Pmin = Pmin
self.Pmax = Pmax
self.coeffs = coeffs
self.quantity_units = quantity_units
def reduce(self, output):
output['type'] = 'Chebyshev'
output['temperature-range'] = FlowList([self.Tmin, self.Tmax])
output['pressure-range'] = FlowList(['{0} {1}'.format(*self.Pmin),
'{0} {1}'.format(*self.Pmax)])
if self.quantity_units is not None:
output['units'] = FlowMap([('quantity', self.quantity_units)])
output['data'] = [FlowList(float(v) for v in row) for row in self.coeffs]
class ThreeBody(KineticsModel):
"""
A rate calculated for a reaction which includes a third-body collider.
See https://cantera.org/science/kinetics.html#three-body-reactions
:param high_rate:
The Arrhenius kinetics (high-pressure limit)
:param efficiencies:
A mapping of species names to collider efficiencies
"""
pressure_dependent = True
def __init__(self, high_rate=None, efficiencies=None, **kwargs):
KineticsModel.__init__(self, **kwargs)
self.high_rate = high_rate
self.efficiencies = efficiencies or {}
def reaction_string_suffix(self, species):
return ' + M'
def reduce(self, output):
output['type'] = 'three-body'
output['rate-constant'] = self.high_rate.as_yaml()
if self.high_rate.A[0] < 0:
output['negative-A'] = True
if self.efficiencies:
output['efficiencies'] = FlowMap(self.efficiencies)
class Falloff(ThreeBody):
"""
A rate for a pressure-dependent falloff reaction.
See https://cantera.org/science/kinetics.html#falloff-reactions
:param low_rate:
The Arrhenius kinetics at the low-pressure limit
:param high_rate:
The Arrhenius kinetics at the high-pressure limit
:param efficiencies:
A mapping of species names to collider efficiencies
:param F:
Falloff function parameterization
"""
def __init__(self, low_rate=None, F=None, **kwargs):
ThreeBody.__init__(self, **kwargs)
self.low_rate = low_rate
self.F = F
def reaction_string_suffix(self, species):
return ' (+{})'.format(species)
def reduce(self, output):
output['type'] = 'falloff'
output['low-P-rate-constant'] = self.low_rate.as_yaml()
output['high-P-rate-constant'] = self.high_rate.as_yaml()
if self.high_rate.A[0] < 0 and self.low_rate.A[0] < 0:
output['negative-A'] = True
if self.F:
self.F.reduce(output)
if self.efficiencies:
output['efficiencies'] = FlowMap(self.efficiencies)
class ChemicallyActivated(ThreeBody):
"""
A rate for a chemically-activated reaction.
See https://cantera.org/science/kinetics.html#chemically-activated-reactions
:param low_rate:
The Arrhenius kinetics at the low-pressure limit
:param high_rate:
The Arrhenius kinetics at the high-pressure limit
:param efficiencies:
A mapping of species names to collider efficiencies
:param F:
Falloff function parameterization
"""
def __init__(self, low_rate=None, F=None, **kwargs):
ThreeBody.__init__(self, **kwargs)
self.low_rate = low_rate
self.F = F
def reaction_string_suffix(self, species):
return ' (+{})'.format(species)
def reduce(self, output):
output['type'] = 'chemically-activated'
output['low-P-rate-constant'] = self.low_rate.as_yaml()
output['high-P-rate-constant'] = self.high_rate.as_yaml()
if self.high_rate.A[0] < 0 and self.low_rate.A[0] < 0:
output['negative-A'] = True
if self.F:
self.F.reduce(output)
if self.efficiencies:
output['efficiencies'] = FlowMap(self.efficiencies)
class Troe:
"""
The Troe falloff function, described with either 3 or 4 parameters.
See https://cantera.org/science/kinetics.html#the-troe-falloff-function
"""
def __init__(self, A=0.0, T3=0.0, T1=0.0, T2=None):
self.A = A
self.T3 = T3
self.T1 = T1
self.T2 = T2
def reduce(self, output):
troe = FlowMap([('A', self.A), ('T3', self.T3), ('T1', self.T1)])
if self.T2 is not None:
troe['T2'] = self.T2
output['Troe'] = troe
class Sri:
"""
The SRI falloff function, described with either 3 or 5 parameters.
See https://cantera.org/science/kinetics.html#the-sri-falloff-function
"""
def __init__(self, *, A, B, C, D=None, E=None):
self.A = A
self.B = B
self.C = C
self.D = D
self.E = E
def reduce(self, output):
sri = FlowMap([('A', self.A), ('B', self.B), ('C', self.C)])
if self.D is not None:
sri['D'] = self.D
if self.E is not None:
sri['E'] = self.E
output['SRI'] = sri
class TransportData:
geometry_flags = ['atom', 'linear', 'nonlinear']
def __init__(self, parser, label, geometry, well_depth, collision_diameter,
dipole_moment, polarizability, z_rot, note=''):
try:
geometry = int(geometry)
except ValueError:
try:
geometry = float(geometry)
except ValueError:
raise InputError(
"Invalid geometry flag '{}' for species '{}'. "
"Flag should be an integer.", geometry, label) from None
if geometry == int(geometry):
geometry = int(geometry)
parser.warn("Incorrect geometry flag syntax for species {0}. "
"If --permissive was given, the flag was automatically "
"converted to an integer.".format(label))
else:
raise InputError(
"Invalid float geometry flag '{}' for species '{}'. "
"Flag should be an integer.", geometry, label) from None
if geometry not in (0, 1, 2):
raise InputError("Invalid geometry flag value '{}' for species '{}'. "
"Flag value should be 0, 1, or 2.", geometry, label)
self.geometry = self.geometry_flags[int(geometry)]
self.well_depth = float(well_depth)
self.collision_diameter = float(collision_diameter)
self.dipole_moment = float(dipole_moment)
self.polarizability = float(polarizability)
self.z_rot = float(z_rot)
self.note = note.strip()
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('model', 'gas'),
('geometry', node.geometry),
('well-depth', node.well_depth),
('diameter', node.collision_diameter)])
if node.dipole_moment:
out['dipole'] = node.dipole_moment
if node.polarizability:
out['polarizability'] = node.polarizability
if node.z_rot:
out['rotational-relaxation'] = node.z_rot
if node.note:
out['note'] = node.note
return representer.represent_dict(out)
def fortFloat(s):
"""
Convert a string representation of a floating point value to a float,
allowing for some of the peculiarities of allowable Fortran representations.
"""
return float(s.strip().lower().replace('d', 'e').replace('e ', 'e+'))
def get_index(seq, value):
"""
Find the first location in *seq* which contains a case-insensitive,
whitespace-insensitive match for *value*. Returns *None* if no match is
found.
"""
if isinstance(seq, str):
seq = seq.split()
value = value.lower().strip()
for i, item in enumerate(seq):
if item.lower() == value:
return i
return None
def contains(seq, value):
if isinstance(seq, str):
return value.lower() in seq.lower()
else:
return get_index(seq, value) is not None
class Surface:
def __init__(self, name, site_density):
self.name = name
self.site_density = site_density
self.species_list = []
self.reactions = []
class Parser:
def __init__(self):
self.processed_units = False
self.energy_units = 'cal/mol' # for the current REACTIONS section
self.output_energy_units = 'cal/mol' # for the output file
self.quantity_units = 'mol' # for the current REACTIONS section
self.output_quantity_units = 'mol' # for the output file
self.motz_wise = None
self.single_intermediate_temperature = False
self.warning_as_error = True
self.elements = []
self.element_weights = {} # for custom elements only
self.species_list = [] # bulk species only
self.species_dict = {} # bulk and surface species
self.surfaces = []
self.reactions = []
self.header_lines = []
self.extra = {} # for extra entries
self.files = [] # input file names
def warn(self, message):
if self.warning_as_error:
raise InputError(message)
else:
logger.warning(message)
@staticmethod
def parse_composition(elements, nElements, width):
"""
Parse the elemental composition from a 7 or 9 coefficient NASA polynomial
entry.
"""
composition = {}
for i in range(nElements):
symbol = elements[width*i:width*i+2].strip()
count = elements[width*i+2:width*i+width].strip()
if not symbol:
continue
try:
# Convert to float first for cases where ``count`` is a string
# like "2.00".
count = int(float(count))
if count:
composition[symbol.capitalize()] = count
except ValueError:
pass
return composition
@staticmethod
def get_rate_constant_units(length_dims, length_units, quantity_dims,
quantity_units, time_dims=1, time_units='s'):
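        # Build a units string for the rate constant, e.g. 'cm^3/mol/s' for a
        # bimolecular reaction (length_dims=3, quantity_dims=1).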
units = ''
if length_dims:
units += length_units
if length_dims > 1:
units += '^' + str(length_dims)
if quantity_dims:
units += '/' + quantity_units
if quantity_dims > 1:
units += '^' + str(quantity_dims)
if time_dims:
units += '/' + time_units
if time_dims > 1:
units += '^' + str(time_dims)
if units.startswith('/'):
units = '1' + units
return units
def add_element(self, element_string):
if '/' in element_string:
name, weight, _ = element_string.split('/')
weight = fortFloat(weight)
name = name.capitalize()
self.elements.append(name)
self.element_weights[name] = weight
else:
self.elements.append(element_string.capitalize())
def read_NASA7_entry(self, lines, TintDefault, comments):
"""
Read a thermodynamics entry for one species in a Chemkin-format file
(consisting of two 7-coefficient NASA polynomials). Returns the label of
the species, the thermodynamics model as a :class:`Nasa7` object, and
the elemental composition of the species.
For more details on this format, see `Debugging common errors in CK files
<https://cantera.org/tutorials/ck2yaml-tutorial.html#debugging-common-errors-in-ck-files>`__.
"""
identifier = lines[0][0:24].split(maxsplit=1)
species = identifier[0].strip()
if len(identifier) > 1:
note = identifier[1]
else:
note = ''
comments = '\n'.join(c.rstrip() for c in comments if c.strip())
if comments and note:
note = '\n'.join((note, comments))
elif comments:
note = comments
# Normal method for specifying the elemental composition
composition = self.parse_composition(lines[0][24:44], 4, 5)
# Chemkin-style extended elemental composition: additional lines
# indicated by '&' continuation character on preceding lines. Element
# names and abundances are separated by whitespace (not fixed width)
if lines[0].rstrip().endswith('&'):
complines = []
for i in range(len(lines)-1):
if lines[i].rstrip().endswith('&'):
complines.append(lines[i+1])
else:
break
lines = [lines[0]] + lines[i+1:]
comp = ' '.join(line.rstrip('&\n') for line in complines).split()
composition = {}
for i in range(0, len(comp), 2):
composition[comp[i].capitalize()] = int(comp[i+1])
# Non-standard extended elemental composition data may be located beyond
# column 80 on the first line of the thermo entry
if len(lines[0]) > 80:
elements = lines[0][80:]
composition2 = self.parse_composition(elements, len(elements)//10, 10)
composition.update(composition2)
if not composition:
raise InputError("Error parsing elemental composition for "
"species '{}'.", species)
for symbol in composition.keys():
# Some CHEMKIN input files may have quantities of elements with
# more than 3 digits. This violates the column-based input format
# standard, so the entry cannot be read and we need to raise a
# more useful error message.
if any(map(str.isdigit, symbol)) and symbol not in self.elements:
raise InputError("Error parsing elemental composition for "
"species thermo entry:\n{}\nElement amounts "
"can have no more than 3 digits.",
"".join(lines))
# Extract the NASA polynomial coefficients
# Remember that the high-T polynomial comes first!
Tmin = fortFloat(lines[0][45:55])
Tmax = fortFloat(lines[0][55:65])
if self.single_intermediate_temperature:
# Intermediate temperature is shared across all species, except if the
# species only has one temperature range
Tint = TintDefault if Tmin < TintDefault < Tmax else None
else:
# Non-default intermediate temperature can be provided
try:
Tint = fortFloat(lines[0][65:75])
except ValueError:
Tint = TintDefault if Tmin < TintDefault < Tmax else None
high_coeffs = [fortFloat(lines[i][j:k])
for i,j,k in [(1,0,15), (1,15,30), (1,30,45), (1,45,60),
(1,60,75), (2,0,15), (2,15,30)]]
low_coeffs = [fortFloat(lines[i][j:k])
for i,j,k in [(2,30,45), (2,45,60), (2,60,75), (3,0,15),
(3,15,30), (3,30,45), (3,45,60)]]
# Cases where only one temperature range is needed
if Tint == Tmin or Tint == Tmax or high_coeffs == low_coeffs:
Tint = None
# Duplicate the valid set of coefficients if only one range is provided
if Tint is None:
if all(c == 0 for c in low_coeffs):
# Use the first set of coefficients if the second is all zeros
coeffs = high_coeffs
elif all(c == 0 for c in high_coeffs):
# Use the second set of coefficients if the first is all zeros
coeffs = low_coeffs
elif high_coeffs == low_coeffs:
# If the coefficients are duplicated, that's fine too
coeffs = low_coeffs
else:
raise InputError(
"Only one temperature range defined but two distinct sets of "
"coefficients given for species thermo entry:\n{}\n",
"".join(lines))
thermo = Nasa7(Tmin=Tmin, Tmax=Tmax, Tmid=None, low_coeffs=coeffs,
high_coeffs=None, note=note)
else:
thermo = Nasa7(Tmin=Tmin, Tmax=Tmax, Tmid=Tint, low_coeffs=low_coeffs,
high_coeffs=high_coeffs, note=note)
return species, thermo, composition
def read_NASA9_entry(self, entry, comments):
"""
Read a thermodynamics ``entry`` for one species given as one or more
9-coefficient NASA polynomials, written in the format described in
Appendix A of NASA Reference Publication 1311 (McBride and Gordon, 1996).
Returns the label of the species, the thermodynamics model as a
:class:`Nasa9` object, and the elemental composition of the species
"""
tokens = entry[0].split()
species = tokens[0]
note = ' '.join(tokens[1:])
N = int(entry[1][:2])
note2 = entry[1][3:9].strip()
if note and note2:
note = '{0} [{1}]'.format(note, note2)
elif note2:
note = note2
comments = '\n'.join(c.rstrip() for c in comments if c.strip())
if comments and note:
note = '\n'.join((note, comments))
elif comments:
note = comments
composition = self.parse_composition(entry[1][10:50], 5, 8)
polys = []
try:
for i in range(N):
A, B, C = entry[2+3*i:2+3*(i+1)]
Trange = [fortFloat(A[1:11]), fortFloat(A[11:21])]
coeffs = [fortFloat(B[0:16]), fortFloat(B[16:32]),
fortFloat(B[32:48]), fortFloat(B[48:64]),
fortFloat(B[64:80]), fortFloat(C[0:16]),
fortFloat(C[16:32]), fortFloat(C[48:64]),
fortFloat(C[64:80])]
polys.append((Trange, coeffs))
except (IndexError, ValueError) as err:
raise InputError('Error while reading thermo entry for species {}:\n{}.',
species, err)
thermo = Nasa9(data=polys, note=note)
return species, thermo, composition
def setup_kinetics(self):
# We look for species including the next permissible character. '\n' is
# appended to the reaction string to identify the last species in the
# reaction string. Checking this character is necessary to correctly
# identify species with names ending in '+' or '='.
self.species_tokens = set()
for next_char in ('<', '=', '(', '+', '\n'):
self.species_tokens.update(k + next_char for k in self.species_dict)
self.other_tokens = {'M': 'third-body', 'm': 'third-body',
'(+M)': 'falloff3b', '(+m)': 'falloff3b',
'<=>': 'equal', '=>': 'equal', '=': 'equal',
'HV': 'photon', 'hv': 'photon'}
self.other_tokens.update(('(+{})'.format(k), 'falloff3b: {}'.format(k))
for k in self.species_dict)
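        # Length of the longest token to search for; the reaction string is
        # scanned for candidate matches from this length down to 1.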
self.Slen = max(map(len, self.other_tokens))
def read_kinetics_entry(self, entry, surface):
"""
Read a kinetics ``entry`` for a single reaction as loaded from a
Chemkin-format file. Returns a :class:`Reaction` object with the
reaction and its associated kinetics.
"""
# Handle non-default units which apply to this entry
energy_units = self.energy_units
quantity_units = self.quantity_units
if 'units' in entry.lower():
for units in sorted(QUANTITY_UNITS, key=lambda k: -len(k)):
pattern = re.compile(r'units *\/ *{} *\/'.format(re.escape(units)),
flags=re.IGNORECASE)
m = pattern.search(entry)
if m:
entry = pattern.sub('', entry)
quantity_units = QUANTITY_UNITS[units]
break
for units in sorted(ENERGY_UNITS, key=lambda k: -len(k)):
pattern = re.compile(r'units *\/ *{} *\/'.format(re.escape(units)),
re.IGNORECASE)
m = pattern.search(entry)
if m:
entry = pattern.sub('', entry)
energy_units = ENERGY_UNITS[units]
break
lines = entry.strip().splitlines()
# The first line contains the reaction equation and a set of modified Arrhenius parameters
tokens = lines[0].split()
A = float(tokens[-3])
b = float(tokens[-2])
Ea = float(tokens[-1])
reaction = ''.join(tokens[:-3]) + '\n'
original_reaction = reaction # for use in error messages
# Identify tokens in the reaction expression in order of
# decreasing length
locs = {}
for i in range(self.Slen, 0, -1):
for j in range(len(reaction)-i+1):
test = reaction[j:j+i]
if test in self.species_tokens:
reaction = reaction[:j] + ' '*(i-1) + reaction[j+i-1:]
locs[j] = test[:-1], 'species'
elif test in self.other_tokens:
reaction = reaction[:j] + '\n'*i + reaction[j+i:]
locs[j] = test, self.other_tokens[test]
# Anything that's left should be a stoichiometric coefficient or a '+'
# between species
for token in reaction.split():
j = reaction.find(token)
i = len(token)
reaction = reaction[:j] + ' '*i + reaction[j+i:]
if token == '+':
continue
try:
locs[j] = int(token), 'coeff'
except ValueError:
try:
locs[j] = float(token), 'coeff'
except ValueError:
raise InputError('Unexpected token "{}" in reaction expression "{}".',
token, original_reaction)
reactants = []
products = []
stoichiometry = 1
lhs = True
for token, kind in [v for k,v in sorted(locs.items())]:
if kind == 'equal':
reversible = token in ('<=>', '=')
lhs = False
elif kind == 'coeff':
stoichiometry = token
elif lhs:
reactants.append((stoichiometry, token, kind))
stoichiometry = 1
else:
products.append((stoichiometry, token, kind))
stoichiometry = 1
if lhs:
raise InputError("Failed to find reactant/product delimiter in reaction string.")
# Create a new Reaction object for this reaction
reaction = Reaction(reactants=[], products=[], reversible=reversible,
parser=self)
def parse_expression(expression, dest):
third_body_name = None
third_body = False # simple third body reaction (non-falloff)
photon = False
for stoichiometry, species, kind in expression:
if kind == 'third-body':
third_body = True
third_body_name = 'M'
elif kind == 'falloff3b':
third_body_name = 'M'
elif kind.startswith('falloff3b:'):
third_body_name = kind.split()[1]
elif kind == 'photon':
photon = True
else:
dest.append((stoichiometry, self.species_dict[species]))
return third_body_name, third_body, photon
third_body_name_r, third_body, photon_r = parse_expression(reactants, reaction.reactants)
third_body_name_p, third_body, photon_p = parse_expression(products, reaction.products)
if third_body_name_r != third_body_name_p:
raise InputError('Third bodies do not match: "{}" and "{}" in'
' reaction entry:\n\n{}', third_body_name_r, third_body_name_p, entry)
if photon_r:
raise InputError('Reactant photon not supported. '
'Found in reaction:\n{}', entry.strip())
if photon_p and reversible:
self.warn('Found reversible reaction containing a product photon:'
'\n{0}\nIf the "--permissive" option was specified, this will '
'be converted to an irreversible reaction with the photon '
'removed.'.format(entry.strip()))
reaction.reversible = False
reaction.third_body = third_body_name_r
# Determine the appropriate units for k(T) and k(T,P) based on the number of reactants
# This assumes elementary kinetics for all reactions
rStoich = sum(r[0] for r in reaction.reactants) + (1 if third_body else 0)
if rStoich < 1:
raise InputError('No reactant species for reaction {}.', reaction)
length_dim = 3 * (rStoich - 1)
quantity_dim = rStoich - 1
kunits = self.get_rate_constant_units(length_dim, 'cm',
quantity_dim, quantity_units)
klow_units = self.get_rate_constant_units(length_dim + 3, 'cm',
quantity_dim + 1, quantity_units)
# The rest of the first line contains Arrhenius parameters
arrhenius = Arrhenius(
A=(A, kunits),
b=b,
Ea=(Ea, energy_units),
parser=self
)
low_rate = None
high_rate = None
falloff = None
pdep_arrhenius = []
efficiencies = {}
coverages = []
cheb_coeffs = []
revReaction = None
is_sticking = None
motz_wise = None
Tmin = Tmax = Pmin = Pmax = None # Chebyshev parameters
degreeT = degreeP = None
# Note that the subsequent lines could be in any order
for line in lines[1:]:
if not line.strip():
continue
tokens = line.split('/')
parsed = False
if 'stick' in line.lower():
parsed = True
is_sticking = True
if 'mwon' in line.lower():
parsed = True
motz_wise = True
if 'mwoff' in line.lower():
parsed = True
motz_wise = False
if 'dup' in line.lower():
# Duplicate reaction
parsed = True
reaction.duplicate = True
if 'low' in line.lower():
# Low-pressure-limit Arrhenius parameters for "falloff" reaction
parsed = True
tokens = tokens[1].split()
low_rate = Arrhenius(
A=(float(tokens[0].strip()), klow_units),
b=float(tokens[1].strip()),
Ea=(float(tokens[2].strip()), energy_units),
parser=self
)
elif 'high' in line.lower():
# High-pressure-limit Arrhenius parameters for "chemically
# activated" reaction
parsed = True
tokens = tokens[1].split()
high_rate = Arrhenius(
A=(float(tokens[0].strip()), kunits),
b=float(tokens[1].strip()),
Ea=(float(tokens[2].strip()), energy_units),
parser=self
)
# Need to fix units on the base reaction:
arrhenius.A = (arrhenius.A[0], klow_units)
elif 'rev' in line.lower():
parsed = True
reaction.reversible = False
tokens = tokens[1].split()
# If the A factor in the rev line is zero, don't create the reverse reaction
if float(tokens[0].strip()) != 0.0:
# Create a reaction proceeding in the opposite direction
revReaction = Reaction(reactants=reaction.products,
products=reaction.reactants,
third_body=reaction.third_body,
reversible=False,
parser=self)
rev_rate = Arrhenius(
A=(float(tokens[0].strip()), klow_units),
b=float(tokens[1].strip()),
Ea=(float(tokens[2].strip()), energy_units),
parser=self
)
if third_body:
revReaction.kinetics = ThreeBody(rev_rate)
else:
revReaction.kinetics = ElementaryRate(rev_rate)
elif 'ford' in line.lower():
parsed = True
tokens = tokens[1].split()
reaction.forward_orders[tokens[0].strip()] = float(tokens[1])
elif 'troe' in line.lower():
# Troe falloff parameters
parsed = True
tokens = tokens[1].split()
falloff = Troe(A=float(tokens[0].strip()),
T3=float(tokens[1].strip()),
T1=float(tokens[2].strip()),
T2=float(tokens[3].strip()) if len(tokens) > 3 else None)
elif 'sri' in line.lower():
# SRI falloff parameters
parsed = True
tokens = tokens[1].split()
A = float(tokens[0].strip())
B = float(tokens[1].strip())
C = float(tokens[2].strip())
try:
D = float(tokens[3].strip())
E = float(tokens[4].strip())
except (IndexError, ValueError):
D = None
E = None
if D is None or E is None:
falloff = Sri(A=A, B=B, C=C)
else:
falloff = Sri(A=A, B=B, C=C, D=D, E=E)
elif 'cov' in line.lower():
parsed = True
C = tokens[1].split()
coverages.append(
[C[0], fortFloat(C[1]), fortFloat(C[2]), fortFloat(C[3])])
elif 'cheb' in line.lower():
# Chebyshev parameters
parsed = True
tokens = [t.strip() for t in tokens]
if contains(tokens, 'TCHEB'):
index = get_index(tokens, 'TCHEB')
tokens2 = tokens[index+1].split()
Tmin = float(tokens2[0].strip())
Tmax = float(tokens2[1].strip())
if contains(tokens, 'PCHEB'):
index = get_index(tokens, 'PCHEB')
tokens2 = tokens[index+1].split()
Pmin = (float(tokens2[0].strip()), 'atm')
Pmax = (float(tokens2[1].strip()), 'atm')
if contains(tokens, 'TCHEB') or contains(tokens, 'PCHEB'):
pass
elif degreeT is None or degreeP is None:
tokens2 = tokens[1].split()
degreeT = int(float(tokens2[0].strip()))
degreeP = int(float(tokens2[1].strip()))
cheb_coeffs.extend([float(t.strip()) for t in tokens2[2:]])
else:
tokens2 = tokens[1].split()
cheb_coeffs.extend([float(t.strip()) for t in tokens2])
elif 'plog' in line.lower():
# Pressure-dependent Arrhenius parameters
parsed = True
third_body = False # strip optional third-body collider
tokens = tokens[1].split()
pdep_arrhenius.append([float(tokens[0].strip()), Arrhenius(
A=(float(tokens[1].strip()), kunits),
b=float(tokens[2].strip()),
Ea=(float(tokens[3].strip()), energy_units),
parser=self
)])
elif len(tokens) >= 2:
# Assume a list of collider efficiencies
parsed = True
for collider, efficiency in zip(tokens[0::2], tokens[1::2]):
efficiencies[collider.strip()] = float(efficiency.strip())
if not parsed:
raise InputError('Unparsable line:\n"""\n{}\n"""', line)
# Decide which kinetics to keep and store them on the reaction object.
# At most one of the special cases should be true
tests = [cheb_coeffs, pdep_arrhenius, low_rate, high_rate, third_body,
surface]
if sum(bool(t) for t in tests) > 1:
raise InputError('Reaction {} contains parameters for more than '
'one reaction type.', original_reaction)
if cheb_coeffs:
if Tmin is None or Tmax is None:
raise InputError('Missing TCHEB line for reaction {}', reaction)
if Pmin is None or Pmax is None:
raise InputError('Missing PCHEB line for reaction {}', reaction)
if len(cheb_coeffs) != degreeT * degreeP:
raise InputError('Incorrect number of Chebyshev coefficients. '
'Expected {}*{} = {} but got {}', degreeT, degreeP,
degreeT * degreeP, len(cheb_coeffs))
if quantity_units == self.quantity_units:
quantity_units = None
reaction.kinetics = Chebyshev(
Tmin=Tmin, Tmax=Tmax, Pmin=Pmin, Pmax=Pmax,
quantity_units=quantity_units,
coeffs=np.array(cheb_coeffs, np.float64).reshape((degreeT, degreeP)))
elif pdep_arrhenius:
reaction.kinetics = PDepArrhenius(
pressures=[P for P, arrh in pdep_arrhenius],
pressure_units="atm",
arrhenius=[arrh for P, arrh in pdep_arrhenius]
)
elif low_rate is not None:
reaction.kinetics = Falloff(high_rate=arrhenius,
low_rate=low_rate,
F=falloff,
efficiencies=efficiencies)
elif high_rate is not None:
reaction.kinetics = ChemicallyActivated(high_rate=high_rate,
low_rate=arrhenius,
F=falloff,
efficiencies=efficiencies)
elif third_body:
reaction.kinetics = ThreeBody(high_rate=arrhenius,
efficiencies=efficiencies)
elif reaction.third_body:
raise InputError('Reaction equation implies pressure '
'dependence but no alternate rate parameters (such as HIGH or '
'LOW) were given for reaction {}.', reaction)
elif surface:
reaction.kinetics = SurfaceRate(rate=arrhenius,
coverages=coverages,
is_sticking=is_sticking,
motz_wise=motz_wise)
else:
reaction.kinetics = ElementaryRate(arrhenius)
if revReaction:
revReaction.duplicate = reaction.duplicate
revReaction.kinetics.efficiencies = reaction.kinetics.efficiencies
return reaction, revReaction
def load_extra_file(self, path):
"""
Load YAML-formatted entries from ``path`` on disk.
"""
try:
yaml_ = yaml.YAML(typ="rt")
with open(path, 'rt', encoding="utf-8") as stream:
yml = yaml_.load(stream)
except yaml.constructor.ConstructorError:
with open(path, "rt", encoding="utf-8") as stream:
# Ensure that the loader remains backward-compatible with legacy
# ruamel.yaml versions (prior to 0.17.0).
yml = yaml.round_trip_load(stream)
# do not overwrite reserved field names
reserved = {'generator', 'input-files', 'cantera-version', 'date',
'units', 'phases', 'species', 'reactions'}
reserved &= set(yml.keys())
if reserved:
raise InputError("The YAML file '{}' provided as '--extra' input "
"must not redefine reserved field name: "
"'{}'".format(path, reserved))
# replace header lines
if 'description' in yml:
if isinstance(yml['description'], str):
if self.header_lines:
self.header_lines += ['']
self.header_lines += yml.pop('description').split('\n')
else:
raise InputError("The alternate description provided in "
"'{}' needs to be a string".format(path))
# remainder
self.extra = yml
def load_chemkin_file(self, path, skip_undeclared_species=True, surface=False):
"""
Load a Chemkin-format input file from ``path`` on disk.
"""
transportLines = []
self.line_number = 0
with open(path, 'r', errors='ignore') as ck_file:
def readline():
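                # Return (line content, comment) for the next line, splitting at
                # the first '!'; returns (None, None) at end of file.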
self.line_number += 1
line = strip_nonascii(ck_file.readline())
if '!' in line:
return line.split('!', 1)
elif line:
return line, ''
else:
return None, None
# @todo: This loop is a bit of a mess, and could probably be cleaned
# up by refactoring it into a set of methods for processing each
# input file section.
line, comment = readline()
advance = True
inHeader = True
header = []
indent = 80
while line is not None:
tokens = line.split() or ['']
if inHeader and not line.strip():
header.append(comment.rstrip())
if comment.strip() != '': # skip indent calculation if empty
indent = min(indent, re.search('[^ ]', comment).start())
if tokens[0].upper().startswith('ELEM'):
inHeader = False
tokens = tokens[1:]
while line is not None and get_index(line, 'END') is None:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('SPEC', 'SPECIES'):
self.warn('"ELEMENTS" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
tokens.pop()
break
line, comment = readline()
# Normalize custom atomic weights
line = re.sub(r'\s*/\s*([0-9\.EeDd+-]+)\s*/', r'/\1/ ', line)
tokens.extend(line.split())
for token in tokens:
if token.upper() == 'END':
break
self.add_element(token)
elif tokens[0].upper().startswith('SPEC'):
# List of species identifiers
species = tokens[1:]
inHeader = False
comments = {}
while line is not None and get_index(line, 'END') is None:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('REAC', 'REACTIONS', 'TRAN',
'TRANSPORT', 'THER', 'THERMO'):
self.warn('"SPECIES" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
species.pop()
                            # Fix the case where THERMO ALL or REAC UNITS
                            # ends the species section
if (species[-1].upper().startswith('THER') or
species[-1].upper().startswith('REAC')):
species.pop()
break
line, comment = readline()
comment = comment.strip()
line_species = line.split()
if len(line_species) == 1 and comment:
comments[line_species[0]] = comment
species.extend(line_species)
for token in species:
if token.upper() == 'END':
break
if token in self.species_dict:
species = self.species_dict[token]
self.warn('Found additional declaration of species {}'.format(species))
else:
species = Species(label=token)
if token in comments:
species.note = comments[token]
self.species_dict[token] = species
self.species_list.append(species)
elif tokens[0].upper().startswith('SITE'):
# List of species identifiers for surface species
if '/' in tokens[0]:
surf_name = tokens[0].split('/')[1]
else:
surf_name = 'surface{}'.format(len(self.surfaces)+1)
tokens = tokens[1:]
site_density = None
for token in tokens[:]:
if token.upper().startswith('SDEN/'):
site_density = fortFloat(token.split('/')[1])
tokens.remove(token)
if site_density is None:
raise InputError('SITE section defined with no site density')
self.surfaces.append(Surface(name=surf_name,
site_density=site_density))
surf = self.surfaces[-1]
inHeader = False
while line is not None and get_index(line, 'END') is None:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('REAC', 'REACTIONS', 'THER',
'THERMO'):
self.warn('"SITE" section implicitly ended by start of '
'next section on line {}.'.format(self.line_number))
advance = False
tokens.pop()
                            # Fix the case where THERMO ALL or REAC UNITS
                            # ends the species section
if (tokens[-1].upper().startswith('THER') or
tokens[-1].upper().startswith('REAC')):
tokens.pop()
break
line, comment = readline()
tokens.extend(line.split())
for token in tokens:
if token.upper() == 'END':
break
if token.count('/') == 2:
# species occupies a specific number of sites
token, sites, _ = token.split('/')
sites = float(sites)
else:
sites = None
if token in self.species_dict:
species = self.species_dict[token]
self.warn('Found additional declaration of species {0}'.format(species))
else:
species = Species(label=token, sites=sites)
self.species_dict[token] = species
surf.species_list.append(species)
elif tokens[0].upper().startswith('THER') and contains(line, 'NASA9'):
inHeader = False
entryLength = None
entry = []
# Gather comments on lines preceding and within this entry
comments = []
while line is not None and get_index(line, 'END') != 0:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('REAC', 'REACTIONS', 'TRAN', 'TRANSPORT'):
self.warn('"THERMO" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
tokens.pop()
break
line, comment = readline()
comments.append(comment)
if not line:
continue
if entryLength is None:
entryLength = 0
# special case if (redundant) temperature ranges are
# given as the first line
try:
s = line.split()
float(s[0]), float(s[1]), float(s[2])
continue
except (IndexError, ValueError):
pass
entry.append(line)
if len(entry) == 2:
entryLength = 2 + 3 * int(line.split()[0])
if len(entry) == entryLength:
label, thermo, comp = self.read_NASA9_entry(entry, comments)
comments = []
entry = []
if label not in self.species_dict:
if skip_undeclared_species:
logger.info('Skipping unexpected species "{0}" while reading thermodynamics entry.'.format(label))
continue
else:
# Add a new species entry
species = Species(label=label)
self.species_dict[label] = species
self.species_list.append(species)
else:
species = self.species_dict[label]
# use the first set of thermo data found
if species.thermo is not None:
self.warn('Found additional thermo entry for species {0}. '
'If --permissive was given, the first entry is used.'.format(label))
else:
species.thermo = thermo
species.composition = comp
elif tokens[0].upper().startswith('THER'):
# List of thermodynamics (hopefully one per species!)
inHeader = False
line, comment = readline()
if line is not None and get_index(line, 'END') is None:
TintDefault = float(line.split()[1])
thermo = []
current = []
# Gather comments on lines preceding and within this entry
comments = [comment]
while line is not None and get_index(line, 'END') != 0:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('REAC', 'REACTIONS', 'TRAN', 'TRANSPORT'):
self.warn('"THERMO" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
tokens.pop()
break
if comment:
current.append('!'.join((line, comment)))
else:
current.append(line)
if len(line) >= 80 and line[79] in ['1', '2', '3', '4']:
thermo.append(line)
if line[79] == '4':
try:
label, thermo, comp = self.read_NASA7_entry(thermo, TintDefault, comments)
except Exception as e:
error_line_number = self.line_number - len(current) + 1
error_entry = ''.join(current).rstrip()
logger.info(
'Error while reading thermo entry starting on line {0}:\n'
'"""\n{1}\n"""'.format(error_line_number, error_entry)
)
raise
if label not in self.species_dict:
if skip_undeclared_species:
logger.info(
'Skipping unexpected species "{0}" while'
' reading thermodynamics entry.'.format(label))
thermo = []
line, comment = readline()
current = []
comments = [comment]
continue
else:
# Add a new species entry
species = Species(label=label)
self.species_dict[label] = species
self.species_list.append(species)
else:
species = self.species_dict[label]
# use the first set of thermo data found
if species.thermo is not None:
self.warn('Found additional thermo entry for species {0}. '
'If --permissive was given, the first entry is used.'.format(label))
else:
species.thermo = thermo
species.composition = comp
thermo = []
current = []
comments = []
elif thermo and thermo[-1].rstrip().endswith('&'):
# Include Chemkin-style extended elemental composition
thermo.append(line)
line, comment = readline()
comments.append(comment)
elif tokens[0].upper().startswith('REAC'):
# Reactions section
inHeader = False
for token in tokens[1:]:
token = token.upper()
if token in ENERGY_UNITS:
self.energy_units = ENERGY_UNITS[token]
if not self.processed_units:
self.output_energy_units = ENERGY_UNITS[token]
elif token in QUANTITY_UNITS:
self.quantity_units = QUANTITY_UNITS[token]
if not self.processed_units:
self.output_quantity_units = QUANTITY_UNITS[token]
elif token == 'MWON':
self.motz_wise = True
elif token == 'MWOFF':
self.motz_wise = False
else:
raise InputError("Unrecognized token on REACTIONS line, {0!r}", token)
self.processed_units = True
kineticsList = []
commentsList = []
startLines = []
kinetics = ''
comments = ''
line, comment = readline()
if surface:
reactions = self.surfaces[-1].reactions
else:
reactions = self.reactions
while line is not None and get_index(line, 'END') is None:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('TRAN', 'TRANSPORT'):
self.warn('"REACTIONS" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
break
lineStartsWithComment = not line and comment
line = line.rstrip()
comment = comment.rstrip()
if '=' in line and not lineStartsWithComment:
# Finish previous record
if comment:
# End of line comment belongs with this reaction
comments += comment + '\n'
comment = ''
kineticsList.append(kinetics)
commentsList.append(comments)
startLines.append(self.line_number)
kinetics = ''
comments = ''
if line.strip():
kinetics += line + '\n'
if comment:
comments += comment + '\n'
line, comment = readline()
# Don't forget the last reaction!
if kinetics.strip() != '':
kineticsList.append(kinetics)
commentsList.append(comments)
# We don't actually know whether comments belong to the
# previous or next reaction, but to keep them positioned
# correctly, we associate them with the next reaction. A
# comment after the last reaction is associated with that
# reaction
if kineticsList and kineticsList[0] == '':
kineticsList.pop(0)
final_comment = commentsList.pop()
if final_comment and commentsList[-1]:
commentsList[-1] = commentsList[-1].rstrip() + '\n' + final_comment
elif final_comment:
commentsList[-1] = final_comment
self.setup_kinetics()
for kinetics, comment, line_number in zip(kineticsList, commentsList, startLines):
try:
reaction, revReaction = self.read_kinetics_entry(kinetics, surface)
except Exception as e:
self.line_number = line_number
logger.info('Error reading reaction starting on '
'line {0}:\n"""\n{1}\n"""'.format(
line_number, kinetics.rstrip()))
raise
reaction.line_number = line_number
reaction.comment = comment
reactions.append(reaction)
if revReaction is not None:
revReaction.line_number = line_number
reactions.append(revReaction)
for index, reaction in enumerate(reactions):
reaction.index = index + 1
elif tokens[0].upper().startswith('TRAN'):
inHeader = False
line, comment = readline()
transport_start_line = self.line_number
while line is not None and get_index(line, 'END') is None:
# Grudging support for implicit end of section
start = line.strip().upper().split()
if start and start[0] in ('REAC', 'REACTIONS'):
self.warn('"TRANSPORT" section implicitly ended by start of '
'next section on line {0}.'.format(self.line_number))
advance = False
tokens.pop()
break
if comment:
transportLines.append('!'.join((line, comment)))
else:
transportLines.append(line)
line, comment = readline()
elif line.strip():
raise InputError('Section starts with unrecognized keyword'
'\n"""\n{}\n"""', line.rstrip())
if advance:
line, comment = readline()
else:
advance = True
for h in header:
self.header_lines.append(h[indent:])
if transportLines:
self.parse_transport_data(transportLines, path, transport_start_line)
def parse_transport_data(self, lines, filename, line_offset):
"""
Parse the Chemkin-format transport data in ``lines`` (a list of strings)
and add that transport data to the previously-loaded species.
"""
for i,line in enumerate(lines):
original_line = line
line = line.strip()
if not line or line.startswith('!'):
continue
if get_index(line, 'END') == 0:
break
if '!' in line:
line, comment = line.split('!', 1)
else:
comment = ''
data = line.split()
speciesName = data[0]
if speciesName in self.species_dict:
if len(data) != 7:
raise InputError('Unable to parse line {} of {}:\n"""\n{}"""\n'
'6 transport parameters expected, but found {}.',
line_offset + i, filename, original_line, len(data)-1)
if self.species_dict[speciesName].transport is None:
self.species_dict[speciesName].transport = TransportData(self, *data, note=comment)
else:
self.warn('Ignoring duplicate transport data'
' for species "{}" on line {} of "{}".'.format(
speciesName, line_offset + i, filename))
def write_yaml(self, name='gas', out_name='mech.yaml'):
emitter = yaml.YAML()
emitter.width = 70
emitter.register_class(Species)
emitter.register_class(Nasa7)
emitter.register_class(Nasa9)
emitter.register_class(TransportData)
emitter.register_class(Reaction)
with open(out_name, 'w') as dest:
have_transport = True
for s in self.species_list:
if not s.transport:
have_transport = False
surface_names = []
n_reacting_phases = 0
if self.reactions:
n_reacting_phases += 1
for surf in self.surfaces:
surface_names.append(surf.name)
if surf.reactions:
n_reacting_phases += 1
# Write header lines
desc = '\n'.join(line.rstrip() for line in self.header_lines)
desc = desc.strip('\n')
desc = textwrap.dedent(desc)
if desc.strip():
emitter.dump({'description': yaml.scalarstring.PreservedScalarString(desc)}, dest)
# Additional information regarding conversion
files = [os.path.basename(f) for f in self.files]
metadata = BlockMap([
("generator", "ck2yaml"),
("input-files", FlowList(files)),
("cantera-version", "3.0.0b1"),
("date", formatdate(localtime=True)),
])
if desc.strip():
metadata.yaml_set_comment_before_after_key('generator', before='\n')
emitter.dump(metadata, dest)
# Write extra entries
if self.extra:
extra = BlockMap(self.extra)
key = list(self.extra.keys())[0]
extra.yaml_set_comment_before_after_key(key, before='\n')
emitter.dump(extra, dest)
units = FlowMap([('length', 'cm'), ('time', 's')])
units['quantity'] = self.output_quantity_units
units['activation-energy'] = self.output_energy_units
units_map = BlockMap([('units', units)])
units_map.yaml_set_comment_before_after_key('units', before='\n')
emitter.dump(units_map, dest)
phases = []
reactions = []
if name is not None:
phase = BlockMap()
phase['name'] = name
phase['thermo'] = 'ideal-gas'
phase['elements'] = FlowList(self.elements)
phase['species'] = FlowList(S.label for S in self.species_list)
if self.reactions:
phase['kinetics'] = 'gas'
if n_reacting_phases == 1:
reactions.append(('reactions', self.reactions))
else:
rname = '{}-reactions'.format(name)
phase['reactions'] = [rname]
reactions.append((rname, self.reactions))
if have_transport:
phase['transport'] = 'mixture-averaged'
phase['state'] = FlowMap([('T', 300.0), ('P', '1 atm')])
phases.append(phase)
for surf in self.surfaces:
# Write definitions for surface phases
phase = BlockMap()
phase['name'] = surf.name
phase['thermo'] = 'ideal-surface'
phase['adjacent-phases'] = FlowList([name])
phase['elements'] = FlowList(self.elements)
phase['species'] = FlowList(S.label for S in surf.species_list)
phase['site-density'] = surf.site_density
if self.motz_wise is not None:
phase['Motz-Wise'] = self.motz_wise
if surf.reactions:
phase['kinetics'] = 'surface'
if n_reacting_phases == 1:
reactions.append(('reactions', surf.reactions))
else:
rname = '{}-reactions'.format(surf.name)
phase['reactions'] = [rname]
reactions.append((rname, surf.reactions))
phase['state'] = FlowMap([('T', 300.0), ('P', '1 atm')])
phases.append(phase)
if phases:
phases_map = BlockMap([('phases', phases)])
phases_map.yaml_set_comment_before_after_key('phases', before='\n')
emitter.dump(phases_map, dest)
# Write data on custom elements
if self.element_weights:
elements = []
for name, weight in sorted(self.element_weights.items()):
E = BlockMap([('symbol', name), ('atomic-weight', weight)])
elements.append(E)
elementsMap = BlockMap([('elements', elements)])
elementsMap.yaml_set_comment_before_after_key('elements', before='\n')
emitter.dump(elementsMap, dest)
# Write the individual species data
all_species = list(self.species_list)
for species in all_species:
if species.composition is None:
raise InputError('No thermo data found for '
'species {!r}'.format(species.label))
for surf in self.surfaces:
all_species.extend(surf.species_list)
speciesMap = BlockMap([('species', all_species)])
speciesMap.yaml_set_comment_before_after_key('species', before='\n')
emitter.dump(speciesMap, dest)
# Write the reactions section(s)
for label, R in reactions:
reactionsMap = BlockMap([(label, R)])
reactionsMap.yaml_set_comment_before_after_key(label, before='\n')
emitter.dump(reactionsMap, dest)
# Names of surface phases need to be returned so they can be imported as
# part of mechanism validation
return surface_names
@staticmethod
def convert_mech(input_file, thermo_file=None, transport_file=None,
surface_file=None, phase_name='gas', extra_file=None,
out_name=None, single_intermediate_temperature=False, quiet=False,
permissive=None):
parser = Parser()
parser.single_intermediate_temperature = single_intermediate_temperature
if quiet:
logger.setLevel(level=logging.ERROR)
else:
logger.setLevel(level=logging.INFO)
if permissive is not None:
parser.warning_as_error = not permissive
if input_file:
parser.files.append(input_file)
input_file = os.path.expanduser(input_file)
if not os.path.exists(input_file):
raise IOError('Missing input file: {0!r}'.format(input_file))
try:
# Read input mechanism files
parser.load_chemkin_file(input_file)
except Exception as err:
logger.warning("\nERROR: Unable to parse '{0}' near line {1}:\n{2}\n".format(
input_file, parser.line_number, err))
raise
else:
phase_name = None
if thermo_file:
parser.files.append(thermo_file)
thermo_file = os.path.expanduser(thermo_file)
if not os.path.exists(thermo_file):
raise IOError('Missing thermo file: {0!r}'.format(thermo_file))
try:
parser.load_chemkin_file(thermo_file,
skip_undeclared_species=bool(input_file))
except Exception:
logger.warning("\nERROR: Unable to parse '{0}' near line {1}:\n".format(
thermo_file, parser.line_number))
raise
if transport_file:
parser.files.append(transport_file)
transport_file = os.path.expanduser(transport_file)
if not os.path.exists(transport_file):
raise IOError('Missing transport file: {0!r}'.format(transport_file))
with open(transport_file, 'r', errors='ignore') as f:
lines = [strip_nonascii(line) for line in f]
parser.parse_transport_data(lines, transport_file, 1)
# Transport validation: make sure all species have transport data
for s in parser.species_list:
if s.transport is None:
raise InputError("No transport data for species '{}'.", s)
if surface_file:
parser.files.append(surface_file)
surface_file = os.path.expanduser(surface_file)
if not os.path.exists(surface_file):
raise IOError('Missing input file: {0!r}'.format(surface_file))
try:
# Read input mechanism files
parser.load_chemkin_file(surface_file, surface=True)
except Exception as err:
logger.warning("\nERROR: Unable to parse '{0}' near line {1}:\n{2}\n".format(
surface_file, parser.line_number, err))
raise
if extra_file:
parser.files.append(extra_file)
extra_file = os.path.expanduser(extra_file)
if not os.path.exists(extra_file):
raise IOError('Missing input file: {0!r}'.format(extra_file))
try:
# Read input mechanism files
parser.load_extra_file(extra_file)
except Exception as err:
logger.warning("\nERROR: Unable to parse '{0}':\n{1}\n".format(
extra_file, err))
raise
if out_name:
out_name = os.path.expanduser(out_name)
else:
out_name = os.path.splitext(input_file)[0] + '.yaml'
# Write output file
surface_names = parser.write_yaml(name=phase_name, out_name=out_name)
if not quiet:
nReactions = len(parser.reactions) + sum(len(surf.reactions) for surf in parser.surfaces)
logger.info('Wrote YAML mechanism file to {0!r}.'.format(out_name))
logger.info('Mechanism contains {0} species and {1} reactions.'.format(
len(parser.species_list), nReactions))
return parser, surface_names
def show_duplicate_reactions(self, error_message):
# Find the reaction numbers of the duplicate reactions by looking at
# the YAML file lines shown in the error message generated by
# Kinetics::checkDuplicates.
reactions = []
for line in error_message.split('\n'):
match = re.match('>.*# Reaction ([0-9]+)', line)
if match:
reactions.append(int(match.group(1))-1)
if len(reactions) != 2:
# Something went wrong while parsing the error message, so just
# display it as-is instead of trying to be clever.
logger.warning(error_message)
return
# Give an error message that references the line numbers in the
# original input file.
equation = str(self.reactions[reactions[0]])
lines = [self.reactions[i].line_number for i in reactions]
logger.warning('Undeclared duplicate reaction {}\nfound on lines {} and {} of '
'the kinetics input file.'.format(equation, lines[0], lines[1]))
def convert_mech(input_file, thermo_file=None, transport_file=None,
surface_file=None, phase_name='gas', extra_file=None,
out_name=None, single_intermediate_temperature=False, quiet=False,
permissive=None):
_, surface_names = Parser.convert_mech(
input_file, thermo_file, transport_file, surface_file, phase_name,
extra_file, out_name, single_intermediate_temperature, quiet, permissive)
return surface_names
def main(argv):
longOptions = ['input=', 'thermo=', 'transport=', 'surface=', 'name=',
'extra=', 'output=', 'permissive', 'help', 'debug',
'single-intermediate-temperature', 'quiet', 'no-validate', 'id=']
try:
optlist, args = getopt.getopt(argv, 'dh', longOptions)
options = dict()
for o,a in optlist:
options[o] = a
if args:
raise getopt.GetoptError('Unexpected command line option: ' +
repr(' '.join(args)))
except getopt.GetoptError as e:
logger.error('ck2yaml.py: Error parsing arguments:')
logger.error(e)
logger.error('Run "ck2yaml.py --help" to see usage help.')
sys.exit(1)
if not options or '-h' in options or '--help' in options:
logger.info(__doc__)
sys.exit(0)
input_file = options.get('--input')
thermo_file = options.get('--thermo')
single_intermediate_temperature = '--single-intermediate-temperature' in options
permissive = '--permissive' in options
quiet = '--quiet' in options
transport_file = options.get('--transport')
surface_file = options.get('--surface')
if '--id' in options:
phase_name = options.get('--id', 'gas')
logger.warning("\nFutureWarning: "
"Option '--id=...' will be replaced by '--name=...'")
else:
phase_name = options.get('--name', 'gas')
if not input_file and not thermo_file:
logger.error('At least one of the arguments "--input=..." or "--thermo=..."'
' must be provided.\nRun "ck2yaml.py --help" to see usage help.')
sys.exit(1)
extra_file = options.get('--extra')
if '--output' in options:
out_name = options['--output']
if not out_name.endswith('.yaml') and not out_name.endswith('.yml'):
out_name += '.yaml'
elif input_file:
out_name = os.path.splitext(input_file)[0] + '.yaml'
else:
out_name = os.path.splitext(thermo_file)[0] + '.yaml'
parser, surfaces = Parser.convert_mech(input_file, thermo_file,
transport_file, surface_file, phase_name, extra_file, out_name,
single_intermediate_temperature, quiet, permissive)
# Do full validation by importing the resulting mechanism
if not input_file:
# Can't validate input files that don't define a phase
return
if '--no-validate' in options:
return
try:
from cantera import Solution, Interface
except ImportError:
logger.warning('WARNING: Unable to import Cantera Python module. '
'Output mechanism has not been validated')
sys.exit(0)
try:
logger.info('Validating mechanism...')
gas = Solution(out_name)
for surf_name in surfaces:
phase = Interface(out_name, surf_name, [gas])
logger.info('PASSED')
except RuntimeError as e:
logger.info('FAILED')
msg = str(e)
if 'Undeclared duplicate reactions' in msg:
parser.show_duplicate_reactions(msg)
else:
logger.warning(e)
sys.exit(1)
def script_entry_point():
main(sys.argv[1:])
if __name__ == '__main__':
    main(sys.argv[1:])
/Booktype-1.5.tar.gz/Booktype-1.5/lib/booki/site_static/js/jquery.bubblepopup.v2.3.1.min.js | eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('(6(a){a.1j.3C=6(){4 c=X;a(W).1g(6(d,e){4 b=a(e).1K("1U");5(b!=X&&7 b=="1a"&&!a.19(b)&&!a.18(b)&&b.3!=X&&7 b.3=="1a"&&!a.19(b.3)&&!a.18(b.3)&&7 b.3.1v!="1w"){c=b.3.1v?U:Q}12 Q});12 c};a.1j.45=6(){4 b=X;a(W).1g(6(e,f){4 d=a(f).1K("1U");5(d!=X&&7 d=="1a"&&!a.19(d)&&!a.18(d)&&d.3!=X&&7 d.3=="1a"&&!a.19(d.3)&&!a.18(d.3)&&7 d.3.1V!="1w"&&d.3.1V!=X){b=c(d.3.1V)}12 Q});6 c(d){12 2z 2Q(d*2R)}12 b};a.1j.4d=6(){4 b=X;a(W).1g(6(e,f){4 d=a(f).1K("1U");5(d!=X&&7 d=="1a"&&!a.19(d)&&!a.18(d)&&d.3!=X&&7 d.3=="1a"&&!a.19(d.3)&&!a.18(d.3)&&7 d.3.1W!="1w"&&d.3.1W!=X){b=c(d.3.1W)}12 Q});6 c(d){12 2z 2Q(d*2R)}12 b};a.1j.3G=6(){4 b=X;a(W).1g(6(e,f){4 d=a(f).1K("1U");5(d!=X&&7 d=="1a"&&!a.19(d)&&!a.18(d)&&d.3!=X&&7 d.3=="1a"&&!a.19(d.3)&&!a.18(d.3)&&7 d.3.1L!="1w"&&d.3.1L!=X){b=c(d.3.1L)}12 Q});6 c(d){12 2z 2Q(d*2R)}12 b};a.1j.3H=6(){4 b=X;a(W).1g(6(d,e){4 c=a(e).1K("1U");5(c!=X&&7 c=="1a"&&!a.19(c)&&!a.18(c)&&c.3!=X&&7 c.3=="1a"&&!a.19(c.3)&&!a.18(c.3)&&7 c.3.T!="1w"){b=a("#"+c.3.T).Z>0?a("#"+c.3.T).2p():X}12 Q});12 b};a.1j.3D=6(){4 b=X;a(W).1g(6(d,e){4 c=a(e).1K("1U");5(c!=X&&7 c=="1a"&&!a.19(c)&&!a.18(c)&&c.3!=X&&7 c.3=="1a"&&!a.19(c.3)&&!a.18(c.3)&&7 c.3.T!="1w"){b=c.3.T}12 Q});12 b};a.1j.4h=6(){4 b=0;a(W).1g(6(d,e){4 c=a(e).1K("1U");5(c!=X&&7 c=="1a"&&!a.19(c)&&!a.18(c)&&c.3!=X&&7 c.3=="1a"&&!a.19(c.3)&&!a.18(c.3)&&7 c.3.T!="1w"){a(e).2h("33");a(e).2h("2S");a(e).2h("30");a(e).2h("2G");a(e).2h("2L");a(e).2h("2x");a(e).2h("2s");a(e).2h("28");a(e).1K("1U",{});5(a("#"+c.3.T).Z>0){a("#"+c.3.T).2H()}b++}});12 b};a.1j.3x=6(){4 c=Q;a(W).1g(6(d,e){4 b=a(e).1K("1U");5(b!=X&&7 b=="1a"&&!a.19(b)&&!a.18(b)&&b.3!=X&&7 b.3=="1a"&&!a.19(b.3)&&!a.18(b.3)&&7 b.3.T!="1w"){c=U}12 Q});12 c};a.1j.48=6(){4 b={};a(W).1g(6(c,d){b=a(d).1K("1U");5(b!=X&&7 b=="1a"&&!a.19(b)&&!a.18(b)&&b.3!=X&&7 b.3=="1a"&&!a.19(b.3)&&!a.18(b.3)){44 b.3}1d{b=X}12 Q});5(a.18(b)){b=X}12 b};a.1j.4e=6(b,c){a(W).1g(6(d,e){5(7 c!="1I"){c=U}a(e).1e("2S",[b,c])})};a.1j.4c=6(b){a(W).1g(6(c,d){a(d).1e("30",[b])})};a.1j.47=6(b,c){a(W).1g(6(d,e){a(e).1e("2s",[b,c,U]);12 Q})};a.1j.46=6(b,c){a(W).1g(6(d,e){a(e).1e("2s",[b,c,U])})};a.1j.3X=6(){a(W).1g(6(b,c){a(c).1e("28",[U]);12 Q})};a.1j.3U=6(){a(W).1g(6(b,c){a(c).1e("28",[U])})};a.1j.3P=6(){a(W).1g(6(b,c){a(c).1e("2L");12 Q})};a.1j.3O=6(){a(W).1g(6(b,c){a(c).1e("2L")})};a.1j.3N=6(){a(W).1g(6(b,c){a(c).1e("2x");12 Q})};a.1j.3M=6(){a(W).1g(6(b,c){a(c).1e("2x")})};a.1j.3J=6(e){4 r={2J:W,2X:[],2Y:"1U",3w:["S","13","1b"],3n:["R","13","1c"],3j:\'<3i 1y="{1N} {3g}"{36} T="{37}"> <38{3b}> <3c> <2y> <14 1y="{1N}-S-R"{2m-2Z}>{2m-2O}</14> <14 1y="{1N}-S-13"{2m-3u}>{2m-20}</14> <14 1y="{1N}-S-1c"{2m-2U}>{2m-2P}</14> </2y> <2y> <14 1y="{1N}-13-R"{20-2Z}>{20-2O}</14> <14 1y="{1N}-1H"{31}>{2T}</14> <14 1y="{1N}-13-1c"{20-2U}>{20-2P}</14> </2y> <2y> <14 1y="{1N}-1b-R"{2l-2Z}>{2l-2O}</14> <14 1y="{1N}-1b-13"{2l-3u}>{2l-20}</14> <14 1y="{1N}-1b-1c"{2l-2U}>{2l-2P}</14> </2y> </3c> </38> 
</3i>\',3:{T:X,1L:X,1W:X,1V:X,1v:Q,1J:Q,1r:Q,1A:Q,1Y:Q,1B:Q,25:{}},15:"S",3v:["R","S","1c","1b"],11:"27",35:["R","27","1c","S","13","1b"],2K:["R","27","1c"],32:["S","13","1b"],1n:"3Y",1p:X,1o:X,1x:{},1u:{},1H:X,1O:{},V:{11:"27",1F:Q},1i:U,2q:U,22:Q,2k:U,23:"2E",3t:["2E","2V"],26:"2V",3o:["2E","2V"],1M:3h,1P:3h,29:0,2a:0,Y:"3e",21:"3F",2b:"3e-4f/",1h:{2A:"4a",1E:"43"},1T:6(){},1S:6(){},1m:[]};h(e);6 g(v){4 w={3:{},1p:r.1p,1o:r.1o,1x:r.1x,1u:r.1u,15:r.15,11:r.11,1n:r.1n,1M:r.1M,1P:r.1P,29:r.29,2a:r.2a,23:r.23,26:r.26,V:r.V,1H:r.1H,1O:r.1O,Y:r.Y,21:r.21,2b:r.2b,1h:r.1h,1i:r.1i,2k:r.2k,2q:r.2q,22:r.22,1T:r.1T,1S:r.1S,1m:r.1m};4 t=a.3E(Q,w,(7 v=="1a"&&!a.19(v)&&!a.18(v)&&v!=X?v:{}));t.3.T=r.3.T;t.3.1L=r.3.1L;t.3.1W=r.3.1W;t.3.1V=r.3.1V;t.3.1v=r.3.1v;t.3.1J=r.3.1J;t.3.1r=r.3.1r;t.3.1A=r.3.1A;t.3.1Y=r.3.1Y;t.3.1B=r.3.1B;t.3.25=r.3.25;t.1p=(7 t.1p=="1Q"||7 t.1p=="2c")&&10(t.1p)>0?10(t.1p):r.1p;t.1o=(7 t.1o=="1Q"||7 t.1o=="2c")&&10(t.1o)>0?10(t.1o):r.1o;t.1x=t.1x!=X&&7 t.1x=="1a"&&!a.19(t.1x)&&!a.18(t.1x)?t.1x:r.1x;t.1u=t.1u!=X&&7 t.1u=="1a"&&!a.19(t.1u)&&!a.18(t.1u)?t.1u:r.1u;t.15=7 t.15=="1Q"&&o(t.15.1X(),r.3v)?t.15.1X():r.15;t.11=7 t.11=="1Q"&&o(t.11.1X(),r.35)?t.11.1X():r.11;t.1n=(7 t.1n=="1Q"||7 t.1n=="2c")&&10(t.1n)>=0?10(t.1n):r.1n;t.1M=7 t.1M=="2c"&&10(t.1M)>0?10(t.1M):r.1M;t.1P=7 t.1P=="2c"&&10(t.1P)>0?10(t.1P):r.1P;t.29=7 t.29=="2c"&&t.29>=0?t.29:r.29;t.2a=7 t.2a=="2c"&&t.2a>=0?t.2a:r.2a;t.23=7 t.23=="1Q"&&o(t.23.1X(),r.3t)?t.23.1X():r.23;t.26=7 t.26=="1Q"&&o(t.26.1X(),r.3o)?t.26.1X():r.26;t.V=t.V!=X&&7 t.V=="1a"&&!a.19(t.V)&&!a.18(t.V)?t.V:r.V;t.V.11=7 t.V.11!="1w"?t.V.11:r.V.11;t.V.1F=7 t.V.1F!="1w"?t.V.1F:r.V.1F;t.1H=7 t.1H=="1Q"&&t.1H.Z>0?t.1H:r.1H;t.1O=t.1O!=X&&7 t.1O=="1a"&&!a.19(t.1O)&&!a.18(t.1O)?t.1O:r.1O;t.Y=j(7 t.Y=="1Q"&&t.Y.Z>0?t.Y:r.Y);t.21=7 t.21=="1Q"&&t.21.Z>0?a.3d(t.21):r.21;t.2b=7 t.2b=="1Q"&&t.2b.Z>0?a.3d(t.2b):r.2b;t.1h=t.1h!=X&&7 t.1h=="1a"&&!a.19(t.1h)&&!a.18(t.1h)&&(7 10(t.1h.2A)=="2c"&&7 10(t.1h.1E)=="2c")?t.1h:r.1h;t.1i=7 t.1i=="1I"&&t.1i==U?U:Q;t.2k=7 t.2k=="1I"&&t.2k==U?U:Q;t.2q=7 t.2q=="1I"&&t.2q==U?U:Q;t.22=7 t.22=="1I"&&t.22==U?U:Q;t.1T=7 t.1T=="6"?t.1T:r.1T;t.1S=7 t.1S=="6"?t.1S:r.1S;t.1m=a.19(t.1m)?t.1m:r.1m;5(t.15=="R"||t.15=="1c"){t.11=o(t.11,r.32)?t.11:"13"}1d{t.11=o(t.11,r.2K)?t.11:"27"}1R(4 u 2r t.V){2g(u){17"11":t.V.11=7 t.V.11=="1Q"&&o(t.V.11.1X(),r.35)?t.V.11.1X():r.V.11;5(t.15=="R"||t.15=="1c"){t.V.11=o(t.V.11,r.32)?t.V.11:"13"}1d{t.V.11=o(t.V.11,r.2K)?t.V.11:"27"}16;17"1F":t.V.1F=t.V.1F==U?U:Q;16}}12 t}6 l(t){5(t==0){12 0}5(t>0){12-(1s.1t(t))}1d{12 1s.1t(t)}}6 o(v,w){4 t=Q;1R(4 u 2r w){5(w[u]==v){t=U;16}}12 t}6 k(t){5(2W.3q){1R(4 v=t.Z-1;v>=0;v--){4 u=2W.3q("1G");u.2o=t[v];5(a.4g(t[v],r.2X)>-1){r.2X.3s(t[v])}}}}6 b(t){5(t.1m&&t.1m.Z>0){1R(4 u=0;u<t.1m.Z;u++){4 v=(t.1m[u].3m(0)!="#"?"#"+t.1m[u]:t.1m[u]);a(v).1k({34:"1F"})}}}6 s(u){5(u.1m&&u.1m.Z>0){1R(4 v=0;v<u.1m.Z;v++){4 x=(u.1m[v].3m(0)!="#"?"#"+u.1m[v]:u.1m[v]);a(x).1k({34:"3f"});4 w=a(x).Z;1R(4 t=0;t<w.Z;t++){a(w[t]).1k({34:"3f"})}}}}6 m(u){4 w=u.2b;4 t=u.21;4 v=(w.2I(w.Z-1)=="/"||w.2I(w.Z-1)=="\\\\")?w.2I(0,w.Z-1)+"/"+t+"/":w+"/"+t+"/";12 v+(u.1i==U?(a.1l.1D?"2e/":""):"2e/")}6 j(t){4 u=t.2I(0,1)=="."?t.2I(1,t.Z):t;12 u}6 q(u){5(a("#"+u.3.T).Z>0){4 t="1b-13";2g(u.15){17"R":t="13-1c";16;17"S":t="1b-13";16;17"1c":t="13-R";16;17"1b":t="S-13";16}5(o(u.V.11,r.2K)){a("#"+u.3.T).1f("14."+u.Y+"-"+t).1k("3a-11",u.V.11)}1d{a("#"+u.3.T).1f("14."+u.Y+"-"+t).1k("39-11",u.V.11)}}}6 p(v){4 H=r.3j;4 F=m(v);4 x="";4 G="";4 
u="";5(!v.V.1F){2g(v.15){17"R":G="1c";u="{20-2P}";16;17"S":G="1b";u="{2l-20}";16;17"1c":G="R";u="{20-2O}";16;17"1b":G="S";u="{2m-20}";16}x=\'<1G 2o="\'+F+"V-"+G+"."+(v.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+v.Y+\'-V" />\'}4 t=r.3w;4 z=r.3n;4 K,E,A,J;4 B="";4 y="";4 D=2z 3p();1R(E 2r t){A="";J="";1R(K 2r z){A=t[E]+"-"+z[K];A=A.42();J="{"+A+"40}";A="{"+A+"}";5(A==u){H=H.1z(A,x);B=""}1d{H=H.1z(A,"");B=""}5(t[E]+"-"+z[K]!="13-13"){y=F+t[E]+"-"+z[K]+"."+(v.1i==U?(a.1l.1D?"1C":"2n"):"1C");D.3s(y);H=H.1z(J,\' 2M="\'+B+"3L-3K:3I("+y+\');"\')}}}5(D.Z>0){k(D)}4 w="";5(v.1u!=X&&7 v.1u=="1a"&&!a.19(v.1u)&&!a.18(v.1u)){1R(4 C 2r v.1u){w+=C+":"+v.1u[C]+";"}}w+=(v.1p!=X||v.1o!=X)?(v.1p!=X?"1p:"+v.1p+"1Z;":"")+(v.1o!=X?"1o:"+v.1o+"1Z;":""):"";H=w.Z>0?H.1z("{3b}",\' 2M="\'+w+\'"\'):H.1z("{3b}","");4 I="";5(v.1x!=X&&7 v.1x=="1a"&&!a.19(v.1x)&&!a.18(v.1x)){1R(4 C 2r v.1x){I+=C+":"+v.1x[C]+";"}}H=I.Z>0?H.1z("{36}",\' 2M="\'+I+\'"\'):H.1z("{36}","");H=H.1z("{3g}",v.Y+"-"+v.21);H=v.3.T!=X?H.1z("{37}",v.3.T):H.1z("{37}","");3y(H.3z("{1N}")>-1){H=H.1z("{1N}",v.Y)}H=v.1H!=X?H.1z("{2T}",v.1H):H.1z("{2T}","");J="";1R(4 C 2r v.1O){J+=C+":"+v.1O[C]+";"}H=J.Z>0?H.1z("{31}",\' 2M="\'+J+\'"\'):H.1z("{31}","");12 H}6 f(){12 1s.3A(2z 2Q().3B()/2R)}6 c(E,N,x){4 O=x.15;4 K=x.11;4 z=x.1n;4 F=x.1h;4 I=2z 3p();4 u=N.2F();4 t=10(u.S);4 y=10(u.R);4 P=10(N.2v(Q));4 L=10(N.2u(Q));4 v=10(E.2v(Q));4 M=10(E.2u(Q));F.1E=1s.1t(10(F.1E));F.2A=1s.1t(10(F.2A));4 w=l(F.1E);4 J=l(F.1E);4 A=l(F.2A);4 H=m(x);2g(K){17"R":I.S=O=="S"?t-M-z+l(w):t+L+z+w;I.R=y+A;16;17"27":4 D=1s.1t(v-P)/2;I.S=O=="S"?t-M-z+l(w):t+L+z+w;I.R=v>=P?y-D:y+D;16;17"1c":4 D=1s.1t(v-P);I.S=O=="S"?t-M-z+l(w):t+L+z+w;I.R=v>=P?y-D+l(A):y+D+l(A);16;17"S":I.S=t+A;I.R=O=="R"?y-v-z+l(J):y+P+z+J;16;17"13":4 D=1s.1t(M-L)/2;I.S=M>=L?t-D:t+D;I.R=O=="R"?y-v-z+l(J):y+P+z+J;16;17"1b":4 D=1s.1t(M-L);I.S=M>=L?t-D+l(A):t+D+l(A);I.R=O=="R"?y-v-z+l(J):y+P+z+J;16}I.15=O;5(a("#"+x.3.T).Z>0&&a("#"+x.3.T).1f("1G."+x.Y+"-V").Z>0){a("#"+x.3.T).1f("1G."+x.Y+"-V").2H();4 G="1b";4 C="1b-13";2g(O){17"R":G="1c";C="13-1c";16;17"S":G="1b";C="1b-13";16;17"1c":G="R";C="13-R";16;17"1b":G="S";C="S-13";16}a("#"+x.3.T).1f("14."+x.Y+"-"+C).2D();a("#"+x.3.T).1f("14."+x.Y+"-"+C).2p(\'<1G 2o="\'+H+"V-"+G+"."+(x.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+x.Y+\'-V" />\');q(x)}5(x.2q==U){5(I.S<a(1q).2i()||I.S+M>a(1q).2i()+a(1q).1o()){5(a("#"+x.3.T).Z>0&&a("#"+x.3.T).1f("1G."+x.Y+"-V").Z>0){a("#"+x.3.T).1f("1G."+x.Y+"-V").2H()}4 B="";5(I.S<a(1q).2i()){I.15="1b";I.S=t+L+z+w;5(a("#"+x.3.T).Z>0&&!x.V.1F){a("#"+x.3.T).1f("14."+x.Y+"-S-13").2D();a("#"+x.3.T).1f("14."+x.Y+"-S-13").2p(\'<1G 2o="\'+H+"V-S."+(x.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+x.Y+\'-V" />\');B="S-13"}}1d{5(I.S+M>a(1q).2i()+a(1q).1o()){I.15="S";I.S=t-M-z+l(w);5(a("#"+x.3.T).Z>0&&!x.V.1F){a("#"+x.3.T).1f("14."+x.Y+"-1b-13").2D();a("#"+x.3.T).1f("14."+x.Y+"-1b-13").2p(\'<1G 2o="\'+H+"V-1b."+(x.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+x.Y+\'-V" />\');B="1b-13"}}}5(I.R<0){I.R=0;5(B.Z>0){a("#"+x.3.T).1f("14."+x.Y+"-"+B).1k("3a-11","27")}}1d{5(I.R+v>a(1q).1p()){I.R=a(1q).1p()-v;5(B.Z>0){a("#"+x.3.T).1f("14."+x.Y+"-"+B).1k("3a-11","27")}}}}1d{5(I.R<0||I.R+v>a(1q).1p()){5(a("#"+x.3.T).Z>0&&a("#"+x.3.T).1f("1G."+x.Y+"-V").Z>0){a("#"+x.3.T).1f("1G."+x.Y+"-V").2H()}4 B="";5(I.R<0){I.15="1c";I.R=y+P+z+J;5(a("#"+x.3.T).Z>0&&!x.V.1F){a("#"+x.3.T).1f("14."+x.Y+"-13-R").2D();a("#"+x.3.T).1f("14."+x.Y+"-13-R").2p(\'<1G 2o="\'+H+"V-R."+(x.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+x.Y+\'-V" 
/>\');B="13-R"}}1d{5(I.R+v>a(1q).1p()){I.15="R";I.R=y-v-z+l(J);5(a("#"+x.3.T).Z>0&&!x.V.1F){a("#"+x.3.T).1f("14."+x.Y+"-13-1c").2D();a("#"+x.3.T).1f("14."+x.Y+"-13-1c").2p(\'<1G 2o="\'+H+"V-1c."+(x.1i==U?(a.1l.1D?"1C":"2n"):"1C")+\'" 2w="" 1y="\'+x.Y+\'-V" />\');B="13-1c"}}}5(I.S<a(1q).2i()){I.S=a(1q).2i();5(B.Z>0){a("#"+x.3.T).1f("14."+x.Y+"-"+B).1k("39-11","13")}}1d{5(I.S+M>a(1q).2i()+a(1q).1o()){I.S=(a(1q).2i()+a(1q).1o())-M;5(B.Z>0){a("#"+x.3.T).1f("14."+x.Y+"-"+B).1k("39-11","13")}}}}}}12 I}6 d(u,t){a(u).1K(r.2Y,t)}6 n(t){12 a(t).1K(r.2Y)}6 i(t){4 u=t!=X&&7 t=="1a"&&!a.19(t)&&!a.18(t)?U:Q;12 u}6 h(t){a(1q).3Q(6(){a(r.2J).1g(6(u,v){a(v).1e("2G")})});a(2W).3R(6(u){a(r.2J).1g(6(v,w){a(w).1e("33",[u.3S,u.3T])})});a(r.2J).1g(6(v,w){4 u=g(t);u.3.1L=f();u.3.T=u.Y+"-"+u.3.1L+"-"+v;d(w,u);a(w).2f("33",6(y,C,B){4 N=n(W);5(i(N)&&i(N.3)&&7 C!="1w"&&7 B!="1w"){5(N.2k){4 E=a(W);4 z=E.2F();4 L=10(z.S);4 H=10(z.R);4 F=10(E.2v(Q));4 K=10(E.2u(Q));4 J=Q;5(H<=C&&C<=F+H&&L<=B&&B<=K+L){J=U}1d{J=Q}5(J&&!N.3.1Y){N.3.1Y=U;d(W,N);5(N.23=="2E"){a(W).1e("2s")}1d{5(N.22&&a("#"+N.3.T).Z>0){4 x=a("#"+N.3.T);4 A=x.2F();4 D=10(A.S);4 I=10(A.R);4 G=10(x.2v(Q));4 M=10(x.2u(Q));5(I<=C&&C<=G+I&&D<=B&&B<=M+D){}1d{a(W).1e("28")}}1d{a(W).1e("28")}}}1d{5(!J&&N.3.1Y){N.3.1Y=Q;d(W,N);5(N.26=="2E"){a(W).1e("2s")}1d{5(N.22&&a("#"+N.3.T).Z>0){4 x=a("#"+N.3.T);4 A=x.2F();4 D=10(A.S);4 I=10(A.R);4 G=10(x.2v(Q));4 M=10(x.2u(Q));5(I<=C&&C<=G+I&&D<=B&&B<=M+D){}1d{a(W).1e("28")}}1d{a(W).1e("28")}}}1d{5(!J&&!N.3.1Y){5(N.22&&a("#"+N.3.T).Z>0&&!N.3.1r){4 x=a("#"+N.3.T);4 A=x.2F();4 D=10(A.S);4 I=10(A.R);4 G=10(x.2v(Q));4 M=10(x.2u(Q));5(I<=C&&C<=G+I&&D<=B&&B<=M+D){}1d{a(W).1e("28")}}}}}}}});a(w).2f("2S",6(A,x,z){4 y=n(W);5(i(y)&&i(y.3)&&7 x!="1w"){y.3.1W=f();5(7 z=="1I"&&z==U){y.1H=x}d(W,y);5(a("#"+y.3.T).Z>0){a("#"+y.3.T).1f("14."+y.Y+"-1H").2p(x);5(y.3.1A){a(W).1e("2G",[Q])}1d{a(W).1e("2G",[U])}}}});a(w).2f("30",6(A,z){4 x=n(W);5(i(x)&&i(x.3)){4 y=x;x=g(z);x.3.T=y.3.T;x.3.1L=y.3.1L;x.3.1W=f();x.3.1V=y.3.1V;x.3.1v=y.3.1v;x.3.1J=y.3.1J;x.3.25={};d(W,x)}});a(w).2f("2G",6(A,y){4 z=n(W);5(i(z)&&i(z.3)&&a("#"+z.3.T).Z>0&&z.3.1v==U){4 x=a("#"+z.3.T);4 C=c(x,a(W),z);4 B=2;5(7 y=="1I"&&y==U){x.1k({S:C.S,R:C.R})}1d{2g(z.15){17"R":x.1k({S:C.S,R:(C.15!=z.15?C.R-(1s.1t(z.1h.1E)*B):C.R+(1s.1t(z.1h.1E)*B))});16;17"S":x.1k({S:(C.15!=z.15?C.S-(1s.1t(z.1h.1E)*B):C.S+(1s.1t(z.1h.1E)*B)),R:C.R});16;17"1c":x.1k({S:C.S,R:(C.15!=z.15?C.R+(1s.1t(z.1h.1E)*B):C.R-(1s.1t(z.1h.1E)*B))});16;17"1b":x.1k({S:(C.15!=z.15?C.S+(1s.1t(z.1h.1E)*B):C.S-(1s.1t(z.1h.1E)*B)),R:C.R});16}}}});a(w).2f("2L",6(){4 x=n(W);5(i(x)&&i(x.3)){x.3.1J=U;d(W,x)}});a(w).2f("2x",6(){4 x=n(W);5(i(x)&&i(x.3)){x.3.1J=Q;d(W,x)}});a(w).2f("2s",6(x,A,D,G){4 H=n(W);5((7 G=="1I"&&G==U&&(i(H)&&i(H.3)))||(7 G=="1w"&&(i(H)&&i(H.3)&&!H.3.1J&&!H.3.1v))){5(7 G=="1I"&&G==U){a(W).1e("2x")}H.3.1v=U;H.3.1J=Q;H.3.1r=Q;H.3.1A=Q;5(i(H.3.25)){H=H.3.25}1d{H.3.25={}}5(i(A)){4 C=H;4 F=f();H=g(A);H.3.T=C.3.T;H.3.1L=C.3.1L;H.3.1W=F;H.3.1V=F;H.3.1v=U;H.3.1J=Q;H.3.1r=Q;H.3.1A=Q;H.3.1Y=C.3.1Y;H.3.1B=C.3.1B;H.3.25={};5(7 D=="1I"&&D==Q){C.3.1W=F;C.3.1V=F;H.3.25=C}}d(W,H);b(H);5(a("#"+H.3.T).Z>0){a("#"+H.3.T).2H()}4 y={};4 B=p(H);y=a(B);y.3V("3W");y=a("#"+H.3.T);y.1k({24:0,S:"3r",R:"3r",15:"3Z",2C:"41"});5(H.1i==U){5(a.1l.1D&&10(a.1l.2t)<9){a("#"+H.3.T+" 38").2B(H.Y+"-2e")}}q(H);4 E=c(y,a(W),H);y.1k({S:E.S,R:E.R});5(E.15==H.15){H.3.1B=Q}1d{H.3.1B=U}d(W,H);4 
z=3l(6(){H.3.1r=U;d(w,H);y.3k();2g(H.15){17"R":y.2d({24:1,R:(H.3.1B?"-=":"+=")+H.1n+"1Z"},H.1M,"2j",6(){H.3.1r=Q;H.3.1A=U;d(w,H);5(H.1i==U){5(a.1l.1D&&10(a.1l.2t)>8){y.2B(H.Y+"-2e")}}H.1T()});16;17"S":y.2d({24:1,S:(H.3.1B?"-=":"+=")+H.1n+"1Z"},H.1M,"2j",6(){H.3.1r=Q;H.3.1A=U;d(w,H);5(H.1i==U){5(a.1l.1D&&10(a.1l.2t)>8){y.2B(H.Y+"-2e")}}H.1T()});16;17"1c":y.2d({24:1,R:(H.3.1B?"+=":"-=")+H.1n+"1Z"},H.1M,"2j",6(){H.3.1r=Q;H.3.1A=U;d(w,H);5(H.1i==U){5(a.1l.1D&&10(a.1l.2t)>8){y.2B(H.Y+"-2e")}}H.1T()});16;17"1b":y.2d({24:1,S:(H.3.1B?"+=":"-=")+H.1n+"1Z"},H.1M,"2j",6(){H.3.1r=Q;H.3.1A=U;d(w,H);5(H.1i==U){5(a.1l.1D&&10(a.1l.2t)>8){y.2B(H.Y+"-2e")}}H.1T()});16}},H.29)}});a(w).2f("28",6(B,x){4 A=n(W);5((7 x=="1I"&&x==U&&(i(A)&&i(A.3)&&a("#"+A.3.T).Z>0))||(7 x=="1w"&&(i(A)&&i(A.3)&&a("#"+A.3.T).Z>0&&!A.3.1J&&A.3.1v))){5(7 x=="1I"&&x==U){a(W).1e("2x")}A.3.1r=Q;A.3.1A=Q;d(W,A);4 y=a("#"+A.3.T);4 z=7 x=="1w"?A.2a:0;4 C=3l(6(){A.3.1r=U;d(w,A);y.3k();5(A.1i==U){5(a.1l.1D&&10(a.1l.2t)>8){y.49(A.Y+"-2e")}}2g(A.15){17"R":y.2d({24:0,R:(A.3.1B?"+=":"-=")+A.1n+"1Z"},A.1P,"2j",6(){A.3.1v=Q;A.3.1r=Q;A.3.1A=U;d(w,A);y.1k("2C","2N");A.1S()});16;17"S":y.2d({24:0,S:(A.3.1B?"+=":"-=")+A.1n+"1Z"},A.1P,"2j",6(){A.3.1v=Q;A.3.1r=Q;A.3.1A=U;d(w,A);y.1k("2C","2N");A.1S()});16;17"1c":y.2d({24:0,R:(A.3.1B?"-=":"+=")+A.1n+"1Z"},A.1P,"2j",6(){A.3.1v=Q;A.3.1r=Q;A.3.1A=U;d(w,A);y.1k("2C","2N");A.1S()});16;17"1b":y.2d({24:0,S:(A.3.1B?"-=":"+=")+A.1n+"1Z"},A.1P,"2j",6(){A.3.1v=Q;A.3.1r=Q;A.3.1A=U;d(w,A);y.1k("2C","2N");A.1S()});16}},z);A.3.1V=f();A.3.1J=Q;d(W,A);s(A)}})})}12 W}})(4b);',62,266,'|||privateVars|var|if|function|typeof|||||||||||||||||||||||||||||||||||||||||||||false|left|top|id|true|tail|this|null|baseClass|length|parseInt|align|return|middle|td|position|break|case|isEmptyObject|isArray|object|bottom|right|else|trigger|find|each|themeMargins|dropShadow|fn|css|browser|hideElementId|distance|height|width|window|is_animating|Math|abs|tableStyle|is_open|undefined|divStyle|class|replace|is_animation_complete|is_position_changed|gif|msie|difference|hidden|img|innerHtml|boolean|is_freezed|data|creation_datetime|openingSpeed|BASE_CLASS|innerHtmlStyle|closingSpeed|string|for|afterHidden|afterShown|private_jquerybubblepopup_options|last_display_datetime|last_modified_datetime|toLowerCase|is_mouse_over|px|MIDDLE|themeName|selectable|mouseOver|opacity|last_options|mouseOut|center|hidebubblepopup|openingDelay|closingDelay|themePath|number|animate|ie|bind|switch|unbind|scrollTop|swing|manageMouseEvents|BOTTOM|TOP|png|src|html|alwaysVisible|in|showbubblepopup|version|outerHeight|outerWidth|alt|unfreezebubblepopup|tr|new|total|addClass|display|empty|show|offset|positionbubblepopup|remove|substring|me|alignHorizontalValues|freezebubblepopup|style|none|LEFT|RIGHT|Date|1000|setbubblepopupinnerhtml|INNERHTML|RIGHT_STYLE|hide|document|cache|options_key|LEFT_STYLE|setbubblepopupoptions|INNERHTML_STYLE|alignVerticalValues|managebubblepopup|visibility|alignValues|DIV_STYLE|DIV_ID|table|vertical|text|TABLE_STYLE|tbody|trim|jquerybubblepopup|visible|TEMPLATE_CLASS|250|div|model_markup|stop|setTimeout|charAt|model_td|mouseOutValues|Array|createElement|0px|push|mouseOverValues|MIDDLE_STYLE|positionValues|model_tr|HasBubblePopup|while|indexOf|round|getTime|IsBubblePopupOpen|GetBubblePopupID|extend|azure|GetBubblePopupCreationDateTime|GetBubblePopupMarkup|url|CreateBubblePopup|image|background|UnfreezeAllBubblePopups|UnfreezeBubblePopup|FreezeAllBubblePopups|FreezeBubblePopup|resize|mousemove|pageX|pageY|HideAllBubblePopups|appendTo|body|HideBubbleP
opup|20px|absolute|_STYLE|block|toUpperCase|10px|delete|GetBubblePopupLastDisplayDateTime|ShowAllBubblePopups|ShowBubblePopup|GetBubblePopupOptions|removeClass|13px|jQuery|SetBubblePopupOptions|GetBubblePopupLastModifiedDateTime|SetBubblePopupInnerHtml|theme|inArray|RemoveBubblePopup'.split('|'),0,{})) | PypiClean |
/MarkdownSubscript-2.1.1.tar.gz/MarkdownSubscript-2.1.1/docs/installation.rst | .. highlight:: console
============
Installation
============
Stable release
--------------
The easiest way to install Markdown Subscript is to use `pip`_. ::
$ python -m pip install MarkdownSubscript
This will install the latest stable version. If you need an older
version, you may pin or limit the requirements. ::
$ python -m pip install 'MarkdownSubscript==2.1.0'
$ python -m pip install 'MarkdownSubscript>=2.0.0,<3'
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io/en/stable/
.. _Python installation guide: https://docs.python-guide.org/starting/installation/
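
To check that the extension is importable after installation, a quick one-liner
can be run (a sanity check only; the module name ``mdx_subscript`` is assumed
here -- adjust it if your release registers a different name). ::

    $ python -c "import markdown; print(markdown.markdown('H~2~O', extensions=['mdx_subscript']))"

If the extension is in place, the printed HTML should contain a ``<sub>`` element.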
From source
------------
The source files for Markdown Subscript can be downloaded from the
`Github repo`_.
You may use pip to install the latest version: ::
$ python -m pip install git+git://github.com/jambonrose/markdown_subscript_extension.git
Alternatively, you can clone the public repository: ::
$ git clone git://github.com/jambonrose/markdown_subscript_extension
Or download the `tarball`_: ::
$ curl -OL https://github.com/jambonrose/markdown_subscript_extension/tarball/development
Once you have a copy of the source, you can install it with: ::
$ python setup.py install
.. _Github repo: https://github.com/jambonrose/markdown_subscript_extension
.. _tarball: https://github.com/jambonrose/markdown_subscript_extension/tarball/development
/DynIP-0.1e.tar.gz/DynIP-0.1e/dynip/server.py | """
Copyright (c) 2011, R. Kristoffer Hardy
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import socket
import json
import logging
import sys
import datetime
import traceback
import os
import argparse
import ConfigParser
import return_codes as rc
logging.basicConfig()
log = logging.getLogger(__name__)
log.setLevel(logging.WARNING)
CONFIG_SECTION = "DynIP:Server"
# DEFAULT_SERVER_IP
# IP Address that the server will listen on.
# "*" means that dynip should self-discover the ip address
DEFAULT_SERVER_IP = "*"
# DEFAULT_SERVER_PORT
# Port number that the server will listen on.
DEFAULT_SERVER_PORT = 28630
# DEFAULT_CLIENT_LOG_PATH
# Path (absolute path preferred) to the JSON file that will
# serve as the client log.
DEFAULT_CLIENT_LOG_PATH = "dynip.json"
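# Example configuration file (illustrative only; the section name must match
# CONFIG_SECTION above and the option names mirror the defaults above --
# the file path shown is just a placeholder):
#
#   [DynIP:Server]
#   server_ip = *
#   server_port = 28630
#   client_log_path = /var/lib/dynip/dynip.json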
# Add'l configuration (shouldn't have to be edited)
DATA_SIZE_MAX = 256
# Prepare argparser
argparser = argparse.ArgumentParser(description="")
argparser.add_argument('-v', '--verbose', help="Enable verbose (INFO-level) logging",
action='store_const',
default=logging.WARNING, const=logging.INFO)
argparser.add_argument('--debug', help="Enable debug (DEBUG-level) logging",
action='store_const',
default=logging.WARNING, const=logging.DEBUG)
argparser.add_argument('config', help='Configuration .conf file',
type=str, nargs=1)
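# Typical invocation (illustrative; the configuration path is an example):
#   python server.py --verbose /etc/dynip/server.conf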
def main():
"""
Listen for UDP packets and save the remote IP address and data
    to the file specified by ``client_log_path`` in the configuration file.
    Note: the client log is kept in memory and the whole JSON file is rewritten
    on each received packet, so this is not suitable for any significant load
    or for more than a trivial number of clients.
"""
# Parse the command line params
args = argparser.parse_args()
log.setLevel(min(args.verbose, args.debug))
try:
        # ConfigParser defaults must be a flat option -> string-value mapping
        # (they act as fallbacks for config.get* calls), not a dict nested by section.
        config = ConfigParser.ConfigParser(
            {'server_ip': DEFAULT_SERVER_IP,
             'server_port': str(DEFAULT_SERVER_PORT),
             'client_log_path': DEFAULT_CLIENT_LOG_PATH
             })
config.read(args.config)
server_ip = config.get(CONFIG_SECTION, 'server_ip')
server_port = config.getint(CONFIG_SECTION, 'server_port')
client_log_path = config.get(CONFIG_SECTION, 'client_log_path')
except:
log.fatal("ERROR: Could not read configuration file {0}".format(args.config))
return rc.CANNOT_READ_CONFIG
log.info("Starting server...")
if os.path.exists(client_log_path) == False:
client_data = {}
else:
try:
log.info("Opening file at client_log_path: {0}".format(client_log_path))
client_log_fh = open(client_log_path, "r")
except:
log.fatal("ERROR: Could not open {0}".format(client_log_path))
return rc.CANNOT_OPEN_CLIENT_LOG_PATH
log.info("Opened client_log_path successfully".format(client_log_path))
try:
log.info("Importing json data from client_log_path")
client_data = json.load(client_log_fh)
if isinstance(client_data, dict) == False:
client_data = {}
except:
log.debug(traceback.format_exc())
log.info("Improper format of client_log_path file found. Starting from scratch.")
client_data = {}
log.debug(client_data)
client_log_fh.close()
log.info("Opening UDP socket")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
if server_ip == "*":
log.info("Discovering IP address")
server_ip = socket.gethostbyname(
socket.gethostname()
)
sock.bind((server_ip, server_port))
log.info("Listening on {0}:{1}".format(server_ip, server_port))
# Enter the listen loop
if listen_loop(sock, client_data, client_log_path) != True:
log.error("The listen_loop did not exit gracefully.")
# Shut down gracefully
log.info("Shutting down the server")
sock.close()
log.info("Server stopped")
return rc.OK
def listen_loop(sock, client_data, client_log_path):
"""
A blocking loop that listens for UDP packets, logs them, and then waits for the next one.
Exits when a KeyboardInterrupt is caught.
:param sock: The bound socket.socketobject
:type sock: socket.socketobject
:param client_data: The in-memory client data dict that is written out to the ``client_log_path`` on receipt of each packet.
:type client_data: dict
:param client_log_path: The filepath to the JSON-encoded client log file
:type client_log_path: str
"""
    client_log_fh = None
    try:
while True:
log.debug("Waiting for the next packet")
# Block while waiting for the next packet
data, addr = sock.recvfrom(1024)
now = datetime.datetime.now().isoformat(' ')
log.debug("{now} Received packet from {addr} | Data: {data}".format(
addr=addr,
data=data,
now=now))
# Add the data to the client_data dict
client_data[data[:DATA_SIZE_MAX]] = [
addr[0],
now
]
# Write out the changes to a clean file
log.info("Saving data")
client_log_fh = open(client_log_path, "w")
json.dump(client_data, client_log_fh)
client_log_fh.close()
except KeyboardInterrupt:
# Break out of loop and exit gracefully
log.info("Caught KeyboardInterrupt. Exiting gracefully")
# Close client_log_fh if it is open
if client_log_fh is not None and client_log_fh.closed is False:
client_log_fh.close()
return True
def usage():
"""Print usage information"""
argparser.print_help()
if __name__ == "__main__":
    sys.exit(main())
/Flask-APScheduler-1.12.4.tar.gz/Flask-APScheduler-1.12.4/flask_apscheduler/auth.py | import base64
from flask import request
from .utils import bytes_to_wsgi, wsgi_to_bytes
def get_authorization_header():
"""
Return request's 'Authorization:' header as
a two-tuple of (type, info).
"""
header = request.environ.get('HTTP_AUTHORIZATION')
if not header:
return None
header = wsgi_to_bytes(header)
try:
auth_type, auth_info = header.split(None, 1)
auth_type = auth_type.lower()
except ValueError:
return None
return auth_type, auth_info
class Authorization(dict):
"""
A class to hold the authorization data.
    :param str auth_type: The authorization type, e.g. basic, bearer.
"""
def __init__(self, auth_type, **kwargs):
super(Authorization, self).__init__(**kwargs)
self.auth_type = auth_type
class HTTPAuth(object):
"""
A base class from which all authentication classes should inherit.
"""
def get_authorization(self):
"""
Get the authorization header.
        :return Authorization: The authorization data or None if it is not present or invalid.
        """
        raise NotImplementedError()
def get_authenticate_header(self):
"""
Return the value of `WWW-Authenticate` header in a
`401 Unauthenticated` response.
"""
pass
class HTTPBasicAuth(HTTPAuth):
"""
HTTP Basic authentication.
"""
www_authenticate_realm = 'Authentication Required'
def get_authorization(self):
"""
        Get the username and password from the Basic authentication header.
        :return Authorization: The authorization data or None if it is not present or invalid.
"""
auth = get_authorization_header()
if not auth:
return None
auth_type, auth_info = auth
if auth_type != b'basic':
return None
try:
username, password = base64.b64decode(auth_info).split(b':', 1)
except Exception:
return None
return Authorization('basic', username=bytes_to_wsgi(username), password=bytes_to_wsgi(password))
def get_authenticate_header(self):
"""
Return the value of `WWW-Authenticate` header in a
`401 Unauthenticated` response.
"""
        return 'Basic realm="%s"' % self.www_authenticate_realm
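# Example usage (an illustrative sketch, not part of the original module): inside
# a Flask request context, HTTPBasicAuth pulls credentials from an incoming
# "Authorization: Basic ..." header. validate_credentials() stands in for an
# application-specific check and is hypothetical.
#
#     auth = HTTPBasicAuth()
#     credentials = auth.get_authorization()
#     if credentials is None:
#         # respond with 401, sending auth.get_authenticate_header() as the
#         # WWW-Authenticate header value
#         ...
#     else:
#         validate_credentials(credentials['username'], credentials['password'])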
/Dabo-0.9.16.tar.gz/Dabo-0.9.16/dabo/ui/uitk/dFormMixin.py | """ dFormMixin.py """
import dPemMixin as pm
from dabo.dLocalize import _
from dabo.lib.utils import ustr
import dabo.dEvents as dEvents
class dFormMixin(pm.dPemMixin):
def __init__(self, preClass, parent=None, properties=None, *args, **kwargs):
# if parent:
# style = wx.DEFAULT_FRAME_STYLE|wx.FRAME_FLOAT_ON_PARENT
# else:
# style = wx.DEFAULT_FRAME_STYLE
# kwargs["style"] = style
super(dFormMixin, self).__init__(preClass, parent, properties, *args, **kwargs)
self.debugText = ""
self.useOldDebugDialog = False
self.restoredSP = False
self._holdStatusText = ""
if self.Application is not None:
self.Application.uiForms.add(self)
# def OnActivate(self, event):
# if bool(event.GetActive()) == True and self.restoredSP == False:
# # Restore the saved size and position, which can't happen
# # in __init__ because we may not have our name yet.
# self.restoredSP = True
# self.restoreSizeAndPosition()
# event.Skip()
#
#
# def afterSetMenuBar(self):
# """ Subclasses can extend the menu bar here.
# """
# pass
#
#
# def onDebugDlg(self, evt):
# # Handy hook for getting info.
# dlg = wx.TextEntryDialog(self, "Command to Execute", "Debug", self.debugText)
# if dlg.ShowModal() == wx.ID_OK:
# self.debugText = dlg.GetValue()
# try:
# # Handy shortcuts for common references
# bo = self.getBizobj()
# exec(self.debugText)
# except:
# dabo.log.info(_("Could not execute: %s") % self.debugText)
# dlg.Destroy()
#
#
# def getMenu(self):
# """ Get the navigation menu for this form.
#
# Every form maintains an internal menu of actions appropriate to itself.
# For instance, a dForm with a primary bizobj will maintain a menu with
# 'requery', 'save', 'next', etc. choices.
#
# This function sets up the internal menu, which can optionally be
# inserted into the mainForm's menu bar during SetFocus.
# """
# menu = dMenu.dMenu()
# return menu
#
#
# def OnClose(self, event):
# if self.GetParent() == wx.GetApp().GetTopWindow():
# self.Application.uiForms.remove(self)
# self.saveSizeAndPosition()
# event.Skip()
#
# def OnSetFocus(self, event):
# event.Skip()
#
#
# def OnKillFocus(self, event):
# event.Skip()
#
#
# def restoreSizeAndPosition(self):
# """ Restore the saved window geometry for this form.
#
# Ask dApp for the last saved setting of height, width, left, and top,
# and set those properties on this form.
# """
# if self.Application:
# name = self.getAbsoluteName()
#
# left = self.Application.getUserSetting("%s.left" % name)
# top = self.Application.getUserSetting("%s.top" % name)
# width = self.Application.getUserSetting("%s.width" % name)
# height = self.Application.getUserSetting("%s.height" % name)
#
# if (type(left), type(top)) == (type(int()), type(int())):
# self.SetPosition((left,top))
# if (type(width), type(height)) == (type(int()), type(int())):
# self.SetSize((width,height))
#
#
# def saveSizeAndPosition(self):
# """ Save the current size and position of this form.
# """
# if self.Application:
# if self == wx.GetApp().GetTopWindow():
# for form in self.Application.uiForms:
# try:
# form.saveSizeAndPosition()
# except wx.PyDeadObjectError:
# pass
#
# name = self.getAbsoluteName()
#
# pos = self.GetPosition()
# size = self.GetSize()
#
# self.Application.setUserSetting("%s.left" % name, "I", pos[0])
# self.Application.setUserSetting("%s.top" % name, "I", pos[1])
# self.Application.setUserSetting("%s.width" % name, "I", size[0])
# self.Application.setUserSetting("%s.height" % name, "I", size[1])
#
#
# def setStatusText(self, *args):
# """ Set the text of the status bar.
#
# Call this instead of SetStatusText() and dabo will decide whether to
# send the text to the main frame or this frame. This matters with MDI
# versus non-MDI forms.
# """
# if isinstance(self, wx.MDIChildFrame):
# controllingFrame = self.Application.MainForm
# else:
# controllingFrame = self
# if controllingFrame.GetStatusBar():
# controllingFrame.SetStatusText(*args)
# controllingFrame.GetStatusBar().Update()
#
#
# def _appendToMenu(self, menu, caption, function, bitmap=wx.NullBitmap, menuId=-1):
# item = wx.MenuItem(menu, menuId, caption)
# item.SetBitmap(bitmap)
# menu.AppendItem(item)
#
# if isinstance(self, wx.MDIChildFrame):
# controllingFrame = self.Application.MainForm
# else:
# controllingFrame = self
#
# if wx.Platform == '__WXMAC__':
# # Trial and error reveals that this works on Mac, while calling
# # controllingFrame.Bind does not. I've posted an inquiry about
# # this to [email protected], but in the meantime we have
# # this platform-specific code to tide us over.
# menu.Bind(wx.EVT_MENU, function, item)
# else:
# controllingFrame.Bind(wx.EVT_MENU, function, item)
#
#
# def _appendToToolBar(self, toolBar, caption, bitmap, function, statusText=""):
# toolId = wx.NewId()
# toolBar.AddSimpleTool(toolId, bitmap, caption, statusText)
#
# if isinstance(self, wx.MDIChildFrame):
# controllingFrame = self.Application.MainForm
# else:
# controllingFrame = self
# wx.EVT_MENU(controllingFrame, toolId, function)
#
#
# # property get/set/del functions follow:
# def _getIcon(self):
# try:
# return self._Icon
# except AttributeError:
# return None
# def _setIcon(self, icon):
# self.SetIcon(icon)
# self._Icon = icon # wx doesn't provide GetIcon()
#
# def _getIconBundle(self):
# try:
# return self._Icons
# except:
# return None
# def _setIconBundle(self, icons):
# self.SetIcons(icons)
# self._Icons = icons # wx doesn't provide GetIcons()
#
# def _getBorderResizable(self):
# return self._hasWindowStyleFlag(wx.RESIZE_BORDER)
# def _setBorderResizable(self, value):
# self._delWindowStyleFlag(wx.RESIZE_BORDER)
# if value:
# self._addWindowStyleFlag(wx.RESIZE_BORDER)
#
# def _getShowMaxButton(self):
# return self._hasWindowStyleFlag(wx.MAXIMIZE_BOX)
# def _setShowMaxButton(self, value):
# self._delWindowStyleFlag(wx.MAXIMIZE_BOX)
# if value:
# self._addWindowStyleFlag(wx.MAXIMIZE_BOX)
#
# def _getShowMinButton(self):
# return self._hasWindowStyleFlag(wx.MINIMIZE_BOX)
# def _setShowMinButton(self, value):
# self._delWindowStyleFlag(wx.MINIMIZE_BOX)
# if value:
# self._addWindowStyleFlag(wx.MINIMIZE_BOX)
#
# def _getShowCloseButton(self):
# return self._hasWindowStyleFlag(wx.CLOSE_BOX)
# def _setShowCloseButton(self, value):
# self._delWindowStyleFlag(wx.CLOSE_BOX)
# if value:
# self._addWindowStyleFlag(wx.CLOSE_BOX)
#
# def _getShowCaption(self):
# return self._hasWindowStyleFlag(wx.CAPTION)
# def _setShowCaption(self, value):
# self._delWindowStyleFlag(wx.CAPTION)
# if value:
# self._addWindowStyleFlag(wx.CAPTION)
#
# def _getShowSystemMenu(self):
# return self._hasWindowStyleFlag(wx.SYSTEM_MENU)
# def _setShowSystemMenu(self, value):
# self._delWindowStyleFlag(wx.SYSTEM_MENU)
# if value:
# self._addWindowStyleFlag(wx.SYSTEM_MENU)
#
# def _getTinyTitleBar(self):
# return self._hasWindowStyleFlag(wx.FRAME_TOOL_WINDOW)
# def _setTinyTitleBar(self, value):
# self._delWindowStyleFlag(wx.FRAME_TOOL_WINDOW)
# if value:
# self._addWindowStyleFlag(wx.FRAME_TOOL_WINDOW)
#
# def _getWindowState(self):
# try:
# if self.IsFullScreen():
# return 'FullScreen'
# elif self.IsMaximized():
# return 'Maximized'
# elif self.IsMinimized():
# return 'Minimized'
# else:
# return 'Normal'
# except AttributeError:
# # These only work on Windows, I fear
# return 'Normal'
#
# def _getWindowStateEditorInfo(self):
# return {'editor': 'list', 'values': ['Normal', 'Minimized', 'Maximized', 'FullScreen']}
#
# def _setWindowState(self, value):
# value = ustr(value)
# if value == 'Normal':
# if self.IsFullScreen():
# self.ShowFullScreen(False)
# elif self.IsMaximized():
# self.Maximize(False)
# elif self.IsIconized:
# self.Iconize(False)
# else:
# # window already normal, but just in case:
# self.Maximize(False)
# elif value == 'Minimized':
# self.Iconize()
# elif value == 'Maximized':
# self.Maximize()
# elif value == 'FullScreen':
# self.ShowFullScreen()
# else:
# raise ValueError("The only possible values are "
# "'Normal', 'Minimized', 'Maximized', and 'FullScreen'")
#
# # property definitions follow:
# Icon = property(_getIcon, _setIcon, None, 'Specifies the icon for the form. (wxIcon)')
# IconBundle = property(_getIconBundle, _setIconBundle, None,
# 'Specifies the set of icons for the form. (wxIconBundle)')
#
# BorderResizable = property(_getBorderResizable, _setBorderResizable, None,
# 'Specifies whether the user can resize this form. (bool).')
#
# ShowCaption = property(_getShowCaption, _setShowCaption, None,
# 'Specifies whether the caption is displayed in the title bar. (bool).')
#
# ShowMaxButton = property(_getShowMaxButton, _setShowMaxButton, None,
# 'Specifies whether a maximize button is displayed in the title bar. (bool).')
#
# ShowMinButton = property(_getShowMinButton, _setShowMinButton, None,
# 'Specifies whether a minimize button is displayed in the title bar. (bool).')
#
# ShowCloseButton = property(_getShowCloseButton, _setShowCloseButton, None,
# 'Specifies whether a close button is displayed in the title bar. (bool).')
#
# ShowSystemMenu = property(_getShowSystemMenu, _setShowSystemMenu, None,
# 'Specifies whether a system menu is displayed in the title bar. (bool).')
#
# TinyTitleBar = property(_getTinyTitleBar, _setTinyTitleBar, None,
# 'Specifies whether the title bar is small, like a tool window. (bool).')
#
# WindowState = property(_getWindowState, _setWindowState, None,
# 'Specifies the current state of the form. (int)\n'
# ' Normal \n'
# ' Minimized \n'
# ' Maximized \n'
	# 		' FullScreen')
/HPI-0.3.20230327.tar.gz/HPI-0.3.20230327/my/location/gpslogger.py | REQUIRES = ["gpxpy"]
from my.config import location
from my.core import Paths, dataclass
@dataclass
class config(location.gpslogger):
# path[s]/glob to the synced gpx (XML) files
export_path: Paths
# default accuracy for gpslogger
accuracy: float = 50.0
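# Example user config (an illustrative sketch; this lives in the user's
# my.config module and the glob below is a placeholder):
#
#     class location:
#         class gpslogger:
#             export_path = '~/data/gpslogger/*.gpx'
#             accuracy = 30.0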
from itertools import chain
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterator, Sequence, List
import gpxpy # type: ignore[import]
from more_itertools import unique_everseen
from my.core import Stats, LazyLogger
from my.core.common import get_files, mcachew
from .common import Location
logger = LazyLogger(__name__, level="warning")
def _input_sort_key(path: Path) -> str:
if "_" in path.name:
return path.name.split("_", maxsplit=1)[1]
return path.name
def inputs() -> Sequence[Path]:
# gpslogger files can optionally be prefixed by a device id,
# like b5760c66102a5269_20211214142156.gpx
return sorted(get_files(config.export_path, glob="*.gpx", sort=False), key=_input_sort_key)
def _cachew_depends_on() -> List[float]:
return [p.stat().st_mtime for p in inputs()]
# TODO: could use a better cachew key; this has to recompute every file whenever the newest one changes
@mcachew(depends_on=_cachew_depends_on, logger=logger)
def locations() -> Iterator[Location]:
yield from unique_everseen(
chain(*map(_extract_locations, inputs())), key=lambda loc: loc.dt
)
def _extract_locations(path: Path) -> Iterator[Location]:
with path.open("r") as gf:
gpx_obj = gpxpy.parse(gf)
for track in gpx_obj.tracks:
for segment in track.segments:
for point in segment.points:
if point.time is None:
continue
# hmm - for gpslogger, seems that timezone is always SimpleTZ('Z'), which
# specifies UTC -- see https://github.com/tkrajina/gpxpy/blob/cb243b22841bd2ce9e603fe3a96672fc75edecf2/gpxpy/gpxfield.py#L38
yield Location(
lat=point.latitude,
lon=point.longitude,
accuracy=config.accuracy,
elevation=point.elevation,
dt=datetime.replace(point.time, tzinfo=timezone.utc),
datasource="gpslogger",
)
def stats() -> Stats:
from my.core import stat
    return {**stat(locations)}
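# Example interactive use (illustrative; assumes the config above points at real
# gpslogger exports):
#
#     from my.location.gpslogger import locations
#     loc = next(locations())  # Location(lat=..., lon=..., dt=..., datasource='gpslogger')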
/CO2meter-0.2.6-py3-none-any.whl/co2meter/homekit.py | import logging
import signal
from pyhap.accessory_driver import AccessoryDriver
from pyhap.accessory import Accessory, Category
import pyhap.loader as loader
import co2meter as co2
###############################################################################
PORT = 51826
PINCODE = b"800-11-400"
NAME = 'CO2 Monitor'
IDENTIFY = 'co2meter (https://github.com/vfilimonov/co2meter)'
CO2_THRESHOLD = 1200 # iPhone will show warning if the concentration is above
FREQUENCY = 45 # seconds - between consecutive reads from the device
###############################################################################
# Extended from: https://github.com/ikalchev/HAP-python
###############################################################################
class CO2Accessory(Accessory):
category = Category.SENSOR # This is for the icon in the iOS Home app.
def __init__(self, mon=None, freq=FREQUENCY, monitoring=True, bypass_decrypt=False, **kwargs):
""" Initialize sensor:
- call parent __init__
- save references to characteristics
- (optional) set up callbacks
If monitor object is not passed, it will be created.
freq defines interval in seconds between updating the values.
"""
if not monitoring and mon is None:
            raise ValueError('When monitoring=False, a monitor object must be passed')
self.monitor = co2.CO2monitor(bypass_decrypt=bypass_decrypt) if mon is None else mon
self.frequency = freq
self.monitoring = monitoring
super(CO2Accessory, self).__init__(NAME, **kwargs)
#########################################################################
def temperature_changed(self, value):
""" Dummy callback """
logging.info("Temperature changed to: %s" % value)
def co2_changed(self, value):
""" Dummy callback """
logging.info("CO2 level is changed to: %s" % value)
#########################################################################
def _set_services(self):
""" Add services to be supported (called from __init__).
A loader creates Service and Characteristic objects based on json
representation such as the Apple-defined ones in pyhap/resources/.
"""
# This call sets AccessoryInformation, so we'll do this below
# super(CO2Accessory, self)._set_services()
char_loader = loader.get_char_loader()
serv_loader = loader.get_serv_loader()
# Mandatory: Information about device
info = self.monitor.info
serv_info = serv_loader.get("AccessoryInformation")
serv_info.get_characteristic("Name").set_value(NAME, False)
serv_info.get_characteristic("Manufacturer").set_value(info['manufacturer'], False)
serv_info.get_characteristic("Model").set_value(info['product_name'], False)
serv_info.get_characteristic("SerialNumber").set_value(info['serial_no'], False)
serv_info.get_characteristic("Identify").set_value(IDENTIFY, False)
# Need to ensure AccessoryInformation is with IID 1
self.add_service(serv_info)
# Temperature sensor: only mandatory characteristic
serv_temp = serv_loader.get("TemperatureSensor")
self.char_temp = serv_temp.get_characteristic("CurrentTemperature")
serv_temp.add_characteristic(self.char_temp)
# CO2 sensor: both mandatory and optional characteristic
serv_co2 = serv_loader.get("CarbonDioxideSensor")
self.char_high_co2 = serv_co2.get_characteristic("CarbonDioxideDetected")
self.char_co2 = char_loader.get("CarbonDioxideLevel")
serv_co2.add_characteristic(self.char_high_co2)
serv_co2.add_opt_characteristic(self.char_co2)
self.char_temp.setter_callback = self.temperature_changed
self.char_co2.setter_callback = self.co2_changed
self.add_service(serv_temp)
self.add_service(serv_co2)
#########################################################################
def _read_and_set(self):
if self.monitoring:
vals = self.monitor.read_data_raw(max_requests=1000)
else:
try:
vals = self.monitor._last_data
except:
return
self.char_co2.set_value(vals[1])
self.char_high_co2.set_value(vals[1] > CO2_THRESHOLD)
self.char_temp.set_value(int(vals[2]))
def run(self):
""" We override this method to implement what the accessory will do when it is
started. An accessory is started and stopped from the AccessoryDriver.
It might be convenient to use the Accessory's run_sentinel, which is a
        threading.Event object which is set when the accessory should stop running.
"""
self._read_and_set()
while not self.run_sentinel.wait(self.frequency):
self._read_and_set()
def stop(self):
""" Here we should clean-up resources if necessary.
It is called by the AccessoryDriver when the Accessory is being stopped
(it is called right after run_sentinel is set).
"""
logging.info("Stopping accessory.")
###############################################################################
###############################################################################
def start_homekit(mon=None, port=PORT, host=None, monitoring=True,
handle_sigint=True, bypass_decrypt=False):
logging.basicConfig(level=logging.INFO)
acc = CO2Accessory(mon=mon, pincode=PINCODE, monitoring=monitoring, bypass_decrypt=bypass_decrypt)
# Start the accessory on selected port
driver = AccessoryDriver(acc, port=port, address=host)
# We want KeyboardInterrupts and SIGTERM (kill) to be handled by the driver itself,
# so that it can gracefully stop the accessory, server and advertising.
if handle_sigint:
signal.signal(signal.SIGINT, driver.signal_handler)
signal.signal(signal.SIGTERM, driver.signal_handler)
# Start it!
driver.start()
return driver
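# Example (illustrative sketch): pass an already-created monitor so the accessory
# reuses the open device handle instead of opening a new one.
#
#     import co2meter as co2
#     mon = co2.CO2monitor()
#     start_homekit(mon=mon, port=51826)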
###############################################################################
if __name__ == '__main__':
    start_homekit()
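# Hedged usage sketch: starting this accessory from another script. The module
# name `co2_accessory` below is illustrative (use whatever this file is saved as);
# port/host/monitoring are the parameters of start_homekit() defined above.
#
#     from co2_accessory import start_homekit
#     driver = start_homekit(port=51826, host='0.0.0.0', monitoring=True)
#     # pair from the Home app using the PINCODE configured at the top of this module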
/OASYS1-WOFRY-1.0.41.tar.gz/OASYS1-WOFRY-1.0.41/orangecontrib/wofry/widgets/wavefront_propagation/ow_undulator_gaussian_shell_model_1D.py

import numpy
import sys
from PyQt5.QtGui import QPalette, QColor, QFont
from PyQt5.QtWidgets import QMessageBox
from orangewidget import gui
from orangewidget import widget
from orangewidget.settings import Setting
from oasys.widgets import gui as oasysgui
from oasys.widgets import congruence
from oasys.util.oasys_util import TriggerIn, TriggerOut, EmittingStream
from syned.storage_ring.magnetic_structures.undulator import Undulator
from syned.beamline.beamline import Beamline
from wofryimpl.propagator.light_source import WOLightSource
from wofryimpl.beamline.beamline import WOBeamline
from orangecontrib.wofry.util.wofry_objects import WofryData
from orangecontrib.wofry.widgets.gui.ow_wofry_widget import WofryWidget
import scipy.constants as codata
class OWUndulatorGaussianShellModel1D(WofryWidget):
name = "Undulator Gaussian Shell-model 1D"
id = "UndulatorGSM1D"
description = "Undulator approximated by Gaussian Shell-model 1D"
icon = "icons/undulator_gsm_1d.png"
priority = 3
category = "Wofry Wavefront Propagation"
keywords = ["data", "file", "load", "read"]
inputs = [
("SynedData", Beamline, "receive_syned_data"),
("Trigger", TriggerOut, "receive_trigger_signal"),
]
outputs = [
{"name":"WofryData",
"type":WofryData,
"doc":"WofryData",
"id":"WofryData"}
]
number_of_points = Setting(1000)
initialize_from = Setting(0)
range_from = Setting(-0.00005)
range_to = Setting(0.00005)
steps_start = Setting(-0.00005)
steps_step = Setting(1e-7)
sigma_h = Setting(3.01836e-05)
sigma_v = Setting(3.63641e-06)
sigma_divergence_h = Setting(4.36821e-06)
sigma_divergence_v = Setting(1.37498e-06)
photon_energy = Setting(15000.0)
undulator_length = Setting(4.0)
use_emittances = Setting(1)
mode_index = Setting(0)
spectral_density_threshold = Setting(0.99)
correction_factor = Setting(1.0)
wavefront1D = None
def __init__(self):
super().__init__(is_automatic=False, show_view_options=True, show_script_tab=True)
self.runaction = widget.OWAction("Generate Wavefront", self)
self.runaction.triggered.connect(self.generate)
self.addAction(self.runaction)
gui.separator(self.controlArea)
gui.separator(self.controlArea)
button_box = oasysgui.widgetBox(self.controlArea, "", addSpace=False, orientation="horizontal")
button = gui.button(button_box, self, "Generate", callback=self.generate)
font = QFont(button.font())
font.setBold(True)
button.setFont(font)
palette = QPalette(button.palette()) # make a copy of the palette
palette.setColor(QPalette.ButtonText, QColor('Dark Blue'))
button.setPalette(palette) # assign new palette
button.setFixedHeight(45)
gui.separator(self.controlArea)
self.controlArea.setFixedWidth(self.CONTROL_AREA_WIDTH)
tabs_setting = oasysgui.tabWidget(self.controlArea)
tabs_setting.setFixedHeight(self.TABS_AREA_HEIGHT + 50)
tabs_setting.setFixedWidth(self.CONTROL_AREA_WIDTH-5)
self.tab_sou = oasysgui.createTabPage(tabs_setting, "Settings")
self.tab_emit = oasysgui.createTabPage(tabs_setting, "Emittances")
box_space = oasysgui.widgetBox(self.tab_sou, "Wavefront sampling", addSpace=False, orientation="vertical")
oasysgui.lineEdit(box_space, self, "number_of_points", "Number of Points",
labelWidth=300, tooltip="number_of_points",
valueType=int, orientation="horizontal")
gui.comboBox(box_space, self, "initialize_from", label="Space Initialization",
labelWidth=350,
items=["From Range", "From Steps"],
callback=self.set_Initialization,
sendSelectedValue=False, orientation="horizontal")
self.initialization_box_1 = oasysgui.widgetBox(box_space, "", addSpace=False, orientation="vertical")
oasysgui.lineEdit(self.initialization_box_1, self, "range_from", "From [m]",
labelWidth=300, tooltip="range_from",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(self.initialization_box_1, self, "range_to", "To [m]",
labelWidth=300, tooltip="range_to",
valueType=float, orientation="horizontal")
self.initialization_box_2 = oasysgui.widgetBox(box_space, "", addSpace=False, orientation="vertical")
oasysgui.lineEdit(self.initialization_box_2, self, "steps_start", "Start [m]",
labelWidth=300, tooltip="steps_start",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(self.initialization_box_2, self, "steps_step", "Step [m]",
labelWidth=300, tooltip="steps_step",
valueType=float, orientation="horizontal")
self.set_Initialization()
left_box_3 = oasysgui.widgetBox(self.tab_sou, "Undulator Parameters", addSpace=True, orientation="vertical")
oasysgui.lineEdit(left_box_3, self, "photon_energy", "Photon Energy [eV]",
labelWidth=250, tooltip="photon_energy",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(left_box_3, self, "undulator_length", "Undulator Length [m]",
labelWidth=250, tooltip="undulator_length",
valueType=float, orientation="horizontal")
left_box_4 = oasysgui.widgetBox(self.tab_sou, "Working conditions", addSpace=True, orientation="vertical")
gui.comboBox(left_box_4, self, "use_emittances", label="Use emittances", labelWidth=350,
items=["No (coherent Gaussian Source)",
"Yes (GSM H emittance)",
"Yes (GSM V emittance)"
],
callback=self.set_visible,
sendSelectedValue=False, orientation="horizontal")
self.mode_index_box = oasysgui.widgetBox(left_box_4, "", addSpace=True, orientation="vertical", )
left_box_5 = oasysgui.widgetBox(self.mode_index_box, "", addSpace=True, orientation="horizontal", )
tmp = oasysgui.lineEdit(left_box_5, self, "mode_index", "Mode",
labelWidth=200, valueType=int, tooltip = "mode_index",
orientation="horizontal")
gui.button(left_box_5, self, "+1", callback=self.increase_mode_index, width=30)
gui.button(left_box_5, self, "-1", callback=self.decrease_mode_index, width=30)
gui.button(left_box_5, self, "0", callback=self.reset_mode_index, width=30)
oasysgui.lineEdit(self.mode_index_box, self, "spectral_density_threshold",
"Spectral Density Threshold (e.g. 0.99)",
                          labelWidth=300, tooltip="spectral_density_threshold",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(self.mode_index_box, self, "correction_factor",
"Correction factor for sigmaI (default 1.0)",
labelWidth=300, tooltip="correction_factor", valueType=float, orientation="horizontal")
self.emittances_box_h = oasysgui.widgetBox(self.tab_emit, "Electron Horizontal beam sizes",
addSpace=True, orientation="vertical")
self.emittances_box_v = oasysgui.widgetBox(self.tab_emit, "Electron Vertical beam sizes",
addSpace=True, orientation="vertical")
self.le_sigma_h = oasysgui.lineEdit(self.emittances_box_h, self, "sigma_h", "Size RMS H",
labelWidth=250, tooltip="sigma_h",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(self.emittances_box_h, self, "sigma_divergence_h", "Divergence RMS H [rad]",
labelWidth=250, tooltip="sigma_divergence_h",
valueType=float, orientation="horizontal")
self.le_sigma_v = oasysgui.lineEdit(self.emittances_box_v, self, "sigma_v", "Size RMS V",
labelWidth=250, tooltip="sigma_v",
valueType=float, orientation="horizontal")
oasysgui.lineEdit(self.emittances_box_v, self, "sigma_divergence_v", "Divergence RMS V [rad]",
labelWidth=250, tooltip="sigma_divergence_v",
valueType=float, orientation="horizontal")
self.set_visible()
def set_visible(self):
self.emittances_box_h.setVisible(self.use_emittances == 1)
self.emittances_box_v.setVisible(self.use_emittances == 2)
self.mode_index_box.setVisible(self.use_emittances >= 1)
def increase_mode_index(self):
self.mode_index += 1
self.generate()
def decrease_mode_index(self):
self.mode_index -= 1
if self.mode_index < 0: self.mode_index = 0
self.generate()
def reset_mode_index(self):
self.mode_index = 0
self.generate()
def set_Initialization(self):
self.initialization_box_1.setVisible(self.initialize_from == 0)
self.initialization_box_2.setVisible(self.initialize_from == 1)
def initializeTabs(self):
size = len(self.tab)
indexes = range(0, size)
for index in indexes:
self.tabs.removeTab(size-1-index)
self.titles = ["Wavefront 1D","Cumulated occupation"]
self.tab = []
self.plot_canvas = []
for index in range(0, len(self.titles)):
self.tab.append(gui.createTabPage(self.tabs, self.titles[index]))
self.plot_canvas.append(None)
for tab in self.tab:
tab.setFixedHeight(self.IMAGE_HEIGHT)
tab.setFixedWidth(self.IMAGE_WIDTH)
def check_fields(self):
congruence.checkStrictlyPositiveNumber(self.photon_energy, "Photon Energy")
if self.initialize_from == 0:
congruence.checkGreaterThan(self.range_to, self.range_from, "Range To", "Range From")
else:
congruence.checkStrictlyPositiveNumber(self.steps_step, "Step")
congruence.checkStrictlyPositiveNumber(self.number_of_points, "Number of Points")
congruence.checkNumber(self.mode_index, "Mode index")
congruence.checkStrictlyPositiveNumber(self.spectral_density_threshold, "Threshold")
congruence.checkStrictlyPositiveNumber(self.correction_factor, "Correction factor for SigmaI")
def receive_syned_data(self, data):
if not data is None:
if isinstance(data, Beamline):
if not data._light_source is None:
if isinstance(data._light_source._magnetic_structure, Undulator):
light_source = data._light_source
self.photon_energy = round(light_source._magnetic_structure.resonance_energy(light_source._electron_beam.gamma()), 3)
x, xp, y, yp = light_source._electron_beam.get_sigmas_all()
self.sigma_h = x
self.sigma_v = y
self.sigma_divergence_h = xp
self.sigma_divergence_v = yp
self.undulator_length = light_source._magnetic_structure._period_length*light_source._magnetic_structure._number_of_periods # in meter
else:
raise ValueError("Syned light source not congruent")
else:
raise ValueError("Syned data not correct: light source not present")
else:
raise ValueError("Syned data not correct")
def receive_trigger_signal(self, trigger):
if trigger and trigger.new_object == True:
if trigger.has_additional_parameter("variable_name"):
variable_name = trigger.get_additional_parameter("variable_name").strip()
variable_display_name = trigger.get_additional_parameter("variable_display_name").strip()
variable_value = trigger.get_additional_parameter("variable_value")
variable_um = trigger.get_additional_parameter("variable_um")
if "," in variable_name:
variable_names = variable_name.split(",")
for variable_name in variable_names:
setattr(self, variable_name.strip(), variable_value)
else:
setattr(self, variable_name, variable_value)
self.generate()
def get_light_source(self, sigmaI, beta):
print(">>>>n beta sigma", self.mode_index, beta, sigmaI, type(self.mode_index), type(beta), type(sigmaI))
return WOLightSource(
name = self.name ,
# electron_beam = None ,
# magnetic_structure = None ,
dimension = 1 ,
initialize_from = self.initialize_from ,
range_from_h = self.range_from ,
range_to_h = self.range_to ,
# range_from_v = None ,
# range_to_v = None ,
steps_start_h = self.steps_start ,
steps_step_h = self.steps_step ,
steps_start_v = None ,
# steps_step_v = None ,
number_of_points_h = self.number_of_points ,
# number_of_points_v = None ,
energy = self.photon_energy ,
sigma_h = sigmaI ,
# sigma_v = None ,
amplitude = 1.0 ,
kind_of_wave = (3 if (self.use_emittances > 0) else 2) ,
n_h = int(self.mode_index) ,
# n_v = None ,
beta_h = beta ,
# beta_v = None ,
units = 0,
# wavelength = 0,
# initialize_amplitude= 0,
# complex_amplitude_re= 0,
# complex_amplitude_im= 0,
# phase = 0,
# radius = 0,
# center = 0,
# inclination = 0,
# gaussian_shift = 0,
# add_random_phase = 0,
)
def calculate_gsm_parameters(self):
#
# calculations
#
wavelength = codata.h * codata.c / codata.e / self.photon_energy
sigma_r = 2.740 / 4 / numpy.pi * numpy.sqrt(wavelength * self.undulator_length)
sigma_r_prime = 0.69 * numpy.sqrt(wavelength / self.undulator_length)
print("Radiation values at photon energy=%f eV:" % self.photon_energy)
print(" intensity sigma : %6.3f um, FWHM: %6.3f um" % (sigma_r * 1e6, sigma_r * 2.355e6))
print(" intensity sigmaprime: %6.3f urad, FWHM: %6.3f urad" % (sigma_r_prime * 1e6, sigma_r_prime * 2.355e6))
q = 0
number_of_modes = 0
if self.use_emittances == 0:
sigmaI = sigma_r
beta = None
else:
Sx = numpy.sqrt(sigma_r ** 2 + self.sigma_h ** 2)
Sxp = numpy.sqrt(sigma_r_prime ** 2 + self.sigma_divergence_h ** 2)
Sy = numpy.sqrt(sigma_r ** 2 + self.sigma_v ** 2)
Syp = numpy.sqrt(sigma_r_prime ** 2 + self.sigma_divergence_v ** 2)
print("\nElectron beam values:")
print(" sigma_h : %6.3f um, sigma_v: %6.3f um\n" % (self.sigma_h * 1e6, self.sigma_v * 1e6))
print("\nPhoton beam values (convolution):")
print(" SIGMA_H p: %6.3f um, SIGMA_V: %6.3f um\n" % (Sx * 1e6, Sy * 1e6))
print(" SIGMA_H' : %6.3f urad, SIGMA_V': %6.3f urad\n" % (Sxp * 1e6, Syp * 1e6))
labels = ["", "H", "V"]
if self.use_emittances == 1:
cf = sigma_r * sigma_r_prime / Sx / Sxp
sigmaI = Sx
elif self.use_emittances == 2:
cf = sigma_r * sigma_r_prime / Sy / Syp
sigmaI = Sy
print("\nCoherence fraction (from %s emittance): %f" % (labels[self.use_emittances], cf))
sigmaI *= self.correction_factor
beta = cf / numpy.sqrt(1 - cf)
sigmaMu = beta * sigmaI
print("\nGaussian Shell-model (matching coherence fraction in %s direction):" % \
labels[self.use_emittances])
print(" beta: %6.3f" % beta)
print(" sigmaI : %6.3f um" % (sigmaI * 1e6))
print(" sigmaMu: %6.3f um" % (sigmaMu * 1e6))
q = 1.0 / (1 + beta ** 2 / 2 + beta * numpy.sqrt(1 + (beta / 2) ** 2))
number_of_modes = int(numpy.log(1.0 - self.spectral_density_threshold) / numpy.log(q))
if number_of_modes < 1: number_of_modes = 1
print("\nTo consider %f of spectral density in %s we need %d modes." % \
(self.spectral_density_threshold, labels[self.use_emittances], number_of_modes))
return sigmaI, beta, number_of_modes, q
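    # Reading aid: the Gaussian Shell-model relations implemented above, in the
    # notation used by this method (no additional computation is performed here):
    #   wavelength       lambda   = h*c / (e * photon_energy)
    #   radiation size   sigma_r  = (2.740 / (4*pi)) * sqrt(lambda * undulator_length)
    #   radiation div.   sigma_r' = 0.69 * sqrt(lambda / undulator_length)
    #   convolution      S = sqrt(sigma_r**2 + sigma_e**2),  S' = sqrt(sigma_r'**2 + sigma_e'**2)
    #   coherent frac.   cf = sigma_r * sigma_r' / (S * S')
    #   GSM parameters   beta = cf / sqrt(1 - cf),  sigmaMu = beta * sigmaI
    #   mode weights     q = 1 / (1 + beta**2/2 + beta*sqrt(1 + (beta/2)**2))
    #   modes needed     N such that 1 - q**N >= spectral_density_threshold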
def generate(self):
self.wofry_output.setText("")
sys.stdout = EmittingStream(textWritten=self.writeStdOut)
self.progressBarInit()
self.check_fields()
sigmaI, beta, _n, q = self.calculate_gsm_parameters()
light_source = self.get_light_source(sigmaI, beta)
self.wavefront1D = light_source.get_wavefront()
if self.use_emittances == 0:
self._cumulated_occupation = numpy.array([1.0])
else:
indices = numpy.arange(_n)
self._cumulated_occupation = (1.0 - q ** (indices+1))
if self.view_type != 0:
self.initializeTabs()
self.plot_results()
else:
self.progressBarFinished()
beamline = WOBeamline(light_source=light_source)
try:
self.wofry_python_script.set_code(beamline.to_python_code())
except:
pass
self.send("WofryData", WofryData(wavefront=self.wavefront1D, beamline=beamline))
def generate_python_code(self,sigmaI,beta=1.0):
txt = "#"
txt += "\n# create input_wavefront\n#"
txt += "\n#"
txt += "\nfrom wofry.propagator.wavefront1D.generic_wavefront import GenericWavefront1D"
if self.initialize_from == 0:
txt += "\ninput_wavefront = GenericWavefront1D.initialize_wavefront_from_range(x_min=%g,x_max=%g,number_of_points=%d)"%\
(self.range_from,self.range_to,self.number_of_points)
else:
txt += "\ninput_wavefront = GenericWavefront1D.initialize_wavefront_from_steps(x_start=%g, x_step=%g,number_of_points=%d)"%\
(self.steps_start,self.steps_step,self.number_of_points)
txt += "\ninput_wavefront.set_photon_energy(%g)"%(self.photon_energy)
if self.use_emittances == 0:
txt += "\ninput_wavefront.set_gaussian(%g, amplitude=1.0)"%(sigmaI)
else:
txt += "\ninput_wavefront.set_gaussian_hermite_mode(%g, %d, amplitude=1.0, shift=0.0, beta=%g)" % \
(sigmaI, self.mode_index, beta)
txt += "\n\n\nfrom srxraylib.plot.gol import plot"
txt += "\nplot(input_wavefront.get_abscissas(),input_wavefront.get_intensity())"
return txt
def do_plot_results(self, progressBarValue):
if not self.wavefront1D is None:
self.progressBarSet(progressBarValue)
self.plot_data1D(1e6 * self.wavefront1D.get_abscissas(),
self.wavefront1D.get_intensity(),
progressBarValue=progressBarValue,
tabs_canvas_index=0,
plot_canvas_index=0,
title=self.titles[0],
xtitle="Spatial Coordinate [$\mu$m]",
ytitle="Intensity",
calculate_fwhm=True)
self.plot_data1D(numpy.arange(self._cumulated_occupation.size),
self._cumulated_occupation,
progressBarValue=progressBarValue,
tabs_canvas_index=1,
plot_canvas_index=1,
title=self.titles[1],
xtitle="mode index",
ytitle="Cumulated occupation",
calculate_fwhm=False)
self.progressBarFinished()
if __name__ == "__main__":
import sys
from PyQt5.QtWidgets import QApplication
a = QApplication(sys.argv)
ow = OWUndulatorGaussianShellModel1D()
ow.show()
a.exec_()
    ow.saveSettings()
/0-orchestrator-1.1.0a7.tar.gz/0-orchestrator-1.1.0a7/zeroos/orchestrator/client/client_support.py

import json
import collections.abc
from datetime import datetime
from uuid import UUID
from enum import Enum
from dateutil import parser
# python2/3 compatible basestring, for use in to_dict
try:
basestring
except NameError:
basestring = str
def timestamp_from_datetime(datetime):
"""
Convert from datetime format to timestamp format
Input: Time in datetime format
Output: Time in timestamp format
"""
return datetime.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
def timestamp_to_datetime(timestamp):
"""
Convert from timestamp format to datetime format
Input: Time in timestamp format
Output: Time in datetime format
"""
return parser.parse(timestamp).replace(tzinfo=None)
def has_properties(cls, property, child_properties):
for child_prop in child_properties:
if getattr(property, child_prop, None) is None:
return False
return True
def list_factory(val, member_type):
if not isinstance(val, list):
raise ValueError('list_factory: value must be a list')
return [val_factory(v, member_type) for v in val]
def dict_factory(val, objmap):
# objmap is a dict outlining the structure of this value
# its format is {'attrname': {'datatype': [type], 'required': bool}}
objdict = {}
for attrname, attrdict in objmap.items():
value = val.get(attrname)
if value is not None:
for dt in attrdict['datatype']:
try:
if isinstance(dt, dict):
objdict[attrname] = dict_factory(value, attrdict)
else:
objdict[attrname] = val_factory(value, [dt])
except Exception:
pass
if objdict.get(attrname) is None:
raise ValueError('dict_factory: {attr}: unable to instantiate with any supplied type'.format(attr=attrname))
elif attrdict.get('required'):
raise ValueError('dict_factory: {attr} is required'.format(attr=attrname))
return objdict
def val_factory(val, datatypes):
"""
return an instance of `val` that is of type `datatype`.
keep track of exceptions so we can produce meaningful error messages.
"""
exceptions = []
for dt in datatypes:
try:
if isinstance(val, dt):
return val
return type_handler_object(val, dt)
except Exception as e:
exceptions.append(str(e))
# if we get here, we never found a valid value. raise an error
raise ValueError('val_factory: Unable to instantiate {val} from types {types}. Exceptions: {excs}'.
format(val=val, types=datatypes, excs=exceptions))
def to_json(cls, indent=0):
"""
serialize to JSON
:rtype: str
"""
# for consistency, use as_dict then go to json from there
return json.dumps(cls.as_dict(), indent=indent)
def to_dict(cls, convert_datetime=True):
"""
return a dict representation of the Event and its sub-objects
`convert_datetime` controls whether datetime objects are converted to strings or not
:rtype: dict
"""
def todict(obj):
"""
recurse the objects and represent as a dict
use the registered handlers if possible
"""
data = {}
if isinstance(obj, dict):
for (key, val) in obj.items():
data[key] = todict(val)
return data
if not convert_datetime and isinstance(obj, datetime):
return obj
elif type_handler_value(obj):
return type_handler_value(obj)
        elif isinstance(obj, collections.abc.Sequence) and not isinstance(obj, basestring):
return [todict(v) for v in obj]
elif hasattr(obj, "__dict__"):
for key, value in obj.__dict__.items():
if not callable(value) and not key.startswith('_'):
data[key] = todict(value)
return data
else:
return obj
return todict(cls)
class DatetimeHandler(object):
"""
output datetime objects as iso-8601 compliant strings
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return timestamp_from_datetime(obj)
@classmethod
def restore(cls, data):
"""restore"""
return timestamp_to_datetime(data)
class UUIDHandler(object):
"""
output UUID objects as a string
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return str(obj)
@classmethod
def restore(cls, data):
"""restore"""
return UUID(data)
class EnumHandler(object):
"""
output Enum objects as their value
"""
@classmethod
def flatten(cls, obj):
"""flatten"""
return obj.value
@classmethod
def restore(cls, data):
"""
cannot restore here because we don't know what type of enum it is
"""
raise NotImplementedError
handlers = {
datetime: DatetimeHandler,
Enum: EnumHandler,
UUID: UUIDHandler,
}
def handler_for(obj):
"""return the handler for the object type"""
for handler_type in handlers:
if isinstance(obj, handler_type):
return handlers[handler_type]
try:
for handler_type in handlers:
if issubclass(obj, handler_type):
return handlers[handler_type]
except TypeError:
# if obj isn't a class, issubclass will raise a TypeError
pass
def type_handler_value(obj):
"""
return the serialized (flattened) value from the registered handler for the type
"""
handler = handler_for(obj)
if handler:
return handler().flatten(obj)
def type_handler_object(val, objtype):
"""
return the deserialized (restored) value from the registered handler for the type
"""
handler = handlers.get(objtype)
if handler:
return handler().restore(val)
else:
        return objtype(val)
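# Hedged usage sketch for the serialization helper above: to_dict walks an
# object's __dict__, skips callables and '_'-prefixed attributes, and flattens
# datetime/UUID/Enum values through the registered handlers. The class below is
# purely illustrative.
if __name__ == '__main__':
    class _Event:
        def __init__(self):
            self.name = 'backup'
            self.when = datetime(2017, 1, 1, 12, 0, 0)

    # DatetimeHandler renders the datetime attribute as an ISO-8601 style string
    print(to_dict(_Event()))  # {'name': 'backup', 'when': '2017-01-01T12:00:00.000000Z'}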
/Miniature-0.2.0.tar.gz/Miniature-0.2.0/miniature/processor/wand_processor.py

from __future__ import (print_function, division, absolute_import, unicode_literals)
from wand.api import library
from wand.image import Image, HistogramDict
from wand.color import Color
from .base import BaseProcessor
def fast_histogram(img):
h = HistogramDict(img)
pixels = h.pixels
return tuple(
library.PixelGetColorCount(pixels[i])
for i in range(h.size.value)
)
class Processor(BaseProcessor):
def _open_image(self, fp):
im = Image(file=fp)
info = {}
for k, v in im.metadata.items():
if ':' not in k:
continue
ns, k = k.split(':')
if ns not in info:
info[ns] = {}
info[ns][k] = v
info.update({
'format': im.format
})
return im, info
def _close(self, img):
img.destroy()
def _raw_save(self, img, format, **options):
img.format = format
if img.format == 'JPEG':
img.compression_quality = options.pop('quality', 85)
img.save(**options)
def _copy_image(self, img):
return img.clone()
def _get_color(self, color):
return Color(color)
def _get_size(self, img):
return img.size
def _get_mode(self, img):
return img.type
def _set_mode(self, img, mode, **options):
img.type = mode
return img
def _set_background(self, img, color):
bg = Image().blank(img.width, img.height, background=color)
bg.type = img.type
bg.composite(img, 0, 0)
img.destroy()
return bg
def _crop(self, img, x1, y1, x2, y2):
img.crop(x1, y1, x2, y2)
return img
def _resize(self, img, w, h, filter):
img.resize(w, h, filter or 'undefined')
return img
def _thumbnail(self, img, w, h, filter, upscale):
geometry = upscale and '{0}x{1}' or '{0}x{1}>'
img.transform(resize=geometry.format(w, h))
return img
def _rotate(self, img, angle):
img.rotate(angle)
return img
def _add_border(self, img, width, color):
bg = Image().blank(img.width + width * 2, img.height + width * 2, color)
bg.composite(img, width, width)
img.destroy()
return bg
def _get_histogram(self, img):
        return fast_histogram(img)
/Netfoll_TL-2.0.1-py3-none-any.whl/netfoll_tl/errors/common.py

import struct
import textwrap
from ..tl import TLRequest
class ReadCancelledError(Exception):
"""Occurs when a read operation was cancelled."""
def __init__(self):
super().__init__('The read operation was cancelled.')
class TypeNotFoundError(Exception):
"""
Occurs when a type is not found, for example,
when trying to read a TLObject with an invalid constructor code.
"""
def __init__(self, invalid_constructor_id, remaining):
super().__init__(
'Could not find a matching Constructor ID for the TLObject '
'that was supposed to be read with ID {:08x}. See the FAQ '
'for more details. '
'Remaining bytes: {!r}'.format(invalid_constructor_id, remaining))
self.invalid_constructor_id = invalid_constructor_id
self.remaining = remaining
class InvalidChecksumError(Exception):
"""
Occurs when using the TCP full mode and the checksum of a received
packet doesn't match the expected checksum.
"""
def __init__(self, checksum, valid_checksum):
super().__init__(
'Invalid checksum ({} when {} was expected). '
'This packet should be skipped.'
.format(checksum, valid_checksum))
self.checksum = checksum
self.valid_checksum = valid_checksum
class InvalidBufferError(BufferError):
"""
Occurs when the buffer is invalid, and may contain an HTTP error code.
For instance, 404 means "forgotten/broken authorization key", while
"""
def __init__(self, payload):
self.payload = payload
if len(payload) == 4:
self.code = -struct.unpack('<i', payload)[0]
super().__init__(
'Invalid response buffer (HTTP code {})'.format(self.code))
else:
self.code = None
super().__init__(
'Invalid response buffer (too short {})'.format(self.payload))
class AuthKeyNotFound(Exception):
"""
The server claims it doesn't know about the authorization key (session
file) currently being used. This might be because it either has never
seen this authorization key, or it used to know about the authorization
key but has forgotten it, either temporarily or permanently (possibly
due to server errors).
If the issue persists, you may need to recreate the session file and login
again. This is not done automatically because it is not possible to know
if the issue is temporary or permanent.
"""
def __init__(self):
super().__init__(textwrap.dedent(self.__class__.__doc__))
class SecurityError(Exception):
"""
Generic security error, mostly used when generating a new AuthKey.
"""
def __init__(self, *args):
if not args:
args = ['A security check failed.']
super().__init__(*args)
class CdnFileTamperedError(SecurityError):
"""
Occurs when there's a hash mismatch between the decrypted CDN file
and its expected hash.
"""
def __init__(self):
super().__init__(
'The CDN file has been altered and its download cancelled.'
)
class AlreadyInConversationError(Exception):
"""
Occurs when another exclusive conversation is opened in the same chat.
"""
def __init__(self):
super().__init__(
'Cannot open exclusive conversation in a '
'chat that already has one open conversation'
)
class BadMessageError(Exception):
"""Occurs when handling a bad_message_notification."""
ErrorMessages = {
16:
'msg_id too low (most likely, client time is wrong it would be '
'worthwhile to synchronize it using msg_id notifications and re-send '
'the original message with the "correct" msg_id or wrap it in a '
'container with a new msg_id if the original message had waited too '
'long on the client to be transmitted).',
17:
'msg_id too high (similar to the previous case, the client time has '
'to be synchronized, and the message re-sent with the correct msg_id).',
18:
'Incorrect two lower order msg_id bits (the server expects client '
'message msg_id to be divisible by 4).',
19:
'Container msg_id is the same as msg_id of a previously received '
'message (this must never happen).',
20:
'Message too old, and it cannot be verified whether the server has '
'received a message with this msg_id or not.',
32:
'msg_seqno too low (the server has already received a message with a '
'lower msg_id but with either a higher or an equal and odd seqno).',
33:
'msg_seqno too high (similarly, there is a message with a higher '
'msg_id but with either a lower or an equal and odd seqno).',
34:
'An even msg_seqno expected (irrelevant message), but odd received.',
35:
'Odd msg_seqno expected (relevant message), but even received.',
48:
'Incorrect server salt (in this case, the bad_server_salt response '
'is received with the correct salt, and the message is to be re-sent '
'with it).',
64:
'Invalid container.'
}
def __init__(self, request, code):
super().__init__(request, self.ErrorMessages.get(
code,
'Unknown error code (this should not happen): {}.'.format(code)))
self.code = code
class MultiError(Exception):
"""Exception container for multiple `TLRequest`'s."""
def __new__(cls, exceptions, result, requests):
if len(result) != len(exceptions) != len(requests):
raise ValueError(
'Need result, exception and request for each error')
for e, req in zip(exceptions, requests):
if not isinstance(e, BaseException) and e is not None:
raise TypeError(
"Expected an exception object, not '%r'" % e
)
if not isinstance(req, TLRequest):
raise TypeError(
"Expected TLRequest object, not '%r'" % req
)
if len(exceptions) == 1:
return exceptions[0]
self = BaseException.__new__(cls)
self.exceptions = list(exceptions)
self.results = list(result)
self.requests = list(requests)
return self | PypiClean |
/ARGs_OAP-2.3.2.tar.gz/ARGs_OAP/bin/bbmap/rqcfilter2.sh
usage(){
echo "
Written by Brian Bushnell
Last modified June 26, 2019
Description: RQCFilter2 is a revised version of RQCFilter that uses a common path for all dependencies.
The dependencies are available at http://portal.nersc.gov/dna/microbial/assembly/bushnell/RQCFilterData.tar
Performs quality-trimming, artifact removal, linker-trimming, adapter trimming, and spike-in removal using BBDuk.
Performs human/cat/dog/mouse/microbe removal using BBMap.
It requires 40 GB RAM for mousecatdoghuman, but only 1GB or so without them.
Usage: rqcfilter2.sh in=<input file> path=<output directory> rqcfilterdata=<path to RQCFilterData directory>
Primary I/O parameters:
in=<file> Input reads.
in2=<file> Use this if 2nd read of pairs are in a different file.
path=null Set to the directory to use for all output files.
Reference file paths:
rqcfilterdata= Path to unzipped RQCFilterData directory. Default is /global/projectb/sandbox/gaag/bbtools/RQCFilterData
ref=<file,file> Comma-delimited list of additional reference files for filtering via BBDuk.
Output parameters:
scafstats=scaffoldStats.txt Scaffold stats file name (how many reads matched which reference scaffold) .
kmerstats=kmerStats.txt Kmer stats file name (duk-like output).
log=status.log Progress log file name.
filelist=file-list.txt List of output files.
stats=filterStats.txt Overall stats file name.
stats2=filterStats2.txt Better overall stats file name.
ihist=ihist_merge.txt Insert size histogram name. Set to null to skip merging.
outribo=ribo.fq.gz Output for ribosomal reads, if removeribo=t.
reproduceName=reproduce.sh Name of shellscript to reproduce these results.
usetmpdir=t Write temp files to TMPDIR.
tmpdir= Override TMPDIR.
Adapter trimming parameters:
trimhdist=1 Hamming distance used for trimming.
trimhdist2= Hamming distance used for trimming with short kmers. If unset, trimhdist will be used.
trimk=23 Kmer length for trimming stage.
mink=11 Minimum kmer length for short kmers when trimming.
trimfragadapter=t Trim all known Illumina adapter sequences, including TruSeq and Nextera.
trimrnaadapter=f Trim Illumina TruSeq-RNA adapters.
bisulfite=f Currently, this trims the last 1bp from all reads after the adapter-trimming phase.
findadapters=t For paired-end files, attempt to discover the adapter sequence with BBMerge and use that rather than a set of known adapters.
swift=f Trim Swift sequences: Trailing C/T/N R1, leading G/A/N R2.
Quality trimming parameters:
qtrim=f Trim read ends to remove bases with quality below minq. Performed AFTER looking for kmers.
Values: rl (trim both ends), f (neither end), r (right end only), l (left end only).
trimq=10 Trim quality threshold. Must also set qtrim for direction.
minlength=45 (ml) Reads shorter than this after trimming will be discarded. Pairs will be discarded only if both are shorter.
mlf=0.333 (minlengthfraction) Reads shorter than this fraction of original length after trimming will be discarded.
minavgquality=5 (maq) Reads with average quality (before trimming) below this will be discarded.
maxns=0 Reads with more Ns than this will be discarded.
forcetrimmod=5 (ftm) If positive, right-trim length to be equal to zero, modulo this number.
forcetrimleft=-1 (ftl) If positive, trim bases to the left of this position
(exclusive, 0-based).
forcetrimright=-1 (ftr) If positive, trim bases to the right of this position
(exclusive, 0-based).
forcetrimright2=-1 (ftr2) If positive, trim this many bases on the right end.
Mapping parameters (for vertebrate contaminants):
mapk=14 Kmer length for mapping stage (9-15; longer is faster).
removehuman=f (human) Remove human reads via mapping.
keephuman=f Keep reads that map to human (or cat, dog, mouse) rather than removing them.
removedog=f (dog) Remove dog reads via mapping.
removecat=f (cat) Remove cat reads via mapping.
removemouse=f (mouse) Remove mouse reads via mapping.
aggressivehuman=f Aggressively remove human reads (and cat/dog/mouse) using unmasked references.
aggressivemicrobe=f Aggressively microbial contaminant reads using unmasked references.
aggressive=f Set both aggressivehuman and aggressivemicrobe at once.
mapref= Remove contaminants by mapping to this fasta file (or comma-delimited list).
Bloom filter parameters (for vertebrate mapping):
bloom=t Use a Bloom filter to accelerate mapping.
bloomminreads=4m Disable Bloom filter if there are fewer than this many reads.
bloomk=29 Kmer length for Bloom filter
bloomhashes=1 Number of hashes for the Bloom filter.
bloomminhits=6 Minimum consecutive hits to consider a read as matching.
bloomserial=t Use the serialized Bloom filter for greater loading speed.
This will use the default Bloom filter parameters.
Microbial contaminant removal parameters:
detectmicrobes=f Detect common microbes, but don't remove them. Use this OR removemicrobes, not both.
removemicrobes=f (microbes) Remove common contaminant microbial reads via mapping, and place them in a separate file.
taxlist= (tax) Remove these taxa from the database before filtering. Typically, this would be the organism name or NCBI ID, or a comma-delimited list. Organism names should have underscores instead of spaces, such as Escherichia_coli.
taxlevel=order (level) Level to remove. For example, 'phylum' would remove everything in the same phylum as entries in the taxlist.
taxtree=auto (tree) Override location of the TaxTree file.
gitable=auto Override location of the gitable file.
loadgitable=f Controls whether gi numbers may be used for taxonomy.
microberef= Path to fasta file of microbes.
microbebuild=1 Chooses which masking was used. 1 is most stringent and should be used for bacteria. Eukaryotes should use 3.
Extended microbial contaminant parameters:
detectmicrobes2=f (detectothermicrobes) Detect an extended set of microbes that are currently being screened. This can be used in conjunction with removemicrobes.
Filtering parameters (for artificial and genomic contaminants):
skipfilter=f Skip this phase. Not recommended.
filterpolya=f Remove reads containing poly-A sequence (for RNA-seq).
filterpolyg=0 Remove reads that start with a G polymer at least this long (0 disables).
trimpolyg=0 Trim reads that start or end with a G polymer at least this long (0 disables).
phix=t Remove reads containing phiX kmers.
lambda=f Remove reads containing Lambda phage kmers.
pjet=t Remove reads containing PJET kmers.
maskmiddle=t (mm) Treat the middle base of a kmer as a wildcard, to increase sensitivity in the presence of errors.
maxbadkmers=0 (mbk) Reads with more than this many contaminant kmers will be discarded.
filterhdist=1 Hamming distance used for filtering.
filterqhdist=1 Query hamming distance used for filtering.
copyundefined=f (cu) Match all possible bases for sequences containing degerate IUPAC symbols.
entropy=f Remove low-complexity reads. The threshold can be specified by e.g entropy=0.4; default is 0.42 if enabled.
entropyk=2 Kmer length to use for entropy calculation.
entropywindow=40 Window size to use for entropy calculation.
Spikein removal/quantification parameters:
mtst=f Remove mtst.
kapa=t Remove and quantify kapa.
spikeink=31 Kmer length for spikein removal.
spikeinhdist=0 Hamming distance for spikein removal.
spikeinref= Additional references for spikein removal (comma-delimited list).
Ribosomal filtering parameters:
ribohdist=1 Hamming distance used for rRNA removal.
riboedist=0 Edit distance used for rRNA removal.
removeribo=f (ribo) Remove ribosomal reads via kmer-matching, and place them in a separate file.
Organelle filtering parameters:
chloromap=f Remove chloroplast reads by mapping to this organism's chloroplast.
mitomap=f Remove mitochondrial reads by mapping to this organism's mitochondria.
ribomap=f Remove ribosomal reads by mapping to this organism's ribosomes.
NOTE: organism TaxID should be specified in taxlist, and taxlevel should be set to genus or species.
FilterByTile parameters:
filterbytile=f Run FilterByTile to remove reads from low-quality parts of the flowcell.
Clumpify parameters:
clumpify=f Run clumpify; all deduplication flags require this.
dedupe=f Remove duplicate reads; all deduplication flags require this.
opticaldupes=f Remove optical duplicates (Clumpify optical flag).
edgedupes=f Remove tile-edge duplicates (Clumpify spany and adjacent flags).
dpasses=1 Use this many deduplication passes.
dsubs=2 Allow this many substitutions between duplicates.
ddist=40 Remove optical/edge duplicates within this distance.
lowcomplexity=f Set to true for low-complexity libraries such as RNA-seq to improve estimation of memory requirements.
clumpifytmpdir=f Use TMPDIR for clumpify temp files.
clumpifygroups=-1 If positive, force Clumpify to use this many groups.
*** For NextSeq, the recommended deduplication flags are: clumpify dedupe edgedupes
*** For NovaSeq, the recommended deduplication flags are: clumpify dedupe opticaldupes ddist=12000
*** For HiSeq, the recommended deduplication flags are: clumpify dedupe opticaldupes
Sketch parameters:
sketch=t Run SendSketch on 2M read pairs.
silvalocal=t Use the local flag for Silva (requires running RQCFilter on NERSC).
sketchreads=1m Number of read pairs to sketch.
sketchsamplerate=1 Samplerate for SendSketch.
sketchminprob=0.2 Minprob for SendSketch.
sketchdb=nt,refseq,silva Servers to use for SendSketch.
Other processing parameters:
threads=auto (t) Set number of threads to use; default is number of logical processors.
library=frag Set to 'frag', 'clip', 'lfpe', or 'clrs'.
filterk=31 Kmer length for filtering stage.
rcomp=t Look for reverse-complements of kmers in addition to forward kmers.
nexteralmp=f Split into different files based on Nextera LMP junction sequence. Only for Nextera LMP, not normal Nextera.
extend=f Extend reads during merging to allow insert size estimation of non-overlapping reads.
monitor=f Kill this process if it crashes. monitor=600,0.01 would kill after 600 seconds under 1% usage.
pigz=t Use pigz for compression.
unpigz=t Use pigz for decompression.
khist=f Set to true to generate a kmer-frequency histogram of the output data.
merge=t Set to false to skip generation of insert size histogram.
Header-specific parameters: (NOTE - Be sure to disable these if the reads have improper headers, like SRA data.)
chastityfilter=t Remove reads failing chastity filter.
barcodefilter=crash Crash when improper barcodes are discovered. Set to 'f' to disable or 't' to just remove improper barcodes.
barcodes= A comma-delimited list of barcodes or files of barcodes.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
***** All additional parameters supported by BBDuk may also be used, and will be passed directly to BBDuk *****
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
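#Example invocation (hypothetical file names and paths; all flags shown are documented in the usage text above):
#  rqcfilter2.sh in=reads.fq.gz path=filtered_out rqcfilterdata=/path/to/RQCFilterData \
#      trimfragadapter=t qtrim=r trimq=10 removehuman=t removemicrobes=t \
#      clumpify=t dedupe=t -Xmx40g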
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
JNI="-Djava.library.path=""$DIR""jni/"
JNI=""
z="-Xmx40g"
z2="-Xms40g"
set=0
export TZ="America/Los_Angeles"
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 39200m 84
if [[ $NSLOTS == 8 ]]; then
RAM=39200
fi
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
rqcfilter() {
if [[ $SHIFTER_RUNTIME == 1 ]]; then
#Ignore NERSC_HOST
shifter=1
elif [[ $NERSC_HOST == genepool ]]; then
module unload oracle-jdk
module load oracle-jdk/1.8_144_64bit
module load pigz
export TZ="America/Los_Angeles"
elif [[ $NERSC_HOST == denovo ]]; then
module unload java
module load java/1.8.0_144
module load pigz
export TZ="America/Los_Angeles"
elif [[ $NERSC_HOST == cori ]]; then
module use /global/common/software/m342/nersc-builds/denovo/Modules/jgi
module use /global/common/software/m342/nersc-builds/denovo/Modules/usg
module unload java
module load java/1.8.0_144
module load pigz
fi
local CMD="java $EA $EOOM $z $z2 $JNI -cp $CP jgi.RQCFilter2 jni=t $@"
echo $CMD >&2
eval $CMD
}
rqcfilter "$@" | PypiClean |
/Nproxypool-1.0.2.tar.gz/Nproxypool/nproxypool/base/db.py

import time
from random import choice
from redis import ConnectionPool, StrictRedis
from nproxypool.utils.exceptions import PoolEmptyException
class RedisPoolBase(object):
def __init__(self, **kwargs):
self._kwargs = kwargs
self._redis_uri = "redis://:{password}@{host}:{port}/{db}"
self._client = None
self._get_client()
self._value_pattern = '{};{}'
def __del__(self):
if self._client:
self._client.connection_pool.disconnect()
def _get_client(self):
if not self._kwargs.get('uri'):
password = self._kwargs.get('password', '')
host = self._kwargs.get('host', 'localhost')
port = self._kwargs.get('port', '6379')
db = self._kwargs.get('db', 1)
redis_uri = self._redis_uri.format(
password=password,
host=host,
port=port,
db=db
)
else:
redis_uri = self._kwargs['uri']
pool = ConnectionPool.from_url(redis_uri)
self._client = StrictRedis(connection_pool=pool)
@staticmethod
def _bytes_to_str(_value):
res = _value
if isinstance(_value, bytes):
res = str(_value, encoding="utf-8")
return res
# ====================================
# first-level pool operations
# ====================================
def get_all(self):
return self._client.keys()
def get_one(self, key):
"""
get one proxy from first-level pool
key: proxy
"""
r = self._client.get(key)
return r if not r else self._bytes_to_str(r)
def insert_one(self, key, score=60, expire=600):
"""
insert one proxy to first-level pool
key: proxy
score: initial score
expire: expire time
"""
if not self.get_one(key):
expire_stamp = int(time.time() + expire)
value = self._value_pattern.format(expire_stamp, score)
self._client.set(key, value)
self._client.expireat(key, expire_stamp)
def update_one(self, key, score=60, old_value=''):
"""
update score but never change the expire time.
"""
old_value = old_value or self.get_one(key)
_expire = old_value.split(';')[0]
expire_stamp = int(_expire) if _expire else int(time.time() - 300)
new_value = self._value_pattern.format(expire_stamp, score)
self._client.set(key, new_value)
self._client.expireat(key, expire_stamp)
def delete_one(self, key):
self._client.delete(key)
class RedisPool(RedisPoolBase):
def __init__(self, **kwargs):
super(RedisPool, self).__init__(**kwargs)
# ====================================
# second-level pool(zset) operations
# ====================================
def z_get_all_keys(self):
return self._client.keys()
def z_get_all(self, key):
return self._client.zrangebyscore(key, -60, 100)
def z_get(self, proxy, key):
"""
query score or test whether the proxy is existed in a zset
"""
return self._client.zscore(key, proxy)
def z_add(self, proxy, key, score):
if not self.z_get(proxy, key):
_map = {proxy: score}
self._client.zadd(key, _map)
def z_increase(self, proxy, key):
_map = {proxy: 100}
self._client.zadd(key, _map)
def z_delete(self, proxy, key):
self._client.zrem(key, proxy)
def z_decrease(self, proxy, key, score_threshold=20, to_decr=30):
"""
score_threshold: threshold for proxy delete
to_decr: score to decrease by when a verification failure occurred
"""
score = self.z_get(proxy, key)
if score:
new_score = score - to_decr
if new_score <= score_threshold:
self.z_delete(proxy, key)
else:
self._client.zincrby(
name=key,
value=proxy,
amount=0 - to_decr
)
def z_random_get_one(self, key='baidu'):
res = self._client.zrangebyscore(key, 100, 100)
if not res:
res = self._client.zrangebyscore(key, 85, 100)
if res:
proxy = choice(res)
proxy = self._bytes_to_str(proxy)
self.z_decrease(proxy, key, to_decr=1)
return proxy
else:
raise PoolEmptyException()
def z_get_total_num(self, key, min_score=0, max_score=100, get_list=False):
res = self._client.zrangebyscore(key, min_score, max_score)
if not get_list:
return len(res)
else:
            return [self._bytes_to_str(i) for i in res]
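# Hedged usage sketch; connection details and the 'baidu' zset key are placeholders
# (the key name mirrors the default of z_random_get_one above).
if __name__ == '__main__':
    pool = RedisPool(host='localhost', port=6379, db=1)
    pool.insert_one('127.0.0.1:8080', score=60, expire=600)   # first-level pool entry
    pool.z_add('127.0.0.1:8080', key='baidu', score=100)      # second-level scored set
    print(pool.z_random_get_one(key='baidu'))                 # PoolEmptyException if nothing usable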
/Flask-RQ2-18.3.tar.gz/Flask-RQ2-18.3/CHANGELOG.rst

Changelog
---------
Flask-RQ2 follows the `CalVer <http://calver.org/>`_ version specification
in the form of::

    YY.MINOR[.MICRO]

E.g.::

    16.1.1
The ``MINOR`` number is **not** the month of the year. The ``MICRO`` number
is a patch level for ``YY.MINOR`` releases and must *not* be specified for
initial ``MINOR`` releases such as ``18.0`` or ``19.2``.
.. snip
18.3 (2018-12-20)
~~~~~~~~~~~~~~~~~
- **IMPORTANT!** Requires redis-py >= 3.0 since RQ and rq-scheduler have
switched to that requirement. Please upgrade as soon as possible.
18.2.2 (2018-12-20)
~~~~~~~~~~~~~~~~~~~
- **Last release to support redis-py < 3.0.0!** Fixes version incompatibility
with rq-scheduler. Requires rq-scheduler < 0.9.0.
18.2, 18.2.1 (2018-11-29)
~~~~~~~~~~~~~~~~~~~~~~~~~
- Requires redis-py < 3.0.0 as long as RQ hasn't been made compatible to
that version. Please don't update redis-py to 3.x yet, it will break
using RQ.
More info:
- https://github.com/rq/rq/issues/1014
- https://github.com/rq/Flask-RQ2/issues/75
- Require rq < 0.13.0 to cater to a possible Redis 3.0.0 compatible version.
18.1 (2018-09-19)
~~~~~~~~~~~~~~~~~
- Requires rq >= 0.12.0 and rq-scheduler >= 0.8.3 now.
- Fixes incompatibility with the new rq 0.12.0 release with which the
``flask rq worker`` command would raise an error because of changes
in handling of the ``worker_ttl`` parameter defaults.
- Added support for Python 3.7. Since 'async' is a keyword in Python 3.7,
`RQ(async=True)` has been changed to `RQ(is_async=True)`. The `async`
keyword argument will still work, but raises a `DeprecationWarning`.
- Documentation fixes.
18.0 (2018-03-02)
~~~~~~~~~~~~~~~~~
- The project has been moved to the official RQ GitHub organization!
New URL: https://github.com/rq/flask-rq2
- Stop monkey-patching the scheduler module since rq-scheduler gained the
ability to use custom job classes.
**Requires rq-scheduler 0.8.2 or higher.**
- Adds `depends_on`, `at_front`, `meta` and `description` parameters to job
decorator.
**Requires rq==0.10.0 or higher.**
- Minor fixes for test infrastructure.
17.2 (2017-12-05)
~~~~~~~~~~~~~~~~~
- Allow dynamically setting timeout, result TTL and job TTL and other
parameters when enqueuing, scheduling or adding as a cron job.
17.1 (2017-12-05)
~~~~~~~~~~~~~~~~~
- Require Flask >= 0.10, but it's recommended to use at least 0.11.
- Require rq 0.8.0 or later and rq-scheduler 0.7.0 or later.
- Require setting ``FLASK_APP`` environment variable to load Flask app
during job performing.
- Add ``RQ_SCHEDULER_CLASS``, ``RQ_WORKER_CLASS``, ``RQ_JOB_CLASS`` and
``RQ_QUEUE_CLASS`` as configuration values.
- Add support for rq-scheduler's ``--burst`` option to automatically quit
after all work is done.
- Drop support for Flask-Script in favor of native Flask CLI support
(or via Flask-CLI app for Flask < 0.11).
- Drop support for Python 3.4.
- Allow setting the queue dynamically when enqueuing, scheduling or adding
as a cron job.
- Handle the result_ttl and queue_name job overrides better.
- Actually respect the ``RQ_SCHEDULER_INTERVAL`` config value.
- Move ``flask_rq2.helpers`` module to ``flask_rq2.functions``.
- Use a central Redis client and require app initialization before connecting.
You'll have to run ``RQ.init_app`` **before** you can queue or schedule
a job from now on.
17.0 (2017-02-15)
~~~~~~~~~~~~~~~~~
- Pin the rq version Flask-RQ2 depends on to >=0.6.0,<0.7.0 for now.
A bigger refactor will follow shortly that fixes those problems better.
- Allow overriding the `default_timeout` in case of using the
factory pattern.
- Run tests on Python 3.6.
16.1.1 (2016-09-08)
~~~~~~~~~~~~~~~~~~~
- Fix typos in docs.
16.1 (2016-09-08)
~~~~~~~~~~~~~~~~~
- Official Support for Flask >= 0.11
- Fix import paths to stop using ``flask.ext`` prefix.
16.0.2 (2016-05-20)
~~~~~~~~~~~~~~~~~~~
- Fix package description.
16.0.1 (2016-05-20)
~~~~~~~~~~~~~~~~~~~
- Make wheel file universal.
16.0 (2016-05-20)
~~~~~~~~~~~~~~~~~
- Initial release.
/Lantz-0.3.zip/Lantz-0.3/lantz/processors.py

import warnings
from . import Q_
from .log import LOGGER as _LOG
from stringparser import Parser
class DimensionalityWarning(Warning):
pass
def _do_nothing(value):
return value
def _getitem(a, b):
"""Return a[b] or if not found a[type(b)]
"""
try:
return a[b]
except KeyError:
return a[type(b)]
getitem = _getitem
def convert_to(units, on_dimensionless='warn', on_incompatible='raise',
return_float=False):
"""Return a function that convert a Quantity to to another units.
:param units: string or Quantity specifying the target units
:param on_dimensionless: how to proceed when a dimensionless number
number is given.
'raise' to raise an exception,
'warn' to log a warning and proceed,
'ignore' to silently proceed
:param on_incompatible: how to proceed when source and target units are
incompatible. Same options as `on_dimensionless`
:raises: :class:`ValueError` if the incoming value cannot be
properly converted
>>> convert_to('mV')(Q_(1, 'V'))
<Quantity(1000.0, 'millivolt')>
>>> convert_to('mV', return_float=True)(Q_(1, 'V'))
1000.0
"""
if on_dimensionless not in ('ignore', 'warn', 'raise'):
raise ValueError("{} is not a valid value for 'on_dimensionless'. "
"It should be either 'ignore', 'warn' or 'raise'".format(on_dimensionless))
if on_incompatible not in ('ignore', 'warn', 'raise'):
raise ValueError("{} is not a valid value for 'on_incompatible'. "
"It should be either 'ignore', 'warn' or 'raise'".format(on_dimensionless))
if isinstance(units, str):
units = Q_(1, units)
elif not isinstance(units, Q_):
raise ValueError("{} is not a valid value for 'units'. "
"It should be either str or Quantity")
if return_float:
def _inner(value):
if isinstance(value, Q_):
try:
return value.to(units).magnitude
except ValueError as e:
if on_incompatible == 'raise':
raise ValueError(e)
elif on_incompatible == 'warn':
msg = 'Unable to convert {} to {}. Ignoring source units.'.format(value, units)
warnings.warn(msg, DimensionalityWarning)
_LOG.warn(msg)
# on_incompatible == 'ignore'
return value.magnitude
else:
if not units.dimensionless:
if on_dimensionless == 'raise':
raise ValueError('Unable to convert {} to {}'.format(value, units))
elif on_dimensionless == 'warn':
msg = 'Assuming units `{1.units}` for {0}'.format(value, units)
warnings.warn(msg, DimensionalityWarning)
_LOG.warn(msg)
# on_incompatible == 'ignore'
return float(value)
return _inner
else:
def _inner(value):
if isinstance(value, Q_):
try:
return value.to(units)
except ValueError as e:
if on_incompatible == 'raise':
raise ValueError(e)
elif on_incompatible == 'warn':
msg = 'Assuming units `{1.units}` for {0}'.format(value, units)
warnings.warn(msg, DimensionalityWarning)
_LOG.warn(msg)
# on_incompatible == 'ignore'
return float(value.magnitude) * units
else:
if not units.dimensionless:
if on_dimensionless == 'raise':
raise ValueError('Unable to convert {} to {}'.format(value, units))
elif on_dimensionless == 'warn':
msg = 'Assuming units `{1.units}` for {0}'.format(value, units)
warnings.warn(msg, DimensionalityWarning)
_LOG.warn(msg)
# on_incompatible == 'ignore'
return float(value) * units
return _inner
class Processor(object):
"""Processor to convert the function parameters.
A `callable` argument will be used to convert the corresponding
function argument.
For example, here `x` will be converted to float, before entering
the function body::
>>> conv = Processor(float)
>>> conv
<class 'float'>
>>> conv('10')
10.0
The processor supports multiple argument conversion in a tuple::
>>> conv = Processor((float, str))
>>> type(conv)
<class 'lantz.processors.Processor'>
>>> conv(('10', 10))
(10.0, '10')
"""
def __new__(cls, processors):
if isinstance(processors, (tuple, list)):
if len(processors) > 1:
inst = super().__new__(cls)
inst.processors = tuple(cls._to_callable(processor)
for processor in processors)
return inst
else:
return cls._to_callable(processors[0])
else:
return cls._to_callable(processors)
def __call__(self, values):
return tuple(processor(value)
for processor, value in zip(self.processors, values))
@classmethod
def _to_callable(cls, obj):
if callable(obj):
return obj
if obj is None:
return _do_nothing
return cls.to_callable(obj)
@classmethod
def to_callable(cls, obj):
raise TypeError('Preprocessor argument must callable, not {}'.format(obj))
def __len__(self):
if isinstance(self.processors, tuple):
return len(self.processors)
return 1
class FromQuantityProcessor(Processor):
"""Processor to convert the units the function arguments.
The syntax is equal to `Processor` except that strings are interpreted
as units.
>>> conv = FromQuantityProcessor('ms')
>>> conv(Q_(1, 's'))
1000.0
"""
@classmethod
def to_callable(cls, obj):
if isinstance(obj, (str, Q_)):
return convert_to(obj, return_float=True)
raise TypeError('FromQuantityProcessor argument must be a string '
' or a callable, not {}'.format(obj))
class ToQuantityProcessor(Processor):
"""Decorator to convert the units the function arguments.
The syntax is equal to `Processor` except that strings are interpreted
as units.
>>> conv = ToQuantityProcessor('ms')
>>> conv(Q_(1, 's'))
<Quantity(1000.0, 'millisecond')>
>>> conv(1)
<Quantity(1.0, 'millisecond')>
"""
@classmethod
def to_callable(cls, obj):
if isinstance(obj, (str, Q_)):
return convert_to(obj, on_dimensionless='ignore')
raise TypeError('ToQuantityProcessor argument must be a string '
' or a callable, not {}'.format(obj))
class ParseProcessor(Processor):
"""Processor to convert/parse the function parameters.
The syntax is equal to `Processor` except that strings are interpreted
as a :class:Parser expression.
>>> conv = ParseProcessor('spam {:s} eggs')
>>> conv('spam ham eggs')
'ham'
>>> conv = ParseProcessor(('hi {:d}', 'bye {:s}'))
>>> conv(('hi 42', 'bye Brian'))
(42, 'Brian')
"""
@classmethod
def to_callable(cls, obj):
if isinstance(obj, str):
return Parser(obj)
        raise TypeError('ParseProcessor argument must be a string or a callable, '
'not {}'.format(obj))
class MapProcessor(Processor):
"""Processor to map the function parameter values.
The syntax is equal to `Processor` except that a dict is used as
mapping table.
Examples::
>>> conv = MapProcessor({True: 42})
>>> conv(True)
42
"""
@classmethod
def to_callable(cls, obj):
if isinstance(obj, dict):
return get_mapping(obj)
if isinstance(obj, set):
return check_membership(obj)
raise TypeError('MapProcessor argument must be a dict or a callable, '
'not {}'.format(obj))
class ReverseMapProcessor(Processor):
"""Processor to map the function parameter values.
The syntax is equal to `Processor` except that a dict is used as
mapping table.
Examples::
>>> conv = ReverseMapProcessor({True: 42})
>>> conv(42)
True
"""
#: Shared cache of reversed dictionaries indexed by the id()
__reversed_cache = {}
@classmethod
def to_callable(cls, obj):
if isinstance(obj, dict):
obj = cls.__reversed_cache.setdefault(id(obj),
{value: key for key, value
in obj.items()})
return get_mapping(obj)
if isinstance(obj, set):
return check_membership(obj)
raise TypeError('ReverseMapProcessor argument must be a dict or a callable, '
'not {}'.format(obj))
class RangeProcessor(Processor):
"""Processor to convert the units the function arguments.
The syntax is equal to `Processor` except that iterables are interpreted
as (low, high, step) specified ranges. Step is optional and max is included
>>> conv = RangeProcessor(((1, 2, .5), ))
>>> conv(1.7)
1.5
"""
@classmethod
def to_callable(cls, obj):
if not isinstance(obj, (list, tuple)):
raise TypeError('RangeProcessor argument must be a tuple/list '
'or a callable, not {}'.format(obj))
if not len(obj) in (1, 2, 3):
raise TypeError('RangeProcessor argument must be a tuple/list '
'with 1, 2 or 3 elements ([low,] high[, step]) '
'not {}'.format(len(obj)))
if len(obj) == 1:
return check_range_and_coerce_step(0, *obj)
return check_range_and_coerce_step(*obj)
def check_range_and_coerce_step(low, high, step=None):
"""
:param low:
:param high:
:param step:
:return:
>>> checker = check_range_and_coerce_step(1, 10)
>>> checker(1), checker(5), checker(10)
(1, 5, 10)
>>> checker(11)
Traceback (most recent call last):
...
ValueError: 11 not in range (1, 10)
>>> checker = check_range_and_coerce_step(1, 10, 1)
>>> checker(1), checker(5.4), checker(10)
(1, 5, 10)
"""
def _inner(value):
if not (low <= value <= high):
raise ValueError('{} not in range ({}, {})'.format(value, low, high))
if step:
value = round((value - low) / step) * step + low
return value
return _inner
def check_membership(container):
"""
:param container:
:return:
>>> checker = check_membership((1, 2, 3))
>>> checker(1)
1
>>> checker(0)
Traceback (most recent call last):
...
ValueError: 0 not in (1, 2, 3)
"""
def _inner(value):
if value not in container:
raise ValueError('{!r} not in {}'.format(value, container))
return value
return _inner
def get_mapping(container):
"""
>>> getter = get_mapping({'A': 42, 'B': 43})
>>> getter('A')
42
>>> getter(0)
Traceback (most recent call last):
...
ValueError: 0 not in ('A', 'B')
"""
def _inner(key):
if key not in container:
raise ValueError("{!r} not in {}".format(key, tuple(container.keys())))
return container[key]
    return _inner
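
# Illustrative sketch (not part of the original module): combining the
# processors defined above. Only names already used in the doctests are
# assumed to exist here (the processor classes and the quantity constructor
# ``Q_``); the expected values follow directly from those doctests.
#
#   to_ms = ToQuantityProcessor('ms')
#   from_ms = FromQuantityProcessor('ms')
#   from_ms(to_ms(Q_(2, 's')))         # -> 2000.0 (plain float in milliseconds)
#
#   clamp = RangeProcessor(((0, 10, 2), ))
#   clamp(6.9)                         # -> 6 (coerced to the nearest step)
#   clamp(11)                          # raises ValueError: 11 not in range (0, 10)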
# /Cantonese-1.0.7-py3-none-any.whl/src/can_web_parser.py
import sys
from src.can_lexer import *
class WebParser(object):
def __init__(self, tokens : list, Node : list) -> None:
self.tokens = tokens
self.pos = 0
self.Node = Node
def get(self, offset : int) -> list:
if self.pos + offset >= len(self.tokens):
return ["", ""]
return self.tokens[self.pos + offset]
def match(self, name : str) -> bool:
if self.get(0)[1] == name:
return True
return False
def match_type(self, name : str) -> bool:
if self.get(0)[0] == name:
return True
return False
def check(self, a, b) -> None:
if a == b:
return
raise LookupError("Error Token:" + str(b))
def skip(self, offset) -> None:
self.pos += offset
def run(self, Nodes : list) -> None:
for node in Nodes:
if node[0] == "node_call":
web_call_new(node[1][0], node[1][1], node[2])
if node[0] == "node_css":
style_def(node[1][0], node[1][1], node[1][2])
def parse(self) -> None:
while True:
if self.match("老作一下"):
self.skip(1)
self.check(self.get(0)[1], "{")
self.skip(1)
stmt = []
node_main = []
while self.tokens[self.pos][1] != "}":
stmt.append(self.tokens[self.pos])
self.pos += 1
self.skip(1)
WebParser(stmt, node_main).parse()
self.Node = node_main
self.run(self.Node)
elif self.match_type("id"):
if self.get(1)[0] == "keywords" and self.get(1)[1] == "要点画":
id = self.get(0)[1]
self.skip(2)
style_stmt = []
node_style = []
while self.tokens[self.pos][1] != "搞掂":
style_stmt.append(self.tokens[self.pos])
self.pos += 1
self.skip(1)
self.cantonese_css_parser(style_stmt, id)
else:
name = self.get(0)[1]
self.skip(1)
self.check(self.get(0)[1], "=>")
self.skip(1)
self.check(self.get(0)[1], "[")
self.skip(1)
args = []
while self.tokens[self.pos][1] != "]":
args.append(self.tokens[self.pos][1])
self.pos += 1
self.skip(1)
with_style = False
if self.match('$'): # case 'style_with'
style_id = self.get(1)[1]
self.skip(2)
args.append(style_id)
with_style = True
web_ast_new(self.Node, "node_call", [name, args], with_style)
else:
break
def cantonese_css_parser(self, stmt : list, id : str) -> None:
cssParser(stmt, []).parse(id)
class cssParser(WebParser):
def parse(self, id : str) -> None:
while True:
if self.match_type("id"):
key = self.get(0)[1]
self.skip(1)
self.check(self.get(0)[1], "=>")
self.skip(1)
self.check(self.get(0)[1], "[")
self.skip(1)
value = []
while self.tokens[self.pos][1] != "]":
value.append(self.tokens[self.pos][1])
self.pos += 1
self.skip(1)
web_ast_new(self.Node, "node_css", [id, key, value])
else:
break
self.run(self.Node)
def web_ast_new(Node : list, type : str, ctx : list, with_style = True) -> None:
Node.append([type, ctx, with_style])
def get_str(s : str) -> str:
return eval("str(" + s + ")")
sym = {}
style_attr = {}
style_value_attr = {}
TO_HTML = "<html>\n"
def title(args : list, with_style : bool) -> None:
global TO_HTML
if len(args) == 1:
t_beg, t_end = "<title>", "</title>\n"
TO_HTML += t_beg + get_str(args[0]) + t_end
if len(args) >= 2:
style = args.pop() if with_style else ""
t_beg, t_end = "<title id = \"" + style + "\">", "</title>\n"
TO_HTML += t_beg + get_str(args[0]) + t_end
def h(args : list, with_style : bool) -> None:
global TO_HTML
if len(args) == 1:
h_beg, h_end = "<h1>", "</h1>\n"
TO_HTML += h_beg + get_str(args[0]) + h_end
if len(args) >= 2:
style = args.pop() if with_style else ""
size = "" if len(args) == 1 else args[1]
t_beg, t_end = "<h" + size + " id = \"" + style + "\">", "</h" + size + ">\n"
TO_HTML += t_beg + get_str(args[0]) + t_end
def img(args : list, with_style : bool) -> None:
global TO_HTML
if len(args) == 1:
i_beg, i_end = "<img src = ", ">\n"
TO_HTML += i_beg + get_str(args[0]) + i_end
if len(args) >= 2:
style = args.pop() if with_style else ""
i_beg, i_end = "<img id = \"" + style + "\" src = ", ">\n"
TO_HTML += i_beg + get_str(args[0]) + i_end
def audio(args : list, with_style : bool) -> None:
global TO_HTML
if len(args) == 1:
a_beg, a_end = "<audio src = ", "</audio>\n"
TO_HTML += a_beg + get_str(args[0]) + a_end
def sym_init() -> None:
global sym
global style_attr
sym['打标题'] = title
sym['拎笔'] = h
sym['睇下'] = img
sym['Music'] = audio
#sym['画格仔'] = table
style_attr['要咩色'] = "color"
style_attr['要咩背景颜色'] = "background-color"
style_attr['要点对齐'] = "text-align"
style_attr['要几高'] = 'height'
style_attr['要几阔'] = 'width'
style_value_attr['红色'] = "red"
style_value_attr['黄色'] = "yellow"
style_value_attr['白色'] = "white"
style_value_attr['黑色'] = "black"
style_value_attr['绿色'] = "green"
style_value_attr['蓝色'] = "blue"
    style_value_attr['居中'] = "center"
def head_init() -> None:
global TO_HTML
TO_HTML += "<head>\n"
TO_HTML += "<meta charset=\"utf-8\" />\n"
TO_HTML += "</head>\n"
def web_init() -> None:
global TO_HTML
sym_init()
head_init()
def web_end() -> None:
global TO_HTML
TO_HTML += "</html>"
style_sym = {}
def style_def(id : str, key : str, value : list) -> None:
global style_sym
if id not in style_sym:
style_sym[id] = [[key, value]]
return
style_sym[id].append([key, value])
def style_build(value : list) -> None:
s = ""
for item in value:
if get_str(item[1][0]) not in style_value_attr.keys() and item[0] in style_attr.keys():
s += style_attr[item[0]] + " : " + get_str(item[1][0]) + ";\n"
elif get_str(item[1][0]) not in style_value_attr.keys() and item[0] not in style_attr.keys():
s += item[0] + " : " + get_str(item[1][0]) + ";\n"
elif get_str(item[1][0]) in style_value_attr.keys() and item[0] not in style_attr.keys():
s += item[0] + " : " + style_value_attr[get_str(item[1][0])] + ";\n"
else:
s += style_attr[item[0]] + " : " + style_value_attr[get_str(item[1][0])] + ";\n"
return s
def style_exec(sym : dict) -> None:
global TO_HTML
gen = ""
s_beg, s_end = "\n<style type=\"text/css\">\n", "</style>\n"
for key, value in sym.items():
gen += "#" + key + "{\n" + style_build(value) + "}\n"
TO_HTML += s_beg + gen + s_end
def web_call_new(func : str, args_list : list, with_style = False) -> None:
if func in sym:
sym[func](args_list, with_style)
else:
func(args_list, with_style)
def get_html_file(name : str) -> str:
return name[ : len(name) - len('cantonese')] + 'html'
class WebLexer(lexer):
def __init__(self, code, keywords):
super().__init__(code, keywords)
self.re_callfunc, self.re_expr, self.op,\
self.op_get_code, self.op_gen_code, \
self.build_in_funcs, self.bif_get_code, \
self.bif_gen_code = "", "", "", "", "", "", "", ""
def get_token(self):
self.skip_space()
if len(self.code) == 0:
return ['EOF', 'EOF']
c = self.code[0]
if self.isChinese(c) or c == '_' or c.isalpha():
token = self.scan_identifier()
if token in self.keywords:
return ['keywords', token]
return ['id', token]
if c == '=':
if self.check("=>"):
self.next(2)
return ['keywords', "=>"]
if c in ('\'', '"'):
return ['string', self.scan_short_string()]
if c == '.' or c.isdigit():
token = self.scan_number()
return ['num', token]
if c == '{':
self.next(1)
return ["keywords", c]
if c == '}':
self.next(1)
return ["keywords", c]
if c == '[':
self.next(1)
return ["keywords", c]
if c == ']':
self.next(1)
return ["keywords", c]
if c == '$':
self.next(1)
return ["keywords", c]
if c == '(':
self.next(1)
return ["keywords", c]
if c == ')':
self.next(1)
return ["keywords", c]
self.error("睇唔明嘅Token: " + c)
def cantonese_web_run(code : str, file_name : str, open_serv = True) -> None:
global TO_HTML
keywords = ("老作一下", "要点画", "搞掂", "执嘢")
lex = WebLexer(code, keywords)
tokens = []
while True:
token = lex.get_token()
tokens.append(token)
if token == ['EOF', 'EOF']:
break
web_init()
WebParser(tokens, []).parse()
web_end()
if style_sym != {}:
style_exec(style_sym)
print(TO_HTML)
if open_serv:
import socket
ip_port = ('127.0.0.1', 80)
back_log = 10
buffer_size = 1024
webserver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
webserver.bind(ip_port)
webserver.listen(back_log)
print("Cantonese Web Starting at 127.0.0.1:80 ...")
while True:
conn, addr = webserver.accept()
recvdata = conn.recv(buffer_size)
conn.sendall(bytes("HTTP/1.1 201 OK\r\n\r\n", "utf-8"))
conn.sendall(bytes(TO_HTML, "utf-8"))
conn.close()
if input("input Y to exit:"):
print("Cantonese Web exiting...")
break
else:
f = open(get_html_file(file_name), 'w', encoding = 'utf-8')
f.write(TO_HTML)
        sys.exit(0)
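
# Illustrative sketch (not part of the original module): a source snippet in the
# form that ``WebParser``/``WebLexer`` above appear to accept. The surface syntax
# is inferred from the parser, so the real Cantonese language may differ; the
# element names (打标题, 拎笔, 要咩色, ...) come from ``sym_init``/``style_attr``
# defined in this file.
#
#   mystyle 要点画
#       要咩色 => ["红色"]
#   搞掂
#   老作一下 {
#       打标题 => ["Demo"]
#       拎笔 => ["Hello World"] $mystyle
#   }
#
# Passing such code to ``cantonese_web_run(code, "demo.cantonese", open_serv=False)``
# generates the HTML and writes it to ``demo.html``.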
# /DI_engine-0.4.9-py3-none-any.whl/ding/envs/env_manager/subprocess_env_manager.py
from typing import Any, Union, List, Tuple, Dict, Callable, Optional
from multiprocessing import connection, get_context
from collections import namedtuple
from ditk import logging
import platform
import time
import copy
import gymnasium
import gym
import traceback
import torch
import pickle
import numpy as np
import treetensor.numpy as tnp
from easydict import EasyDict
from types import MethodType
from ding.data import ShmBufferContainer, ShmBuffer
from ding.envs.env import BaseEnvTimestep
from ding.utils import PropagatingThread, LockContextType, LockContext, ENV_MANAGER_REGISTRY, make_key_as_identifier, \
remove_illegal_item, CloudPickleWrapper
from .base_env_manager import BaseEnvManager, EnvState, timeout_wrapper
def is_abnormal_timestep(timestep: namedtuple) -> bool:
if isinstance(timestep.info, dict):
return timestep.info.get('abnormal', False)
elif isinstance(timestep.info, list) or isinstance(timestep.info, tuple):
return timestep.info[0].get('abnormal', False) or timestep.info[1].get('abnormal', False)
else:
raise TypeError("invalid env timestep type: {}".format(type(timestep.info)))
@ENV_MANAGER_REGISTRY.register('async_subprocess')
class AsyncSubprocessEnvManager(BaseEnvManager):
"""
Overview:
Create an AsyncSubprocessEnvManager to manage multiple environments.
Each Environment is run by a respective subprocess.
Interfaces:
seed, launch, ready_obs, step, reset, active_env
"""
config = dict(
episode_num=float("inf"),
max_retry=5,
step_timeout=None,
auto_reset=True,
retry_type='reset',
reset_timeout=None,
retry_waiting_time=0.1,
# subprocess specified args
shared_memory=True,
copy_on_get=True,
context='spawn' if platform.system().lower() == 'windows' else 'fork',
wait_num=2,
step_wait_timeout=0.01,
connect_timeout=60,
reset_inplace=False,
)
def __init__(
self,
env_fn: List[Callable],
cfg: EasyDict = EasyDict({}),
) -> None:
"""
Overview:
Initialize the AsyncSubprocessEnvManager.
Arguments:
- env_fn (:obj:`List[Callable]`): The function to create environment
- cfg (:obj:`EasyDict`): Config
.. note::
            - wait_num: the minimum number of env returns to gather each time
            - step_wait_timeout: the timeout (in seconds) for each such gathering of env returns
"""
super().__init__(env_fn, cfg)
self._shared_memory = self._cfg.shared_memory
self._copy_on_get = self._cfg.copy_on_get
self._context = self._cfg.context
self._wait_num = self._cfg.wait_num
self._step_wait_timeout = self._cfg.step_wait_timeout
self._lock = LockContext(LockContextType.THREAD_LOCK)
self._connect_timeout = self._cfg.connect_timeout
self._async_args = {
'step': {
'wait_num': min(self._wait_num, self._env_num),
'timeout': self._step_wait_timeout
}
}
self._reset_inplace = self._cfg.reset_inplace
if not self._auto_reset:
assert not self._reset_inplace, "reset_inplace is unavailable when auto_reset=False."
def _create_state(self) -> None:
r"""
Overview:
Fork/spawn sub-processes(Call ``_create_env_subprocess``) and create pipes to transfer the data.
"""
self._env_episode_count = {env_id: 0 for env_id in range(self.env_num)}
self._ready_obs = {env_id: None for env_id in range(self.env_num)}
self._reset_param = {i: {} for i in range(self.env_num)}
if self._shared_memory:
obs_space = self._observation_space
if isinstance(obs_space, (gym.spaces.Dict, gymnasium.spaces.Dict)):
# For multi_agent case, such as multiagent_mujoco and petting_zoo mpe.
# Now only for the case that each agent in the team have the same obs structure
# and corresponding shape.
shape = {k: v.shape for k, v in obs_space.spaces.items()}
dtype = {k: v.dtype for k, v in obs_space.spaces.items()}
else:
shape = obs_space.shape
dtype = obs_space.dtype
self._obs_buffers = {
env_id: ShmBufferContainer(dtype, shape, copy_on_get=self._copy_on_get)
for env_id in range(self.env_num)
}
else:
self._obs_buffers = {env_id: None for env_id in range(self.env_num)}
self._pipe_parents, self._pipe_children = {}, {}
self._subprocesses = {}
for env_id in range(self.env_num):
self._create_env_subprocess(env_id)
self._waiting_env = {'step': set()}
self._closed = False
def _create_env_subprocess(self, env_id):
# start a new one
ctx = get_context(self._context)
self._pipe_parents[env_id], self._pipe_children[env_id] = ctx.Pipe()
self._subprocesses[env_id] = ctx.Process(
# target=self.worker_fn,
target=self.worker_fn_robust,
args=(
self._pipe_parents[env_id],
self._pipe_children[env_id],
CloudPickleWrapper(self._env_fn[env_id]),
self._obs_buffers[env_id],
self.method_name_list,
self._reset_timeout,
self._step_timeout,
self._reset_inplace,
),
daemon=True,
name='subprocess_env_manager{}_{}'.format(env_id, time.time())
)
self._subprocesses[env_id].start()
self._pipe_children[env_id].close()
self._env_states[env_id] = EnvState.INIT
if self._env_replay_path is not None:
self._pipe_parents[env_id].send(['enable_save_replay', [self._env_replay_path[env_id]], {}])
self._pipe_parents[env_id].recv()
@property
def ready_env(self) -> List[int]:
active_env = [i for i, s in self._env_states.items() if s == EnvState.RUN]
return [i for i in active_env if i not in self._waiting_env['step']]
@property
def ready_obs(self) -> Dict[int, Any]:
"""
Overview:
Get the next observations.
Return:
A dictionary with observations and their environment IDs.
Note:
The observations are returned in np.ndarray.
Example:
>>> obs_dict = env_manager.ready_obs
            >>> actions_dict = {env_id: model.forward(obs) for env_id, obs in obs_dict.items()}
"""
no_done_env_idx = [i for i, s in self._env_states.items() if s != EnvState.DONE]
sleep_count = 0
while not any([self._env_states[i] == EnvState.RUN for i in no_done_env_idx]):
if sleep_count != 0 and sleep_count % 10000 == 0:
logging.warning(
'VEC_ENV_MANAGER: all the not done envs are resetting, sleep {} times'.format(sleep_count)
)
time.sleep(0.001)
sleep_count += 1
return {i: self._ready_obs[i] for i in self.ready_env}
@property
def ready_imgs(self, render_mode: Optional[str] = 'rgb_array') -> Dict[int, Any]:
"""
Overview:
            Get the next rendered frames.
Return:
A dictionary with rendered frames and their environment IDs.
Note:
The rendered frames are returned in np.ndarray.
"""
for i in self.ready_env:
self._pipe_parents[i].send(['render', None, {'render_mode': render_mode}])
data = {i: self._pipe_parents[i].recv() for i in self.ready_env}
self._check_data(data)
return data
def launch(self, reset_param: Optional[Dict] = None) -> None:
"""
Overview:
Set up the environments and their parameters.
Arguments:
- reset_param (:obj:`Optional[Dict]`): Dict of reset parameters for each environment, key is the env_id, \
                value is the corresponding reset parameters.
"""
assert self._closed, "please first close the env manager"
if reset_param is not None:
assert len(reset_param) == len(self._env_fn)
self._create_state()
self.reset(reset_param)
def reset(self, reset_param: Optional[Dict] = None) -> None:
"""
Overview:
            Reset the environments and their parameters.
        Arguments:
            - reset_param (:obj:`Optional[Dict]`): Dict of reset parameters for each environment, key is the env_id, \
                value is the corresponding reset parameters.
"""
self._check_closed()
if reset_param is None:
reset_env_list = [env_id for env_id in range(self._env_num)]
else:
reset_env_list = reset_param.keys()
for env_id in reset_param:
self._reset_param[env_id] = reset_param[env_id]
# clear previous info
for env_id in reset_env_list:
if env_id in self._waiting_env['step']:
self._pipe_parents[env_id].recv()
self._waiting_env['step'].remove(env_id)
sleep_count = 0
while any([self._env_states[i] == EnvState.RESET for i in reset_env_list]):
if sleep_count != 0 and sleep_count % 10000 == 0:
logging.warning(
'VEC_ENV_MANAGER: not all the envs finish resetting, sleep {} times'.format(sleep_count)
)
time.sleep(0.001)
sleep_count += 1
# reset env
reset_thread_list = []
for i, env_id in enumerate(reset_env_list):
# set seed
if self._env_seed[env_id] is not None:
try:
if self._env_dynamic_seed is not None:
self._pipe_parents[env_id].send(['seed', [self._env_seed[env_id], self._env_dynamic_seed], {}])
else:
self._pipe_parents[env_id].send(['seed', [self._env_seed[env_id]], {}])
ret = self._pipe_parents[env_id].recv()
self._check_data({env_id: ret})
self._env_seed[env_id] = None # seed only use once
except BaseException as e:
logging.warning(
"subprocess reset set seed failed, ignore and continue... \n subprocess exception traceback: \n"
+ traceback.format_exc()
)
self._env_states[env_id] = EnvState.RESET
reset_thread = PropagatingThread(target=self._reset, args=(env_id, ))
reset_thread.daemon = True
reset_thread_list.append(reset_thread)
for t in reset_thread_list:
t.start()
for t in reset_thread_list:
t.join()
def _reset(self, env_id: int) -> None:
def reset_fn():
if self._pipe_parents[env_id].poll():
recv_data = self._pipe_parents[env_id].recv()
raise RuntimeError("unread data left before sending to the pipe: {}".format(repr(recv_data)))
# if self._reset_param[env_id] is None, just reset specific env, not pass reset param
if self._reset_param[env_id] is not None:
assert isinstance(self._reset_param[env_id], dict), type(self._reset_param[env_id])
self._pipe_parents[env_id].send(['reset', [], self._reset_param[env_id]])
else:
self._pipe_parents[env_id].send(['reset', [], None])
if not self._pipe_parents[env_id].poll(self._connect_timeout):
raise ConnectionError("env reset connection timeout") # Leave it to try again
obs = self._pipe_parents[env_id].recv()
self._check_data({env_id: obs}, close=False)
if self._shared_memory:
obs = self._obs_buffers[env_id].get()
# it is necessary to add lock for the updates of env_state
with self._lock:
self._env_states[env_id] = EnvState.RUN
self._ready_obs[env_id] = obs
exceptions = []
for _ in range(self._max_retry):
try:
reset_fn()
return
except BaseException as e:
logging.info("subprocess exception traceback: \n" + traceback.format_exc())
if self._retry_type == 'renew' or isinstance(e, pickle.UnpicklingError):
self._pipe_parents[env_id].close()
if self._subprocesses[env_id].is_alive():
self._subprocesses[env_id].terminate()
self._create_env_subprocess(env_id)
exceptions.append(e)
time.sleep(self._retry_waiting_time)
logging.error("Env {} reset has exceeded max retries({})".format(env_id, self._max_retry))
runtime_error = RuntimeError(
"Env {} reset has exceeded max retries({}), and the latest exception is: {}".format(
env_id, self._max_retry, str(exceptions[-1])
)
)
runtime_error.__traceback__ = exceptions[-1].__traceback__
        if self._closed: # exception caused by main thread closing parent_remote
return
else:
self.close()
raise runtime_error
def step(self, actions: Dict[int, Any]) -> Dict[int, namedtuple]:
"""
Overview:
Step all environments. Reset an env if done.
Arguments:
- actions (:obj:`Dict[int, Any]`): {env_id: action}
Returns:
- timesteps (:obj:`Dict[int, namedtuple]`): {env_id: timestep}. Timestep is a \
``BaseEnvTimestep`` tuple with observation, reward, done, env_info.
Example:
            >>> actions_dict = {env_id: model.forward(obs) for env_id, obs in obs_dict.items()}
            >>> timesteps = env_manager.step(actions_dict)
>>> for env_id, timestep in timesteps.items():
>>> pass
        .. note::
- The env_id that appears in ``actions`` will also be returned in ``timesteps``.
- Each environment is run by a subprocess separately. Once an environment is done, it is reset immediately.
- Async subprocess env manager use ``connection.wait`` to poll.
"""
self._check_closed()
env_ids = list(actions.keys())
assert all([self._env_states[env_id] == EnvState.RUN for env_id in env_ids]
), 'current env state are: {}, please check whether the requested env is in reset or done'.format(
{env_id: self._env_states[env_id]
for env_id in env_ids}
)
for env_id, act in actions.items():
self._pipe_parents[env_id].send(['step', [act], None])
timesteps = {}
step_args = self._async_args['step']
wait_num, timeout = min(step_args['wait_num'], len(env_ids)), step_args['timeout']
rest_env_ids = list(set(env_ids).union(self._waiting_env['step']))
ready_env_ids = []
cur_rest_env_ids = copy.deepcopy(rest_env_ids)
while True:
rest_conn = [self._pipe_parents[env_id] for env_id in cur_rest_env_ids]
ready_conn, ready_ids = AsyncSubprocessEnvManager.wait(rest_conn, min(wait_num, len(rest_conn)), timeout)
cur_ready_env_ids = [cur_rest_env_ids[env_id] for env_id in ready_ids]
assert len(cur_ready_env_ids) == len(ready_conn)
# timesteps.update({env_id: p.recv() for env_id, p in zip(cur_ready_env_ids, ready_conn)})
for env_id, p in zip(cur_ready_env_ids, ready_conn):
try:
timesteps.update({env_id: p.recv()})
except pickle.UnpicklingError as e:
timestep = BaseEnvTimestep(None, None, None, {'abnormal': True})
timesteps.update({env_id: timestep})
self._pipe_parents[env_id].close()
if self._subprocesses[env_id].is_alive():
self._subprocesses[env_id].terminate()
self._create_env_subprocess(env_id)
self._check_data(timesteps)
ready_env_ids += cur_ready_env_ids
cur_rest_env_ids = list(set(cur_rest_env_ids).difference(set(cur_ready_env_ids)))
# At least one not done env timestep, or all envs' steps are finished
if any([not t.done for t in timesteps.values()]) or len(ready_conn) == len(rest_conn):
break
self._waiting_env['step']: set
for env_id in rest_env_ids:
if env_id in ready_env_ids:
if env_id in self._waiting_env['step']:
self._waiting_env['step'].remove(env_id)
else:
self._waiting_env['step'].add(env_id)
if self._shared_memory:
for i, (env_id, timestep) in enumerate(timesteps.items()):
timesteps[env_id] = timestep._replace(obs=self._obs_buffers[env_id].get())
for env_id, timestep in timesteps.items():
if is_abnormal_timestep(timestep):
self._env_states[env_id] = EnvState.ERROR
continue
if timestep.done:
self._env_episode_count[env_id] += 1
if self._env_episode_count[env_id] < self._episode_num:
if self._auto_reset:
if self._reset_inplace: # reset in subprocess at once
self._env_states[env_id] = EnvState.RUN
self._ready_obs[env_id] = timestep.obs
else:
# in this case, ready_obs is updated in ``self._reset``
self._env_states[env_id] = EnvState.RESET
reset_thread = PropagatingThread(target=self._reset, args=(env_id, ), name='regular_reset')
reset_thread.daemon = True
reset_thread.start()
else:
# in the case that auto_reset=False, caller should call ``env_manager.reset`` manually
self._env_states[env_id] = EnvState.NEED_RESET
else:
self._env_states[env_id] = EnvState.DONE
else:
self._ready_obs[env_id] = timestep.obs
return timesteps
# This method must be staticmethod, otherwise there will be some resource conflicts(e.g. port or file)
# Env must be created in worker, which is a trick of avoiding env pickle errors.
# A more robust version is used by default. But this one is also preserved.
@staticmethod
def worker_fn(
p: connection.Connection,
c: connection.Connection,
env_fn_wrapper: 'CloudPickleWrapper',
obs_buffer: ShmBuffer,
method_name_list: list,
reset_inplace: bool = False,
) -> None: # noqa
"""
Overview:
Subprocess's target function to run.
"""
torch.set_num_threads(1)
env_fn = env_fn_wrapper.data
env = env_fn()
p.close()
try:
while True:
try:
cmd, args, kwargs = c.recv()
except EOFError: # for the case when the pipe has been closed
c.close()
break
try:
if cmd == 'getattr':
ret = getattr(env, args[0])
elif cmd in method_name_list:
if cmd == 'step':
timestep = env.step(*args, **kwargs)
if is_abnormal_timestep(timestep):
ret = timestep
else:
if reset_inplace and timestep.done:
obs = env.reset()
timestep = timestep._replace(obs=obs)
if obs_buffer is not None:
obs_buffer.fill(timestep.obs)
timestep = timestep._replace(obs=None)
ret = timestep
elif cmd == 'reset':
ret = env.reset(*args, **kwargs) # obs
if obs_buffer is not None:
obs_buffer.fill(ret)
ret = None
elif args is None and kwargs is None:
ret = getattr(env, cmd)()
else:
ret = getattr(env, cmd)(*args, **kwargs)
else:
raise KeyError("not support env cmd: {}".format(cmd))
c.send(ret)
except Exception as e:
# when there are some errors in env, worker_fn will send the errors to env manager
# directly send error to another process will lose the stack trace, so we create a new Exception
logging.warning("subprocess exception traceback: \n" + traceback.format_exc())
c.send(
e.__class__(
'\nEnv Process Exception:\n' + ''.join(traceback.format_tb(e.__traceback__)) + repr(e)
)
)
if cmd == 'close':
c.close()
break
except KeyboardInterrupt:
c.close()
@staticmethod
def worker_fn_robust(
parent,
child,
env_fn_wrapper,
obs_buffer,
method_name_list,
reset_timeout=None,
step_timeout=None,
reset_inplace=False,
) -> None:
"""
Overview:
A more robust version of subprocess's target function to run. Used by default.
"""
torch.set_num_threads(1)
env_fn = env_fn_wrapper.data
env = env_fn()
parent.close()
@timeout_wrapper(timeout=step_timeout)
def step_fn(*args, **kwargs):
timestep = env.step(*args, **kwargs)
if is_abnormal_timestep(timestep):
ret = timestep
else:
if reset_inplace and timestep.done:
obs = env.reset()
timestep = timestep._replace(obs=obs)
if obs_buffer is not None:
obs_buffer.fill(timestep.obs)
timestep = timestep._replace(obs=None)
ret = timestep
return ret
@timeout_wrapper(timeout=reset_timeout)
def reset_fn(*args, **kwargs):
try:
ret = env.reset(*args, **kwargs)
if obs_buffer is not None:
obs_buffer.fill(ret)
ret = None
return ret
except BaseException as e:
logging.warning("subprocess exception traceback: \n" + traceback.format_exc())
env.close()
raise e
while True:
try:
cmd, args, kwargs = child.recv()
except EOFError: # for the case when the pipe has been closed
child.close()
break
try:
if cmd == 'getattr':
ret = getattr(env, args[0])
elif cmd in method_name_list:
if cmd == 'step':
ret = step_fn(*args)
elif cmd == 'reset':
if kwargs is None:
kwargs = {}
ret = reset_fn(*args, **kwargs)
elif cmd == 'render':
from ding.utils import render
ret = render(env, **kwargs)
elif args is None and kwargs is None:
ret = getattr(env, cmd)()
else:
ret = getattr(env, cmd)(*args, **kwargs)
else:
raise KeyError("not support env cmd: {}".format(cmd))
child.send(ret)
except BaseException as e:
logging.debug("Sub env '{}' error when executing {}".format(str(env), cmd))
# when there are some errors in env, worker_fn will send the errors to env manager
# directly send error to another process will lose the stack trace, so we create a new Exception
logging.warning("subprocess exception traceback: \n" + traceback.format_exc())
child.send(
e.__class__('\nEnv Process Exception:\n' + ''.join(traceback.format_tb(e.__traceback__)) + repr(e))
)
if cmd == 'close':
child.close()
break
def _check_data(self, data: Dict, close: bool = True) -> None:
exceptions = []
for i, d in data.items():
if isinstance(d, BaseException):
self._env_states[i] = EnvState.ERROR
exceptions.append(d)
# when receiving env Exception, env manager will safely close and raise this Exception to caller
if len(exceptions) > 0:
if close:
self.close()
raise exceptions[0]
# override
def __getattr__(self, key: str) -> Any:
self._check_closed()
# we suppose that all the envs has the same attributes, if you need different envs, please
# create different env managers.
if not hasattr(self._env_ref, key):
raise AttributeError("env `{}` doesn't have the attribute `{}`".format(type(self._env_ref), key))
if isinstance(getattr(self._env_ref, key), MethodType) and key not in self.method_name_list:
raise RuntimeError("env getattr doesn't supports method({}), please override method_name_list".format(key))
for _, p in self._pipe_parents.items():
p.send(['getattr', [key], {}])
data = {i: p.recv() for i, p in self._pipe_parents.items()}
self._check_data(data)
ret = [data[i] for i in self._pipe_parents.keys()]
return ret
# override
def enable_save_replay(self, replay_path: Union[List[str], str]) -> None:
"""
Overview:
Set each env's replay save path.
Arguments:
- replay_path (:obj:`Union[List[str], str]`): List of paths for each environment; \
Or one path for all environments.
"""
if isinstance(replay_path, str):
replay_path = [replay_path] * self.env_num
self._env_replay_path = replay_path
# override
def close(self) -> None:
"""
Overview:
            Close the env manager and release all related resources.
"""
if self._closed:
return
self._closed = True
for _, p in self._pipe_parents.items():
p.send(['close', None, None])
for env_id, p in self._pipe_parents.items():
if not p.poll(5):
continue
p.recv()
for i in range(self._env_num):
self._env_states[i] = EnvState.VOID
# disable process join for avoiding hang
# for p in self._subprocesses:
# p.join()
for _, p in self._subprocesses.items():
p.terminate()
for _, p in self._pipe_parents.items():
p.close()
@staticmethod
def wait(rest_conn: list, wait_num: int, timeout: Optional[float] = None) -> Tuple[list, list]:
"""
Overview:
            Wait for at least ``wait_num`` connections (``len(ready_conn) >= wait_num``) to become ready within
            the timeout constraint.
            If ``timeout`` is None and ``wait_num`` equals the number of connections, this is the synchronous mode;
            if ``timeout`` is not None, the method returns once ``len(ready_conn) >= wait_num`` and more than
            ``timeout`` seconds have elapsed.
"""
assert 1 <= wait_num <= len(rest_conn
), 'please indicate proper wait_num: <wait_num: {}, rest_conn_num: {}>'.format(
wait_num, len(rest_conn)
)
rest_conn_set = set(rest_conn)
ready_conn = set()
start_time = time.time()
while len(rest_conn_set) > 0:
if len(ready_conn) >= wait_num and timeout:
if (time.time() - start_time) >= timeout:
break
finish_conn = set(connection.wait(rest_conn_set, timeout=timeout))
ready_conn = ready_conn.union(finish_conn)
rest_conn_set = rest_conn_set.difference(finish_conn)
ready_ids = [rest_conn.index(c) for c in ready_conn]
return list(ready_conn), ready_ids
@ENV_MANAGER_REGISTRY.register('subprocess')
class SyncSubprocessEnvManager(AsyncSubprocessEnvManager):
config = dict(
episode_num=float("inf"),
max_retry=5,
step_timeout=None,
auto_reset=True,
reset_timeout=None,
retry_type='reset',
retry_waiting_time=0.1,
# subprocess specified args
shared_memory=True,
copy_on_get=True,
context='spawn' if platform.system().lower() == 'windows' else 'fork',
wait_num=float("inf"), # inf mean all the environments
step_wait_timeout=None,
connect_timeout=60,
reset_inplace=False, # if reset_inplace=True in SyncSubprocessEnvManager, the interaction can be reproducible.
)
def step(self, actions: Dict[int, Any]) -> Dict[int, namedtuple]:
"""
Overview:
Step all environments. Reset an env if done.
Arguments:
- actions (:obj:`Dict[int, Any]`): {env_id: action}
Returns:
- timesteps (:obj:`Dict[int, namedtuple]`): {env_id: timestep}. Timestep is a \
``BaseEnvTimestep`` tuple with observation, reward, done, env_info.
Example:
            >>> actions_dict = {env_id: model.forward(obs) for env_id, obs in obs_dict.items()}
            >>> timesteps = env_manager.step(actions_dict)
>>> for env_id, timestep in timesteps.items():
>>> pass
.. note::
- The env_id that appears in ``actions`` will also be returned in ``timesteps``.
- Each environment is run by a subprocess separately. Once an environment is done, it is reset immediately.
"""
self._check_closed()
env_ids = list(actions.keys())
assert all([self._env_states[env_id] == EnvState.RUN for env_id in env_ids]
), 'current env state are: {}, please check whether the requested env is in reset or done'.format(
{env_id: self._env_states[env_id]
for env_id in env_ids}
)
for env_id, act in actions.items():
# it is necessary to set kwargs as None for saving cost of serialization in some env like cartpole,
# and step method never uses kwargs in known envs.
self._pipe_parents[env_id].send(['step', [act], None])
# === This part is different from async one. ===
# === Because operate in this way is more efficient. ===
timesteps = {}
ready_conn = [self._pipe_parents[env_id] for env_id in env_ids]
# timesteps.update({env_id: p.recv() for env_id, p in zip(env_ids, ready_conn)})
for env_id, p in zip(env_ids, ready_conn):
try:
timesteps.update({env_id: p.recv()})
except pickle.UnpicklingError as e:
timestep = BaseEnvTimestep(None, None, None, {'abnormal': True})
timesteps.update({env_id: timestep})
self._pipe_parents[env_id].close()
if self._subprocesses[env_id].is_alive():
self._subprocesses[env_id].terminate()
self._create_env_subprocess(env_id)
self._check_data(timesteps)
# ======================================================
if self._shared_memory:
# TODO(nyz) optimize sync shm
for i, (env_id, timestep) in enumerate(timesteps.items()):
timesteps[env_id] = timestep._replace(obs=self._obs_buffers[env_id].get())
for env_id, timestep in timesteps.items():
if is_abnormal_timestep(timestep):
self._env_states[env_id] = EnvState.ERROR
continue
if timestep.done:
self._env_episode_count[env_id] += 1
if self._env_episode_count[env_id] < self._episode_num:
if self._auto_reset:
if self._reset_inplace: # reset in subprocess at once
self._env_states[env_id] = EnvState.RUN
self._ready_obs[env_id] = timestep.obs
else:
# in this case, ready_obs is updated in ``self._reset``
self._env_states[env_id] = EnvState.RESET
reset_thread = PropagatingThread(target=self._reset, args=(env_id, ), name='regular_reset')
reset_thread.daemon = True
reset_thread.start()
else:
# in the case that auto_reset=False, caller should call ``env_manager.reset`` manually
self._env_states[env_id] = EnvState.NEED_RESET
else:
self._env_states[env_id] = EnvState.DONE
else:
self._ready_obs[env_id] = timestep.obs
return timesteps
@ENV_MANAGER_REGISTRY.register('subprocess_v2')
class SubprocessEnvManagerV2(SyncSubprocessEnvManager):
"""
Overview:
SyncSubprocessEnvManager for new task pipeline and interfaces coupled with treetensor.
"""
@property
def ready_obs(self) -> tnp.array:
"""
Overview:
Get the ready (next) observation in ``tnp.array`` type, which is uniform for both async/sync scenarios.
Return:
- ready_obs (:obj:`tnp.array`): A stacked treenumpy-type observation data.
Example:
>>> obs = env_manager.ready_obs
>>> action = model(obs) # model input np obs and output np action
>>> timesteps = env_manager.step(action)
"""
no_done_env_idx = [i for i, s in self._env_states.items() if s != EnvState.DONE]
sleep_count = 0
while not any([self._env_states[i] == EnvState.RUN for i in no_done_env_idx]):
if sleep_count != 0 and sleep_count % 10000 == 0:
logging.warning(
'VEC_ENV_MANAGER: all the not done envs are resetting, sleep {} times'.format(sleep_count)
)
time.sleep(0.001)
sleep_count += 1
return tnp.stack([tnp.array(self._ready_obs[i]) for i in self.ready_env])
def step(self, actions: Union[List[tnp.ndarray], tnp.ndarray]) -> List[tnp.ndarray]:
"""
Overview:
Execute env step according to input actions. And reset an env if done.
Arguments:
- actions (:obj:`Union[List[tnp.ndarray], tnp.ndarray]`): actions came from outer caller like policy.
Returns:
- timesteps (:obj:`List[tnp.ndarray]`): Each timestep is a tnp.array with observation, reward, done, \
info, env_id.
"""
if isinstance(actions, tnp.ndarray):
            # the zip operation would behave incorrectly if the batched array were not split first
split_action = tnp.split(actions, actions.shape[0])
split_action = [s.squeeze(0) for s in split_action]
else:
split_action = actions
actions = {env_id: a for env_id, a in zip(self.ready_obs_id, split_action)}
timesteps = super().step(actions)
new_data = []
for env_id, timestep in timesteps.items():
obs, reward, done, info = timestep
# make the type and content of key as similar as identifier,
# in order to call them as attribute (e.g. timestep.xxx), such as ``TimeLimit.truncated`` in cartpole info
info = make_key_as_identifier(info)
info = remove_illegal_item(info)
new_data.append(tnp.array({'obs': obs, 'reward': reward, 'done': done, 'info': info, 'env_id': env_id}))
        return new_data
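
# Illustrative sketch (not part of the original module): a typical interaction
# loop with SyncSubprocessEnvManager. ``MyDingEnv`` (a BaseEnv subclass), ``policy``
# and ``cfg`` (an EasyDict merged with the class-level ``config`` above) are
# hypothetical placeholders; only seed/launch/ready_obs/step/close come from this module.
#
#   env_manager = SyncSubprocessEnvManager(env_fn=[lambda: MyDingEnv() for _ in range(4)], cfg=cfg)
#   env_manager.seed(0)
#   env_manager.launch()
#   for _ in range(100):
#       obs = env_manager.ready_obs                           # {env_id: obs}
#       actions = {i: policy(o) for i, o in obs.items()}      # {env_id: action}
#       timesteps = env_manager.step(actions)                 # {env_id: BaseEnvTimestep}
#   env_manager.close()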
/Extractor-0.5.tar.gz/Extractor-0.5/README

Python bindings for GNU libextractor
About libextractor
==================
libextractor is a simple library for keyword extraction. libextractor
does not support all formats but supports a simple plugging mechanism
such that you can quickly add extractors for additional formats, even
without recompiling libextractor. libextractor typically ships with a
dozen helper-libraries that can be used to obtain keywords from common
file-types.
libextractor is a part of the GNU project (http://www.gnu.org/).
Dependencies
============
* python >= 2.3
web site: http://www.python.org/
* libextractor > 0.5
web site: http://gnunet.org/libextractor
* ctypes >= 0.9
web site: http://starship.python.net/crew/theller/ctypes/
* setuptools (optional)
web site: http://cheeseshop.python.org/pypi/setuptools
Performances
============
Surprisingly, the original native C library is only 20% faster than
these Python ctypes bindings. Here is a quick and dirty benchmark:
The C extract on Extractor test files:
$ time `find Extractor/test -type f -not -name "*.svn*"|xargs extract`
real 0m0.403s
user 0m0.303s
sys 0m0.061s
Same data with the ctypes python bindings:
$ time `find Extractor/test -type f -not -name "*.svn*"|xargs extract.py`
real 0m0.661s
user 0m0.529s
sys 0m0.074s
Install
=======
Using the tarball (as root):
# python setup.py install
Using the egg (as root):
# easy_install Extractor-*.egg
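
Usage
=====

The benchmark above uses the bundled extract.py script, which takes file
names as arguments like the C `extract' tool:

$ extract.py somefile.png

The Python-level API sketched below is an assumption based on this 0.5
release (module, class and method names may differ in your version):

>>> import extractor
>>> xtract = extractor.Extractor()
>>> keywords = xtract.extract("somefile.png")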
Copyright
=========
Copyright (C) 2006 Bader Ladjemi <[email protected]>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
see COPYING for details | PypiClean |
# /InvestOpenDataTools-1.0.2.tar.gz/InvestOpenDataTools-1.0.2/opendatatools/economy/nbs_agent.py
from opendatatools.common import RestAgent
import json
import pandas as pd
nbs_city_map = {
'北京':'110000',
'天津':'120000',
'石家庄':'130100',
'唐山':'130200',
'秦皇岛':'130300',
'太原':'140100',
'呼和浩特':'150100',
'包头':'150200',
'沈阳':'210100',
'大连':'210200',
'丹东':'210600',
'锦州':'210700',
'长春':'220100',
'吉林':'220200',
'哈尔滨':'230100',
'牡丹江':'231000',
'上海':'310000',
'南京':'320100',
'无锡':'320200',
'徐州':'320300',
'扬州':'321000',
'杭州':'330100',
'宁波':'330200',
'温州':'330300',
'金华':'330700',
'合肥':'340100',
'蚌埠':'340300',
'安庆':'340800',
'福州':'350100',
'厦门':'350200',
'泉州':'350500',
'南昌':'360100',
'九江':'360400',
'赣州':'360700',
'济南':'370100',
'青岛':'370200',
'烟台':'370600',
'济宁':'370800',
'郑州':'410100',
'洛阳':'410300',
'平顶山':'410400',
'武汉':'420100',
'宜昌':'420500',
'襄阳':'420600',
'长沙':'430100',
'岳阳':'430600',
'常德':'430700',
'广州':'440100',
'韶关':'440200',
'深圳':'440300',
'湛江':'440800',
'惠州':'441300',
'南宁':'450100',
'桂林':'450300',
'北海':'450500',
'海口':'460100',
'三亚':'460200',
'重庆':'500000',
'成都':'510100',
'泸州':'510500',
'南充':'511300',
'贵阳':'520100',
'遵义':'520300',
'昆明':'530100',
'大理':'532900',
'西安':'610100',
'兰州':'620100',
'西宁':'630100',
'银川':'640100',
'乌鲁木齐':'650100',
}
nbs_region_map = {
'北京' : '110000',
'天津' : '120000',
'河北省' : '130000',
'山西省' : '140000',
'内蒙古自治区' : '150000',
'辽宁省' : '210000',
'吉林省' : '220000',
'黑龙江省' : '230000',
'上海' : '310000',
'江苏省' : '320000',
'浙江省' : '330000',
'安徽省' : '340000',
'福建省' : '350000',
'江西省' : '360000',
'山东省' : '370000',
'河南省' : '410000',
'湖北省' : '420000',
'湖南省' : '430000',
'广东省' : '440000',
'广西壮族自治区' : '450000',
'海南省' : '460000',
'重庆' : '500000',
'四川省' : '510000',
'贵州省' : '520000',
'云南省' : '530000',
'西藏自治区' : '540000',
'陕西省' : '610000',
'甘肃省' : '620000',
'青海省' : '630000',
'宁夏回族自治区' : '640000',
'新疆维吾尔自治区': '650000',
}
nbs_indicator_map_df = {
    # Regional GDP
'A010101':'地区生产总值_累计值(亿元)',
'A010103':'地区生产总值指数(上年=100)_累计值(%)',
}
nbs_indicator_map = {
    # Annual GDP
'A020101':'国民总收入(亿元)',
'A020102':'国内生产总值(亿元)',
'A020103':'第一产业增加值(亿元)',
'A020104':'第二产业增加值(亿元)',
'A020105':'第三产业增加值(亿元)',
'A020106':'人均国内生产总值(元)',
    # Population size
'A030101':'年末总人口(万人)',
'A030102':'男性人口(万人)',
'A030103':'女性人口(万人)',
'A030104':'城镇人口(万人)',
'A030105':'乡村人口(万人)',
    # Population structure
'A030301':'年末总人口(万人)',
'A030302':'0-14岁人口(万人)',
'A030303':'15-64岁人口(万人)',
'A030304':'65岁及以上人口(万人)',
'A030305':'总抚养比(%)',
'A030306':'少儿抚养比(%)',
'A030307':'老年抚养比(%)',
    # Commodity housing prices in 70 large and medium-sized cities
'A010801':'新建住宅销售价格指数(上月=100)',
'A010802':'新建住宅销售价格指数(上年=100)',
'A010803':'新建住宅销售价格指数(2015=100)',
'A010804':'新建商品住宅销售价格指数(上月=100)',
'A010805':'新建商品住宅销售价格指数(上年=100)',
'A010806':'新建商品住宅销售价格指数(2015=100)',
'A010807':'二手住宅销售价格指数(上月=100)',
'A010808':'二手住宅销售价格指数(上年=100)',
'A010809':'二手住宅销售价格指数(2015=100)',
'A01080A':'90平米及以下新建商品住宅销售价格指数(上月=100)',
'A01080B':'90平米及以下新建商品住宅销售价格指数(上年=100)',
'A01080C':'90平米及以下新建商品住宅销售价格指数(2015=100)',
'A01080D':'90-144平米新建商品住宅销售价格指数(上月=100)',
'A01080E':'90-144平米新建商品住宅销售价格指数(上年=100)',
'A01080F':'90-144平米新建商品住宅销售价格指数(2015=100)',
'A01080G':'144平米以上新建商品住宅销售价格指数(上月=100)',
'A01080H':'144平米以上新建商品住宅销售价格指数(上年=100)',
'A01080I':'144平米以上新建商品住宅销售价格指数(2015=100)',
'A01080J':'90平米及以下二手住宅销售价格指数(上月=100)',
'A01080K':'90平米及以下二手住宅销售价格指数(上年=100)',
'A01080L':'90平米及以下二手住宅销售价格指数(2015=100)',
'A01080M':'90-144平米二手住宅销售价格指数(上月=100)',
'A01080N':'90-144平米二手住宅销售价格指数(上年=100)',
'A01080O':'90-144平米二手住宅销售价格指数(2015=100)',
'A01080P':'144平米以上二手住宅销售价格指数(上月=100)',
'A01080Q':'144平米以上二手住宅销售价格指数(上年=100)',
'A01080R':'144平米以上二手住宅销售价格指数(2015=100)',
    # CPI-related
'A01010101':'居民消费价格指数(上年同月=100)',
'A01010102':'食品烟酒类居民消费价格指数(上年同月=100)',
'A01010103':'衣着类居民消费价格指数(上年同月=100)',
'A01010104':'居住类居民消费价格指数(上年同月=100)',
'A01010105':'生活用品及服务类居民消费价格指数(上年同月=100)',
'A01010106':'交通和通信类居民消费价格指数(上年同月=100)',
'A01010107':'教育文化和娱乐类居民消费价格指数(上年同月=100)',
'A01010108':'医疗保健类居民消费价格指数(上年同月=100)',
'A01010109':'其他用品和服务类居民消费价格指数(上年同月=100)',
    # PPI-related
'A01080101':'工业生产者出厂价格指数(上年同月=100)',
'A01080102':'生产资料工业生产者出厂价格指数(上年同月=100)',
'A01080103':'生活资料工业生产者出厂价格指数(上年同月=100)',
'A010301': '工业生产者购进价格指数(上年同月=100)',
'A010302': '工业生产者出厂价格指数(上年同月=100)',
    # GDP-related
'A010101':'国内生产总值_当季值(亿元)',
'A010102':'国内生产总值_累计值(亿元)',
'A010103':'第一产业增加值_当季值(亿元)',
'A010104':'第一产业增加值_累计值(亿元)',
'A010105':'第二产业增加值_当季值(亿元)',
'A010106':'第二产业增加值_累计值(亿元)',
'A010107':'第三产业增加值_当季值(亿元)',
'A010108':'第三产业增加值_累计值(亿元)',
'A010109':'农林牧渔业增加值_当季值(亿元)',
'A01010A':'农林牧渔业增加值_累计值(亿元)',
'A01010B':'工业增加值_当季值(亿元)',
'A01010C':'工业增加值_累计值(亿元)',
'A01010D':'制造业增加值_当季值(亿元)',
'A01010E':'制造业增加值_累计值(亿元)',
'A01011D':'建筑业增加值_当季值(亿元)',
'A01011E':'建筑业增加值_累计值(亿元)',
'A01011F':'批发和零售业增加值_当季值(亿元)',
'A01011G':'批发和零售业增加值_累计值(亿元)',
'A01011H':'交通运输、仓储和邮政业增加值_当季值(亿元)',
'A01011I':'交通运输、仓储和邮政业增加值_累计值(亿元)',
'A01011J':'住宿和餐饮业增加值_当季值(亿元)',
'A01011K':'住宿和餐饮业增加值_累计值(亿元)',
'A01011L':'金融业增加值_当季值(亿元)',
'A01011M':'金融业增加值_累计值(亿元)',
'A01011N':'房地产业增加值_当季值(亿元)',
'A01011O':'房地产业增加值_累计值(亿元)',
'A01011P':'信息传输、软件和信息技术服务业增加值_当季值(亿元)',
'A01011Q':'信息传输、软件和信息技术服务业增加值_累计值(亿元)',
'A01011R':'租赁和商务服务业增加值_当季值(亿元)',
'A01011S':'租赁和商务服务业增加值_累计值(亿元)',
'A01012P':'其他行业增加值_当季值(亿元)',
'A01012Q':'其他行业增加值_累计值(亿元)',
    # GDP growth rate
'A010401':'国内生产总值环比增长速度(%)',
    # M0/M1/M2 money supply
'A1B0101':'货币和准货币(M2)供应量_期末值(亿元)',
'A1B0102':'货币和准货币(M2)供应量_同比增长(%)',
'A1B0103':'货币(M1)供应量_期末值(亿元)',
'A1B0104':'货币(M1)供应量_同比增长(%)',
'A1B0105':'流通中现金(M0)供应量_期末值(亿元)',
'A1B0106':'流通中现金(M0)供应量_同比增长(%)',
    # Fiscal revenue
'A1A0101':'国家财政收入_当期值(亿元)',
'A1A0102':'国家财政收入_累计值(亿元)',
'A1A0103':'国家财政收入_累计增长(%)',
    # Fiscal expenditure
'A1A0201':'国家财政支出(不含债务还本)_当期值(亿元)',
'A1A0202':'国家财政支出(不含债务还本)_累计值(亿元)',
'A1A0203':'国家财政支出(不含债务还本)_累计增长(%)',
    # Manufacturing PMI
'A190101':'制造业采购经理指数(%)',
'A190102':'生产指数(%)',
'A190103':'新订单指数(%)',
'A190104':'新出口订单指数(%)',
'A190105':'在手订单指数(%)',
'A190106':'产成品库存指数(%)',
'A190107':'采购量指数(%)',
'A190108':'进口指数(%)',
'A190109':'出厂价格指数(%)',
'A190119':'主要原材料购进价格指数(%)',
'A19011A':'原材料库存指数(%)',
'A19011B':'从业人员指数(%)',
'A19011C':'供应商配送时间指数(%)',
'A19011D':'生产经营活动预期指数(%)',
    # Non-manufacturing PMI
'A190201':'非制造业商务活动指数(%)',
'A190202':'新订单指数(%)',
'A190203':'新出口订单指数(%)',
'A190204':'在手订单指数(%)',
'A190205':'存货指数(%)',
'A190206':'投入品价格指数(%)',
'A190207':'销售价格指数(%)',
'A190208':'从业人员指数(%)',
'A190209':'供应商配送时间指数(%)',
'A19020A':'业务活动预期指数(%)',
    # Composite PMI
'A190301':'综合PMI产出指数(%)',
    # Imports and exports
'A160101':'进出口总值_当期值(千美元)',
'A160102':'进出口总值_同比增长(%)',
'A160103':'进出口总值_累计值(千美元)',
'A160104':'进出口总值_累计增长(%)',
'A160105':'出口总值_当期值(千美元)',
'A160106':'出口总值_同比增长(%)',
'A160107':'出口总值_累计值(千美元)',
'A160108':'出口总值_累计增长(%)',
'A160109':'进口总值_当期值(千美元)',
'A16010A':'进口总值_同比增长(%)',
'A16010B':'进口总值_累计值(千美元)',
'A16010C':'进口总值_累计增长(%)',
'A16010D':'进出口差额_当期值(千美元)',
'A16010E':'进出口差额_累计值(千美元)',
    # FDI-related
'A160201':'外商直接投资合同项目数_累计值(个)',
'A160202':'外商直接投资合同项目数_累计增长(%)',
'A160203':'合资经营企业外商直接投资合同项目数_累计值(个)',
'A160204':'合资经营企业外商直接投资合同项目数_累计增长(%)',
'A160205':'合作经营企业外商直接投资合同项目数_累计值(个)',
'A160206':'合作经营企业外商直接投资合同项目数_累计增长(%)',
'A160207':'外资企业外商直接投资合同项目数_累计值(个)',
'A160208':'外资企业外商直接投资合同项目数_累计增长(%)',
'A160209':'外商投资股份制企业外商直接投资合同项目数_累计值(个)',
'A16020A':'外商投资股份制企业外商直接投资合同项目数_累计增长(%)',
'A16020B':'实际利用外商直接投资金额_累计值(百万美元)',
'A16020C':'实际利用外商直接投资金额_累计增长(%)',
'A16020D':'合资经营企业实际利用外商直接投资金额_累计值(百万美元)',
'A16020E':'合资经营企业实际利用外商直接投资金额_累计增长(%)',
'A16020F':'合作经营企业实际利用外商直接投资金额_累计值(百万美元)',
'A16020G':'合作经营企业实际利用外商直接投资金额_累计增长(%)',
'A16020H':'外资企业实际利用外商直接投资金额_累计值(百万美元)',
'A16020I':'外资企业实际利用外商直接投资金额_累计增长(%)',
'A16020J':'外商投资股份制企业实际利用外商直接投资金额_累计值(百万美元)',
'A16020K':'外商投资股份制企业实际利用外商直接投资金额_累计增长(%)',
    # Total retail sales of consumer goods
'A150101':'社会消费品零售总额_当期值(亿元)',
'A150102':'社会消费品零售总额_累计值(亿元)',
'A150103':'社会消费品零售总额_同比增长(%)',
'A150104':'社会消费品零售总额_累计增长(%)',
'A150105':'限上单位消费品零售额_当期值(亿元)',
'A150106':'限上单位消费品零售额_累计值(亿元)',
'A150107':'限上单位消费品零售额_同比增长(%)',
'A150108':'限上单位消费品零售额_累计增长(%)',
    # Online retail sales
'A150801':'网上零售额_累计值(亿元)',
'A150802':'网上零售额_累计增长(%)',
'A150803':'实物商品网上零售额_累计值(亿元)',
'A150804':'实物商品网上零售额_累计增长(%)',
'A150805':'吃类实物商品网上零售额_累计值(亿元)',
'A150806':'吃类实物商品网上零售额_累计增长(%)',
'A150807':'穿类实物商品网上零售额_累计值(亿元)',
'A150808':'穿类实物商品网上零售额_累计增长(%)',
'A150809':'用类实物商品网上零售额_累计值(亿元)',
'A150810':'用类实物商品网上零售额_累计增长(%)',
    # Real estate development investment
'A140101':'房地产投资_累计值(亿元)',
'A140102':'房地产投资_累计增长(%)',
'A140103':'房地产配套工程投资_累计值(亿元)',
'A140104':'房地产配套工程投资_累计增长(%)',
'A140105':'房地产住宅投资_累计值(亿元)',
'A140106':'房地产住宅投资_累计增长(%)',
'A140107':'90平方米及以下住房投资_累计值(亿元)',
'A140108':'90平方米及以下住房投资_累计增长(%)',
'A140109':'144平方米以上住房投资_累计值(亿元)',
'A14010A':'144平方米以上住房投资_累计增长(%)',
'A14010B':'别墅、高档公寓投资_累计值(亿元)',
'A14010C':'别墅、高档公寓投资_累计增长(%)',
'A14010D':'房地产办公楼投资_累计值(亿元)',
'A14010E':'房地产办公楼投资_累计增长(%)',
'A14010F':'房地产商业营业用房投资_累计值(亿元)',
'A14010G':'房地产商业营业用房投资_累计增长(%)',
'A14010H':'其它房地产投资_累计值(亿元)',
'A14010I':'其它房地产投资_累计增长(%)',
'A14010J':'房地产开发建筑工程投资_累计值(亿元)',
'A14010K':'房地产开发建筑工程投资_累计增长(%)',
'A14010L':'房地产开发安装工程投资_累计值(亿元)',
'A14010M':'房地产开发安装工程投资_累计增长(%)',
'A14010N':'房地产设备工器具购置投资_累计值(亿元)',
'A14010O':'房地产设备工器具购置投资_累计增长(%)',
'A14010P':'房地产其它费用投资_累计值(亿元)',
'A14010Q':'房地产其它费用投资_累计增长(%)',
'A14010R':'房地产土地购置费_累计值(亿元)',
'A14010S':'房地产土地购置费_累计增长(%)',
'A14010T':'房地产开发计划总投资_累计值(亿元)',
'A14010U':'房地产开发计划总投资_累计增长(%)',
'A14010V':'房地产开发新增固定资产投资_累计值(亿元)',
'A14010W':'房地产开发新增固定资产投资_累计增长(%)',
    # Fixed asset investment
'A130101':'固定资产投资完成额_累计值(亿元)',
'A130102':'固定资产投资完成额_累计增长(%)',
'A130103':'国有及国有控股固定资产投资额_累计值(亿元)',
'A130104':'国有及国有控股固定资产投资额_累计增长(%)',
'A130105':'房地产开发投资额_累计值(亿元)',
'A130106':'房地产开发投资额_累计增长(%)',
'A130107':'第一产业固定资产投资完成额_累计值(亿元)',
'A130108':'第一产业固定资产投资完成额_累计增长(%)',
'A130109':'第二产业固定资产投资完成额_累计值(亿元)',
'A13010A':'第二产业固定资产投资完成额_累计增长(%)',
'A13010B':'第三产业固定资产投资完成额_累计值(亿元)',
'A13010C':'第三产业固定资产投资完成额_累计增长(%)',
'A13010D':'中央项目固定资产投资完成额_累计值(亿元)',
'A13010E':'中央项目固定资产投资完成额_累计增长(%)',
'A13010F':'地方项目固定资产投资完成额_累计值(亿元)',
'A13010G':'地方项目固定资产投资完成额_累计增长(%)',
'A13010H':'新建固定资产投资完成额_累计值(亿元)',
'A13010I':'新建固定资产投资完成额_累计增长(%)',
'A13010J':'扩建固定资产投资完成额_累计值(亿元)',
'A13010K':'扩建固定资产投资完成额_累计增长(%)',
'A13010L':'改建固定资产投资完成额_累计值(亿元)',
'A13010M':'改建固定资产投资完成额_累计增长(%)',
'A13010N':'建筑安装工程固定资产投资完成额_累计值(亿元)',
'A13010O':'建筑安装工程固定资产投资完成额_累计增长(%)',
'A13010P':'设备工器具购置固定资产投资完成额_累计值(亿元)',
'A13010Q':'设备工器具购置固定资产投资完成额_累计增长(%)',
'A13010R':'其他费用固定资产投资完成额_累计值(亿元)',
'A13010S':'其他费用固定资产投资完成额_累计增长(%)',
'A13010T':'房屋施工面积_累计值(万平方米)',
'A13010U':'房屋施工面积_累计增长(%)',
'A13010V':'房屋竣工面积_累计值(万平方米)',
'A13010W':'房屋竣工面积_累计增长(%)',
'A13010X':'新增固定资产_累计值(亿元)',
'A13010Y':'新增固定资产_累计增长(%)',
}
class NBSAgent(RestAgent):
def __init__(self):
RestAgent.__init__(self)
def prepare_cookies(self, url):
response = self.do_request(url, None)
if response is not None:
cookies = self.get_cookies()
return cookies
else:
return None
def get_indicator_map(self):
return pd.DataFrame(list(nbs_indicator_map.items()), columns=['indicator', 'name'])
def get_region_map(self):
return pd.DataFrame(list(nbs_region_map.items()), columns=['region', 'name'])
def get_city_map(self):
return pd.DataFrame(list(nbs_city_map.items()), columns=['city', 'name'])
    # fetch nationwide indicators
def _get_qg_indicator(self, cn, category, dbcode = 'hgyd'):
url = 'http://data.stats.gov.cn/easyquery.htm'
param = {
"m": "QueryData",
"dbcode": dbcode,
"rowcode": "zb",
"colcode": "sj",
"wds" : '[]',
"dfwds": '[{"wdcode":"sj","valuecode":"LAST36"}, {"wdcode": "zb", "valuecode": "%s"}]' % (category),
}
url = 'http://data.stats.gov.cn/easyquery.htm?cn=%s&zb=%s' % (cn, category)
cookies = self.prepare_cookies(url)
response = self.do_request(url, param, cookies=cookies)
rsp = json.loads(response)
code = rsp['returncode']
data = rsp['returndata']
records = data['datanodes']
result_list = []
for record in records:
value = record['data']['data']
strvalue = record['data']['strdata']
date = record['wds'][1]['valuecode']
indicator = record['wds'][0]['valuecode']
if indicator in nbs_indicator_map:
indicator_name = nbs_indicator_map[indicator]
else:
indicator_name = ""
result_list.append({
"indicator" : indicator,
"indicator_name" : indicator_name,
"date" : date,
"value" : value,
"strvalue" : strvalue,
})
return pd.DataFrame(result_list), ""
def _get_df_indicator(self, region, cn, category, dbcode = 'fsyd'):
if region not in nbs_region_map:
return None, '不合法的省份名称, 请通过get_region_map接口获取正确的省份名称'
region_code = nbs_region_map[region]
url = 'http://data.stats.gov.cn/easyquery.htm'
param = {
"m": "QueryData",
"dbcode": dbcode,
"rowcode": "zb",
"colcode": "sj",
"wds" : '[{"wdcode": "region", "valuecode": "%s"}]' % (region_code),
"dfwds": '[{"wdcode":"zb","valuecode":"%s"}, {"wdcode":"sj","valuecode":"LAST36"}]' % (category),
}
url = 'http://data.stats.gov.cn/easyquery.htm?cn=%s&zb=%s®=%s' % (cn, category,region_code)
cookies = self.prepare_cookies(url)
response = self.do_request(
url, param,
cookies=cookies
)
rsp = json.loads(response)
code = rsp['returncode']
data = rsp['returndata']
records = data['datanodes']
result_list = []
for record in records:
value = record['data']['data']
strvalue = record['data']['strdata']
date = record['wds'][2]['valuecode']
indicator = record['wds'][0]['valuecode']
if indicator in nbs_indicator_map_df:
indicator_name = nbs_indicator_map_df[indicator]
elif indicator in nbs_indicator_map:
indicator_name = nbs_indicator_map[indicator]
else:
indicator_name = ""
result_list.append({
"region" : region,
"indicator" : indicator,
"indicator_name" : indicator_name,
"date" : date,
"value" : value,
"strvalue" : strvalue,
})
return pd.DataFrame(result_list), ""
def _get_city_indicator(self, city, cn, category, dbcode = 'csyd'):
if city not in nbs_city_map:
return None, '不合法的城市名称,请通过get_city_map获取正确的城市名称'
region_code = nbs_city_map[city]
url = 'http://data.stats.gov.cn/easyquery.htm'
param = {
"m": "QueryData",
"dbcode": dbcode,
"rowcode": "zb",
"colcode": "sj",
"wds" : '[{"wdcode": "region", "valuecode": %s}]' % (region_code),
"dfwds": '[{"wdcode":"sj","valuecode":"LAST36"}]',
}
url = 'http://data.stats.gov.cn/easyquery.htm?cn=%s&zb=%s®=%s' % (cn, category,region_code)
cookies = self.prepare_cookies(url)
response = self.do_request(
url, param,
cookies=cookies
)
rsp = json.loads(response)
code = rsp['returncode']
data = rsp['returndata']
records = data['datanodes']
result_list = []
for record in records:
value = record['data']['data']
strvalue = record['data']['strdata']
date = record['wds'][2]['valuecode']
indicator = record['wds'][0]['valuecode']
if indicator in nbs_indicator_map:
indicator_name = nbs_indicator_map[indicator]
else:
indicator_name = ""
result_list.append({
"city" : city,
"indicator" : indicator,
"indicator_name" : indicator_name,
"date" : date,
"value" : value,
"strvalue" : strvalue,
})
return pd.DataFrame(result_list), ""
'''
    Annual data
'''
    # Annual GDP
def get_gdp_y(self):
return self._get_qg_indicator('C01', 'A0201', dbcode='hgnd')
def get_region_gdp_y(self, region):
        return self._get_df_indicator(region, 'E0103', 'A0201', dbcode='fsnd')
def get_population_size_y(self):
return self._get_qg_indicator('C01', 'A0301', dbcode='hgnd')
def get_population_structure_y(self):
return self._get_qg_indicator('C01', 'A0303', dbcode='hgnd')
    # Housing sales price indices for 70 large and medium-sized cities
def get_house_price_index(self, city):
return self._get_city_indicator(city, 'E0104', 'A0108', 'csyd')
def get_cpi(self):
return self._get_qg_indicator('A01', 'A010101', dbcode = 'hgyd')
def get_region_cpi(self, region):
return self._get_df_indicator(region, 'E0101', 'A010101', dbcode = 'fsyd')
def get_ppi(self):
return self._get_qg_indicator('A01', 'A010801', dbcode = 'hgyd')
def get_region_ppi(self, region):
return self._get_df_indicator(region, 'E0101', 'A0103', dbcode = 'fsyd')
def get_gdp(self):
return self._get_qg_indicator('B01', 'A0101', dbcode='hgjd')
def get_region_gdp(self, region):
return self._get_df_indicator(region, 'E0102', 'A0101', dbcode='fsjd')
def get_gdp_q2q(self):
return self._get_qg_indicator('B01', 'A0104', dbcode = 'hgjd')
def get_M0_M1_M2(self):
return self._get_qg_indicator('A01', 'A1B01', dbcode = 'hgyd')
def get_fiscal_revenue(self):
return self._get_qg_indicator('A01', 'A1A01', dbcode = 'hgyd')
def get_fiscal_expend(self):
return self._get_qg_indicator('A01', 'A1A02', dbcode = 'hgyd')
def get_manufacturing_pmi(self):
return self._get_qg_indicator('A01', 'A0B01', dbcode = 'hgyd')
def get_non_manufacturing_pmi(self):
return self._get_qg_indicator('A01', 'A0B02', dbcode = 'hgyd')
def get_pmi(self):
return self._get_qg_indicator('A01', 'A0B03', dbcode = 'hgyd')
def get_import_export(self):
return self._get_qg_indicator('A01', 'A1601', dbcode = 'hgyd')
def get_fdi(self):
return self._get_qg_indicator('A01', 'A1602', dbcode = 'hgyd')
def get_retail_sales(self):
return self._get_qg_indicator('A01', 'A1501', dbcode = 'hgyd')
def get_online_retail_sales(self):
return self._get_qg_indicator('A01', 'A1508', dbcode='hgyd')
def get_realestate_investment(self):
return self._get_qg_indicator('A01', 'A1401', dbcode='hgyd')
def get_region_realestate_investment(self, region):
return self._get_df_indicator(region, 'E0101', 'A1401', dbcode='fsyd')
def get_fixed_asset_investment(self):
return self._get_qg_indicator('A01', 'A1301', dbcode='hgyd')
def get_region_fixed_asset_investment(self, region):
        return self._get_df_indicator(region, 'E0101', 'A1301', dbcode='fsyd')
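
# Illustrative usage sketch (not part of the original module). It only calls
# methods defined above; live access to data.stats.gov.cn is required, so the
# exact output is not guaranteed.
if __name__ == '__main__':
    agent = NBSAgent()
    cpi_df, _ = agent.get_cpi()                # nationwide CPI, last 36 months
    print(cpi_df.head())
    gdp_df, _ = agent.get_region_gdp('上海')    # quarterly GDP for Shanghai
    print(gdp_df.head())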
# /CsuTextSpotter-1.0.28.tar.gz/CsuTextSpotter-1.0.28/TextSpotter/cfg.py
data_root = 'images/'
char_dict_file = 'char_dict.json'
model = dict(
type='AE_TextSpotter',
pretrained='torchvision://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='AETSRPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_strides=[4, 8, 16, 32, 64],
text_anchor_ratios=[0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0],
char_anchor_ratios=[0.5, 1.0, 2.0],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
# text detection module
text_bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
text_bbox_head=dict(
type='AETSBBoxHead',
num_shared_fcs=0,
num_cls_convs=2,
num_reg_convs=2,
in_channels=256,
conv_out_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=2,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2],
reg_class_agnostic=True,
loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
text_mask_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=14, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
text_mask_head=dict(
type='AETSMaskHead',
num_convs=4,
in_channels=256,
conv_out_channels=256,
num_classes=2,
loss_mask=dict(type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)),
# character-based recognition module
char_bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=14, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
char_bbox_head=dict(
type='AETSBBoxHead',
num_shared_fcs=0,
num_cls_convs=4,
num_reg_convs=2,
in_channels=256,
conv_out_channels=256,
fc_out_channels=1024,
roi_feat_size=14,
num_classes=3614,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2],
reg_class_agnostic=True,
loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
crm_cfg=dict(
char_dict_file=data_root + char_dict_file,
char_assign_iou=0.3),
# language module
lm_cfg=dict(
dictmap_file=data_root + 'dictmap_to_lower.json',
bert_vocab_file='bert-base-chinese/bert-base-chinese-vocab.txt',
bert_cfg_file='bert-base-chinese/bert-base-chinese-config.json',
bert_model_file='bert-base-chinese/bert-base-chinese-pytorch_model.bin',
sample_num=32,
pos_iou=0.8,
lang_score_weight=0.3,
lang_model=dict(
input_dim=768,
output_dim=2,
gru_num=2,
with_bi=True)))
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
text_rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
char_rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False))
test_cfg = dict(
text_rpn=dict(
nms_across_levels=False,
nms_pre=900,
nms_post=900,
max_num=900,
nms_thr=0.7,
min_bbox_size=0),
char_rpn=dict(
nms_across_levels=False,
nms_pre=900,
nms_post=900,
max_num=900,
nms_thr=0.5, # 0.7
min_bbox_size=0),
text_rcnn=dict(
score_thr=0.01,
nms=dict(type='nms', iou_thr=0.9),
max_per_img=500,
mask_thr_binary=0.5),
char_rcnn=dict(
score_thr=0.1,
nms=dict(type='nms', iou_thr=0.1),
max_per_img=200,
mask_thr_binary=0.5),
recognizer=dict(
char_dict_file=data_root + char_dict_file,
char_assign_iou=0.5),
poly_iou=0.1,
ignore_thr=0.3)
# dataset settings
dataset_type = 'ReCTSDataset'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(type='PhotoMetricDistortion',
brightness_delta=32,
contrast_range=(0.5, 1.5),
saturation_range=(0.5, 1.5),
hue_delta=18),
# dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
dict(type='Resize', img_scale=[(1664, 672), (1664, 928)], keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0), # 0.5
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'],
meta_keys=['filename', 'annpath', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip',
'img_norm_cfg']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
# imgs_per_gpu=8,
# workers_per_gpu=8,
imgs_per_gpu=1,
workers_per_gpu=1,
train=dict(
type=dataset_type,
data_root=data_root,
ann_file='train/gt/',
img_prefix='train/img/',
cache_file='tda_rects_train_cache_file.json',
char_dict_file=char_dict_file,
pipeline=train_pipeline),
val=dict(
type=dataset_type,
data_root=data_root,
ann_file='train/gt/',
img_prefix='train/img/',
cache_file='tda_rects_val_cache_file.json',
char_dict_file=char_dict_file,
pipeline=test_pipeline),
test=dict(
type=dataset_type,
data_root=data_root,
ann_file=None,
img_prefix='rename/',
cache_file='tda_rects_test_cache_file.json',
char_dict_file=char_dict_file,
pipeline=test_pipeline)
)
# optimizer
optimizer = dict(type='SGD', lr=0.20, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=300,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
# dict(type='TensorboardLoggerHook')
])
# yapf:enable
evaluation = dict(interval=1)
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = 'work_dirs/rects_ae_textspotter_lm_r50_1x/'
load_from = 'work_dirs/rects_ae_textspotter_r50_1x/epoch_12.pth'
resume_from = None
workflow = [('train', 1)]
# Tutorial for the HN module of HavNegpy package
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import HavNegpy as dd
%matplotlib qt
os.chdir(r'M:\Marshall_Data\mohamed_data\mohamed_data\n44')
def create_dataframe(f):
    col_names = ['Freq', 'T', 'Eps1', 'Eps2']
    #f = input(str("Enter the filename:"))
    df = pd.read_csv(f, sep=r"\s+",index_col=False,usecols = [0,1,2,3],names=col_names,header=None,skiprows=4,encoding='unicode_escape',engine='python')
    col1 = ['log f']
    for start in range(0, len(df), 63):
        name = df['T'][start]
        #print(name)
        col1.append(name)
    df2 = pd.DataFrame()
    f1 = df['Freq'][0:63].values
    x1 = np.log10((f1))
    e = pd.DataFrame(x1)
    df2['log f'] = pd.concat([e],axis=1,ignore_index=True)
    global Cooling,Heating
    for start in range(0, len(df), 63):
        f = df['Eps2'][start:start+63].values
        ep = np.log10(f)
        d = pd.DataFrame(ep)
        df2[start] = pd.concat([d],axis=1,ignore_index=True)
    df2.columns = col1
    '''
    a = int(len(col1)/3)
    b = 2*a
    c = int(len(col1)) - b
    Heating1 = df2.iloc[8:,0:a+1]
    Cooling = df2.iloc[8:,a+1:b+1]
    Heating2 = df2.iloc[8:,b+1:]
    heat1_col = col1[0:a+1]
    cool_col = col1[a+1:b+1]
    heat2_col = col1[b+1:]
    Cooling.columns = cool_col
    Heating1.columns = heat1_col
    Heating2.columns = heat2_col
    f2 = df['Freq'][8:59].values
    x2 = np.log10((f2))
    Cooling['Freq'] = x2
    Heating1['Freq'] = x2
    Heating2['Freq'] = x2
    '''
    Cooling = df2.iloc[:,0:25]
    Heating = df2.iloc[:,25:]
    return df,df2,Cooling,Heating #Heating2
df,df2,cool,heat = create_dataframe('EPS.TXT')
x,y = df2['log f'][9:], heat[40][9:]
plt.figure()
plt.scatter(x,y,label='data for fitting')
plt.xlabel('log f [Hz]')
plt.ylabel(r'log $\epsilon$"')
plt.legend()
plt.title('Example for HN fitting')
```
Image of the plot we are using in this tutorial:

```
''' instantiate the HN module from HavNegpy'''
hn = dd.HN()
''' select range to perform hn fitting'''
''' the select_range function pops up a separate window and allows you two clicks to select the region of interest (ROI)'''
''' In this tutorial, I'll plot the ROI and append as an image in the next cell'''
x1,y1 = hn.select_range(x,y)
''' view the data from select range'''
plt.scatter(x1,y1,label = 'Data for fitting')
plt.xlabel('log f [Hz]')
plt.ylabel(r'log $\epsilon$"')
plt.legend()
plt.title('ROI selected from HN module')
```
Image of the ROI from the HN module:
```
''' dump the initial guess parameters using the dump_parameters method (the exact name varies for each fit function); the parameters are dumped into a json file'''
''' this is required before performing the first fitting as it takes the initial guess from the json file created'''
hn.dump_parameters_hn()
''' view the initial guess for the ROI using initial_view method'''
''' I'll append the image in the next cell'''
hn.initial_view_hn(x1,y1)
```
Image of the initial guess:
```
''' perform least squares fitting'''
''' The image of the curve fit is added in the next cell '''
hn.fit(x1,y1)
```
Example of the fit performed using a single HN function.
The procedure is similar for the double HN function and for HN with conductivity.

```
'''create a file to save fit results using the create_analysis_file method'''
''' before saving fit results an analysis file has to be created '''
hn.create_analysis_file()
''' save the fit results using the save_fit method of the corresponding fit function'''
''' takes one argument; read more in the documentation'''
hn.save_fit_hn(1)
```
| PypiClean |
import logging
from typing import Any, Optional
from django.contrib import messages
from django.http import HttpRequest
def add_message(
request: Optional[HttpRequest], level: int, message: str, **kwargs
) -> Optional[Any]:
"""Add a message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
    The severity is given explicitly via the level argument.
"""
if request:
return messages.add_message(request, level, message, **kwargs)
else:
return logging.getLogger(__name__).log(level, message)
def debug(request: Optional[HttpRequest], message: str, **kwargs) -> Optional[Any]:
"""Add a debug message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
Default to DEBUG level.
"""
return add_message(request, messages.DEBUG, message, **kwargs)
def info(request: Optional[HttpRequest], message: str, **kwargs) -> Optional[Any]:
"""Add a info message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
Default to INFO level.
"""
return add_message(request, messages.INFO, message, **kwargs)
def success(request: Optional[HttpRequest], message: str, **kwargs) -> Optional[Any]:
"""Add a success message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
Default to SUCCESS level.
"""
return add_message(request, messages.SUCCESS, message, **kwargs)
def warning(request: Optional[HttpRequest], message: str, **kwargs) -> Optional[Any]:
"""Add a warning message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
Default to WARNING level.
"""
return add_message(request, messages.WARNING, message, **kwargs)
def error(request: Optional[HttpRequest], message: str, **kwargs) -> Optional[Any]:
"""Add an error message.
Add a message to either Django's message framework, if called from a web request,
or to the default logger.
Default to ERROR level.
"""
    return add_message(request, messages.ERROR, message, **kwargs)
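
# --- Usage sketch (not part of the original module) ---
# A minimal, hypothetical example of how these helpers are meant to be used:
# with a request the message goes to Django's message framework, while calls
# made without one (request=None) fall back to this module's logger.
#
#     def my_view(request):
#         success(request, "Profile saved.")       # shown via django.contrib.messages
#
#     warning(None, "Running outside a request")   # logged at WARNING level instead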
define("dojox/css3/transition", ["dojo/_base/kernel",
"dojo/_base/lang",
"dojo/_base/declare",
"dojo/_base/array",
"dojo/_base/Deferred",
"dojo/DeferredList",
"dojo/on",
"dojo/_base/sniff"],
function(dojo, lang, declare, array, deferred, deferredList, on, has){
//TODO create cross platform animation/transition effects
var transitionEndEventName = "transitionend";
var transitionPrefix = "t"; //by default use "t" prefix and "ransition" to make word "transition"
var translateMethodStart = "translate3d(";//Android 2.x does not support translateX in CSS Transition, we need to use translate3d in webkit browsers
var translateMethodEnd = ",0,0)";
if(has("webkit")){
transitionPrefix = "WebkitT";
transitionEndEventName = "webkitTransitionEnd";
}else if(has("mozilla")){
transitionPrefix = "MozT";
translateMethodStart = "translateX(";
translateMethodEnd = ")";
}
//TODO find a way to lock the animation and prevent animation conflict
declare("dojox.css3.transition", null, {
constructor: function(args){
//default config should be in animation object itself instead of its prototype
//otherwise, it might be easy for making mistake of modifying prototype
var defaultConfig = {
startState: {},
endState: {},
node: null,
duration: 250,
"in": true,
direction: 1,
autoClear: true
};
lang.mixin(this, defaultConfig);
lang.mixin(this, args);
//create the deferred object which will resolve after the animation is finished.
//We can rely on "onAfterEnd" function to notify the end of a single animation,
//but using a deferred object is easier to wait for multiple animations end.
if(!this.deferred){
this.deferred = new deferred();
}
},
play: function(){
//play the animation using CSS3 Transition
dojox.css3.transition.groupedPlay([this]);
},
//method to apply the state of the transition
_applyState: function(state){
var style = this.node.style;
for(var property in state){
if(state.hasOwnProperty(property)){
style[property] = state[property];
}
}
},
//method to initialize state for transition
initState: function(){
//apply the immediate style change for initial state.
this.node.style[transitionPrefix + "ransitionProperty"] = "none";
this.node.style[transitionPrefix + "ransitionDuration"] = "0ms";
this._applyState(this.startState);
},
_beforeStart: function(){
if (this.node.style.display === "none"){
this.node.style.display = "";
}
this.beforeStart();
},
_beforeClear: function(){
this.node.style[transitionPrefix + "ransitionProperty"] = null;
this.node.style[transitionPrefix + "ransitionDuration"] = null;
if(this["in"] !== true){
this.node.style.display = "none";
}
this.beforeClear();
},
_onAfterEnd: function(){
this.deferred.resolve(this.node);
if(this.node.id && dojox.css3.transition.playing[this.node.id]===this.deferred){
delete dojox.css3.transition.playing[this.node.id];
}
this.onAfterEnd();
},
beforeStart: function(){
},
beforeClear: function(){
},
onAfterEnd: function(){
},
//method to start the transition
start: function(){
this._beforeStart();
var self = this;
//change the transition duration
self.node.style[transitionPrefix + "ransitionProperty"] = "all";
self.node.style[transitionPrefix + "ransitionDuration"] = self.duration + "ms";
//connect to clear the transition state after the transition end.
//Since the transition is conducted asynchronously, we need to
//connect to transition end event to clear the state
on.once(self.node, transitionEndEventName, function(){
self.clear();
});
this._applyState(this.endState);
},
//method to clear state after transition
clear: function(){
this._beforeClear();
this._removeState(this.endState);
console.log(this.node.id + " clear.");
this._onAfterEnd();
},
//create removeState method
_removeState: function(state){
var style = this.node.style;
for(var property in state){
if(state.hasOwnProperty(property)){
style[property] = null;
}
}
}
});
//TODO add the lock mechanism for all of the transition effects
// consider using only one object for one type of transition.
//TODO create the first animation, slide.
dojox.css3.transition.slide = function(node, config){
//TODO create the return and set the startState, endState of the return
var ret = new dojox.css3.transition(config);
ret.node = node;
var startX = "0";
var endX = "0";
if(ret["in"]){
if(ret.direction === 1){
startX = "100%";
}else{
startX = "-100%";
}
}else{
if(ret.direction === 1){
endX = "-100%";
}else{
endX = "100%";
}
}
ret.startState[transitionPrefix + "ransform"]=translateMethodStart+startX+translateMethodEnd;
ret.endState[transitionPrefix + "ransform"]=translateMethodStart+endX+translateMethodEnd;
return ret;
};
//fade in/out animation effects
dojox.css3.transition.fade = function(node, config){
var ret = new dojox.css3.transition(config);
ret.node = node;
var startOpacity = "0";
var endOpacity = "0";
if(ret["in"]){
endOpacity = "1";
}else{
startOpacity = "1";
}
lang.mixin(ret, {
startState:{
"opacity": startOpacity
},
endState:{
"opacity": endOpacity
}
});
return ret;
};
//fade in/out animation effects
dojox.css3.transition.flip = function(node, config){
var ret = new dojox.css3.transition(config);
ret.node = node;
if(ret["in"]){
//Need to set opacity here because Android 2.2 has bug that
//scale(...) in transform does not persist status
lang.mixin(ret,{
startState:{
"opacity": "0"
},
endState:{
"opacity": "1"
}
});
ret.startState[transitionPrefix + "ransform"]="scale(0,0.8) skew(0,-30deg)";
ret.endState[transitionPrefix + "ransform"]="scale(1,1) skew(0,0)";
}else{
lang.mixin(ret,{
startState:{
"opacity": "1"
},
endState:{
"opacity": "0"
}
});
ret.startState[transitionPrefix + "ransform"]="scale(1,1) skew(0,0)";
ret.endState[transitionPrefix + "ransform"]="scale(0,0.8) skew(0,30deg)";
}
return ret;
};
var getWaitingList = function(/*Array*/ nodes){
var defs = [];
array.forEach(nodes, function(node){
//check whether the node is under other animation
if(node.id && dojox.css3.transition.playing[node.id]){
//TODO hook on deferred object in dojox.css3.transition.playing
defs.push(dojox.css3.transition.playing[node.id]);
}
});
return new deferredList(defs);
};
dojox.css3.transition.getWaitingList = getWaitingList;
//TODO groupedPlay should ensure the UI update happens when
//all animations end.
//the group player to start multiple animations together
dojox.css3.transition.groupedPlay = function(/*Array*/args){
//args should be array of dojox.css3.transition
var animNodes = array.filter(args, function(item){
return item.node;
});
var waitingList = getWaitingList(animNodes);
//update registry with deferred objects in animations of args.
array.forEach(args, function(item){
if(item.node.id){
dojox.css3.transition.playing[item.node.id] = item.deferred;
}
});
//TODO wait for all deferred object in deferred list to resolve
dojo.when(waitingList, function(){
array.forEach(args, function(item){
//set the start state
item.initState();
});
//Assume the fps of the animation should be higher than 30 fps and
//allow the browser to use one frame's time to redraw so that
//the transition can be started
setTimeout(function(){
array.forEach(args, function(item){
item.start();
});
}, 33);
});
};
//the chain player to start multiple animations one by one
dojox.css3.transition.chainedPlay = function(/*Array*/args){
//args should be array of dojox.css3.transition
var animNodes = array.filter(args, function(item){
return item.node;
});
var waitingList = getWaitingList(animNodes);
//update registry with deferred objects in animations of args.
array.forEach(args, function(item){
if(item.node.id){
dojox.css3.transition.playing[item.node.id] = item.deferred;
}
});
dojo.when(waitingList, function(){
array.forEach(args, function(item){
//set the start state
item.initState();
});
//chain animations together
for (var i=1, len=args.length; i < len; i++){
args[i-1].deferred.then(lang.hitch(args[i], function(){
this.start();
}));
}
//Assume the fps of the animation should be higher than 30 fps and
//allow the browser to use one frame's time to redraw so that
//the transition can be started
setTimeout(function(){
args[0].start();
}, 33);
});
};
//TODO complete the registry mechanism for animation handling and prevent animation conflicts
dojox.css3.transition.playing = {};
return dojox.css3.transition;
});
# Jobtimize
`Jobtimize` is a python package which collects, standardizes and completes information about job offers published on job search platforms.
The package is mainly based on scraping and text classification to fill in missing data.
|Release|Usage|Development|
|--- |--- |--- |
|[](https://pypi.org/project/Jobtimize/)|[](https://opensource.org/licenses/MIT)|[](https://travis-ci.com/HireCoffee/Jobtimize)|
|[](https://anaconda.org/lrakotoson/jobtimize)|[](https://pypi.org/project/Jobtimize/)|[](https://codecov.io/gh/HireCoffee/Jobtimize/)|
||[](https://pepy.tech/project/Jobtimize)|[](https://www.python.org/)|
---
### What's new in the current version:
- [**v.0.0.5A** Changelog](https://github.com/HireCoffee/Jobtimize/blob/master/CHANGELOG.md)
---
# Dependencies
```
beautifulsoup4
jsonschema
lxml
pandas
```
# Installation
## Pypi
The safest way to install `Jobtimize` is through pip
```bash
pip install Jobtimize
```
## Conda
It is also possible to get the latest stable version with Anaconda Cloud
```bash
conda install -c lrakotoson jobtimize
```
## Git
Installing from git gives you the latest development version; however, it may contain bugs.
```bash
pip install git+https://github.com/HireCoffee/Jobtimize.git
```
# How to use ?
As `Jobtimize` is a package, in Python you just have to import it.
The main function (*for now*) is `Jobtimize.jobscrap`.
```python
from jobtimize import jobscrap

df = jobscrap(["Data Scientist", "Data Analyst"],
              ["UK", "FR"]
             )
df.head()
```
The `df` object is a pandas DataFrame, so it inherits all of pandas' DataFrame methods.
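
For example, standard pandas operations work directly on the result (a minimal sketch; the exact columns depend on the Jobtimize version):

```python
print(df.shape)                       # number of offers and number of fields
print(df.columns.tolist())            # available fields
df.to_csv("offers.csv", index=False)  # persist the scraped offers
```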
# Contributing 🤝
🎊 Firstly, thank you for giving your time to contribute to `Jobtimize`. 🎊
If you have a new feature to submit, don't hesitate to **open an issue** _(checking "new feature" makes it easier to identify)_. We can discuss it freely there.
Then you can make a "pull request" as explained in the [contribution guidelines](https://github.com/HireCoffee/Jobtimize/blob/master/docs/CONTRIBUTING.md).
The same goes for all other contributions: code improvements, documentation, translations... **all ideas are welcome!** Check out the [guidelines](https://github.com/HireCoffee/Jobtimize/blob/master/docs/CONTRIBUTING.md) to make things easier.
`Jobtimize` gets better with contributions.
| PypiClean |
if (typeof jQuery === 'undefined') { throw new Error('Bootstrap\'s JavaScript requires jQuery') }
/* ========================================================================
* Bootstrap: transition.js v3.1.1
* http://getbootstrap.com/javascript/#transitions
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// CSS TRANSITION SUPPORT (Shoutout: http://www.modernizr.com/)
// ============================================================
function transitionEnd() {
var el = document.createElement('bootstrap')
var transEndEventNames = {
'WebkitTransition' : 'webkitTransitionEnd',
'MozTransition' : 'transitionend',
'OTransition' : 'oTransitionEnd otransitionend',
'transition' : 'transitionend'
}
for (var name in transEndEventNames) {
if (el.style[name] !== undefined) {
return { end: transEndEventNames[name] }
}
}
return false // explicit for ie8 ( ._.)
}
// http://blog.alexmaccaw.com/css-transitions
$.fn.emulateTransitionEnd = function (duration) {
var called = false, $el = this
$(this).one($.support.transition.end, function () { called = true })
var callback = function () { if (!called) $($el).trigger($.support.transition.end) }
setTimeout(callback, duration)
return this
}
$(function () {
$.support.transition = transitionEnd()
})
}(jQuery);
/* ========================================================================
* Bootstrap: alert.js v3.1.1
* http://getbootstrap.com/javascript/#alerts
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// ALERT CLASS DEFINITION
// ======================
var dismiss = '[data-dismiss="alert"]'
var Alert = function (el) {
$(el).on('click', dismiss, this.close)
}
Alert.prototype.close = function (e) {
var $this = $(this)
var selector = $this.attr('data-target')
if (!selector) {
selector = $this.attr('href')
selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
}
var $parent = $(selector)
if (e) e.preventDefault()
if (!$parent.length) {
$parent = $this.hasClass('alert') ? $this : $this.parent()
}
$parent.trigger(e = $.Event('close.bs.alert'))
if (e.isDefaultPrevented()) return
$parent.removeClass('in')
function removeElement() {
$parent.trigger('closed.bs.alert').remove()
}
$.support.transition && $parent.hasClass('fade') ?
$parent
.one($.support.transition.end, removeElement)
.emulateTransitionEnd(150) :
removeElement()
}
// ALERT PLUGIN DEFINITION
// =======================
var old = $.fn.alert
$.fn.alert = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.alert')
if (!data) $this.data('bs.alert', (data = new Alert(this)))
if (typeof option == 'string') data[option].call($this)
})
}
$.fn.alert.Constructor = Alert
// ALERT NO CONFLICT
// =================
$.fn.alert.noConflict = function () {
$.fn.alert = old
return this
}
// ALERT DATA-API
// ==============
$(document).on('click.bs.alert.data-api', dismiss, Alert.prototype.close)
}(jQuery);
/* ========================================================================
* Bootstrap: button.js v3.1.1
* http://getbootstrap.com/javascript/#buttons
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// BUTTON PUBLIC CLASS DEFINITION
// ==============================
var Button = function (element, options) {
this.$element = $(element)
this.options = $.extend({}, Button.DEFAULTS, options)
this.isLoading = false
}
Button.DEFAULTS = {
loadingText: 'loading...'
}
Button.prototype.setState = function (state) {
var d = 'disabled'
var $el = this.$element
var val = $el.is('input') ? 'val' : 'html'
var data = $el.data()
state = state + 'Text'
if (!data.resetText) $el.data('resetText', $el[val]())
$el[val](data[state] || this.options[state])
// push to event loop to allow forms to submit
setTimeout($.proxy(function () {
if (state == 'loadingText') {
this.isLoading = true
$el.addClass(d).attr(d, d)
} else if (this.isLoading) {
this.isLoading = false
$el.removeClass(d).removeAttr(d)
}
}, this), 0)
}
Button.prototype.toggle = function () {
var changed = true
var $parent = this.$element.closest('[data-toggle="buttons"]')
if ($parent.length) {
var $input = this.$element.find('input')
if ($input.prop('type') == 'radio') {
if ($input.prop('checked') && this.$element.hasClass('active')) changed = false
else $parent.find('.active').removeClass('active')
}
if (changed) $input.prop('checked', !this.$element.hasClass('active')).trigger('change')
}
if (changed) this.$element.toggleClass('active')
}
// BUTTON PLUGIN DEFINITION
// ========================
var old = $.fn.button
$.fn.button = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.button')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.button', (data = new Button(this, options)))
if (option == 'toggle') data.toggle()
else if (option) data.setState(option)
})
}
$.fn.button.Constructor = Button
// BUTTON NO CONFLICT
// ==================
$.fn.button.noConflict = function () {
$.fn.button = old
return this
}
// BUTTON DATA-API
// ===============
$(document).on('click.bs.button.data-api', '[data-toggle^=button]', function (e) {
var $btn = $(e.target)
if (!$btn.hasClass('btn')) $btn = $btn.closest('.btn')
$btn.button('toggle')
e.preventDefault()
})
}(jQuery);
/* ========================================================================
* Bootstrap: carousel.js v3.1.1
* http://getbootstrap.com/javascript/#carousel
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// CAROUSEL CLASS DEFINITION
// =========================
var Carousel = function (element, options) {
this.$element = $(element)
this.$indicators = this.$element.find('.carousel-indicators')
this.options = options
this.paused =
this.sliding =
this.interval =
this.$active =
this.$items = null
this.options.pause == 'hover' && this.$element
.on('mouseenter', $.proxy(this.pause, this))
.on('mouseleave', $.proxy(this.cycle, this))
}
Carousel.DEFAULTS = {
interval: 5000,
pause: 'hover',
wrap: true
}
Carousel.prototype.cycle = function (e) {
e || (this.paused = false)
this.interval && clearInterval(this.interval)
this.options.interval
&& !this.paused
&& (this.interval = setInterval($.proxy(this.next, this), this.options.interval))
return this
}
Carousel.prototype.getActiveIndex = function () {
this.$active = this.$element.find('.item.active')
this.$items = this.$active.parent().children()
return this.$items.index(this.$active)
}
Carousel.prototype.to = function (pos) {
var that = this
var activeIndex = this.getActiveIndex()
if (pos > (this.$items.length - 1) || pos < 0) return
if (this.sliding) return this.$element.one('slid.bs.carousel', function () { that.to(pos) })
if (activeIndex == pos) return this.pause().cycle()
return this.slide(pos > activeIndex ? 'next' : 'prev', $(this.$items[pos]))
}
Carousel.prototype.pause = function (e) {
e || (this.paused = true)
if (this.$element.find('.next, .prev').length && $.support.transition) {
this.$element.trigger($.support.transition.end)
this.cycle(true)
}
this.interval = clearInterval(this.interval)
return this
}
Carousel.prototype.next = function () {
if (this.sliding) return
return this.slide('next')
}
Carousel.prototype.prev = function () {
if (this.sliding) return
return this.slide('prev')
}
Carousel.prototype.slide = function (type, next) {
var $active = this.$element.find('.item.active')
var $next = next || $active[type]()
var isCycling = this.interval
var direction = type == 'next' ? 'left' : 'right'
var fallback = type == 'next' ? 'first' : 'last'
var that = this
if (!$next.length) {
if (!this.options.wrap) return
$next = this.$element.find('.item')[fallback]()
}
if ($next.hasClass('active')) return this.sliding = false
var e = $.Event('slide.bs.carousel', { relatedTarget: $next[0], direction: direction })
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
this.sliding = true
isCycling && this.pause()
if (this.$indicators.length) {
this.$indicators.find('.active').removeClass('active')
this.$element.one('slid.bs.carousel', function () {
var $nextIndicator = $(that.$indicators.children()[that.getActiveIndex()])
$nextIndicator && $nextIndicator.addClass('active')
})
}
if ($.support.transition && this.$element.hasClass('slide')) {
$next.addClass(type)
$next[0].offsetWidth // force reflow
$active.addClass(direction)
$next.addClass(direction)
$active
.one($.support.transition.end, function () {
$next.removeClass([type, direction].join(' ')).addClass('active')
$active.removeClass(['active', direction].join(' '))
that.sliding = false
setTimeout(function () { that.$element.trigger('slid.bs.carousel') }, 0)
})
.emulateTransitionEnd($active.css('transition-duration').slice(0, -1) * 1000)
} else {
$active.removeClass('active')
$next.addClass('active')
this.sliding = false
this.$element.trigger('slid.bs.carousel')
}
isCycling && this.cycle()
return this
}
// CAROUSEL PLUGIN DEFINITION
// ==========================
var old = $.fn.carousel
$.fn.carousel = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.carousel')
var options = $.extend({}, Carousel.DEFAULTS, $this.data(), typeof option == 'object' && option)
var action = typeof option == 'string' ? option : options.slide
if (!data) $this.data('bs.carousel', (data = new Carousel(this, options)))
if (typeof option == 'number') data.to(option)
else if (action) data[action]()
else if (options.interval) data.pause().cycle()
})
}
$.fn.carousel.Constructor = Carousel
// CAROUSEL NO CONFLICT
// ====================
$.fn.carousel.noConflict = function () {
$.fn.carousel = old
return this
}
// CAROUSEL DATA-API
// =================
$(document).on('click.bs.carousel.data-api', '[data-slide], [data-slide-to]', function (e) {
var $this = $(this), href
var $target = $($this.attr('data-target') || (href = $this.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '')) //strip for ie7
var options = $.extend({}, $target.data(), $this.data())
var slideIndex = $this.attr('data-slide-to')
if (slideIndex) options.interval = false
$target.carousel(options)
if (slideIndex = $this.attr('data-slide-to')) {
$target.data('bs.carousel').to(slideIndex)
}
e.preventDefault()
})
$(window).on('load', function () {
$('[data-ride="carousel"]').each(function () {
var $carousel = $(this)
$carousel.carousel($carousel.data())
})
})
}(jQuery);
/* ========================================================================
* Bootstrap: collapse.js v3.1.1
* http://getbootstrap.com/javascript/#collapse
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// COLLAPSE PUBLIC CLASS DEFINITION
// ================================
var Collapse = function (element, options) {
this.$element = $(element)
this.options = $.extend({}, Collapse.DEFAULTS, options)
this.transitioning = null
if (this.options.parent) this.$parent = $(this.options.parent)
if (this.options.toggle) this.toggle()
}
Collapse.DEFAULTS = {
toggle: true
}
Collapse.prototype.dimension = function () {
var hasWidth = this.$element.hasClass('width')
return hasWidth ? 'width' : 'height'
}
Collapse.prototype.show = function () {
if (this.transitioning || this.$element.hasClass('in')) return
var startEvent = $.Event('show.bs.collapse')
this.$element.trigger(startEvent)
if (startEvent.isDefaultPrevented()) return
var actives = this.$parent && this.$parent.find('> .panel > .in')
if (actives && actives.length) {
var hasData = actives.data('bs.collapse')
if (hasData && hasData.transitioning) return
actives.collapse('hide')
hasData || actives.data('bs.collapse', null)
}
var dimension = this.dimension()
this.$element
.removeClass('collapse')
.addClass('collapsing')
[dimension](0)
this.transitioning = 1
var complete = function () {
this.$element
.removeClass('collapsing')
.addClass('collapse in')
[dimension]('auto')
this.transitioning = 0
this.$element.trigger('shown.bs.collapse')
}
if (!$.support.transition) return complete.call(this)
var scrollSize = $.camelCase(['scroll', dimension].join('-'))
this.$element
.one($.support.transition.end, $.proxy(complete, this))
.emulateTransitionEnd(350)
[dimension](this.$element[0][scrollSize])
}
Collapse.prototype.hide = function () {
if (this.transitioning || !this.$element.hasClass('in')) return
var startEvent = $.Event('hide.bs.collapse')
this.$element.trigger(startEvent)
if (startEvent.isDefaultPrevented()) return
var dimension = this.dimension()
this.$element
[dimension](this.$element[dimension]())
[0].offsetHeight
this.$element
.addClass('collapsing')
.removeClass('collapse')
.removeClass('in')
this.transitioning = 1
var complete = function () {
this.transitioning = 0
this.$element
.trigger('hidden.bs.collapse')
.removeClass('collapsing')
.addClass('collapse')
}
if (!$.support.transition) return complete.call(this)
this.$element
[dimension](0)
.one($.support.transition.end, $.proxy(complete, this))
.emulateTransitionEnd(350)
}
Collapse.prototype.toggle = function () {
this[this.$element.hasClass('in') ? 'hide' : 'show']()
}
// COLLAPSE PLUGIN DEFINITION
// ==========================
var old = $.fn.collapse
$.fn.collapse = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.collapse')
var options = $.extend({}, Collapse.DEFAULTS, $this.data(), typeof option == 'object' && option)
if (!data && options.toggle && option == 'show') option = !option
if (!data) $this.data('bs.collapse', (data = new Collapse(this, options)))
if (typeof option == 'string') data[option]()
})
}
$.fn.collapse.Constructor = Collapse
// COLLAPSE NO CONFLICT
// ====================
$.fn.collapse.noConflict = function () {
$.fn.collapse = old
return this
}
// COLLAPSE DATA-API
// =================
$(document).on('click.bs.collapse.data-api', '[data-toggle=collapse]', function (e) {
var $this = $(this), href
var target = $this.attr('data-target')
|| e.preventDefault()
|| (href = $this.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '') //strip for ie7
var $target = $(target)
var data = $target.data('bs.collapse')
var option = data ? 'toggle' : $this.data()
var parent = $this.attr('data-parent')
var $parent = parent && $(parent)
if (!data || !data.transitioning) {
if ($parent) $parent.find('[data-toggle=collapse][data-parent="' + parent + '"]').not($this).addClass('collapsed')
$this[$target.hasClass('in') ? 'addClass' : 'removeClass']('collapsed')
}
$target.collapse(option)
})
}(jQuery);
/* ========================================================================
* Bootstrap: dropdown.js v3.1.1
* http://getbootstrap.com/javascript/#dropdowns
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// DROPDOWN CLASS DEFINITION
// =========================
var backdrop = '.dropdown-backdrop'
var toggle = '[data-toggle=dropdown]'
var Dropdown = function (element) {
$(element).on('click.bs.dropdown', this.toggle)
}
Dropdown.prototype.toggle = function (e) {
var $this = $(this)
if ($this.is('.disabled, :disabled')) return
var $parent = getParent($this)
var isActive = $parent.hasClass('open')
clearMenus()
if (!isActive) {
if ('ontouchstart' in document.documentElement && !$parent.closest('.navbar-nav').length) {
// if mobile we use a backdrop because click events don't delegate
$('<div class="dropdown-backdrop"/>').insertAfter($(this)).on('click', clearMenus)
}
var relatedTarget = { relatedTarget: this }
$parent.trigger(e = $.Event('show.bs.dropdown', relatedTarget))
if (e.isDefaultPrevented()) return
$parent
.toggleClass('open')
.trigger('shown.bs.dropdown', relatedTarget)
$this.focus()
}
return false
}
Dropdown.prototype.keydown = function (e) {
if (!/(38|40|27)/.test(e.keyCode)) return
var $this = $(this)
e.preventDefault()
e.stopPropagation()
if ($this.is('.disabled, :disabled')) return
var $parent = getParent($this)
var isActive = $parent.hasClass('open')
if (!isActive || (isActive && e.keyCode == 27)) {
if (e.which == 27) $parent.find(toggle).focus()
return $this.click()
}
var desc = ' li:not(.divider):visible a'
var $items = $parent.find('[role=menu]' + desc + ', [role=listbox]' + desc)
if (!$items.length) return
var index = $items.index($items.filter(':focus'))
if (e.keyCode == 38 && index > 0) index-- // up
if (e.keyCode == 40 && index < $items.length - 1) index++ // down
if (!~index) index = 0
$items.eq(index).focus()
}
function clearMenus(e) {
$(backdrop).remove()
$(toggle).each(function () {
var $parent = getParent($(this))
var relatedTarget = { relatedTarget: this }
if (!$parent.hasClass('open')) return
$parent.trigger(e = $.Event('hide.bs.dropdown', relatedTarget))
if (e.isDefaultPrevented()) return
$parent.removeClass('open').trigger('hidden.bs.dropdown', relatedTarget)
})
}
function getParent($this) {
var selector = $this.attr('data-target')
if (!selector) {
selector = $this.attr('href')
selector = selector && /#[A-Za-z]/.test(selector) && selector.replace(/.*(?=#[^\s]*$)/, '') //strip for ie7
}
var $parent = selector && $(selector)
return $parent && $parent.length ? $parent : $this.parent()
}
// DROPDOWN PLUGIN DEFINITION
// ==========================
var old = $.fn.dropdown
$.fn.dropdown = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.dropdown')
if (!data) $this.data('bs.dropdown', (data = new Dropdown(this)))
if (typeof option == 'string') data[option].call($this)
})
}
$.fn.dropdown.Constructor = Dropdown
// DROPDOWN NO CONFLICT
// ====================
$.fn.dropdown.noConflict = function () {
$.fn.dropdown = old
return this
}
// APPLY TO STANDARD DROPDOWN ELEMENTS
// ===================================
$(document)
.on('click.bs.dropdown.data-api', clearMenus)
.on('click.bs.dropdown.data-api', '.dropdown form', function (e) { e.stopPropagation() })
.on('click.bs.dropdown.data-api', toggle, Dropdown.prototype.toggle)
.on('keydown.bs.dropdown.data-api', toggle + ', [role=menu], [role=listbox]', Dropdown.prototype.keydown)
}(jQuery);
/* ========================================================================
* Bootstrap: modal.js v3.1.1
* http://getbootstrap.com/javascript/#modals
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// MODAL CLASS DEFINITION
// ======================
var Modal = function (element, options) {
this.options = options
this.$element = $(element)
this.$backdrop =
this.isShown = null
if (this.options.remote) {
this.$element
.find('.modal-content')
.load(this.options.remote, $.proxy(function () {
this.$element.trigger('loaded.bs.modal')
}, this))
}
}
Modal.DEFAULTS = {
backdrop: true,
keyboard: true,
show: true
}
Modal.prototype.toggle = function (_relatedTarget) {
return this[!this.isShown ? 'show' : 'hide'](_relatedTarget)
}
Modal.prototype.show = function (_relatedTarget) {
var that = this
var e = $.Event('show.bs.modal', { relatedTarget: _relatedTarget })
this.$element.trigger(e)
if (this.isShown || e.isDefaultPrevented()) return
this.isShown = true
this.escape()
this.$element.on('click.dismiss.bs.modal', '[data-dismiss="modal"]', $.proxy(this.hide, this))
this.backdrop(function () {
var transition = $.support.transition && that.$element.hasClass('fade')
if (!that.$element.parent().length) {
that.$element.appendTo(document.body) // don't move modals dom position
}
that.$element
.show()
.scrollTop(0)
if (transition) {
that.$element[0].offsetWidth // force reflow
}
that.$element
.addClass('in')
.attr('aria-hidden', false)
that.enforceFocus()
var e = $.Event('shown.bs.modal', { relatedTarget: _relatedTarget })
transition ?
that.$element.find('.modal-dialog') // wait for modal to slide in
.one($.support.transition.end, function () {
that.$element.focus().trigger(e)
})
.emulateTransitionEnd(300) :
that.$element.focus().trigger(e)
})
}
Modal.prototype.hide = function (e) {
if (e) e.preventDefault()
e = $.Event('hide.bs.modal')
this.$element.trigger(e)
if (!this.isShown || e.isDefaultPrevented()) return
this.isShown = false
this.escape()
$(document).off('focusin.bs.modal')
this.$element
.removeClass('in')
.attr('aria-hidden', true)
.off('click.dismiss.bs.modal')
$.support.transition && this.$element.hasClass('fade') ?
this.$element
.one($.support.transition.end, $.proxy(this.hideModal, this))
.emulateTransitionEnd(300) :
this.hideModal()
}
Modal.prototype.enforceFocus = function () {
$(document)
.off('focusin.bs.modal') // guard against infinite focus loop
.on('focusin.bs.modal', $.proxy(function (e) {
if (this.$element[0] !== e.target && !this.$element.has(e.target).length) {
this.$element.focus()
}
}, this))
}
Modal.prototype.escape = function () {
if (this.isShown && this.options.keyboard) {
this.$element.on('keyup.dismiss.bs.modal', $.proxy(function (e) {
e.which == 27 && this.hide()
}, this))
} else if (!this.isShown) {
this.$element.off('keyup.dismiss.bs.modal')
}
}
Modal.prototype.hideModal = function () {
var that = this
this.$element.hide()
this.backdrop(function () {
that.removeBackdrop()
that.$element.trigger('hidden.bs.modal')
})
}
Modal.prototype.removeBackdrop = function () {
this.$backdrop && this.$backdrop.remove()
this.$backdrop = null
}
Modal.prototype.backdrop = function (callback) {
var animate = this.$element.hasClass('fade') ? 'fade' : ''
if (this.isShown && this.options.backdrop) {
var doAnimate = $.support.transition && animate
this.$backdrop = $('<div class="modal-backdrop ' + animate + '" />')
.appendTo(document.body)
this.$element.on('click.dismiss.bs.modal', $.proxy(function (e) {
if (e.target !== e.currentTarget) return
this.options.backdrop == 'static'
? this.$element[0].focus.call(this.$element[0])
: this.hide.call(this)
}, this))
if (doAnimate) this.$backdrop[0].offsetWidth // force reflow
this.$backdrop.addClass('in')
if (!callback) return
doAnimate ?
this.$backdrop
.one($.support.transition.end, callback)
.emulateTransitionEnd(150) :
callback()
} else if (!this.isShown && this.$backdrop) {
this.$backdrop.removeClass('in')
$.support.transition && this.$element.hasClass('fade') ?
this.$backdrop
.one($.support.transition.end, callback)
.emulateTransitionEnd(150) :
callback()
} else if (callback) {
callback()
}
}
// MODAL PLUGIN DEFINITION
// =======================
var old = $.fn.modal
$.fn.modal = function (option, _relatedTarget) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.modal')
var options = $.extend({}, Modal.DEFAULTS, $this.data(), typeof option == 'object' && option)
if (!data) $this.data('bs.modal', (data = new Modal(this, options)))
if (typeof option == 'string') data[option](_relatedTarget)
else if (options.show) data.show(_relatedTarget)
})
}
$.fn.modal.Constructor = Modal
// MODAL NO CONFLICT
// =================
$.fn.modal.noConflict = function () {
$.fn.modal = old
return this
}
// MODAL DATA-API
// ==============
$(document).on('click.bs.modal.data-api', '[data-toggle="modal"]', function (e) {
var $this = $(this)
var href = $this.attr('href')
var $target = $($this.attr('data-target') || (href && href.replace(/.*(?=#[^\s]+$)/, ''))) //strip for ie7
var option = $target.data('bs.modal') ? 'toggle' : $.extend({ remote: !/#/.test(href) && href }, $target.data(), $this.data())
if ($this.is('a')) e.preventDefault()
$target
.modal(option, this)
.one('hide', function () {
$this.is(':visible') && $this.focus()
})
})
$(document)
.on('show.bs.modal', '.modal', function () { $(document.body).addClass('modal-open') })
.on('hidden.bs.modal', '.modal', function () { $(document.body).removeClass('modal-open') })
}(jQuery);
/* ========================================================================
* Bootstrap: tooltip.js v3.1.1
* http://getbootstrap.com/javascript/#tooltip
* Inspired by the original jQuery.tipsy by Jason Frame
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// TOOLTIP PUBLIC CLASS DEFINITION
// ===============================
var Tooltip = function (element, options) {
this.type =
this.options =
this.enabled =
this.timeout =
this.hoverState =
this.$element = null
this.init('tooltip', element, options)
}
Tooltip.DEFAULTS = {
animation: true,
placement: 'top',
selector: false,
template: '<div class="tooltip"><div class="tooltip-arrow"></div><div class="tooltip-inner"></div></div>',
trigger: 'hover focus',
title: '',
delay: 0,
html: false,
container: false
}
Tooltip.prototype.init = function (type, element, options) {
this.enabled = true
this.type = type
this.$element = $(element)
this.options = this.getOptions(options)
var triggers = this.options.trigger.split(' ')
for (var i = triggers.length; i--;) {
var trigger = triggers[i]
if (trigger == 'click') {
this.$element.on('click.' + this.type, this.options.selector, $.proxy(this.toggle, this))
} else if (trigger != 'manual') {
var eventIn = trigger == 'hover' ? 'mouseenter' : 'focusin'
var eventOut = trigger == 'hover' ? 'mouseleave' : 'focusout'
this.$element.on(eventIn + '.' + this.type, this.options.selector, $.proxy(this.enter, this))
this.$element.on(eventOut + '.' + this.type, this.options.selector, $.proxy(this.leave, this))
}
}
this.options.selector ?
(this._options = $.extend({}, this.options, { trigger: 'manual', selector: '' })) :
this.fixTitle()
}
Tooltip.prototype.getDefaults = function () {
return Tooltip.DEFAULTS
}
Tooltip.prototype.getOptions = function (options) {
options = $.extend({}, this.getDefaults(), this.$element.data(), options)
if (options.delay && typeof options.delay == 'number') {
options.delay = {
show: options.delay,
hide: options.delay
}
}
return options
}
Tooltip.prototype.getDelegateOptions = function () {
var options = {}
var defaults = this.getDefaults()
this._options && $.each(this._options, function (key, value) {
if (defaults[key] != value) options[key] = value
})
return options
}
Tooltip.prototype.enter = function (obj) {
var self = obj instanceof this.constructor ?
obj : $(obj.currentTarget)[this.type](this.getDelegateOptions()).data('bs.' + this.type)
clearTimeout(self.timeout)
self.hoverState = 'in'
if (!self.options.delay || !self.options.delay.show) return self.show()
self.timeout = setTimeout(function () {
if (self.hoverState == 'in') self.show()
}, self.options.delay.show)
}
Tooltip.prototype.leave = function (obj) {
var self = obj instanceof this.constructor ?
obj : $(obj.currentTarget)[this.type](this.getDelegateOptions()).data('bs.' + this.type)
clearTimeout(self.timeout)
self.hoverState = 'out'
if (!self.options.delay || !self.options.delay.hide) return self.hide()
self.timeout = setTimeout(function () {
if (self.hoverState == 'out') self.hide()
}, self.options.delay.hide)
}
Tooltip.prototype.show = function () {
var e = $.Event('show.bs.' + this.type)
if (this.hasContent() && this.enabled) {
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
var that = this;
var $tip = this.tip()
this.setContent()
if (this.options.animation) $tip.addClass('fade')
var placement = typeof this.options.placement == 'function' ?
this.options.placement.call(this, $tip[0], this.$element[0]) :
this.options.placement
var autoToken = /\s?auto?\s?/i
var autoPlace = autoToken.test(placement)
if (autoPlace) placement = placement.replace(autoToken, '') || 'top'
$tip
.detach()
.css({ top: 0, left: 0, display: 'block' })
.addClass(placement)
this.options.container ? $tip.appendTo(this.options.container) : $tip.insertAfter(this.$element)
var pos = this.getPosition()
var actualWidth = $tip[0].offsetWidth
var actualHeight = $tip[0].offsetHeight
if (autoPlace) {
var $parent = this.$element.parent()
var orgPlacement = placement
var docScroll = document.documentElement.scrollTop || document.body.scrollTop
var parentWidth = this.options.container == 'body' ? window.innerWidth : $parent.outerWidth()
var parentHeight = this.options.container == 'body' ? window.innerHeight : $parent.outerHeight()
var parentLeft = this.options.container == 'body' ? 0 : $parent.offset().left
placement = placement == 'bottom' && pos.top + pos.height + actualHeight - docScroll > parentHeight ? 'top' :
placement == 'top' && pos.top - docScroll - actualHeight < 0 ? 'bottom' :
placement == 'right' && pos.right + actualWidth > parentWidth ? 'left' :
placement == 'left' && pos.left - actualWidth < parentLeft ? 'right' :
placement
$tip
.removeClass(orgPlacement)
.addClass(placement)
}
var calculatedOffset = this.getCalculatedOffset(placement, pos, actualWidth, actualHeight)
this.applyPlacement(calculatedOffset, placement)
this.hoverState = null
var complete = function() {
that.$element.trigger('shown.bs.' + that.type)
}
$.support.transition && this.$tip.hasClass('fade') ?
$tip
.one($.support.transition.end, complete)
.emulateTransitionEnd(150) :
complete()
}
}
Tooltip.prototype.applyPlacement = function (offset, placement) {
var replace
var $tip = this.tip()
var width = $tip[0].offsetWidth
var height = $tip[0].offsetHeight
// manually read margins because getBoundingClientRect includes difference
var marginTop = parseInt($tip.css('margin-top'), 10)
var marginLeft = parseInt($tip.css('margin-left'), 10)
// we must check for NaN for ie 8/9
if (isNaN(marginTop)) marginTop = 0
if (isNaN(marginLeft)) marginLeft = 0
offset.top = offset.top + marginTop
offset.left = offset.left + marginLeft
// $.fn.offset doesn't round pixel values
// so we use setOffset directly with our own function B-0
$.offset.setOffset($tip[0], $.extend({
using: function (props) {
$tip.css({
top: Math.round(props.top),
left: Math.round(props.left)
})
}
}, offset), 0)
$tip.addClass('in')
// check to see if placing tip in new offset caused the tip to resize itself
var actualWidth = $tip[0].offsetWidth
var actualHeight = $tip[0].offsetHeight
if (placement == 'top' && actualHeight != height) {
replace = true
offset.top = offset.top + height - actualHeight
}
if (/bottom|top/.test(placement)) {
var delta = 0
if (offset.left < 0) {
delta = offset.left * -2
offset.left = 0
$tip.offset(offset)
actualWidth = $tip[0].offsetWidth
actualHeight = $tip[0].offsetHeight
}
this.replaceArrow(delta - width + actualWidth, actualWidth, 'left')
} else {
this.replaceArrow(actualHeight - height, actualHeight, 'top')
}
if (replace) $tip.offset(offset)
}
Tooltip.prototype.replaceArrow = function (delta, dimension, position) {
this.arrow().css(position, delta ? (50 * (1 - delta / dimension) + '%') : '')
}
Tooltip.prototype.setContent = function () {
var $tip = this.tip()
var title = this.getTitle()
$tip.find('.tooltip-inner')[this.options.html ? 'html' : 'text'](title)
$tip.removeClass('fade in top bottom left right')
}
Tooltip.prototype.hide = function () {
var that = this
var $tip = this.tip()
var e = $.Event('hide.bs.' + this.type)
function complete() {
if (that.hoverState != 'in') $tip.detach()
that.$element.trigger('hidden.bs.' + that.type)
}
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
$tip.removeClass('in')
$.support.transition && this.$tip.hasClass('fade') ?
$tip
.one($.support.transition.end, complete)
.emulateTransitionEnd(150) :
complete()
this.hoverState = null
return this
}
Tooltip.prototype.fixTitle = function () {
var $e = this.$element
if ($e.attr('title') || typeof($e.attr('data-original-title')) != 'string') {
$e.attr('data-original-title', $e.attr('title') || '').attr('title', '')
}
}
Tooltip.prototype.hasContent = function () {
return this.getTitle()
}
Tooltip.prototype.getPosition = function () {
var el = this.$element[0]
return $.extend({}, (typeof el.getBoundingClientRect == 'function') ? el.getBoundingClientRect() : {
width: el.offsetWidth,
height: el.offsetHeight
}, this.$element.offset())
}
Tooltip.prototype.getCalculatedOffset = function (placement, pos, actualWidth, actualHeight) {
return placement == 'bottom' ? { top: pos.top + pos.height, left: pos.left + pos.width / 2 - actualWidth / 2 } :
placement == 'top' ? { top: pos.top - actualHeight, left: pos.left + pos.width / 2 - actualWidth / 2 } :
placement == 'left' ? { top: pos.top + pos.height / 2 - actualHeight / 2, left: pos.left - actualWidth } :
/* placement == 'right' */ { top: pos.top + pos.height / 2 - actualHeight / 2, left: pos.left + pos.width }
}
Tooltip.prototype.getTitle = function () {
var title
var $e = this.$element
var o = this.options
title = $e.attr('data-original-title')
|| (typeof o.title == 'function' ? o.title.call($e[0]) : o.title)
return title
}
Tooltip.prototype.tip = function () {
return this.$tip = this.$tip || $(this.options.template)
}
Tooltip.prototype.arrow = function () {
return this.$arrow = this.$arrow || this.tip().find('.tooltip-arrow')
}
Tooltip.prototype.validate = function () {
if (!this.$element[0].parentNode) {
this.hide()
this.$element = null
this.options = null
}
}
Tooltip.prototype.enable = function () {
this.enabled = true
}
Tooltip.prototype.disable = function () {
this.enabled = false
}
Tooltip.prototype.toggleEnabled = function () {
this.enabled = !this.enabled
}
Tooltip.prototype.toggle = function (e) {
var self = e ? $(e.currentTarget)[this.type](this.getDelegateOptions()).data('bs.' + this.type) : this
self.tip().hasClass('in') ? self.leave(self) : self.enter(self)
}
Tooltip.prototype.destroy = function () {
clearTimeout(this.timeout)
this.hide().$element.off('.' + this.type).removeData('bs.' + this.type)
}
// TOOLTIP PLUGIN DEFINITION
// =========================
var old = $.fn.tooltip
$.fn.tooltip = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.tooltip')
var options = typeof option == 'object' && option
if (!data && option == 'destroy') return
if (!data) $this.data('bs.tooltip', (data = new Tooltip(this, options)))
if (typeof option == 'string') data[option]()
})
}
$.fn.tooltip.Constructor = Tooltip
// TOOLTIP NO CONFLICT
// ===================
$.fn.tooltip.noConflict = function () {
$.fn.tooltip = old
return this
}
}(jQuery);
/* ========================================================================
* Bootstrap: popover.js v3.1.1
* http://getbootstrap.com/javascript/#popovers
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// POPOVER PUBLIC CLASS DEFINITION
// ===============================
var Popover = function (element, options) {
this.init('popover', element, options)
}
if (!$.fn.tooltip) throw new Error('Popover requires tooltip.js')
Popover.DEFAULTS = $.extend({}, $.fn.tooltip.Constructor.DEFAULTS, {
placement: 'right',
trigger: 'click',
content: '',
template: '<div class="popover"><div class="arrow"></div><h3 class="popover-title"></h3><div class="popover-content"></div></div>'
})
// NOTE: POPOVER EXTENDS tooltip.js
// ================================
Popover.prototype = $.extend({}, $.fn.tooltip.Constructor.prototype)
Popover.prototype.constructor = Popover
Popover.prototype.getDefaults = function () {
return Popover.DEFAULTS
}
Popover.prototype.setContent = function () {
var $tip = this.tip()
var title = this.getTitle()
var content = this.getContent()
$tip.find('.popover-title')[this.options.html ? 'html' : 'text'](title)
$tip.find('.popover-content')[ // we use append for html objects to maintain js events
this.options.html ? (typeof content == 'string' ? 'html' : 'append') : 'text'
](content)
$tip.removeClass('fade top bottom left right in')
// IE8 doesn't accept hiding via the `:empty` pseudo selector, we have to do
// this manually by checking the contents.
if (!$tip.find('.popover-title').html()) $tip.find('.popover-title').hide()
}
Popover.prototype.hasContent = function () {
return this.getTitle() || this.getContent()
}
Popover.prototype.getContent = function () {
var $e = this.$element
var o = this.options
return $e.attr('data-content')
|| (typeof o.content == 'function' ?
o.content.call($e[0]) :
o.content)
}
Popover.prototype.arrow = function () {
return this.$arrow = this.$arrow || this.tip().find('.arrow')
}
Popover.prototype.tip = function () {
if (!this.$tip) this.$tip = $(this.options.template)
return this.$tip
}
// POPOVER PLUGIN DEFINITION
// =========================
var old = $.fn.popover
$.fn.popover = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.popover')
var options = typeof option == 'object' && option
if (!data && option == 'destroy') return
if (!data) $this.data('bs.popover', (data = new Popover(this, options)))
if (typeof option == 'string') data[option]()
})
}
$.fn.popover.Constructor = Popover
// POPOVER NO CONFLICT
// ===================
$.fn.popover.noConflict = function () {
$.fn.popover = old
return this
}
}(jQuery);
/* ========================================================================
* Bootstrap: scrollspy.js v3.1.1
* http://getbootstrap.com/javascript/#scrollspy
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// SCROLLSPY CLASS DEFINITION
// ==========================
function ScrollSpy(element, options) {
var href
var process = $.proxy(this.process, this)
this.$element = $(element).is('body') ? $(window) : $(element)
this.$body = $('body')
this.$scrollElement = this.$element.on('scroll.bs.scroll-spy.data-api', process)
this.options = $.extend({}, ScrollSpy.DEFAULTS, options)
this.selector = (this.options.target
|| ((href = $(element).attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '')) //strip for ie7
|| '') + ' .nav li > a'
this.offsets = $([])
this.targets = $([])
this.activeTarget = null
this.refresh()
this.process()
}
ScrollSpy.DEFAULTS = {
offset: 10
}
ScrollSpy.prototype.refresh = function () {
var offsetMethod = this.$element[0] == window ? 'offset' : 'position'
this.offsets = $([])
this.targets = $([])
var self = this
var $targets = this.$body
.find(this.selector)
.map(function () {
var $el = $(this)
var href = $el.data('target') || $el.attr('href')
var $href = /^#./.test(href) && $(href)
return ($href
&& $href.length
&& $href.is(':visible')
&& [[ $href[offsetMethod]().top + (!$.isWindow(self.$scrollElement.get(0)) && self.$scrollElement.scrollTop()), href ]]) || null
})
.sort(function (a, b) { return a[0] - b[0] })
.each(function () {
self.offsets.push(this[0])
self.targets.push(this[1])
})
}
ScrollSpy.prototype.process = function () {
var scrollTop = this.$scrollElement.scrollTop() + this.options.offset
var scrollHeight = this.$scrollElement[0].scrollHeight || this.$body[0].scrollHeight
var maxScroll = scrollHeight - this.$scrollElement.height()
var offsets = this.offsets
var targets = this.targets
var activeTarget = this.activeTarget
var i
if (scrollTop >= maxScroll) {
return activeTarget != (i = targets.last()[0]) && this.activate(i)
}
if (activeTarget && scrollTop <= offsets[0]) {
return activeTarget != (i = targets[0]) && this.activate(i)
}
for (i = offsets.length; i--;) {
activeTarget != targets[i]
&& scrollTop >= offsets[i]
&& (!offsets[i + 1] || scrollTop <= offsets[i + 1])
&& this.activate( targets[i] )
}
}
ScrollSpy.prototype.activate = function (target) {
this.activeTarget = target
$(this.selector)
.parentsUntil(this.options.target, '.active')
.removeClass('active')
var selector = this.selector +
'[data-target="' + target + '"],' +
this.selector + '[href="' + target + '"]'
var active = $(selector)
.parents('li')
.addClass('active')
if (active.parent('.dropdown-menu').length) {
active = active
.closest('li.dropdown')
.addClass('active')
}
active.trigger('activate.bs.scrollspy')
}
// SCROLLSPY PLUGIN DEFINITION
// ===========================
var old = $.fn.scrollspy
$.fn.scrollspy = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.scrollspy')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.scrollspy', (data = new ScrollSpy(this, options)))
if (typeof option == 'string') data[option]()
})
}
$.fn.scrollspy.Constructor = ScrollSpy
// SCROLLSPY NO CONFLICT
// =====================
$.fn.scrollspy.noConflict = function () {
$.fn.scrollspy = old
return this
}
// SCROLLSPY DATA-API
// ==================
$(window).on('load', function () {
$('[data-spy="scroll"]').each(function () {
var $spy = $(this)
$spy.scrollspy($spy.data())
})
})
}(jQuery);
/* ========================================================================
* Bootstrap: tab.js v3.1.1
* http://getbootstrap.com/javascript/#tabs
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// TAB CLASS DEFINITION
// ====================
var Tab = function (element) {
this.element = $(element)
}
Tab.prototype.show = function () {
var $this = this.element
var $ul = $this.closest('ul:not(.dropdown-menu)')
var selector = $this.data('target')
if (!selector) {
selector = $this.attr('href')
selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') //strip for ie7
}
if ($this.parent('li').hasClass('active')) return
var previous = $ul.find('.active:last a')[0]
var e = $.Event('show.bs.tab', {
relatedTarget: previous
})
$this.trigger(e)
if (e.isDefaultPrevented()) return
var $target = $(selector)
this.activate($this.parent('li'), $ul)
this.activate($target, $target.parent(), function () {
$this.trigger({
type: 'shown.bs.tab',
relatedTarget: previous
})
})
}
Tab.prototype.activate = function (element, container, callback) {
var $active = container.find('> .active')
var transition = callback
&& $.support.transition
&& $active.hasClass('fade')
function next() {
$active
.removeClass('active')
.find('> .dropdown-menu > .active')
.removeClass('active')
element.addClass('active')
if (transition) {
element[0].offsetWidth // reflow for transition
element.addClass('in')
} else {
element.removeClass('fade')
}
if (element.parent('.dropdown-menu')) {
element.closest('li.dropdown').addClass('active')
}
callback && callback()
}
transition ?
$active
.one($.support.transition.end, next)
.emulateTransitionEnd(150) :
next()
$active.removeClass('in')
}
// TAB PLUGIN DEFINITION
// =====================
var old = $.fn.tab
$.fn.tab = function ( option ) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.tab')
if (!data) $this.data('bs.tab', (data = new Tab(this)))
if (typeof option == 'string') data[option]()
})
}
$.fn.tab.Constructor = Tab
// TAB NO CONFLICT
// ===============
$.fn.tab.noConflict = function () {
$.fn.tab = old
return this
}
// TAB DATA-API
// ============
$(document).on('click.bs.tab.data-api', '[data-toggle="tab"], [data-toggle="pill"]', function (e) {
e.preventDefault()
$(this).tab('show')
})
}(jQuery);
/* ========================================================================
* Bootstrap: affix.js v3.1.1
* http://getbootstrap.com/javascript/#affix
* ========================================================================
* Copyright 2011-2014 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* ======================================================================== */
+function ($) {
'use strict';
// AFFIX CLASS DEFINITION
// ======================
var Affix = function (element, options) {
this.options = $.extend({}, Affix.DEFAULTS, options)
this.$window = $(window)
.on('scroll.bs.affix.data-api', $.proxy(this.checkPosition, this))
.on('click.bs.affix.data-api', $.proxy(this.checkPositionWithEventLoop, this))
this.$element = $(element)
this.affixed =
this.unpin =
this.pinnedOffset = null
this.checkPosition()
}
Affix.RESET = 'affix affix-top affix-bottom'
Affix.DEFAULTS = {
offset: 0
}
Affix.prototype.getPinnedOffset = function () {
if (this.pinnedOffset) return this.pinnedOffset
this.$element.removeClass(Affix.RESET).addClass('affix')
var scrollTop = this.$window.scrollTop()
var position = this.$element.offset()
return (this.pinnedOffset = position.top - scrollTop)
}
Affix.prototype.checkPositionWithEventLoop = function () {
setTimeout($.proxy(this.checkPosition, this), 1)
}
Affix.prototype.checkPosition = function () {
if (!this.$element.is(':visible')) return
var scrollHeight = $(document).height()
var scrollTop = this.$window.scrollTop()
var position = this.$element.offset()
var offset = this.options.offset
var offsetTop = offset.top
var offsetBottom = offset.bottom
if (this.affixed == 'top') position.top += scrollTop
if (typeof offset != 'object') offsetBottom = offsetTop = offset
if (typeof offsetTop == 'function') offsetTop = offset.top(this.$element)
if (typeof offsetBottom == 'function') offsetBottom = offset.bottom(this.$element)
var affix = this.unpin != null && (scrollTop + this.unpin <= position.top) ? false :
offsetBottom != null && (position.top + this.$element.height() >= scrollHeight - offsetBottom) ? 'bottom' :
offsetTop != null && (scrollTop <= offsetTop) ? 'top' : false
if (this.affixed === affix) return
if (this.unpin) this.$element.css('top', '')
var affixType = 'affix' + (affix ? '-' + affix : '')
var e = $.Event(affixType + '.bs.affix')
this.$element.trigger(e)
if (e.isDefaultPrevented()) return
this.affixed = affix
this.unpin = affix == 'bottom' ? this.getPinnedOffset() : null
this.$element
.removeClass(Affix.RESET)
.addClass(affixType)
.trigger($.Event(affixType.replace('affix', 'affixed')))
if (affix == 'bottom') {
this.$element.offset({ top: scrollHeight - offsetBottom - this.$element.height() })
}
}
// AFFIX PLUGIN DEFINITION
// =======================
var old = $.fn.affix
$.fn.affix = function (option) {
return this.each(function () {
var $this = $(this)
var data = $this.data('bs.affix')
var options = typeof option == 'object' && option
if (!data) $this.data('bs.affix', (data = new Affix(this, options)))
if (typeof option == 'string') data[option]()
})
}
$.fn.affix.Constructor = Affix
// AFFIX NO CONFLICT
// =================
$.fn.affix.noConflict = function () {
$.fn.affix = old
return this
}
// AFFIX DATA-API
// ==============
$(window).on('load', function () {
$('[data-spy="affix"]').each(function () {
var $spy = $(this)
var data = $spy.data()
data.offset = data.offset || {}
if (data.offsetBottom) data.offset.bottom = data.offsetBottom
if (data.offsetTop) data.offset.top = data.offsetTop
$spy.affix(data)
})
})
}(jQuery); | PypiClean |
/KaTrain-1.14.0-py3-none-any.whl/katrain/core/sgf_parser.py | import copy
import chardet
import math
import re
from collections import defaultdict
from typing import Any, Dict, List, Optional, Tuple
class ParseError(Exception):
"""Exception raised on a parse error"""
pass
class Move:
GTP_COORD = list("ABCDEFGHJKLMNOPQRSTUVWXYZ") + [
xa + c for xa in "ABCDEFGH" for c in "ABCDEFGHJKLMNOPQRSTUVWXYZ"
] # board size 52+ support
PLAYERS = "BW"
SGF_COORD = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ".lower()) + list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") # sgf goes to 52
@classmethod
def from_gtp(cls, gtp_coords, player="B"):
"""Initialize a move from GTP coordinates and player"""
if "pass" in gtp_coords.lower():
return cls(coords=None, player=player)
match = re.match(r"([A-Z]+)(\d+)", gtp_coords)
return cls(coords=(Move.GTP_COORD.index(match[1]), int(match[2]) - 1), player=player)
@classmethod
def from_sgf(cls, sgf_coords, board_size, player="B"):
"""Initialize a move from SGF coordinates and player"""
if sgf_coords == "" or (
sgf_coords == "tt" and board_size[0] <= 19 and board_size[1] <= 19
): # [tt] can be used as "pass" for <= 19x19 board
return cls(coords=None, player=player)
return cls(
coords=(Move.SGF_COORD.index(sgf_coords[0]), board_size[1] - Move.SGF_COORD.index(sgf_coords[1]) - 1),
player=player,
)
def __init__(self, coords: Optional[Tuple[int, int]] = None, player: str = "B"):
"""Initialize a move from zero-based coordinates and player"""
self.player = player
self.coords = coords
def __repr__(self):
return f"Move({self.player or ''}{self.gtp()})"
def __eq__(self, other):
return self.coords == other.coords and self.player == other.player
def __hash__(self):
return hash((self.coords, self.player))
def gtp(self):
"""Returns GTP coordinates of the move"""
if self.is_pass:
return "pass"
return Move.GTP_COORD[self.coords[0]] + str(self.coords[1] + 1)
def sgf(self, board_size):
"""Returns SGF coordinates of the move"""
if self.is_pass:
return ""
return f"{Move.SGF_COORD[self.coords[0]]}{Move.SGF_COORD[board_size[1] - self.coords[1] - 1]}"
@property
def is_pass(self):
"""Returns True if the move is a pass"""
return self.coords is None
@staticmethod
def opponent_player(player):
"""Returns the opposing player, i.e. W <-> B"""
return "W" if player == "B" else "B"
@property
def opponent(self):
"""Returns the opposing player, i.e. W <-> B"""
return self.opponent_player(self.player)
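# --- Illustrative usage sketch (not part of the original module) ---
# A minimal, hypothetical helper showing how the Move class above converts
# between GTP and SGF coordinates; it is never called by the library and the
# sample coordinates are assumptions chosen for demonstration only.
def _demo_move_coordinates():
    m = Move.from_gtp("Q16", player="B")
    assert m.gtp() == "Q16"                                       # GTP columns skip the letter I
    assert m.sgf(board_size=(19, 19)) == "pd"                     # SGF counts rows from the top
    assert Move.from_sgf("pd", board_size=(19, 19), player="B") == m
    assert Move(coords=None, player="W").is_pass                  # empty coords mean "pass"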
class SGFNode:
def __init__(self, parent=None, properties=None, move=None):
self.children = []
self.properties = defaultdict(list)
if properties:
for k, v in properties.items():
self.set_property(k, v)
self.parent = parent
if self.parent:
self.parent.children.append(self)
if parent and move:
self.set_property(move.player, move.sgf(self.board_size))
self._clear_cache()
def _clear_cache(self):
self.moves_cache = None
def __repr__(self):
return f"SGFNode({dict(self.properties)})"
def sgf_properties(self, **xargs) -> Dict:
"""For hooking into in a subclass and overriding/formatting any additional properties to be output."""
return copy.deepcopy(self.properties)
@staticmethod
def order_children(children):
"""For hooking into in a subclass and overriding branch order."""
return children
@property
def ordered_children(self):
return self.order_children(self.children)
@staticmethod
def _escape_value(value):
return re.sub(r"([\]\\])", r"\\\1", value) if isinstance(value, str) else value # escape \ and ]
@staticmethod
def _unescape_value(value):
return re.sub(r"\\([\]\\])", r"\1", value) if isinstance(value, str) else value # unescape \ and ]
def sgf(self, **xargs) -> str:
"""Generates an SGF, calling sgf_properties on each node with the given xargs, so it can filter relevant properties if needed."""
def node_sgf_str(node):
return ";" + "".join(
[
prop + "".join(f"[{self._escape_value(v)}]" for v in values)
for prop, values in node.sgf_properties(**xargs).items()
if values
]
)
stack = [")", self, "("]
sgf_str = ""
while stack:
item = stack.pop()
if isinstance(item, str):
sgf_str += item
else:
sgf_str += node_sgf_str(item)
if len(item.children) == 1:
stack.append(item.children[0])
elif item.children:
stack += sum([[")", c, "("] for c in item.ordered_children[::-1]], [])
return sgf_str
def add_list_property(self, property: str, values: List):
"""Add some values to the property list."""
# SiZe[19] ==> SZ[19] etc. for old SGF
normalized_property = re.sub("[a-z]", "", property)
self._clear_cache()
self.properties[normalized_property] += values
def get_list_property(self, property, default=None) -> Any:
"""Get the list of values for a property."""
return self.properties.get(property, default)
def set_property(self, property: str, value: Any):
"""Add some values to the property. If not a list, it will be made into a single-value list."""
if not isinstance(value, list):
value = [value]
self._clear_cache()
self.properties[property] = value
def get_property(self, property, default=None) -> Any:
"""Get the first value of the property, typically when exactly one is expected."""
return self.properties.get(property, [default])[0]
def clear_property(self, property) -> Any:
"""Removes property if it exists."""
return self.properties.pop(property, None)
@property
def parent(self) -> Optional["SGFNode"]:
"""Returns the parent node"""
return self._parent
@parent.setter
def parent(self, parent_node):
self._parent = parent_node
self._root = None
self._depth = None
@property
def root(self) -> "SGFNode":
"""Returns the root of the tree, cached for speed"""
if self._root is None:
self._root = self.parent.root if self.parent else self
return self._root
@property
def depth(self) -> int:
"""Returns the depth of this node, where root is 0, cached for speed"""
if self._depth is None:
moves = self.moves
if self.is_root:
self._depth = 0
else: # no increase on placements etc
self._depth = self.parent.depth + len(moves)
return self._depth
@property
def board_size(self) -> Tuple[int, int]:
"""Retrieves the root's SZ property, or 19 if missing. Parses it, and returns board size as a tuple x,y"""
size = str(self.root.get_property("SZ", "19"))
if ":" in size:
x, y = map(int, size.split(":"))
else:
x = int(size)
y = x
return x, y
@property
def komi(self) -> float:
"""Retrieves the root's KM property, or 6.5 if missing"""
try:
km = float(self.root.get_property("KM", 6.5))
except ValueError:
km = 6.5
return km
@property
def handicap(self) -> int:
try:
return int(self.root.get_property("HA", 0))
except ValueError:
return 0
@property
def ruleset(self) -> str:
"""Retrieves the root's RU property, or 'japanese' if missing"""
return self.root.get_property("RU", "japanese")
@property
def moves(self) -> List[Move]:
"""Returns all moves in the node - typically 'move' will be better."""
if self.moves_cache is None:
self.moves_cache = [
Move.from_sgf(move, player=pl, board_size=self.board_size)
for pl in Move.PLAYERS
for move in self.get_list_property(pl, [])
]
return self.moves_cache
def _expanded_placements(self, player):
sgf_pl = player if player is not None else "E" # AE
placements = self.get_list_property("A" + sgf_pl, [])
if not placements:
return []
to_be_expanded = [p for p in placements if ":" in p]
board_size = self.board_size
if to_be_expanded:
coords = {
Move.from_sgf(sgf_coord, player=player, board_size=board_size)
for sgf_coord in placements
if ":" not in sgf_coord
}
for p in to_be_expanded:
from_coord, to_coord = [Move.from_sgf(c, board_size=board_size) for c in p.split(":")[:2]]
for x in range(from_coord.coords[0], to_coord.coords[0] + 1):
                    for y in range(to_coord.coords[1], from_coord.coords[1] + 1):  # sgf y-axis runs top-down, hence the swapped bounds
if 0 <= x < board_size[0] and 0 <= y < board_size[1]:
coords.add(Move((x, y), player=player))
return list(coords)
else:
return [Move.from_sgf(sgf_coord, player=player, board_size=board_size) for sgf_coord in placements]
@property
def placements(self) -> List[Move]:
"""Returns all placements (AB/AW) in the node."""
return [coord for pl in Move.PLAYERS for coord in self._expanded_placements(pl)]
@property
def clear_placements(self) -> List[Move]:
"""Returns all AE clear square commends in the node."""
return self._expanded_placements(None)
@property
def move_with_placements(self) -> List[Move]:
"""Returns all moves (B/W) and placements (AB/AW) in the node."""
return self.placements + self.moves
@property
def move(self) -> Optional[Move]:
"""Returns the single move for the node if one exists, or None if no moves (or multiple ones) exist."""
moves = self.moves
if len(moves) == 1:
return moves[0]
@property
def is_root(self) -> bool:
"""Returns true if node is a root"""
return self.parent is None
@property
def is_pass(self) -> bool:
"""Returns true if associated move is pass"""
return not self.placements and self.move and self.move.is_pass
@property
def empty(self) -> bool:
"""Returns true if node has no children or properties"""
return not self.children and not self.properties
@property
def nodes_in_tree(self) -> List:
"""Returns all nodes in the tree rooted at this node"""
stack = [self]
nodes = []
while stack:
item = stack.pop(0)
nodes.append(item)
stack += item.children
return nodes
@property
def nodes_from_root(self) -> List:
"""Returns all nodes from the root up to this node, i.e. the moves played in the current branch of the game"""
nodes = [self]
n = self
while not n.is_root:
n = n.parent
nodes.append(n)
return nodes[::-1]
def play(self, move) -> "SGFNode":
"""Either find an existing child or create a new one with the given move."""
for c in self.children:
if c.move and c.move == move:
return c
return self.__class__(parent=self, move=move)
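    # Illustrative note (not part of the original module): play() either reuses a
    # matching child or appends a new SGFNode, so repeated calls with the same
    # (hypothetical) move return the same object:
    #
    #   node = root.play(Move.from_gtp("Q16", player="B"))
    #   node is root.play(Move.from_gtp("Q16", player="B"))   # -> True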
@property
def initial_player(self): # player for first node
root = self.root
if "PL" in root.properties: # explicit
return "B" if self.root.get_property("PL").upper().strip() == "B" else "W"
elif root.children: # child exist, use it if not placement
for child in root.children:
for color in "BW":
if color in child.properties:
return color
# b move or setup with only black moves like handicap
if "AB" in self.properties and "AW" not in self.properties:
return "W"
else:
return "B"
@property
def next_player(self):
"""Returns player to move"""
if self.is_root:
return self.initial_player
elif "B" in self.properties:
return "W"
elif "W" in self.properties:
return "B"
else: # only placements, find a parent node with a real move. TODO: better placement support
return self.parent.next_player
@property
def player(self):
"""Returns player that moved last. nb root is considered white played if no handicap stones are placed"""
if "B" in self.properties or ("AB" in self.properties and "W" not in self.properties):
return "B"
else:
return "W"
def place_handicap_stones(self, n_handicaps, tygem=False):
board_size_x, board_size_y = self.board_size
if min(board_size_x, board_size_y) < 3:
            return  # board too small to place handicap stones
near_x = 3 if board_size_x >= 13 else min(2, board_size_x - 1)
near_y = 3 if board_size_y >= 13 else min(2, board_size_y - 1)
far_x = board_size_x - 1 - near_x
far_y = board_size_y - 1 - near_y
middle_x = board_size_x // 2 # what for even sizes?
middle_y = board_size_y // 2
if n_handicaps > 9 and board_size_x == board_size_y:
stones_per_row = math.ceil(math.sqrt(n_handicaps))
spacing = (far_x - near_x) / (stones_per_row - 1)
if spacing < near_x:
far_x += 1
near_x -= 1
spacing = (far_x - near_x) / (stones_per_row - 1)
coords = list({math.floor(0.5 + near_x + i * spacing) for i in range(stones_per_row)})
stones = sorted(
[(x, y) for x in coords for y in coords],
key=lambda xy: -((xy[0] - (board_size_x - 1) / 2) ** 2 + (xy[1] - (board_size_y - 1) / 2) ** 2),
)
else: # max 9
stones = [(far_x, far_y), (near_x, near_y), (far_x, near_y), (near_x, far_y)]
if n_handicaps % 2 == 1:
stones.append((middle_x, middle_y))
stones += [(near_x, middle_y), (far_x, middle_y), (middle_x, near_y), (middle_x, far_y)]
if tygem:
stones[2], stones[3] = stones[3], stones[2]
self.set_property(
"AB", list({Move(stone).sgf(board_size=(board_size_x, board_size_y)) for stone in stones[:n_handicaps]})
)
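# --- Illustrative sketch (not part of the original module) ---
# A hypothetical helper showing place_handicap_stones() in isolation; the board
# size and handicap count are assumptions for demonstration only, and the helper
# is never called by the library.
def _demo_handicap_placement():
    root = SGFNode(properties={"SZ": 19, "HA": 4})
    root.place_handicap_stones(4)
    # AB now holds the four corner star points, e.g. {"dd", "pd", "dp", "pp"} in arbitrary order
    return root.get_list_property("AB")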
class SGF:
DEFAULT_ENCODING = "UTF-8"
_NODE_CLASS = SGFNode # Class used for SGF Nodes, can change this to something that inherits from SGFNode
# https://xkcd.com/1171/
SGFPROP_PAT = re.compile(r"\s*(?:\(|\)|;|(\w+)((\s*\[([^\]\\]|\\.)*\])+))", flags=re.DOTALL)
SGF_PAT = re.compile(r"\(;.*\)", flags=re.DOTALL)
@classmethod
def parse_sgf(cls, input_str) -> SGFNode:
"""Parse a string as SGF."""
match = re.search(cls.SGF_PAT, input_str)
clipped_str = match.group() if match else input_str
root = cls(clipped_str).root
# Fix weird FoxGo server KM values
if "foxwq" in root.get_list_property("AP", []):
if int(root.get_property("HA", 0)) >= 1:
corrected_komi = 0.5
elif root.get_property("RU").lower() in ["chinese", "cn"]:
corrected_komi = 7.5
else:
corrected_komi = 6.5
root.set_property("KM", corrected_komi)
return root
@classmethod
def parse_file(cls, filename, encoding=None) -> SGFNode:
        """Parse a file as SGF (or NGF/GIB); the encoding will be detected if not given."""
        is_gib = filename.lower().endswith(".gib")
        is_ngf = filename.lower().endswith(".ngf")
with open(filename, "rb") as f:
bin_contents = f.read()
if not encoding:
if is_gib or is_ngf or b"AP[foxwq]" in bin_contents:
encoding = "utf8"
else: # sgf
match = re.search(rb"CA\[(.*?)\]", bin_contents)
if match:
encoding = match[1].decode("ascii", errors="ignore")
else:
encoding = chardet.detect(bin_contents[:300])["encoding"]
                    # workaround for some compatibility issues with Windows-1252 and GB2312 encodings
if encoding == "Windows-1252" or encoding == "GB2312":
encoding = "GBK"
try:
decoded = bin_contents.decode(encoding=encoding, errors="ignore")
except LookupError:
decoded = bin_contents.decode(encoding=cls.DEFAULT_ENCODING, errors="ignore")
if is_ngf:
return cls.parse_ngf(decoded)
if is_gib:
return cls.parse_gib(decoded)
else: # sgf
return cls.parse_sgf(decoded)
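    # --- Illustrative usage sketch (not part of the original module) ---
    # Parsing a literal SGF string and walking the main line; the game record
    # below is an assumption chosen for demonstration only:
    #
    #   root = SGF.parse_sgf("(;FF[4]SZ[19];B[pd];W[dp])")
    #   root.get_property("SZ")         # -> "19" (raw property value)
    #   node = root.children[0]
    #   node.move.gtp()                 # -> "Q16"
    #   node.children[0].move.gtp()     # -> "D4"
    #   root.sgf()                      # serialises the tree back to SGF text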
def __init__(self, contents):
self.contents = contents
try:
self.ix = self.contents.index("(") + 1
except ValueError:
raise ParseError(f"Parse error: Expected '(' at start, found {self.contents[:50]}")
self.root = self._NODE_CLASS()
self._parse_branch(self.root)
def _parse_branch(self, current_move: SGFNode):
while self.ix < len(self.contents):
match = re.match(self.SGFPROP_PAT, self.contents[self.ix :])
if not match:
break
self.ix += len(match[0])
matched_item = match[0].strip()
if matched_item == ")":
return
if matched_item == "(":
self._parse_branch(self._NODE_CLASS(parent=current_move))
elif matched_item == ";":
# ignore ;) for old SGF
useless = self.ix < len(self.contents) and self.contents[self.ix :].strip() == ")"
# ignore ; that generate empty nodes
if not (current_move.empty or useless):
current_move = self._NODE_CLASS(parent=current_move)
else:
property, value = match[1], match[2].strip()[1:-1]
values = re.split(r"\]\s*\[", value)
current_move.add_list_property(property, [SGFNode._unescape_value(v) for v in values])
if self.ix < len(self.contents):
raise ParseError(f"Parse Error: unexpected character at {self.contents[self.ix:self.ix+25]}")
raise ParseError("Parse Error: expected ')' at end of input.")
# NGF parser adapted from https://github.com/fohristiwhirl/gofish/
@classmethod
def parse_ngf(cls, ngf):
ngf = ngf.strip()
lines = ngf.split("\n")
try:
boardsize = int(lines[1])
handicap = int(lines[5])
pw = lines[2].split()[0]
pb = lines[3].split()[0]
rawdate = lines[8][0:8]
komi = float(lines[7])
if handicap == 0 and int(komi) == komi:
komi += 0.5
except (IndexError, ValueError):
boardsize = 19
handicap = 0
pw = ""
pb = ""
rawdate = ""
komi = 0
re = ""
try:
if "hite win" in lines[10]:
re = "W+"
elif "lack win" in lines[10]:
re = "B+"
except IndexError:
pass
if handicap < 0 or handicap > 9:
raise ParseError(f"Handicap {handicap} out of range")
root = cls._NODE_CLASS()
node = root
# Set root values...
root.set_property("SZ", boardsize)
if handicap >= 2:
root.set_property("HA", handicap)
root.place_handicap_stones(handicap, tygem=True) # While this isn't Tygem, it uses the same layout
if komi:
root.set_property("KM", komi)
if len(rawdate) == 8:
ok = True
for n in range(8):
if rawdate[n] not in "0123456789":
ok = False
if ok:
date = rawdate[0:4] + "-" + rawdate[4:6] + "-" + rawdate[6:8]
root.set_property("DT", date)
if pw:
root.set_property("PW", pw)
if pb:
root.set_property("PB", pb)
if re:
root.set_property("RE", re)
# Main parser...
for line in lines:
line = line.strip().upper()
if len(line) >= 7:
if line[0:2] == "PM":
if line[4] in ["B", "W"]:
# move format is similar to SGF, but uppercase and out-by-1
key = line[4]
raw_move = line[5:7].lower()
if raw_move == "aa":
value = "" # pass
else:
value = chr(ord(raw_move[0]) - 1) + chr(ord(raw_move[1]) - 1)
node = cls._NODE_CLASS(parent=node)
node.set_property(key, value)
if len(root.children) == 0: # We'll assume we failed in this case
raise ParseError("Found no moves")
return root
# GIB parser adapted from https://github.com/fohristiwhirl/gofish/
@classmethod
def parse_gib(cls, gib):
def parse_player_name(raw):
name = raw
rank = ""
foo = raw.split("(")
if len(foo) == 2:
if foo[1][-1] == ")":
name = foo[0].strip()
rank = foo[1][0:-1]
return name, rank
def gib_make_result(grlt, zipsu):
easycases = {3: "B+R", 4: "W+R", 7: "B+T", 8: "W+T"}
if grlt in easycases:
return easycases[grlt]
if grlt in [0, 1]:
return "{}+{}".format("B" if grlt == 0 else "W", zipsu / 10)
return ""
def gib_get_result(line, grlt_regex, zipsu_regex):
try:
grlt = int(re.search(grlt_regex, line).group(1))
zipsu = int(re.search(zipsu_regex, line).group(1))
except: # noqa E722
return ""
return gib_make_result(grlt, zipsu)
root = cls._NODE_CLASS()
node = root
lines = gib.split("\n")
for line in lines:
line = line.strip()
if line.startswith("\\[GAMEBLACKNAME=") and line.endswith("\\]"):
s = line[16:-2]
name, rank = parse_player_name(s)
if name:
root.set_property("PB", name)
if rank:
root.set_property("BR", rank)
if line.startswith("\\[GAMEWHITENAME=") and line.endswith("\\]"):
s = line[16:-2]
name, rank = parse_player_name(s)
if name:
root.set_property("PW", name)
if rank:
root.set_property("WR", rank)
if line.startswith("\\[GAMEINFOMAIN="):
result = gib_get_result(line, r"GRLT:(\d+),", r"ZIPSU:(\d+),")
if result:
root.set_property("RE", result)
try:
komi = int(re.search(r"GONGJE:(\d+),", line).group(1)) / 10
if komi:
root.set_property("KM", komi)
except: # noqa E722
pass
if line.startswith("\\[GAMETAG="):
if "DT" not in root.properties:
try:
match = re.search(r"C(\d\d\d\d):(\d\d):(\d\d)", line)
date = "{}-{}-{}".format(match.group(1), match.group(2), match.group(3))
root.set_property("DT", date)
except: # noqa E722
pass
if "RE" not in root.properties:
result = gib_get_result(line, r",W(\d+),", r",Z(\d+),")
if result:
root.set_property("RE", result)
if "KM" not in root.properties:
try:
komi = int(re.search(r",G(\d+),", line).group(1)) / 10
if komi:
root.set_property("KM", komi)
except: # noqa E722
pass
if line[0:3] == "INI":
if node is not root:
raise ParseError("Node is not root")
setup = line.split()
try:
handicap = int(setup[3])
except ParseError:
continue
if handicap < 0 or handicap > 9:
raise ParseError(f"Handicap {handicap} out of range")
if handicap >= 2:
root.set_property("HA", handicap)
root.place_handicap_stones(handicap, tygem=True)
if line[0:3] == "STO":
move = line.split()
key = "B" if move[3] == "1" else "W"
try:
x = int(move[4])
y = 18 - int(move[5])
if not (0 <= x < 19 and 0 <= y < 19):
raise ParseError(f"Coordinates for move ({x},{y}) out of range on line {line}")
value = Move(coords=(x, y)).sgf(board_size=(19, 19))
except IndexError:
continue
node = cls._NODE_CLASS(parent=node)
node.set_property(key, value)
if len(root.children) == 0: # We'll assume we failed in this case
raise ParseError("No valid nodes found")
return root | PypiClean |
/Electrum-CHI-3.3.8.tar.gz/Electrum-CHI-3.3.8/packages/pip/_vendor/chardet/langhebrewmodel.py |
# 255: Control characters that usually do not exist in any text
# 254: Carriage/Return
# 253: symbols (punctuation) that do not belong to a word
# 252: 0 - 9
# Windows-1255 language model
# Character Mapping Table:
WIN1255_CHAR_TO_ORDER_MAP = (
255,255,255,255,255,255,255,255,255,255,254,255,255,254,255,255, # 00
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 10
253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253, # 20
252,252,252,252,252,252,252,252,252,252,253,253,253,253,253,253, # 30
253, 69, 91, 79, 80, 92, 89, 97, 90, 68,111,112, 82, 73, 95, 85, # 40
78,121, 86, 71, 67,102,107, 84,114,103,115,253,253,253,253,253, # 50
253, 50, 74, 60, 61, 42, 76, 70, 64, 53,105, 93, 56, 65, 54, 49, # 60
66,110, 51, 43, 44, 63, 81, 77, 98, 75,108,253,253,253,253,253, # 70
124,202,203,204,205, 40, 58,206,207,208,209,210,211,212,213,214,
215, 83, 52, 47, 46, 72, 32, 94,216,113,217,109,218,219,220,221,
34,116,222,118,100,223,224,117,119,104,125,225,226, 87, 99,227,
106,122,123,228, 55,229,230,101,231,232,120,233, 48, 39, 57,234,
30, 59, 41, 88, 33, 37, 36, 31, 29, 35,235, 62, 28,236,126,237,
238, 38, 45,239,240,241,242,243,127,244,245,246,247,248,249,250,
9, 8, 20, 16, 3, 2, 24, 14, 22, 1, 25, 15, 4, 11, 6, 23,
12, 19, 13, 26, 18, 27, 21, 17, 7, 10, 5,251,252,128, 96,253,
)
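# --- Illustrative note (not part of chardet) ---
# Sketch of how a single-byte charset prober is assumed to consume this table:
# each incoming byte value indexes WIN1255_CHAR_TO_ORDER_MAP to obtain a frequency
# "order", and pairs of consecutive orders below the sample size then index the
# HEBREW_LANG_MODEL matrix defined below to score how typical the byte sequence
# is for windows-1255 Hebrew text.
#
#   sample = "שלום עולם".encode("windows-1255")
#   orders = [WIN1255_CHAR_TO_ORDER_MAP[byte] for byte in sample]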
# Model Table:
# total sequences: 100%
# first 512 sequences: 98.4004%
# first 1024 sequences: 1.5981%
# rest sequences: 0.087%
# negative sequences: 0.0015%
HEBREW_LANG_MODEL = (
0,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,2,3,2,1,2,0,1,0,0,
3,0,3,1,0,0,1,3,2,0,1,1,2,0,2,2,2,1,1,1,1,2,1,1,1,2,0,0,2,2,0,1,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,
1,2,1,2,1,2,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,
1,2,1,3,1,1,0,0,2,0,0,0,1,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,1,2,2,1,3,
1,2,1,1,2,2,0,0,2,2,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,1,0,1,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,2,2,2,3,2,
1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,2,3,2,2,3,2,2,2,1,2,2,2,2,
1,2,1,1,2,2,0,1,2,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,0,2,2,2,2,2,
0,2,0,2,2,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,0,2,2,2,
0,2,1,2,2,2,0,0,2,1,0,0,0,0,1,0,1,0,0,0,0,0,0,2,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,2,1,2,3,2,2,2,
1,2,1,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0,
3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,3,3,3,3,3,3,3,3,3,3,3,1,0,2,0,2,
0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,2,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,2,3,2,2,3,2,1,2,1,1,1,
0,1,1,1,1,1,3,0,1,0,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,0,0,1,0,0,1,0,0,0,0,
0,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,
0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,2,3,3,3,2,1,2,3,3,2,3,3,3,3,2,3,2,1,2,0,2,1,2,
0,2,0,2,2,2,0,0,1,2,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,
3,3,3,3,3,3,3,3,3,2,3,3,3,1,2,2,3,3,2,3,2,3,2,2,3,1,2,2,0,2,2,2,
0,2,1,2,2,2,0,0,1,2,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,1,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,2,3,3,2,2,2,3,3,3,3,1,3,2,2,2,
0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,3,3,3,2,3,2,2,2,1,2,2,0,2,2,2,2,
0,2,0,2,2,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,3,3,2,3,3,3,1,3,2,3,3,2,3,3,2,2,1,2,2,2,2,2,2,
0,2,1,2,1,2,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,2,3,3,2,3,3,3,3,2,3,2,3,3,3,3,3,2,2,2,2,2,2,2,1,
0,2,0,1,2,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,3,3,3,2,1,2,3,3,3,3,3,3,3,2,3,2,3,2,1,2,3,0,2,1,2,2,
0,2,1,1,2,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,2,0,
3,3,3,3,3,3,3,3,3,2,3,3,3,3,2,1,3,1,2,2,2,1,2,3,3,1,2,1,2,2,2,2,
0,1,1,1,1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,2,0,0,0,0,0,0,0,0,
3,3,3,3,3,3,3,3,3,3,0,2,3,3,3,1,3,3,3,1,2,2,2,2,1,1,2,2,2,2,2,2,
0,2,0,1,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,3,3,3,3,3,2,3,3,3,2,2,3,3,3,2,1,2,3,2,3,2,2,2,2,1,2,1,1,1,2,2,
0,2,1,1,1,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,1,0,0,0,0,0,
1,0,1,0,0,0,0,0,2,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,3,3,3,3,2,3,3,2,3,1,2,2,2,2,3,2,3,1,1,2,2,1,2,2,1,1,0,2,2,2,2,
0,1,0,1,2,2,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,
3,0,0,1,1,0,1,0,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,0,
0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,0,1,0,1,0,1,1,0,1,1,0,0,0,1,1,0,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,0,0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,
3,2,2,1,2,2,2,2,2,2,2,1,2,2,1,2,2,1,1,1,1,1,1,1,1,2,1,1,0,3,3,3,
0,3,0,2,2,2,2,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
2,2,2,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,2,1,2,2,2,1,1,1,2,0,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,2,2,2,2,2,2,0,2,2,0,0,0,0,0,0,
0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,3,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,2,1,0,2,1,0,
0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,
0,3,1,1,2,2,2,2,2,1,2,2,2,1,1,2,2,2,2,2,2,2,1,2,2,1,0,1,1,1,1,0,
0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
3,2,1,1,1,1,2,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0,
0,0,2,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,0,0,1,1,0,0,0,0,0,0,1,0,0,
2,1,1,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,2,1,2,1,2,1,1,1,1,0,0,0,0,
0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,2,1,2,2,2,2,2,2,2,2,2,2,1,2,1,2,1,1,2,1,1,1,2,1,2,1,2,0,1,0,1,
0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,3,1,2,2,2,1,2,2,2,2,2,2,2,2,1,2,1,1,1,1,1,1,2,1,2,1,1,0,1,0,1,
0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,1,2,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,2,
0,2,0,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,
3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,1,1,1,1,1,1,1,0,1,1,0,1,0,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,2,0,1,1,1,0,1,0,0,0,1,1,0,1,1,0,0,0,0,0,1,1,0,0,
0,1,1,1,2,1,2,2,2,0,2,0,2,0,1,1,2,1,1,1,1,2,1,0,1,1,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,1,0,0,0,0,0,1,0,1,2,2,0,1,0,0,1,1,2,2,1,2,0,2,0,0,0,1,2,0,1,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,2,0,2,1,2,0,2,0,0,1,1,1,1,1,1,0,1,0,0,0,1,0,0,1,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,1,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,1,2,2,0,0,1,0,0,0,1,0,0,1,
1,1,2,1,0,1,1,1,0,1,0,1,1,1,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,2,2,1,
0,2,0,1,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,1,0,0,1,0,1,1,1,1,0,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,2,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,0,1,1,0,1,0,0,0,1,1,0,1,
2,0,1,0,1,0,1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,1,1,1,0,1,0,0,1,1,2,1,1,2,0,1,0,0,0,1,1,0,1,
1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,0,0,2,1,1,2,0,2,0,0,0,1,1,0,1,
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,2,2,1,2,1,1,0,1,0,0,0,1,1,0,1,
2,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,0,1,1,0,1,0,0,1,0,0,0,0,1,0,1,
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,2,2,0,0,0,0,2,1,1,1,0,2,1,1,0,0,0,2,1,0,1,
1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,0,2,1,1,0,1,0,0,0,1,1,0,1,
2,2,1,1,1,0,1,1,0,1,1,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,2,1,1,0,1,0,0,1,1,0,1,2,1,0,2,0,0,0,1,1,0,1,
2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,
0,1,0,0,2,0,2,1,1,0,1,0,1,0,0,1,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,0,1,1,2,0,1,0,0,1,1,1,0,1,0,0,1,0,0,0,1,0,0,1,
1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,0,0,0,0,0,1,0,1,1,0,0,1,0,0,2,1,1,1,1,1,0,1,0,0,0,0,1,0,1,
0,1,1,1,2,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,2,1,0,0,0,0,0,1,1,1,1,1,0,1,0,0,0,1,1,0,0,
)
Win1255HebrewModel = {
'char_to_order_map': WIN1255_CHAR_TO_ORDER_MAP,
'precedence_matrix': HEBREW_LANG_MODEL,
'typical_positive_ratio': 0.984004,
'keep_english_letter': False,
'charset_name': "windows-1255",
'language': 'Hebrew',
} | PypiClean |
/Finance-Hermes-0.3.6.tar.gz/Finance-Hermes-0.3.6/hermes/factors/technical/factor_volume.py | import copy
from numpy import fabs as npFabs
from hermes.factors.base import FactorBase, LongCallMixin, ShortMixin
from hermes.factors.technical.core.volume import *
class FactorVolume(FactorBase, LongCallMixin, ShortMixin):
def __init__(self, **kwargs):
__str__ = 'volume'
self.category = 'volume'
def _init_self(self, **kwargs):
pass
def AD(self,
data,
offset=None,
dependencies=['high', 'low', 'close', 'volume', 'open'],
**kwargs):
result = ad(copy.deepcopy(data['high']),
copy.deepcopy(data['low']),
copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
copy.deepcopy(data['open']),
offset=offset,
**kwargs)
return self._format(result, f"AD")
def ADOSC(self,
data,
fast=None,
slow=None,
offset=None,
dependencies=['high', 'low', 'close', 'volume', 'open'],
**kwargs):
fast = int(fast) if fast and fast > 0 else 3
slow = int(slow) if slow and slow > 0 else 10
result = adosc(copy.deepcopy(data['high']),
copy.deepcopy(data['low']),
copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
copy.deepcopy(data['open']),
fast=fast,
slow=slow,
offset=offset,
**kwargs)
return self._format(result, f"ADOSC_{fast}_{slow}")
def AOBV(self,
data,
fast=None,
slow=None,
max_lookback=None,
min_lookback=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
fast = int(fast) if fast and fast > 0 else 4
slow = int(slow) if slow and slow > 0 else 12
max_lookback = int(
max_lookback) if max_lookback and max_lookback > 0 else 2
min_lookback = int(
min_lookback) if min_lookback and min_lookback > 0 else 2
if "length" in kwargs: kwargs.pop("length")
run_length = kwargs.pop("run_length", 2)
obv_, maf, mas, obv_long, obv_short = aobv(
copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
fast=fast,
slow=slow,
max_lookback=max_lookback,
min_lookback=min_lookback,
offset=offset,
**kwargs)
obv_ = self._format(
obv_,
f"AOBV_{fast}_{slow}_{min_lookback}_{max_lookback}_{run_length}")
maf = self._format(
maf,
f"AOBV_FAST_{fast}_{slow}_{min_lookback}_{max_lookback}_{run_length}"
)
mas = self._format(
mas,
f"AOBV_SLOW_{fast}_{slow}_{min_lookback}_{max_lookback}_{run_length}"
)
obv_long = self._format(
obv_long,
f"AOBV_LR_{fast}_{slow}_{min_lookback}_{max_lookback}_{run_length}"
)
obv_short = self._format(
obv_short,
f"AOBV_SR_{fast}_{slow}_{min_lookback}_{max_lookback}_{run_length}"
)
return obv_, maf, mas, obv_long, obv_short
def CMF(self,
data,
length=None,
offset=None,
dependencies=['high', 'low', 'close', 'volume', 'open'],
**kwargs):
length = int(length) if length and length > 0 else 20
result = cmf(copy.deepcopy(data['high']),
copy.deepcopy(data['low']),
copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
copy.deepcopy(data['open']),
length=length,
offset=offset)
return self._format(result, f"CMF_{length}")
def EFI(self,
data,
length=None,
drift=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
length = int(length) if length and length > 0 else 13
result = efi(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
length=length,
drift=drift,
offset=offset,
**kwargs)
return self._format(result, f"EFI_{length}")
def EOM(self,
data,
length=None,
divisor=None,
drift=None,
offset=None,
dependencies=['high', 'low', 'volume'],
**kwargs):
length = int(length) if length and length > 0 else 14
divisor = divisor if divisor and divisor > 0 else 100000000
result = eom(copy.deepcopy(data['high']),
copy.deepcopy(data['low']),
copy.deepcopy(data['volume']),
length=length,
divisor=divisor,
drift=drift,
offset=offset,
**kwargs)
return self._format(result, f"EOM_{length}")
def KVO(self,
data,
fast=None,
slow=None,
signal=None,
drift=None,
offset=None,
dependencies=['high', 'low', 'close', 'volume'],
**kwargs):
fast = int(fast) if fast and fast > 0 else 34
slow = int(slow) if slow and slow > 0 else 55
signal = int(signal) if signal and signal > 0 else 13
result, kvo_signal = kvo(copy.deepcopy(data['high']),
copy.deepcopy(data['low']),
copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
fast=fast,
slow=slow,
signal=signal,
drift=drift,
offset=offset,
**kwargs)
result = self._format(result, f"KVO_{fast}_{slow}_{signal}")
kvo_signal = self._format(kvo_signal, f"KVOs_{fast}_{slow}_{signal}")
return result, kvo_signal
def LINRATIO(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = linratio(copy.deepcopy(data['long']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"LINRATIO_{category}_{length}")
def LRTCHG(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long', 'openint'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = lrtichg(copy.deepcopy(data['long']),
copy.deepcopy(data['openint']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"LRTCHG_{category}_{length}")
def LSSENTI(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long', 'short', 'openint'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = lssenti(copy.deepcopy(data['long']),
copy.deepcopy(data['short']),
copy.deepcopy(data['openint']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"LSSENTI_{category}_{length}")
def NIC(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long', 'short'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = nic(copy.deepcopy(data['long']),
copy.deepcopy(data['short']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"NIC_{category}_{length}")
def NITC(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long', 'short', 'openint'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = nitc(copy.deepcopy(data['long']),
copy.deepcopy(data['short']),
copy.deepcopy(data['openint']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"NITC_{category}_{length}")
def NIR(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['long', 'short'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = nir(copy.deepcopy(data['long']),
copy.deepcopy(data['short']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"NIR_{category}_{length}")
def NVI(self,
data,
length=None,
initial=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
length = int(length) if length and length > 0 else 1
result = nvi(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
length=length,
initial=initial,
offset=offset,
**kwargs)
return self._format(result, f"NVI_{length}")
def OBV(self,
data,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
result = obv(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
offset=offset,
**kwargs)
return self._format(result, f"OBV")
def PVI(self,
data,
length=None,
initial=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
length = int(length) if length and length > 0 else 1
result = pvi(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
length=length,
initial=initial,
offset=offset,
**kwargs)
return self._format(result, f"PVI_{length}")
def PVOL(self,
data,
length=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
result = pvol(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"PVOL")
def PVT(self,
data,
drift=None,
offset=None,
dependencies=['close', 'volume'],
**kwargs):
result = pvt(copy.deepcopy(data['close']),
copy.deepcopy(data['volume']),
drift=drift,
offset=offset,
**kwargs)
return self._format(result, f"PVT")
def SINRATIO(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['short'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = sinratio(copy.deepcopy(data['short']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"SINRATIO_{category}_{length}")
def SRTCHG(self,
data,
length=None,
offset=None,
category='equal',
dependencies=['short', 'openint'],
**kwargs):
length = int(length) if length and length > 0 else 5
result = srtichg(copy.deepcopy(data['short']),
copy.deepcopy(data['openint']),
length=length,
offset=offset,
**kwargs)
return self._format(result, f"SRTCHG_{category}_{length}") | PypiClean |
/dipex-4.54.5.tar.gz/dipex-4.54.5/integrations/aarhus/initial_classes.py | from dataclasses import dataclass
from uuid import UUID
import uuids
@dataclass
class Class:
titel: str
facet: str
scope: str
bvn: str
uuid: UUID
CLASSES = [
Class(
"Postadresse",
"org_unit_address_type",
"DAR",
"AddressMailUnit",
uuids.UNIT_POSTADDR,
),
Class("LOS ID", "org_unit_address_type", "TEXT", "LOSID", uuids.UNIT_LOS),
Class("CVR nummer", "org_unit_address_type", "TEXT", "CVRUnit", uuids.UNIT_CVR),
Class("EAN nummer", "org_unit_address_type", "EAN", "EANUnit", uuids.UNIT_EAN),
Class(
"P-nummer",
"org_unit_address_type",
"PNUMBER",
"PNumber",
uuids.UNIT_PNR,
),
Class("SE-nummer", "org_unit_address_type", "TEXT", "SENumber", uuids.UNIT_SENR),
Class(
"IntDebitor-Nr",
"org_unit_address_type",
"TEXT",
"intdebit",
uuids.UNIT_DEBITORNR,
),
Class("WWW", "org_unit_address_type", "WWW", "UnitWeb", uuids.UNIT_WWW),
Class(
"Ekspeditionstid",
"org_unit_address_type",
"TEXT",
"UnitHours",
uuids.UNIT_HOURS,
),
Class(
"Telefontid",
"org_unit_address_type",
"TEXT",
"UnitPhoneHours",
uuids.UNIT_PHONEHOURS,
),
Class(
"Telefon",
"org_unit_address_type",
"PHONE",
"UnitPhone",
uuids.UNIT_PHONE,
),
Class("Fax", "org_unit_address_type", "PHONE", "UnitFax", uuids.UNIT_FAX),
Class("Email", "org_unit_address_type", "EMAIL", "UnitEmail", uuids.UNIT_EMAIL),
Class(
"Magkort",
"org_unit_address_type",
"TEXT",
"UnitMagID",
uuids.UNIT_MAG_ID,
),
Class(
"Alternativt navn",
"org_unit_address_type",
"TEXT",
"UnitNameAlt",
uuids.UNIT_NAME_ALT,
),
Class(
"Phone",
"employee_address_type",
"PHONE",
"PhoneEmployee",
uuids.PERSON_PHONE,
),
Class(
"Email",
"employee_address_type",
"EMAIL",
"EmailEmployee",
uuids.PERSON_EMAIL,
),
Class(
"Lokale",
"employee_address_type",
"TEXT",
"RoomEmployee",
uuids.PERSON_ROOM,
),
Class("Primær", "primary_type", "100000", "primary", uuids.PRIMARY),
Class("Ikke-primær", "primary_type", "0", "non-primary", uuids.NOT_PRIMARY),
Class(
"Linjeorganisation",
"org_unit_hierarchy",
"TEXT",
"linjeorg",
uuids.LINJE_ORG_HIERARCHY,
),
Class(
"Sikkerhedsorganisation",
"org_unit_hierarchy",
"TEXT",
"sikkerhedsorg",
uuids.SIKKERHEDS_ORG_HIERARCHY,
),
Class(
"Rolletype",
"role_type",
"TEXT",
"role_type",
UUID("964c31a2-6267-4388-bff5-42d6f3c5f708"),
),
Class(
"Orlovstype",
"leave_type",
"TEXT",
"leave_type",
UUID("d2892fa6-bc56-4c14-bd24-74ae0c71fa3a"),
),
Class(
"Alternativ stillingsbetegnelse",
"employee_address_type",
"TEXT",
"AltJobTitle",
uuids.PERSON_JOB_TITLE_ALT,
),
] | PypiClean |
/FreePyBX-1.0-RC1.tar.gz/FreePyBX-1.0-RC1/freepybx/public/js/dojox/app/scene.js.uncompressed.js | define("dojox/app/scene", ["dojo/_base/kernel",
"dojo/_base/declare",
"dojo/_base/connect",
"dojo/_base/array",
"dojo/_base/Deferred",
"dojo/_base/lang",
"dojo/_base/sniff",
"dojo/dom-style",
"dojo/dom-geometry",
"dojo/dom-class",
"dojo/dom-construct",
"dojo/dom-attr",
"dojo/query",
"dijit",
"dojox",
"dijit/_WidgetBase",
"dijit/_TemplatedMixin",
"dijit/_WidgetsInTemplateMixin",
"dojox/css3/transit",
"./animation",
"./model",
"./view",
"./bind"],
function(dojo,declare,connect, array,deferred,dlang,has,dstyle,dgeometry,cls,dconstruct,dattr,query,dijit,dojox,WidgetBase,Templated,WidgetsInTemplate,transit, anim, model, baseView, bind){
var marginBox2contentBox = function(/*DomNode*/ node, /*Object*/ mb){
// summary:
// Given the margin-box size of a node, return its content box size.
// Functions like dojo.contentBox() but is more reliable since it doesn't have
// to wait for the browser to compute sizes.
var cs = dstyle.getComputedStyle(node);
var me = dgeometry.getMarginExtents(node, cs);
var pb = dgeometry.getPadBorderExtents(node, cs);
return {
l: dstyle.toPixelValue(node, cs.paddingLeft),
t: dstyle.toPixelValue(node, cs.paddingTop),
w: mb.w - (me.w + pb.w),
h: mb.h - (me.h + pb.h)
};
};
var capitalize = function(word){
return word.substring(0,1).toUpperCase() + word.substring(1);
};
var size = function(widget, dim){
// size the child
var newSize = widget.resize ? widget.resize(dim) : dgeometry.setMarginBox(widget.domNode, dim);
// record child's size
if(newSize){
// if the child returned it's new size then use that
dojo.mixin(widget, newSize);
}else{
// otherwise, call marginBox(), but favor our own numbers when we have them.
// the browser lies sometimes
dojo.mixin(widget, dgeometry.getMarginBox(widget.domNode));
dojo.mixin(widget, dim);
}
};
return declare("dojox.app.scene", [dijit._WidgetBase, dijit._TemplatedMixin, dijit._WidgetsInTemplateMixin], {
isContainer: true,
widgetsInTemplate: true,
defaultView: "default",
selectedChild: null,
baseClass: "scene mblView",
isFullScreen: false,
defaultViewType: baseView,
//Temporary work around for getting a null when calling getParent
getParent: function(){return null;},
constructor: function(params,node){
this.children={};
if(params.parent){
this.parent=params.parent
}
if(params.app){
this.app = params.app;
}
},
buildRendering: function(){
this.inherited(arguments);
dstyle.set(this.domNode, {width: "100%", "height": "100%"});
cls.add(this.domNode,"dijitContainer");
},
splitChildRef: function(childId){
var id = childId.split(",");
if (id.length>0){
var to = id.shift();
}else{
console.warn("invalid child id passed to splitChildRef(): ", childId);
}
return {
id:to || this.defaultView,
next: id.join(',')
}
},
loadChild: function(childId,subIds){
// if no childId, load the default view
if (!childId) {
var parts = this.defaultView ? this.defaultView.split(",") : "default";
childId = parts.shift();
subIds = parts.join(',');
}
var cid = this.id+"_" + childId;
if (this.children[cid]){
return this.children[cid];
}
if (this.views&& this.views[childId]){
var conf = this.views[childId];
if (!conf.dependencies){conf.dependencies=[];}
var deps = conf.template? conf.dependencies.concat(["dojo/text!app/"+conf.template]) :
conf.dependencies.concat([]);
var def = new deferred();
if (deps.length>0) {
require(deps,function(){
def.resolve.call(def, arguments);
});
}else{
def.resolve(true);
}
var loadChildDeferred = new deferred();
var self = this;
deferred.when(def, function(){
var ctor;
if (conf.type){
ctor=dojo.getObject(conf.type);
}else if (self.defaultViewType){
ctor=self.defaultViewType;
}else{
throw Error("Unable to find appropriate ctor for the base child class");
}
var params = dojo.mixin({}, conf, {
id: self.id + "_" + childId,
templateString: conf.template?arguments[0][arguments[0].length-1]:"<div></div>",
parent: self,
app: self.app
})
if (subIds){
params.defaultView=subIds;
}
var child = new ctor(params);
//load child's model if it is not loaded before
if(!child.loadedModels){
child.loadedModels = model(conf.models, self.loadedModels)
//TODO need to find out a better way to get all bindable controls in a view
bind([child], child.loadedModels);
}
var addResult = self.addChild(child);
//publish /app/loadchild event
//applications can subscribe to this event to perform user-defined operations such as selecting a TabBarButton, adding dynamic script text, etc.
connect.publish("/app/loadchild", [child]);
var promise;
subIds = subIds.split(',');
if ((subIds[0].length > 0) && (subIds.length > 1)) {//TODO join subIds
promise = child.loadChild(subIds[0], subIds[1]);
}
else
if (subIds[0].length > 0) {
promise = child.loadChild(subIds[0], "");
}
dojo.when(promise, function(){
loadChildDeferred.resolve(addResult)
});
});
return loadChildDeferred;
}
throw Error("Child '" + childId + "' not found.");
},
resize: function(changeSize,resultSize){
var node = this.domNode;
// set margin box size, unless it wasn't specified, in which case use current size
if(changeSize){
dgeometry.setMarginBox(node, changeSize);
// set offset of the node
if(changeSize.t){ node.style.top = changeSize.t + "px"; }
if(changeSize.l){ node.style.left = changeSize.l + "px"; }
}
// If either height or width wasn't specified by the user, then query node for it.
// But note that setting the margin box and then immediately querying dimensions may return
// inaccurate results, so try not to depend on it.
var mb = resultSize || {};
dojo.mixin(mb, changeSize || {}); // changeSize overrides resultSize
if( !("h" in mb) || !("w" in mb) ){
mb = dojo.mixin(dgeometry.getMarginBox(node), mb); // just use dojo.marginBox() to fill in missing values
}
// Compute and save the size of my border box and content box
// (w/out calling dojo.contentBox() since that may fail if size was recently set)
var cs = dstyle.getComputedStyle(node);
var me = dgeometry.getMarginExtents(node, cs);
var be = dgeometry.getBorderExtents(node, cs);
var bb = (this._borderBox = {
w: mb.w - (me.w + be.w),
h: mb.h - (me.h + be.h)
});
var pe = dgeometry.getPadExtents(node, cs);
this._contentBox = {
l: dstyle.toPixelValue(node, cs.paddingLeft),
t: dstyle.toPixelValue(node, cs.paddingTop),
w: bb.w - pe.w,
h: bb.h - pe.h
};
// Callback for widget to adjust size of its children
this.layout();
},
layout: function(){
var fullScreenScene,children,hasCenter;
//console.log("fullscreen: ", this.selectedChild && this.selectedChild.isFullScreen);
if (this.selectedChild && this.selectedChild.isFullScreen) {
console.warn("fullscreen sceen layout");
/*
fullScreenScene=true;
children=[{domNode: this.selectedChild.domNode,region: "center"}];
dojo.query("> [region]",this.domNode).forEach(function(c){
if(this.selectedChild.domNode!==c.domNode){
dojo.style(c.domNode,"display","none");
}
})
*/
}else{
children = query("> [region]", this.domNode).map(function(node){
var w = dijit.getEnclosingWidget(node);
if (w){return w;}
return {
domNode: node,
region: dattr.get(node,"region")
}
});
if (this.selectedChild){
children = array.filter(children, function(c){
if (c.region=="center" && this.selectedChild && this.selectedChild.domNode!==c.domNode){
dstyle.set(c.domNode,"zIndex",25);
dstyle.set(c.domNode,'display','none');
return false;
}else if (c.region!="center"){
dstyle.set(c.domNode,"display","");
dstyle.set(c.domNode,"zIndex",100);
}
return c.domNode && c.region;
},this);
// this.selectedChild.region="center";
// dojo.attr(this.selectedChild.domNode,"region","center");
// dojo.style(this.selectedChild.domNode, "display","");
// dojo.style(this.selectedChild.domNode,"zIndex",50);
// children.push({domNode: this.selectedChild.domNode, region: "center"});
// children.push(this.selectedChild);
// console.log("children: ", children);
}else{
array.forEach(children, function(c){
if (c && c.domNode && c.region=="center"){
dstyle.set(c.domNode,"zIndex",25);
dstyle.set(c.domNode,'display','none');
}
});
}
}
// We don't need to layout children if this._contentBox is null for the operation will do nothing.
if (this._contentBox) {
this.layoutChildren(this.domNode, this._contentBox, children);
}
array.forEach(this.getChildren(), function(child){
if (!child._started && child.startup){
child.startup();
}
});
},
layoutChildren: function(/*DomNode*/ container, /*Object*/ dim, /*Widget[]*/ children,
/*String?*/ changedRegionId, /*Number?*/ changedRegionSize){
// summary
// Layout a bunch of child dom nodes within a parent dom node
// container:
// parent node
// dim:
// {l, t, w, h} object specifying dimensions of container into which to place children
// children:
// an array of Widgets or at least objects containing:
// * domNode: pointer to DOM node to position
// * region or layoutAlign: position to place DOM node
// * resize(): (optional) method to set size of node
// * id: (optional) Id of widgets, referenced from resize object, below.
// changedRegionId:
// If specified, the slider for the region with the specified id has been dragged, and thus
// the region's height or width should be adjusted according to changedRegionSize
// changedRegionSize:
// See changedRegionId.
// copy dim because we are going to modify it
dim = dojo.mixin({}, dim);
cls.add(container, "dijitLayoutContainer");
// Move "client" elements to the end of the array for layout. a11y dictates that the author
// needs to be able to put them in the document in tab-order, but this algorithm requires that
// client be last. TODO: move these lines to LayoutContainer? Unneeded other places I think.
children = array.filter(children, function(item){ return item.region != "center" && item.layoutAlign != "client"; })
.concat(array.filter(children, function(item){ return item.region == "center" || item.layoutAlign == "client"; }));
// set positions/sizes
array.forEach(children, function(child){
var elm = child.domNode,
pos = (child.region || child.layoutAlign);
// set elem to upper left corner of unused space; may move it later
var elmStyle = elm.style;
elmStyle.left = dim.l+"px";
elmStyle.top = dim.t+"px";
elmStyle.position = "absolute";
cls.add(elm, "dijitAlign" + capitalize(pos));
// Size adjustments to make to this child widget
var sizeSetting = {};
// Check for optional size adjustment due to splitter drag (height adjustment for top/bottom align
// panes and width adjustment for left/right align panes.
if(changedRegionId && changedRegionId == child.id){
sizeSetting[child.region == "top" || child.region == "bottom" ? "h" : "w"] = changedRegionSize;
}
// set size && adjust record of remaining space.
// note that setting the width of a <div> may affect its height.
if(pos == "top" || pos == "bottom"){
sizeSetting.w = dim.w;
size(child, sizeSetting);
dim.h -= child.h;
if(pos == "top"){
dim.t += child.h;
}else{
elmStyle.top = dim.t + dim.h + "px";
}
}else if(pos == "left" || pos == "right"){
sizeSetting.h = dim.h;
size(child, sizeSetting);
dim.w -= child.w;
if(pos == "left"){
dim.l += child.w;
}else{
elmStyle.left = dim.l + dim.w + "px";
}
}else if(pos == "client" || pos == "center"){
size(child, dim);
}
});
},
getChildren: function(){
return this._supportingWidgets;
},
startup: function(){
if(this._started){ return; }
this._started=true;
var parts = this.defaultView?this.defaultView.split(","):"default";
var toId, subIds;
toId= parts.shift();
subIds = parts.join(',');
if(this.views[this.defaultView] && this.views[this.defaultView]["defaultView"]){
subIds = this.views[this.defaultView]["defaultView"];
}
if(this.models && !this.loadedModels){
//if there is this.models config data and the models has not been loaded yet,
//load models at here using the configuration data and load model logic in model.js
this.loadedModels = model(this.models);
bind(this.getChildren(), this.loadedModels);
}
//startup assumes all children are loaded into DOM before startup is called
//startup will only start the current available children.
var cid = this.id + "_" + toId;
if (this.children[cid]) {
var next = this.children[cid];
this.set("selectedChild", next);
// If I am a not being controlled by a parent layout widget...
var parent = this.getParent && this.getParent();
if (!(parent && parent.isLayoutContainer)) {
// Do recursive sizing and layout of all my descendants
// (passing in no argument to resize means that it has to glean the size itself)
this.resize();
// Since my parent isn't a layout container, and my style *may be* width=height=100%
// or something similar (either set directly or via a CSS class),
// monitor when my size changes so that I can re-layout.
// For browsers where I can't directly monitor when my size changes,
// monitor when the viewport changes size, which *may* indicate a size change for me.
this.connect(has("ie") ? this.domNode : dojo.global, 'onresize', function(){
// Using function(){} closure to ensure no arguments to resize.
this.resize();
});
}
array.forEach(this.getChildren(), function(child){
child.startup();
});
//transition to _startView
if (this._startView && (this._startView != this.defaultView)) {
this.transition(this._startView, {});
}
}
},
addChild: function(widget){
cls.add(widget.domNode, this.baseClass + "_child");
widget.region = "center";
dattr.set(widget.domNode,"region","center");
this._supportingWidgets.push(widget);
dconstruct.place(widget.domNode,this.domNode);
this.children[widget.id] = widget;
return widget;
},
removeChild: function(widget){
// summary:
// Removes the passed widget instance from this widget but does
// not destroy it. You can also pass in an integer indicating
// the index within the container to remove
if(widget){
var node = widget.domNode;
if(node && node.parentNode){
node.parentNode.removeChild(node); // detach but don't destroy
}
return widget;
}
},
_setSelectedChildAttr: function(child,opts){
if (child !== this.selectedChild) {
return deferred.when(child, dlang.hitch(this, function(child){
if (this.selectedChild){
if (this.selectedChild.deactivate){
this.selectedChild.deactivate();
}
dstyle.set(this.selectedChild.domNode,"zIndex",25);
}
//dojo.style(child.domNode, {
// "display": "",
// "zIndex": 50,
// "overflow": "auto"
//});
this.selectedChild = child;
dstyle.set(child.domNode, "display", "");
dstyle.set(child.domNode, "zIndex", 50);
if (this._started) {
if (child.startup && !child._started){
child.startup();
}else if (child.activate){
child.activate();
}
}
this.layout();
}));
}
},
transition: function(transitionTo,opts){
//summary:
// transitions from the currently visible scene to the defined scene.
// it should determine what would be the best transition unless
// an override in opts tells it to use a specific transitioning methodology
// the transitionTo is a string in the form of [view]@[scene]. If
// view is left off, the current scene will be transitioned to the default
// view of the specified scene (eg @scene2), if the scene is left off
// the app controller will instruct the active scene to the view (eg view1). If both
// are supplied (view1@scene2), then the application should transition to the scene,
// and instruct the scene to navigate to the view.
var toId,subIds,next, current = this.selectedChild;
console.log("scene", this.id, transitionTo);
if (transitionTo){
var parts = transitionTo.split(",");
toId= parts.shift();
subIds = parts.join(',');
}else{
toId = this.defaultView;
if(this.views[this.defaultView] && this.views[this.defaultView]["defaultView"]){
subIds = this.views[this.defaultView]["defaultView"];
}
}
next = this.loadChild(toId,subIds);
if (!current){
//assume this.set(...) will return a promise object if child is first loaded
//return nothing if child is already in array of this.children
return this.set("selectedChild",next);
}
var transitionDeferred = new deferred();
deferred.when(next, dlang.hitch(this, function(next){
var promise;
if (next!==current){
//TODO need to refactor here, when clicking fast, current will not be the
//view we want to start transition. For example, during transition 1 -> 2
//if user click button to transition to 3 and then transition to 1. It will
//perform transition 2 -> 3 and 2 -> 1 because current is always point to
//2 during 1 -> 2 transition.
var waitingList = anim.getWaitingList([next.domNode, current.domNode]);
//update registry with deferred objects in animations of args.
var transitionDefs = {};
transitionDefs[current.domNode.id] = anim.playing[current.domNode.id] = new deferred();
transitionDefs[next.domNode.id] = anim.playing[next.domNode.id] = new deferred();
deferred.when(waitingList, dojo.hitch(this, function(){
//assume next is already loaded so that this.set(...) will not return
//a promise object. this.set(...) will handles the this.selectedChild,
//activate or deactivate views and refresh layout.
this.set("selectedChild", next);
//publish /app/transition event
//application can subscript this event to do user define operation like select TabBarButton, etc.
connect.publish("/app/transition", [next, toId]);
transit(current.domNode,next.domNode,dojo.mixin({},opts,{transition: this.defaultTransition || "none", transitionDefs: transitionDefs})).then(dlang.hitch(this, function(){
//dojo.style(current.domNode, "display", "none");
if (subIds && next.transition){
promise = next.transition(subIds,opts);
}
deferred.when(promise, function(){
transitionDeferred.resolve();
});
}));
}));
return;
}
//we didn't need to transition, but continue to propagate.
if (subIds && next.transition){
promise = next.transition(subIds,opts);
}
deferred.when(promise, function(){
transitionDeferred.resolve();
});
}));
return transitionDeferred;
},
toString: function(){return this.id},
activate: function(){},
deactivate: function(){}
});
}); | PypiClean |
# Source file: ffgo/config.py (from FFGo-1.12.7-py3-none-any.whl)
import sys
import os
import re
import gzip
import contextlib
import gettext
import traceback
import collections
import itertools
import textwrap
from xml.etree import ElementTree
from tkinter import IntVar, StringVar
from tkinter.messagebox import askyesno, showinfo, showerror
import tkinter.font
from tkinter import ttk
from .gui.infowindow import InfoWindow
from . import misc
from .misc import resourceExists, textResourceStream
from .constants import *
from .logging import logger, LogLevel
from .fgdata.aircraft import Aircraft
def setupTranslationHelper(config):
global pgettext
translationHelper = misc.TranslationHelper(config)
pgettext = translationHelper.pgettext
class AbortConfig(Exception):
pass
# No translated strings to avoid depending on the language being already set
# and the translation system being in place. If this exception is raised and
# not caught in FFGo, it is a bug.
class NoSuchAircraft(Exception):
def __init__(self, aircraftName, aircraftDir):
self.name, self.dir = aircraftName, aircraftDir
def __str__(self):
return "no aircraft '{name}' in directory '{dir}'".format(
name=self.name, dir=self.dir)
class Config:
"""Read/write and store all data from config files."""
def __init__(self, cmdLineParams, master=None):
self.cmdLineParams = cmdLineParams
self.master = master
self.ai_path = '' # Path to FG_ROOT/AI directory.
self.defaultAptDatFile = '' # Path to FG_ROOT/Airports/apt.dat.gz file.
self.metar_path = '' # Path to FG_ROOT/Airports/metar.dat.gz file.
self.aircraft_dirs = [] # List of aircraft directories.
# Dictionary whose keys are aircraft names. For each aircraft name 'n',
# self.aircraftDict[n] is the list, in self.aircraft_dirs priority
# order, of all Aircraft instances with that name.
self.aircraftDict = {}
self.aircraftList = [] # Sorted list of Aircraft instances.
self.scenario_list = [] # List of selected scenarios.
# List of all aircraft carriers found in AI scenario folder.
# Each entry format is:
# ["ship name", "parking position"... , "scenario name"]
self.carrier_list = []
self.settings = [] # List of basic settings read from config file.
self.text = '' # String to be shown in command line options window.
# 'self.aircraftId' is the central variable telling which particular
# aircraft is selected in FFGo's interface. It is a tuple of the form
# (aircraftName, aircraftDir).
self.aircraftId = misc.Observable()
self.aircraft = StringVar()
self.aircraftDir = StringVar()
# Whenever 'self.aircraftId' is set, 'self.aircraft' and
# 'self.aircraftDir' are automatically updated to reflect the new value
# (and their observers called, even if the values didn't change).
self.aircraftId.trace("w", self.updateAircraftNameAndDirFromAircraftId)
# Note: the FFGo config file stores the values of 'self.aircraft' and
# 'self.aircraftDir' separately (this makes the compatibility
# path easy with versions that don't know about aircraftDir).
self.airport = StringVar() # ICAO code of the selected airport
self.alreadyProposedChanges = StringVar()
self.apt_data_source = IntVar()
self.auto_update_apt = IntVar()
self.carrier = StringVar() # when non-empty, we are in “carrier mode”
self.FG_aircraft = StringVar()
self.FG_bin = StringVar()
self.FG_root = StringVar()
self.FG_scenery = StringVar()
self.FG_download_dir = StringVar()
self.FG_working_dir = StringVar()
self.MagneticField_bin = StringVar()
self.MagneticField_bin.trace('w', self.updateMagFieldProvider)
self.filteredAptList = IntVar()
self.language = StringVar()
self.park = StringVar()
self.rwy = StringVar()
self.scenario = StringVar()
self.timeOfDay = StringVar()
self.season = StringVar()
self.enableTerraSync = IntVar()
self.enableRealWeatherFetch = IntVar()
self.startFGFullScreen = IntVar()
self.startFGPaused = IntVar()
self.enableMSAA = IntVar()
self.enableRembrandt = IntVar()
self.mainWindowGeometry = StringVar()
self.saveWindowPosition = IntVar()
self.baseFontSize = StringVar()
self.TkDefaultFontSize = IntVar()
# tkinter.BooleanVar feels kind of messy. Sometimes, it prints out as
# 'True', other times as '1'... IntVar seems more predictable.
self.showFGCommand = IntVar()
self.showFGCommandInSeparateWindow = IntVar()
self.FGCommandGeometry = StringVar()
self.showFGOutput = IntVar()
self.showFGOutputInSeparateWindow = IntVar()
self.FGOutputGeometry = StringVar()
self.autoscrollFGOutput = IntVar()
# Option to translate --parkpos into --lat, --lon and --heading (useful
# when --parkpos is broken in FlightGear)
self.fakeParkposOption = IntVar()
self.airportStatsManager = None # will be initialized later
self.aircraftStatsManager = None # ditto
self.airportStatsShowPeriod = IntVar()
self.airportStatsExpiryPeriod = IntVar()
self.aircraftStatsShowPeriod = IntVar()
self.aircraftStatsExpiryPeriod = IntVar()
self.keywords = {'--aircraft=': self.aircraft,
'--airport=': self.airport,
'--fg-root=': self.FG_root,
'--fg-scenery=': self.FG_scenery,
'--carrier=': self.carrier,
'--parkpos=': self.park,
'--runway=': self.rwy,
'TIME_OF_DAY=': self.timeOfDay,
'SEASON=': self.season,
'ENABLE_TERRASYNC=': self.enableTerraSync,
'ENABLE_REAL_WEATHER_FETCH=':
self.enableRealWeatherFetch,
'START_FG_FULL_SCREEN=': self.startFGFullScreen,
'START_FG_PAUSED=': self.startFGPaused,
'ENABLE_MULTI_SAMPLE_ANTIALIASING=': self.enableMSAA,
'ENABLE_REMBRANDT=': self.enableRembrandt,
'AIRCRAFT_DIR=': self.aircraftDir,
'AI_SCENARIOS=': self.scenario,
'ALREADY_PROPOSED_CHANGES=':
self.alreadyProposedChanges,
'APT_DATA_SOURCE=': self.apt_data_source,
'AUTO_UPDATE_APT=': self.auto_update_apt,
'FG_BIN=': self.FG_bin,
'FG_AIRCRAFT=': self.FG_aircraft,
'FG_DOWNLOAD_DIR=': self.FG_download_dir,
'FG_WORKING_DIR=': self.FG_working_dir,
'MAGNETICFIELD_BIN=': self.MagneticField_bin,
'FILTER_APT_LIST=': self.filteredAptList,
'LANG=': self.language,
'WINDOW_GEOMETRY=': self.mainWindowGeometry,
'SAVE_WINDOW_POSITION=': self.saveWindowPosition,
'BASE_FONT_SIZE=': self.baseFontSize,
'SHOW_FG_COMMAND=': self.showFGCommand,
'SHOW_FG_COMMAND_IN_SEPARATE_WINDOW=':
self.showFGCommandInSeparateWindow,
'FG_COMMAND_GEOMETRY=': self.FGCommandGeometry,
'SHOW_FG_OUTPUT=': self.showFGOutput,
'SHOW_FG_OUTPUT_IN_SEPARATE_WINDOW=':
self.showFGOutputInSeparateWindow,
'FG_OUTPUT_GEOMETRY=': self.FGOutputGeometry,
'AUTOSCROLL_FG_OUTPUT=': self.autoscrollFGOutput,
'FAKE_PARKPOS_OPTION=': self.fakeParkposOption,
'AIRPORT_STATS_SHOW_PERIOD=':
self.airportStatsShowPeriod,
'AIRPORT_STATS_EXPIRY_PERIOD=':
self.airportStatsExpiryPeriod,
'AIRCRAFT_STATS_SHOW_PERIOD=':
self.aircraftStatsShowPeriod,
'AIRCRAFT_STATS_EXPIRY_PERIOD=':
self.aircraftStatsExpiryPeriod}
# List of apt_dat.AptDatFileInfo instances extracted from the apt
# digest file: nothing so far (this indicates the list of apt.dat files
# used to build the apt digest file, with some metadata).
self.aptDatFilesInfoFromDigest = []
# In order to avoid using a lot of memory, detailed airport data is
# only loaded on demand. Since this is quite slow, keep a cache of the
# last retrieved data.
self.aptDatCache = collections.deque(maxlen=50)
self._earlyTranslationsSetup()
self._createUserDirectories()
self._maybeMigrateFromFGoConfig()
# Not having the FlightGear version at this point is not important
# enough to justify pestering the user about it. :-)
# Defer logging of the detected FG version to fit nicely with
# the other startup messages.
self.update(ignoreFGVersionError=True, logFGVersion=False)
self.setTkDefaultFontSize()
self.setupFonts(init=True)
def setTkDefaultFontSize(self):
"""Save unaltered TkDefaultFont size."""
size = tkinter.font.nametofont("TkDefaultFont").actual()["size"]
self.TkDefaultFontSize.set(size)
def setupFonts(self, init=False):
"""Setup the default fonts.
When called with init=True, custom fonts are created and
stored as attributes of self. Otherwise, they are simply
configured.
"""
# According to <https://www.tcl.tk/man/tcl8.4/TkCmd/font.htm>, font
# sizes are interpreted this way:
# - for positive values, the unit is points;
# - for negative values, the unit is pixels;
# - 0 is a special value for "a platform-dependent default size".
#
# Apparently, Tkinter doesn't accept floats for the 'size' parameter of
# <font>.configure(), even when positive (tested with Python 2.7.3).
baseSize = int(float(self.baseFontSize.get()))
# Get the actual size when baseSize == 0, otherwise scaling won't work
# since 0*factor == 0, regardless of the (finite) factor.
if baseSize == 0:
baseSize = self.TkDefaultFontSize.get()
def scale(factor):
return int(round(baseSize * factor))
def configFontSize(style, factor):
font = tkinter.font.nametofont("Tk%sFont" % style)
font.configure(size=scale(factor))
# Configure built-in fonts
for style in ("Default", "Text", "Fixed", "Caption", "Tooltip"):
# The 'if init:' here is a workaround for a weird problem: when
# saving the settings from the Preferences dialog, even if the very
# same font size is set here as the one that was used at program
# initialization, the main window layout gets broken, with the
# airport chooser Treeview taking more and more horizontal space
# every time the settings are saved. Avoiding to reconfigure the
# fonts in such "reinit" conditions works around the problem...
if init:
configFontSize(style, 1)
for style, factor in (("Menu", 20 / 18.), ("Heading", 20 / 18.),
("SmallCaption", 16 / 18.), ("Icon", 14 / 18.)):
if init: # Second part of the workaround mentioned above
configFontSize(style, factor)
# Create or configure custom fonts, depending on 'init'
aboutTitleFontSize = scale(42 / 18.)
if init:
self.aboutTitleFont = tkinter.font.Font(
family="Helvetica", weight="bold", size=aboutTitleFontSize)
else:
self.aboutTitleFont.configure(size=aboutTitleFontSize)
# Final part of the workaround mentioned above. Normally, the code
# should always be executed, regardless of the value of 'init'.
if init:
# For the ttk.Treeview widget
treeviewHeadingFontSize = scale(1.)
# Redundant test, right. Hopefully, one day, we'll be able to get
# rid of the workaround and this test won't be redundant anymore.
if init:
self.treeviewHeadingFont = tkinter.font.Font(
weight="normal", size=treeviewHeadingFontSize)
else:
self.treeviewHeadingFont.configure(size=treeviewHeadingFontSize)
style = ttk.Style()
style.configure("Treeview.Heading", font=self.treeviewHeadingFont)
def makeInstalledAptList(self):
logger.notice(_("Building the list of installed airports "
"(this may take some time)..."))
# writelines() used below doesn't automatically add line terminators
airports = [ icao + '\n' for icao in self._findInstalledApt() ]
logger.info("Opening '{}' for writing".format(INSTALLED_APT))
with open(INSTALLED_APT, "w", encoding="utf-8") as fout:
fout.writelines(airports)
def readMetarDat(self):
"""Fetch METAR station list from metar.dat.gz file"""
logger.info("Opening '{}' for reading".format(self.metar_path))
res = []
with gzip.open(self.metar_path, mode='rt', encoding='utf-8') as fin:
for line in fin:
if not line.startswith('#'):
res.append(line.strip())
return res
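# Hedged reading of the format implied above: metar.dat.gz is assumed to hold
# one METAR station identifier per line, with '#'-prefixed lines acting as
# comments that are skipped, e.g. (identifiers illustrative):
#
#   # comment line
#   EDDF
#   KSFO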
def _computeAircraftDirList(self):
FG_AIRCRAFT_env = os.getenv("FG_AIRCRAFT", "")
if FG_AIRCRAFT_env:
FG_AIRCRAFT_envList = FG_AIRCRAFT_env.split(os.pathsep)
else:
FG_AIRCRAFT_envList = []
# FG_ROOT/Aircraft
defaultAircraftDir = os.path.join(self.FG_root.get(),
DEFAULT_AIRCRAFT_DIR)
aircraft_dirs = (self.FG_aircraft.get().split(os.pathsep)
+ FG_AIRCRAFT_envList + [defaultAircraftDir])
return aircraft_dirs
def logDetectedFlightGearVersion(self, logLevel=LogLevel.notice,
prefix=True):
if self.FG_version is not None:
FG_version = str(self.FG_version)
else:
FG_version = pgettext("FlightGear version", "none")
# Uses the same string as in App.about()
message = _("Detected FlightGear version: {ver}").format(
ver=FG_version)
logger.log(logLevel, prefix, message)
def getFlightGearVersion(self, ignoreFGVersionError=False, log=False):
# This import requires the translation system [_() function] to be in
# place.
from .fgdata import fgversion
self.FG_version = None # in case an exception is raised below
FG_bin = self.FG_bin.get()
FG_root = self.FG_root.get()
exc = None
if FG_bin and FG_root:
try:
self.FG_version = fgversion.getFlightGearVersion(
FG_bin, FG_root, self.FG_working_dir.get())
except fgversion.error as e:
exc = e # may need to be raised later
if log:
self.logDetectedFlightGearVersion()
if exc is not None and not ignoreFGVersionError:
raise exc
# This is a callback for FFGo's misc.Observable class.
def updateAircraftNameAndDirFromAircraftId(self, aircraftId):
aircraftName, aircraftDir = aircraftId
self.aircraft.set(aircraftName)
self.aircraftDir.set(aircraftDir)
def aircraftWithNameAndDir(self, name, dir_):
"""Get the Aircraft instance for a given name and directory."""
try:
aircraftSeq = self.aircraftDict[name]
except KeyError:
raise NoSuchAircraft(name, dir_)
for aircraft in aircraftSeq:
# The idea is that the directory 'dir_' passed here should have
# been discovered earlier by a filesystem exploration, therefore
# there must be one Aircraft instance that has an exact match for
# both 'name' and 'dir_' (no need to use 'os.path.samefile()',
# which would be slower, could raise errors...).
if aircraft.dir == dir_:
return aircraft
else:
raise NoSuchAircraft(name, dir_)
def aircraftWithId(self, aircraftId):
"""Get the Aircraft instance for a given aircraft ID."""
return self.aircraftWithNameAndDir(*self.aircraftId.get())
def getCurrentAircraft(self):
"""Get the Aircraft instance for the currently-selected aircraft."""
return self.aircraftWithId(self.aircraftId.get())
def _findAircraft(self, acName, acDir):
"""Return an aircraft ID for 'acName' and 'acDir' if possible.
If no aircraft is found with the given name and directory, fall
back to:
- an identically-named aircraft in a different directory
(taking the first in FG_AIRCRAFT precedence order);
- if this isn't possible either, fall back to the default
aircraft. The returned aircraft ID will have an empty
directory component if even the default aircraft isn't
available in this case.
Log an appropriate warning or notice when a fallback strategy is
used.
"""
if acName in self.aircraftDict:
for ac in self.aircraftDict[acName]:
if ac.dir == acDir:
aircraft = ac
break
else:
aircraft = self.aircraftDict[acName][0]
logger.notice(
_("Could not find aircraft '{aircraft}' under '{dir}', "
"taking it from '{fallback}' instead").format(
aircraft=acName, dir=acDir, fallback=aircraft.dir))
else:
try:
defaultAircraftSeq = self.aircraftDict[DEFAULT_AIRCRAFT]
except KeyError:
aircraft = None
logger.warning(
_("Could not find the default aircraft: {aircraft}")
.format(aircraft=DEFAULT_AIRCRAFT))
else:
aircraft = defaultAircraftSeq[0]
logger.notice(
_("Could not find aircraft '{aircraft}', using "
"'{fallback}' from '{dir}' instead").format(
aircraft=acName, fallback=aircraft.name,
dir=aircraft.dir))
if aircraft is None:
return (DEFAULT_AIRCRAFT, '')
else:
return (aircraft.name, aircraft.dir)
def sanityChecks(self):
status, *rest = self.decodeParkingSetting(self.park.get())
if status == "invalid":
logger.warning(
_("Invalid syntax for the parking setting ({setting!r}), "
"resetting it.").format(setting=self.park.get()))
self.park.set('')
if self.rwy.get() and self.park.get():
# It is impossible to set both a non-default runway and a parking
# position at the same time. The latter wins. :-)
self.rwy.set('')
def update(self, path=None, ignoreFGVersionError=False, logFGVersion=True):
"""Read config file and update variables.
path is a path to different than default config file
"""
if self.aircraftStatsManager is None: # application init
# Requires the translation system to be in place
from . import stats_manager
self.aircraftStatsManager = \
stats_manager.AircraftStatsManager(self)
else:
# Save the in-memory statistics (from Aircraft instances) to
# persistent storage. This expires old stats, according to
# self.aircraftStatsExpiryPeriod.
self.aircraftStatsManager.save()
del self.settings
del self.text
del self.aircraft_dirs
del self.defaultAptDatFile
del self.ai_path
del self.metar_path
del self.aircraftDict
del self.aircraftList
del self.scenario_list
del self.carrier_list
# The variable will be set again right after reading the config
# file, therefore there is no need to run the callbacks now
# (such as updating the aircraft image).
self.aircraftId.set((DEFAULT_AIRCRAFT, ''), runCallbacks=False)
self.airport.set(DEFAULT_AIRPORT)
self.alreadyProposedChanges.set('')
self.apt_data_source.set(1)
self.auto_update_apt.set(1)
self.carrier.set('')
self.FG_aircraft.set('')
self.FG_bin.set('')
self.FG_root.set('')
self.FG_scenery.set('')
self.FG_download_dir.set('')
self.FG_working_dir.set('')
self.MagneticField_bin.set('')
self.language.set('')
self.baseFontSize.set(DEFAULT_BASE_FONT_SIZE)
self.mainWindowGeometry.set('')
self.saveWindowPosition.set('1')
self.showFGCommand.set('1')
self.showFGCommandInSeparateWindow.set('0')
self.FGCommandGeometry.set('')
self.showFGOutput.set('1')
self.showFGOutputInSeparateWindow.set('0')
self.FGOutputGeometry.set('')
self.autoscrollFGOutput.set('1')
self.park.set('')
self.fakeParkposOption.set('0')
self.rwy.set('')
self.scenario.set('')
self.timeOfDay.set('')
self.season.set('')
self.enableTerraSync.set('0')
self.enableRealWeatherFetch.set('0')
self.startFGFullScreen.set('1')
self.startFGPaused.set('0')
self.enableMSAA.set('0')
self.enableRembrandt.set('0')
self.filteredAptList.set(0)
self.airportStatsShowPeriod.set('365') # approx. one year
self.airportStatsExpiryPeriod.set('3652') # approx. ten years
self.aircraftStatsShowPeriod.set('365')
self.aircraftStatsExpiryPeriod.set('3652')
self.settings, self.text = self._read(path)
for line in self.settings:
cut = line.find('=') + 1
if cut:
name = line[:cut]
value = line[cut:]
if value:
if name in self.keywords:
var = self.keywords[name]
var.set(value)
# Useful to know when the airport has been changed
self.previousAirport = self.airport.get()
self._setLanguage(self.language.get())
setupTranslationHelper(self)
self.aircraft_dirs = self._computeAircraftDirList()
if self.FG_root.get():
self.defaultAptDatFile = os.path.join(self.FG_root.get(), APT_DAT)
else:
self.defaultAptDatFile = ""
self.ai_path = os.path.join(self.FG_root.get(), AI_DIR)
self.metar_path = os.path.join(self.FG_root.get(), METAR_DAT)
self.aircraftDict, self.aircraftList = self._readAircraft()
# Load the saved statistics into the new in-memory Aircraft instances
# (the set of aircraft may have just changed, hence the need to save
# the stats before the in-memory aircraft list is updated, and reload
# them afterwards).
self.aircraftStatsManager.load()
# Choose a suitable aircraft, even if the one defined by
# 'self.aircraft' and 'self.aircraftDir' isn't available.
self.aircraftId.set(self._findAircraft(self.aircraft.get(),
self.aircraftDir.get()))
self.scenario_list, self.carrier_list = self._readScenarios()
self.sanityChecks()
self.getFlightGearVersion(ignoreFGVersionError=ignoreFGVersionError,
log=logFGVersion)
# These imports require the translation system [_() function] to be in
# place.
from .fgdata import apt_dat, json_report
from .fgcmdbuilder import FGCommandBuilder
from .fgdata.fgversion import FlightGearVersion
fgBin = self.FG_bin.get()
# The fgfs option --json-report appeared in FlightGear 2016.4.1
if (fgBin and self.FG_version is not None and
self.FG_version >= FlightGearVersion([2016, 4, 1])):
# This may take a while!
logger.info(_("Querying FlightGear's JSON report..."), end=' ')
fgReport = json_report.getFlightGearJSONReport(
fgBin, self.FG_working_dir.get(),
FGCommandBuilder.sceneryPathsArgs(self))
logger.info(_("OK."))
# The FlightGear code for --json-report ensures that every element
# of this list is an existing file.
aptDatList = fgReport["navigation data"]["apt.dat files"]
elif os.path.isfile(self.defaultAptDatFile):
aptDatList = [self.defaultAptDatFile]
else:
aptDatList = []
self.aptDatSetManager = apt_dat.AptDatSetManager(aptDatList)
def write(self, text=None, path=None):
"""Write the configuration to a file.
text -- content of text window processed by CondConfigParser
(pass None to use the value of Config.text)
path -- path to the file the config will be written to
(the default config file is used if this argument is
empty or None)
"""
if not path:
path = CONFIG
if text is None:
text = self.text
options = []
keys = list(self.keywords.keys())
keys.sort()
for k in keys:
v = self.keywords[k]
if k in ('--carrier=', '--airport=', '--parkpos=', '--runway='):
if v.get():
options.append(k + v.get())
else:
options.append(k + str(v.get()))
s = '\n'.join(options)
logger.info("Opening config file for writing: '{}'".format(path))
with open(path, mode='w', encoding='utf-8') as config_out:
config_out.write(s + '\n' + CUT_LINE + '\n')
# Make sure the config file has exactly one newline at the end
while text.endswith('\n\n'):
text = text[:-1]
if not text.endswith('\n'):
text += '\n'
config_out.write(text)
def _findInstalledApt(self):
"""Walk through all scenery paths and find installed airports.
Take geographic coordinates from directory names and compare them
with airport coordinates from the apt digest file.
The result is a sorted list of airport identifiers for matching
airports.
"""
# These imports require the translation system [_() function] to be in
# place.
from .fgdata import json_report
from .fgcmdbuilder import FGCommandBuilder
from .fgdata.fgversion import FlightGearVersion
fgBin = self.FG_bin.get()
# The fgfs option --json-report appeared in FlightGear 2016.4.1
if (fgBin and self.FG_version is not None and
self.FG_version >= FlightGearVersion([2016, 4, 1])):
# This may take a while!
logger.info(_("Querying FlightGear's JSON report..."), end=' ')
fgReport = json_report.getFlightGearJSONReport(
fgBin, self.FG_working_dir.get(),
FGCommandBuilder.sceneryPathsArgs(self))
logger.info(_("OK."))
sceneryPaths = fgReport["config"]["scenery paths"]
else:
# Fallback method when --json-report isn't available. It is
# imperfect in case TerraSync is enabled and the TerraSync
# directory isn't listed in self.FG_scenery, because FlightGear
# *is* going to use it as a scenery path.
sceneryPaths = self.FG_scenery.get().split(os.pathsep)
coord_dict = {}
for scenery in sceneryPaths:
path = os.path.join(scenery, 'Terrain')
if os.path.exists(path):
for dir in os.listdir(path):
p = os.path.join(path, dir)
for coords in os.listdir(p):
d = os.path.join(p, coords)
if not os.path.isdir(d):
continue
logger.debug("Exploring Terrain directory '{}' -> '{}'"
.format(p, coords))
converted = self._stringToCoordinates(coords)
if converted is not None:
coord_dict[converted] = None
else:
logger.notice(
_("Ignoring directory '{}' (unexpected name)")
.format(d))
coords = coord_dict.keys()
res = []
for icao in self.sortedIcao():
airport = self.airports[icao]
for c in coords:
if (c[0][0] < airport.lat < c[0][1] and
c[1][0] < airport.lon < c[1][1]):
res.append(icao)
return res
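# Assumed scenery layout behind the walk above (names are illustrative):
#
#   <scenery>/Terrain/w050n30/w043n37/...
#
# Only the second-level directory name (e.g. "w043n37") is parsed; it encodes
# the 1x1 degree tile whose latitude/longitude range is compared with the
# airport coordinates from the apt digest file.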
def _calculateRange(self, coordinates):
c = coordinates
if c.startswith('s') or c.startswith('w'):
c = int(c[1:]) * (-1)
return c, c + 1
else:
c = int(c[1:])
return c, c + 1
def _createUserDirectories(self):
"""Create config, log and stats directories if they don't exist."""
for d in USER_DATA_DIR, LOG_DIR, STATS_DIR:
os.makedirs(d, exist_ok=True)
def _maybeMigrateFromFGoConfig_dialogs(self, parent):
message = _("Initialize {prg}'s configuration from your existing " \
"FGo! configuration?").format(prg=PROGNAME)
detail = (_("""\
You have no {cfgfile} file but you do have a {fgo_cfgfile} file,
which normally belongs to FGo!. Except in rare circumstances
(such as using braces or backslashes, or opening brackets at the
beginning of a config line), a configuration file from
FGo! 1.5.5 or earlier should be usable as is by {prg}.""")
.replace('\n', ' ') + "\n\n" + _("""\
If {fgo_cfgfile} was written by FGo! 1.5.5 or earlier, you
should probably say “Yes” here in order to initialize {prg}'s
configuration based on your FGo! config file (precisely:
copy {fgo_cfgfile} to {cfgfile}).""")
.replace('\n', ' ') + "\n\n" + _("""\
If {fgo_cfgfile} was written by a version of FGo! that is greater
than 1.5.5, it is advised to say “No” here.""")
.replace('\n', ' ')
).format(prg=PROGNAME, cfgfile=CONFIG, fgo_cfgfile=FGO_CONFIG)
if askyesno(PROGNAME, message, detail=detail, parent=parent):
choice = "migrate from FGo!"
else:
message = _("Create a default {prg} configuration?").format(
prg=PROGNAME)
detail = _("""\
Choose “Yes” to create a basic {prg} configuration now. If you
choose “No”, {prg} will exit and you'll have to create {cfgfile}
yourself, or restart {prg} to see the same questions again.""") \
.replace('\n', ' ').format(prg=PROGNAME, cfgfile=CONFIG)
if askyesno(PROGNAME, message, detail=detail, parent=parent):
choice = "create default cfg"
message = _("Creating a default {prg} configuration.").format(
prg=PROGNAME)
detail = (_("""\
It is suggested that you go to the Settings menu and choose
Preferences to review your newly-created configuration.""")
.replace('\n', ' ') + "\n\n" + _("""\
You can also reuse most, if not all FlightGear options you
had in FGo!'s main text box (the “options window”). Just copy
them to the corresponding {prg} text box.""")
.replace('\n', ' ') + "\n\n" + _("""\
Note: you may run both FGo! and {prg} simultaneously, as their
configurations are kept separate.""")
.replace('\n', ' ')
).format(prg=PROGNAME)
showinfo(PROGNAME, message, detail=detail, parent=parent)
else:
choice = "abort"
return choice
def _maybeMigrateFromFGoConfig(self):
if os.path.isfile(FGO_CONFIG) and not os.path.isfile(CONFIG):
baseSize = tkinter.font.nametofont("TkDefaultFont").actual()["size"]
def configFontSize(val, absolute=False):
for style in ("Default", "Text", "Fixed", "Caption", "Tooltip"):
font = tkinter.font.nametofont("Tk{}Font".format(style))
if absolute:
font.configure(size=val)
else:
font.configure(size=int(round(baseSize * val)))
# Make sure most people can read the following dialogs (the
# standard Tk size may be rather small): 140% increase
configFontSize(1.4, absolute=False)
choice = None # user choice in the to-be-displayed dialogs
# It seems we need an otherwise useless Toplevel window in order to
# center the Tk standard dialogs...
t = tkinter.Toplevel()
try:
# Transparent if the OS supports it
t.attributes('-alpha', '0.0')
# Center the Toplevel. To be effective, this would probably
# need a visit to the Tk event loop, however it is enough to
# have the child dialogs centered, which is what matters here.
self.master.eval('tk::PlaceWindow {} center'.format(
t.winfo_pathname(t.winfo_id())))
choice = self._maybeMigrateFromFGoConfig_dialogs(t)
finally:
t.destroy()
# Restore font size for later self.setupFonts() call
configFontSize(baseSize, absolute=True)
if choice in (None, "abort"):
raise AbortConfig
elif choice == "migrate from FGo!":
# shutil.copy() and shutil.copy2() attempt to preserve the file's
# permission mode, which is undesirable here → manual copy.
with open(FGO_CONFIG, "r", encoding='utf-8') as fgoConfig, \
open(CONFIG, "w", encoding='utf-8') as config:
config.write(fgoConfig.read())
else:
assert choice == "create default cfg", repr(choice)
def _read(self, path=None):
"""Read the specified or a default configuration file.
- If 'path' is None and CONFIG exists, load CONFIG;
- if 'path' is None and CONFIG does not exist, load the
configuration from the presets and default, localized
config_ll resource;
- otherwise, load configuration from the specified file.
"""
try:
# ExitStack not strictly necessary here, but allows clean and
# convenient handling of the various files or resources the
# configuration may be loaded from.
with contextlib.ExitStack() as stack:
res = self._read0(stack, path)
except OSError as e:
message = _('Error loading configuration')
showerror(_('{prg}').format(prg=PROGNAME), message, detail=str(e))
res = ([''], '')
return res
_presetsBlankLineOrCommentCre = re.compile(r"^[ \t]*(#|$)")
def _read0(self, stack, path):
# Data before the CUT_LINE in the config file, destined to
# self.settings
settings = []
# Data after the CUT_LINE in the config file, destined to
# self.text and to be parsed by CondConfigParser
condConfLines = []
if path is not None or (path is None and os.path.exists(CONFIG)):
if path is None:
path = CONFIG
logger.info("Opening config file '{}' for reading".format(path))
configStream = stack.enter_context(open(path, "r",
encoding="utf-8"))
beforeCutLine = True
else: # Use default config if no regular config exists.
# Load presets if exists.
if resourceExists(PRESETS):
with textResourceStream(PRESETS) as presets:
for line in presets:
line = line.strip()
if not self._presetsBlankLineOrCommentCre.match(line):
settings.append(line)
# Find the currently used language according to the environment.
try:
lang_code = gettext.translation(
MESSAGES, LOCALE_DIR).info()['language']
except OSError:
lang_code = 'en'
if not resourceExists(DEFAULT_CONFIG_STEM + lang_code):
lang_code = 'en'
resPath = DEFAULT_CONFIG_STEM + lang_code
configStream = stack.enter_context(textResourceStream(resPath))
# There is no "cut line" in the template config files.
beforeCutLine = False
for line in configStream:
if beforeCutLine:
line = line.strip()
if line != CUT_LINE:
if beforeCutLine:
# Comments wouldn't be preserved on saving, therefore don't
# try to handle them before the "cut line".
if line:
settings.append(line)
else:
condConfLines.append(line)
else:
beforeCutLine = False
return (settings, ''.join(condConfLines))
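# Sketch of the config file layout handled by _read0() and write() (keyword
# names come from self.keywords, values are illustrative; CUT_LINE is the
# separator constant imported from constants.py and is not reproduced here):
#
#   AIRCRAFT_DIR=/path/to/aircraft
#   --airport=KSFO
#   <CUT_LINE>
#   ...free-form text later parsed by CondConfigParser...
#
# Lines before the separator end up in 'settings'; everything after it is
# joined into the 'text' string.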
def _readAircraft(self):
"""Walk through Aircraft directories and return the available aircraft.
Return a tuple (aircraftDict, aircraftList) listing all aircraft
found via self.aircraft_dirs.
aircraftDict is a dictionary whose keys are the names (derived
from the -set.xml files) of all aircraft. For each aircraft
name 'n', aircraftDict[n] is the list, in self.aircraft_dirs
priority order, of all Aircraft instances with that name.
aircraftList is the sorted list of all Aircraft instances,
suitable for quick building of the aircraft list in the GUI.
"""
aircraftDict = {}
for dir_ in self.aircraft_dirs:
if os.path.isdir(dir_):
for d in os.listdir(dir_):
self._readAircraftData(dir_, d, aircraftDict)
aircraftList = []
# First sort by lowercased aircraft name
sortFunc = lambda s: (s.lower(), s)
for acName in sorted(aircraftDict.keys(), key=sortFunc):
# Then sort by position in self.aircraft_dirs
aircraftList.extend(aircraftDict[acName])
return (aircraftDict, aircraftList)
def _readAircraftData(self, dir_, d, aircraftDict):
path = os.path.join(dir_, d)
if os.path.isdir(path):
for f in os.listdir(path):
self._appendAircraft(f, aircraftDict, path)
def _appendAircraft(self, f, aircraftDict, path):
if f.endswith('-set.xml'):
# Dirty and ugly hack to prevent carrier-set.xml in
# seahawk directory from being attached to the aircraft
# list.
if (not path.startswith('seahawk') and
f != 'carrier-set.xml'):
name = f[:-8]
if name not in aircraftDict:
aircraftDict[name] = []
aircraft = Aircraft(name, path)
aircraftDict[name].append(aircraft)
def sortedIcao(self):
return sorted(self.airports.keys())
def readAptDigestFile(self):
"""Read the apt digest file.
Recreate a new one if there is already one, but written in an
old version of the format. Return a list of AirportStub
instances.
"""
from .fgdata import apt_dat
if not os.path.isfile(APT):
self.aptDatFilesInfoFromDigest, self.airports = [], {}
else:
for attempt in itertools.count(start=1):
try:
self.aptDatFilesInfoFromDigest, self.airports = \
apt_dat.AptDatDigest.read(APT)
except apt_dat.UnableToParseAptDigest:
# Rebuild once in case the apt digest file was written
# in an outdated format.
if attempt < 2:
self.makeAptDigest()
else:
raise
else:
break
if os.path.isfile(OBSOLETE_APT_TIMESTAMP_FILE):
# Obsolete file since version 4 of the apt digest file format
os.unlink(OBSOLETE_APT_TIMESTAMP_FILE)
if self.filteredAptList.get():
installedApt = self._readInstalledAptSet()
res = [ self.airports[icao] for icao in self.sortedIcao()
if icao in installedApt ]
else:
res = [ self.airports[icao] for icao in self.sortedIcao() ]
return res
def _readInstalledAptSet(self):
"""Read the set of locally installed airports from INSTALLED_APT.
Create a new INSTALLED_APT file if none exists yet.
Return a frozenset(), which offers very fast membership test
compared to a list.
"""
if not os.path.exists(INSTALLED_APT):
self.makeInstalledAptList()
logger.info("Opening installed apt file '{}' for reading".format(
INSTALLED_APT))
with open(INSTALLED_APT, "r", encoding="utf-8") as f:
# Strip the newline char ending every line
res = frozenset([ line[:-1] for line in f ])
return res
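# INSTALLED_APT (written by makeInstalledAptList earlier in this class) is
# assumed to contain one ICAO code per line, e.g. (codes illustrative):
#
#   EDDF
#   KSFO
#
# hence the simple line[:-1] newline strip and the frozenset() used for fast
# membership tests.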
def makeAptDigest(self, headText=None):
"""
Build the FFGo apt digest file from the apt.dat files used by FlightGear"""
AptDigestBuilder(self.master, self).start(headText)
def autoUpdateApt(self):
"""Rebuild the apt digest file if it is outdated."""
from .fgdata import apt_dat
if os.path.isfile(APT):
# Extract metadata (list of apt.dat files, sizes, timestamps) from
# the existing apt digest file
try:
formatVersion, self.aptDatFilesInfoFromDigest = \
apt_dat.AptDatDigest.read(APT, onlyReadHeader=True)
except apt_dat.UnableToParseAptDigest:
self.aptDatFilesInfoFromDigest = []
else:
self.aptDatFilesInfoFromDigest = []
# Check if the list, size or timestamps of the apt.dat files changed
if not self.aptDatSetManager.isFresh(self.aptDatFilesInfoFromDigest):
self.makeAptDigest(
headText=_('Modification of apt.dat files detected.'))
# The new apt.dat files may invalidate the current parking
status, *rest = self.decodeParkingSetting(self.park.get())
if status == "apt.dat":
# This was a startup location obtained from an apt.dat file; it
# may be invalid with the new files, reset.
self.park.set('')
# This is also outdated with respect to the new set of apt.dat files.
self.aptDatCache.clear()
def _readScenarios(self):
"""Walk through AI scenarios and read carrier data.
Return two lists:
scenarios: [scenario name, ...]
carrier data: [[name, parking pos, ..., scenario name], ...]
Return two empty lists if no scenario is found.
"""
carriers = []
scenarios = []
if os.path.isdir(self.ai_path):
for f in os.listdir(self.ai_path):
path = os.path.join(self.ai_path, f)
if os.path.isfile(path) and f.lower().endswith('.xml'):
scenario_name = f[:-4]
scenarios.append(scenario_name)
# Appends to 'carriers'
self._append_carrier_data(carriers, path, scenario_name)
return sorted(scenarios), sorted(carriers)
def _append_carrier_data(self, carriers, xmlFilePath, scenario_name):
logger.info("Reading scenario data from '{}'".format(xmlFilePath))
root = self._get_root(xmlFilePath)
scenario = root.find('scenario')
if scenario is not None:
for e in scenario.iterfind('entry'):
typeElt = e.find('type')
if typeElt is not None and typeElt.text == 'carrier':
data = self._get_carrier_data(e, scenario_name)
carriers.append(data)
def _get_root(self, xmlFilePath):
tree = ElementTree.parse(xmlFilePath)
return tree.getroot()
def _get_carrier_data(self, e, scenario_name):
nameElt = e.find('name')
if nameElt is not None:
data = [nameElt.text]
else:
data = ['unnamed']
for child in e.iterfind('parking-pos'):
parkingNameElt = child.find('name')
if parkingNameElt is not None:
data.append(parkingNameElt.text)
data.append(scenario_name)
return data
# The '1' is the version number of this custom format for the contents of
# Config.park, in case we need to change it.
aptDatParkConfStart_cre = re.compile(r"::apt\.dat::1::(?P<nameLen>\d+),")
aptDatParkConfEnd_cre = re.compile(
r"""lat=(?P<lat>{floatRegexp}),
lon=(?P<lon>{floatRegexp}),
heading=(?P<heading>{floatRegexp})$""".format(
floatRegexp=r"-?\d+(\.\d*)?"),
re.VERBOSE)
def decodeParkingSetting(self, parkConf):
status = "invalid" # will be overridden if correct in the end
parkName = None
options = []
if not parkConf:
status = "none" # no parking position
else:
mo = self.aptDatParkConfStart_cre.match(parkConf)
if mo:
# Length of the following parking name (after the comma)
nameLen = int(mo.group("nameLen"))
i = mo.end("nameLen") + 1 + nameLen
if len(parkConf) > i and parkConf[i] == ";":
mo2 = self.aptDatParkConfEnd_cre.match(parkConf[i+1:])
if mo2:
parkName = parkConf[mo.end("nameLen")+1:i]
options = ["--lat=" + mo2.group("lat"),
"--lon=" + mo2.group("lon"),
"--heading=" + mo2.group("heading")]
status = "apt.dat"
else: # plain parking name
parkName = parkConf
options = ["--parkpos=" + parkName]
status = "groundnet"
return (status, parkName, options)
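# Illustrative sketch of the two accepted encodings (values are made up and
# kept in a comment so nothing runs at import time):
#
#   decodeParkingSetting("A5")
#   -> ("groundnet", "A5", ["--parkpos=A5"])
#
#   decodeParkingSetting("::apt.dat::1::2,A5;lat=37.61,lon=-122.38,heading=90.0")
#   -> ("apt.dat", "A5", ["--lat=37.61", "--lon=-122.38", "--heading=90.0"])
#
#   decodeParkingSetting("")
#   -> ("none", None, [])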
def _earlyTranslationsSetup(self):
"""Setup translations before the config file has been read.
The language is determined from the environment (LANGUAGE,
LC_ALL, LC_MESSAGES, and LANG—cf. gettext.translation() and
gettext.find()).
"""
try:
langCode = gettext.translation(
MESSAGES, LOCALE_DIR).info()['language']
except OSError:
langCode = 'en'
self._setLanguage(langCode)
def _setLanguage(self, lang):
# Initialize provided language...
try:
L = gettext.translation(MESSAGES, LOCALE_DIR, languages=[lang])
L.install()
# ...or fallback to system default.
except Exception:
gettext.install(MESSAGES, LOCALE_DIR)
# Regexp for directory names such as w040n20
_geoDirCre = re.compile(r"[we]\d{3}[ns]\d{2}$")
def _stringToCoordinates(self, coordinates):
"""Convert geo coordinates to decimal format."""
if not self._geoDirCre.match(coordinates):
return None
lat = coordinates[4:]
lon = coordinates[:4]
lat_range = self._calculateRange(lat)
lon_range = self._calculateRange(lon)
return lat_range, lon_range
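# Minimal worked example of the conversion above (directory name illustrative):
# "w040n20" splits into lon "w040" and lat "n20", so
# _stringToCoordinates("w040n20") returns ((20, 21), (-40, -39)), i.e.
# (lat_range, lon_range); a non-matching name such as "Objects" returns None.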
# Accept any arguments to allow safe use as a Tkinter variable observer
def updateMagFieldProvider(self, *args):
from .geo.magfield import EarthMagneticField, MagVarUnavailable
try:
self.earthMagneticField = EarthMagneticField(self)
except MagVarUnavailable as e:
self.earthMagneticField = None
self.earthMagneticFieldLastProblem = e.message
from .fgdata import airport as airport_mod
from .fgdata import parking as parking_mod
from .gui import airport_finder as airport_finder_mod
from .gui import gps_tool as gps_tool_mod
for module in (airport_mod, parking_mod, airport_finder_mod,
gps_tool_mod):
module.setupEarthMagneticFieldProvider(self.earthMagneticField)
class AptDigestBuilderProgressFeedbackHandler(misc.ProgressFeedbackHandler):
def __init__(self, progressWidget, progressTextVar, progressValueVar,
*args, **kwargs):
self.progressWidget = progressWidget
self.progressTextVar = progressTextVar
self.progressValueVar = progressValueVar
misc.ProgressFeedbackHandler.__init__(self, *args, **kwargs)
def onUpdated(self):
self.progressTextVar.set(self.text)
# The default range in ttk.Progressbar() is [0, 100]
self.progressValueVar.set(100*self.value/self.amplitude)
# Useful when we don't get back to the Tk main loop for long periods
self.progressWidget.update_idletasks()
class AptDigestBuilder:
"""
Build the FFGo apt digest file from the apt.dat files used by FlightGear."""
def __init__(self, master, config):
self.master = master
self.config = config
# For progress feedback, since rebuilding the apt digest file is
# time-consuming
self.progressTextVar = StringVar()
self.progressValueVar = StringVar()
self.progressValueVar.set("0.0")
def start(self, headText=None):
# Check if there are apt.dat files that FlightGear would consider
# (based on scenery paths, including the TerraSync directory)
if self.config.aptDatSetManager.aptDatList:
self.makeWindow(headText)
try:
self.makeAptDigest()
except Exception:
self.closeWindow()
# Will be handled by master.report_callback_exception
raise
self.closeWindow()
else:
message = _('Cannot find any apt.dat file.')
showerror(_('Error'), message)
def makeWindow(self, headText=None):
message = _('Generating the airport database,\n'
'this may take a while...')
if headText:
message = '\n'.join((headText, message))
self.window = InfoWindow(
self.master, text=message, withProgress=True,
progressLabelKwargs={"textvariable": self.progressTextVar},
progressWidgetKwargs={"orient": "horizontal",
"variable": self.progressValueVar,
"mode": "determinate"})
self.config.aptDatSetManager.progressFeedbackHandler = \
AptDigestBuilderProgressFeedbackHandler(self.window.progressWidget,
self.progressTextVar,
self.progressValueVar)
def makeAptDigest(self):
from .fgdata import apt_dat
aptDatFilesStr = textwrap.indent(
'\n'.join(self.config.aptDatSetManager.aptDatList),
" ")
s = _("Generating {prg}'s apt digest file ('{aptDigest}') "
"from:\n\n{aptDatFiles}").format(
prg=PROGNAME, aptDigest=APT, aptDatFiles=aptDatFilesStr)
logger.notice(s)
self.config.aptDatSetManager.writeAptDigestFile(outputFile=APT)
def closeWindow(self):
self.window.destroy() | PypiClean |
# Source file: GRID_LRT/application/submit.py (from GRID_LRT-1.0.7)
import os
import signal
import subprocess
import sys
from subprocess import Popen
import logging
import warnings
import random, string
from shutil import copyfile, rmtree
import tempfile
from GRID_LRT.auth.get_picas_credentials import picas_cred as pc
import GRID_LRT
from GRID_LRT.auth import grid_credentials
class SafePopen(Popen):
def __init__(self, *args, **kwargs):
if sys.version_info.major == 3 :
kwargs['encoding'] = 'utf8'
return super(SafePopen, self).__init__(*args, **kwargs)
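# Why SafePopen exists: on Python 3 it forces text mode ('utf8'), so
# communicate() yields str instead of bytes. Hedged usage sketch (command is
# illustrative):
#
#   proc = SafePopen(['echo', 'hello'], stdout=subprocess.PIPE)
#   out, err = proc.communicate()  # 'out' is a str under Python 3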
#class job_launcher(object):
# """Generic Job Launcher
# """
# def __init__(self):
# pass
class gridjob(object):
"""A class containing all required descriptions for a SINGLE
Grid job, as well as its parameters"""
def __init__(self, wholenodes=False, NCPU=1, token_type=None):
self.token_type = token_type
if wholenodes:
self.wholenodes = 'true'
else:
self.wholenodes = 'false'
self.ncpu = NCPU
class RunningJob(object):
def __init__(self, glite_url=''):
self.job_status='Unknown'
self.glite_status='Unknown'
if glite_url:
self.glite_url = glite_url
@property
def status(self):
self.__check_status()
return self.job_status
def __check_status(self):
glite_process = SafePopen(['glite-wms-job-status', self.glite_url],stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result, err = glite_process.communicate()
try:
self.job_status=result.split('Current Status:')[1].split()[0]
except IndexError:
print(err)
if self.glite_status== 'Running':
self.count_successes(result)
if self.glite_status=='Waiting':
self.count_successes(result)
if self.glite_status == 'Aborted':
self.count_successes(result)
if self.glite_status=='Running' and self.job_status=='Waiting':
self.glite_status='Completed'
def count_successes(self,jobs):
"""Counts the number of Completed jobs in the results of the glite-wms-job-status
output. """
exit_codes=[]
jobs_list=[]
for j in jobs.split('=========================================================================='):
jobs_list.append(j)
statuses=[]
for j in jobs_list:
if "Current Status:" in j:
statuses.append(j.split("Current Status:")[1].split('\n')[0])
numdone=0
for i in statuses:
if 'Done' in i or 'Cancelled' in i or 'Aborted' in i :
numdone+=1
if 'Done' in statuses[0] or 'Aborted' in statuses[0]:
self.job_status = 'Done'
if numdone == len(jobs_list):
self.job_status='Done'
if self.job_status == 'Waiting':
for i in statuses:
if 'Scheduled' in i:
self.job_status = 'Scheduled'
elif 'Running' in i:
self.job_status = 'Running'
elif 'Submitted' in i:
self.job_status = 'Submitted'
else:
self.job_status = "Done"
        logging.info("Num_jobs_done "+str(numdone)+" and status is "+self.job_status)
self.numdone = numdone
return statuses[1:]
def __str__(self):
return "<Grid job '{}' with status '{}'>".format(self.glite_url, self.status)
def __repr__(self):
return self.__str__()
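# Illustrative sketch (the URL below is a placeholder): RunningJob only shells out to
# glite-wms-job-status, so the gLite client tools must be on PATH.
#
#     job = RunningJob('https://wms2.grid.sara.nl:9000/<job-id>')
#     print(job.status)   # re-runs glite-wms-job-status and parses 'Current Status:'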
class JdlLauncher(object):
"""jdl_launcher creates a jdl launch file with the
appropriate queue, CPUs, cores and Memory per node
The jdl file is stored in a temporary location and can be
automatically removed using a context manager such as:
>>> with Jdl_launcher_object as j:
launch_ID=j.launch()
    This will launch the jdl, remove the temp file and return the
Job_ID for the glite job.
"""
def __init__(self, numjobs=1, token_type='t_test',
parameter_step=4, **kwargs):
"""The jdl_launcher class is initialized with the number of jobs,
the name of the PiCaS token type to run, and a flag to use the whole node.
Args:
numjobs (int): The number of jobs to launch on the cluster
token_type (str): The name of the token_type to launch from the
PiCaS database. this uses the get_picas_credentials module to
get the PiCaS database name, uname, passw
            wholenodes(Boolean): Whether to reserve the entire node. Default is False
NCPU (int, optional): Number of CPUs to use for each job. Default is 1
"""
self.authorized = False
if 'authorize' in kwargs.keys() and kwargs['authorize'] == False:
            warnings.warn("Skipping Grid Authorization")
else:
self.__check_authorized()
if numjobs < 1:
            logging.warning("jdl_file with zero jobs!")
numjobs = 1
self.numjobs = numjobs
self.parameter_step = parameter_step
self.token_type = token_type
self.wholenodes = 'false'
if 'wholenode' in kwargs:
self.wholenodes = kwargs['wholenode']
if "NCPU" in kwargs:
self.ncpu = kwargs["NCPU"]
else:
self.ncpu = 1
if self.ncpu == 0:
self.wholenodes = 'true'
if "queue" in kwargs:
self.queue = kwargs['queue']
else:
self.queue = "medium"
self.temp_file = None
self.launch_file = str("/".join((GRID_LRT.__file__.split("/")[:-1])) +
"/data/launchers/run_remote_sandbox.sh")
def __check_authorized(self):
grid_credentials.grid_credentials_enabled()
self.authorized = True
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
os.remove(self.temp_file.name)
def build_jdl_file(self, database=None):
"""Uses a template to build the jdl file and place it in a
temporary file object stored internally.
"""
creds = pc() # Get credentials, get launch file to send to workers
if not database:
database = creds.database
if not os.path.exists(self.launch_file):
raise IOError("Launch file doesn't exist! "+self.launch_file)
jdlfile = """[
JobType="Parametric";
ParameterStart=0;
ParameterStep=%d;
Parameters= %d ;
Executable = "/bin/sh";
Arguments = "run_remote_sandbox.sh %s %s %s %s ";
Stdoutput = "parametricjob.out";
StdError = "parametricjob.err";
InputSandbox = {"%s"};
OutputSandbox = {"parametricjob.out", "parametricjob.err"};
DataAccessProtocol = {"gsiftp"};
ShallowRetryCount = 0;
Requirements=(RegExp("gina.sara.nl:8443/cream-pbs-%s",other.GlueCEUniqueID));
WholeNodes = %s ;
SmpGranularity = %d;
CPUNumber = %d;
]""" % (int(self.parameter_step),
int(self.numjobs),
str(database),
str(creds.user),
str(creds.password),
str(self.token_type),
str(self.launch_file),
str(self.queue),
str(self.wholenodes),
int(self.ncpu),
int(self.ncpu))
return jdlfile
def make_temp_jdlfile(self, database=None):
""" Makes a temporary file to store the JDL
document that is only visible to the user"""
self.temp_file = tempfile.NamedTemporaryFile(delete=False)
print("making temp file at "+self.temp_file.name)
with open(self.temp_file.name, 'w') as t_file_obj:
for i in self.build_jdl_file(database):
t_file_obj.write(i)
return self.temp_file
def launch(self, database=None):
"""Launch the glite-job and return the job identification"""
if not self.authorized:
            self.__check_authorized()
if not self.temp_file:
self.temp_file = self.make_temp_jdlfile(database = database)
        sub = SafePopen(['glite-wms-job-submit', '-d', os.environ["USER"],
self.temp_file.name], stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out = sub.communicate()
if out[1] == "":
return out[0].split('Your job identifier is:')[1].split()[0]
raise RuntimeError("Launching of JDL failed because: "+out[1])
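# Illustrative sketch of the context-manager workflow from the class docstring (assumes a
# valid grid proxy, the gLite client tools on PATH, and PiCaS credentials readable by
# get_picas_credentials):
#
#     with JdlLauncher(numjobs=10, token_type='t_test', NCPU=2, queue='medium') as launcher:
#         glite_url = launcher.launch()
#     job = RunningJob(glite_url)
#
# The temporary JDL file is removed when the `with` block exits.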
class UnauthorizedJdlLauncher(JdlLauncher):
def __init__(self, *args, **kw):
super(UnauthorizedJdlLauncher, self).__init__(*args, authorize=False, **kw)
def launch(self):
if not self.temp_file:
self.temp_file = self.make_temp_jdlfile()
fake_link = 'https://wms2.grid.sara.nl:9000/'+''.join(random.choice(string.ascii_letters + string.digits+"-") for _ in range(22))
warnings.warn("If you were authorized, we would be launching the JDL here. You'd get this in return: {}".format(fake_link))
return fake_link
class LouiLauncher(JdlLauncher):
"""
To make integration tests of an AGLOW step, this job launches it on loui.
"""
def __init__(self, *args, **kwargs):
super(LouiLauncher,self).__init__(*args, **kwargs)
self.pid=None
self.return_directory = os.getcwd()
self.run_directory = tempfile.mkdtemp(prefix='/scratch/')
def launch(self, database=None):
copyfile(self.launch_file, self.run_directory+"/run_remote_sandbox.sh")
os.chdir(self.run_directory)
creds = pc()
if not database:
database = creds.database
command = "./run_remote_sandbox.sh {} {} {} {}".format(database,
creds.user, creds.password, self.token_type)
os.chmod('run_remote_sandbox.sh',0o744)
print("Running in folder: ")
print("")
print("Don't forget to run LouiLauncher.cleanup() in Pythonwhen you're done!")
print(self.run_directory)
with open(self.run_directory+"/stdout.txt","wb") as out:
with open(self.run_directory+"/stderr.txt","wb") as err:
launcher = SafePopen(command.split(), stdout=out, stderr=err)
self.pid = launcher.pid
launcher.wait()
return {'output':self.run_directory+"/stdout.txt",
'error':self.run_directory+"/stderr.txt"}
    def __check_authorized(self):
        # Local loui runs do not need a grid proxy, so the grid credential check is skipped.
        self.authorized = True
def cleanup(self):
print("removing directory " + self.run_directory)
rmtree(self.run_directory)
os.chdir(self.return_directory)
if self.pid:
os.kill(self.pid, signal.SIGKILL)
def __del__(self):
if os.path.exists(self.run_directory):
self.cleanup()
def __exit__(self, exc_type, exc_value, traceback):
        return None
// /IPython-Dashboard-0.1.5.tar.gz/IPython-Dashboard-0.1.5/dashboard/static/js/dash.vis.js
function genLineChart(timeFormat){
var chart = nv.models.lineWithFocusChart();
if (timeFormat == 1) {
chart.x(function(d){
return new Date(d.x);
});
chart.xScale = d3.time.scale;
chart.xAxis.tickFormat(function(d) {
return d3.time.format("%Y-%m-%d")(new Date(d))
});
}
chart.yAxis.tickFormat(d3.format(',.2f'));
chart.y2Axis.tickFormat(d3.format(',.2f'));
chart.useInteractiveGuideline(true);
// chart.brushExtent([-Infinity, Infinity]);
return chart;
}
function genPieChart(){
var chart = nv.models.pieChart()
.x(function(d) { return d.key })
.y(function(d) { return d.y })
.growOnHover(true)
.labelType('value')
.color(d3.scale.category20().range())
;
return chart;
}
function genAreaChart(){
var chart = nv.models.stackedAreaChart()
.useInteractiveGuideline(true)
.x(function(d) { return d[0] })
.y(function(d) { return d[1] })
.controlLabels({stacked: "Stacked"})
.duration(300);
;
return chart;
}
function genMultiBarChart(){
var chart = nv.models.multiBarChart()
.margin({ bottom: 30 })
.duration(300)
// .rotateLabels(45)
.groupSpacing(0.1)
.stacked(true)
;
return chart;
}
function renderChart(dom_id, chart, data){
var svg = d3.select(dom_id).datum(data);
svg.transition().duration(0).call(chart);
}
function getChart(type){
switch (type){
case "line": return genLineChart();
case "bar": return genMultiBarChart();
case "pie": return genPieChart();
case "area": return genAreaChart();
}
}
function validateData(type, data){
switch (type){
case "line": return validateLineData(data);
case "bar": return validateMultiBarData(data);
case "pie": return validatePieData(data);
case "area": return validateAreaData(data);
}
}
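// Illustrative sketch (hypothetical selector and data): the helpers above are meant to be
// chained -- pick a chart model, massage the stored series into the shape nvd3 expects,
// then bind it to an existing <svg> element:
//
//     var type = "pie";
//     var chart = getChart(type);
//     var data = validateData(type, rawSeries);  // rawSeries: [{key: ..., values: [{x: ..., y: ...}, ...]}]
//     renderChart("#value svg", chart, data);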
function validateLineData(data){
return data;
}
function validateMultiBarData(data){
// pattern: [{area: true, disabled: true, key: key, values: [{x: , y: }, ]},]
$.each(data, function(index, obj){
obj.area = true;
obj.disabled = false;
});
return data;
}
function validatePieData(data){
var formatData = [];
$.each(data[0].values, function(index, obj){
formatData.push({key: obj.x, y: obj.y});
});
data = formatData;
return data;
}
function validateAreaData(data){
var formatData = [];
$.each(data, function(index, obj){
var tmp = {};
tmp.key = obj.key;
var tmpValue = [];
$.each(obj.values, function(index, objValue){
tmpValue.push([objValue.x, objValue.y]);
});
tmp.values = tmpValue;
formatData.push(tmp);
});
data = formatData;
return data;
}
function xAxisTimeformat(chart){
chart.x(function(d){
return new Date(d.x);
});
chart.xScale = d3.time.scale;
chart.xAxis.tickFormat(function(d) {
return d3.time.format("%Y-%m-%d")(new Date(d))
});
}
function drawChartIntoGrid(type, graph_id){
var selector = strFormat("div.chart-graph[graph_id='{0}']", graph_id);
console.log(strFormat("###Ready to draw chart : {0}", type));
var modalData = store.get("modal");
var key = modalData.key;
var data = store.get(key);
//
if (type == 'table') {
parseTable(data, selector);
return true;
};
    // check data availability
    // use different js libs to do the drawing: nvd3, c3, d3, leafletjs
    // currently, only nvd3 is used to draw the basic graphs.
    // var chart = getChart(type);
    // clear existing content before creating the new content
    $.each($(selector)[0].children, function(index, obj){$(selector)[0].removeChild(obj)})
    // get the data to draw; the axes are defined in data-0.1.0.js as xyAxes
var xColumn = data[ modalData.option.x[0] ];
var chart = getChart(type);
if (xColumn[0][4] == '-' && type=='line'){
console.log('change x-axis time format');
xAxisTimeformat(chart);
}
var graphData = [];
$.each(modalData.option.y, function(index, obj){
var tmp = {};
var yColumn = data[obj];
tmp["key"] = obj;
tmp["values"] = [];
for (var index in xColumn){
tmp["values"].push({"x": xColumn[index], "y": yColumn[index]});
}
graphData.push(tmp);
});
// validate and transform data before draw it
graphData = validateData(type, graphData);
d3.select(selector).append('svg')
.datum(graphData)
.call(chart);
// register a resize event
nv.utils.windowResize(chart.update);
}
function initChart(type, graph_id){
var selector = strFormat("div.chart-graph[graph_id='{0}']", graph_id);
console.log(strFormat("###Ready to draw chart : {0}", type));
// var modalData = store.get("modal");
var current_dash = store.get(store.get("current-dash"));
var current_graph = current_dash.grid[graph_id];
var key = current_graph.key;
var data = store.get(key);
//
if (type == 'table') {
parseTable(data, selector);
return true;
};
    // check data availability
    // use different js libs to do the drawing: nvd3, c3, d3, leafletjs
    // currently, only nvd3 is used to draw the basic graphs.
    // var chart = getChart(type);
    // clear existing content before creating the new content
    $.each($(selector)[0].children, function(index, obj){$(selector)[0].removeChild(obj)})
    // get the data to draw; the axes are defined in data-0.1.0.js as xyAxes
var xColumn = data[ current_graph.option.x[0] ];
var chart = getChart(type);
if (xColumn[0][4] == '-' && type=='line'){
console.log('change x-axis time format');
xAxisTimeformat(chart);
}
var graphData = [];
$.each(current_graph.option.y, function(index, obj){
var tmp = {};
var yColumn = data[obj];
tmp["key"] = obj;
tmp["values"] = [];
for (var index in xColumn){
tmp["values"].push({"x": xColumn[index], "y": yColumn[index]});
}
graphData.push(tmp);
});
// validate and transform data before draw it
graphData = validateData(type, graphData);
d3.select(selector).append('svg')
.datum(graphData)
.call(chart);
// register a resize event
nv.utils.windowResize(chart.update);
}
function drawChartIntoModal(type){
var modalData = store.get("modal");
var key = modalData.key;
var data = store.get(key);
    // clear existing content before creating the new content
$.each($("#value")[0].children, function(index, obj){$("#value")[0].removeChild(obj)})
if (type == 'table') {
parseTable(data, "#value");
return true;
};
var xColumn = data[ modalData.option.x[0] ];
var chart = getChart(type);
if (xColumn[0][4] == '-' && type=='line'){
console.log('change x-axis time format');
xAxisTimeformat(chart);
}
var graphData = [];
$.each(modalData.option.y, function(index, obj){
var tmp = {};
var yColumn = data[obj];
tmp["key"] = obj;
tmp["values"] = [];
for (var index in xColumn){
tmp["values"].push({"x": xColumn[index], "y": yColumn[index]});
}
graphData.push(tmp);
});
// validate and transform data before draw it
graphData = validateData(type, graphData);
d3.select("#value").append('svg')
.datum(graphData)
.call(chart);
// register a resize event
nv.utils.windowResize(chart.update);
// update modal setting
modalData.type = type;
store.set("modal", modalData);
}
# /DNBC4tools-2.1.0.tar.gz/DNBC4tools-2.1.0/dnbc4tools/atac/decon.py
import os
import argparse
from typing import List, Dict
from dnbc4tools.tools.utils import str_mkdir,judgeFilexits,change_path,logging_call,read_json
from dnbc4tools.__init__ import __root_dir__
class Decon:
    def __init__(self, args: argparse.Namespace):
        """
        Constructor for the Decon class.
        Args:
        - args (argparse.Namespace): Parsed command-line arguments used to configure the Decon object.
        """
self.name: str = args.name
self.outdir: str = os.path.abspath(os.path.join(args.outdir, args.name))
self.threads: int = args.threads
self.genomeDir: str = os.path.abspath(args.genomeDir)
self.forcebeads: int = args.forcebeads
self.forcefrags: int = args.forcefrags
self.threshold: int = args.threshold
def run(self) -> None:
"""
Run the Decon algorithm.
"""
# Check if genomeDir exists
judgeFilexits(self.genomeDir)
# Create output and log directories
str_mkdir(f"{self.outdir}/02.decon")
str_mkdir(f"{self.outdir}/log")
# Change to the output directory
change_path()
# Read the genome directory configuration from ref.json
genomeDir = os.path.abspath(self.genomeDir)
indexConfig: Dict = read_json(f"{genomeDir}/ref.json")
blacklist: str = indexConfig['blacklist']
tss: str = indexConfig['tss']
chrmt: str = indexConfig['chrmt']
chromeSize: str = indexConfig['chromeSize']
# Construct the Decon command with the provided parameters
d2c_cmd: List[str] = [
f"{__root_dir__}/software/d2c/bin/d2c merge -i {self.outdir}/01.data/aln.bed --fb {self.threshold}",
f"-o {self.outdir}/02.decon -c {self.threads} -n {self.name} --bg {chromeSize} --ts {tss} --sat --bt1 CB",
f"--log {self.outdir}/02.decon"
]
# Add optional parameters if they are not None
if self.forcefrags:
d2c_cmd.append(f"--bf {self.forcefrags}")
if self.forcebeads:
d2c_cmd.append(f"--bp {self.forcebeads}")
if chrmt != 'None':
d2c_cmd.append(f"--mc {chrmt}")
if blacklist != 'None':
d2c_cmd.append(f"--bl {blacklist}")
# Join the command list into a single string and execute the command
d2c_cmd = ' '.join(d2c_cmd)
print('\nCell calling, Deconvolution.')
logging_call(d2c_cmd, 'decon', self.outdir)
def decon(args):
Decon(args).run()
def helpInfo_decon(parser):
parser.add_argument(
'--name',
metavar='NAME',
help='Sample name.',
type=str,
required=True
)
parser.add_argument(
'--outdir',
metavar='PATH',
        help='Output directory, [default: current directory].',
default=os.getcwd()
)
parser.add_argument(
'--forcefrags',
type=int,
metavar='INT',
help='Minimum number of fragments to be thresholded.'
)
parser.add_argument(
'--forcebeads',
type=int,
metavar='INT',
help='Top N number of beads to be thresholded.'
)
parser.add_argument(
'--threshold',
type=int,
metavar='INT',
default=20000,
help=argparse.SUPPRESS
)
parser.add_argument(
'--threads',
type=int,
metavar='INT',
default=4,
help='Number of threads used for the analysis, [default: 4].'
)
parser.add_argument(
'--genomeDir',
type=str,
metavar='PATH',
help='Path of folder containing reference database.',
required=True
)
    return parser
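# Illustrative sketch (hypothetical paths): wiring the helper parser to the Decon runner,
# mirroring how a command-line entry point would call these functions:
#
#     parser = argparse.ArgumentParser(description='Cell calling and deconvolution')
#     helpInfo_decon(parser)
#     args = parser.parse_args(['--name', 'sampleA', '--genomeDir', '/path/to/ref'])
#     decon(args)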
# /FiPy-3.4.4.tar.gz/FiPy-3.4.4/fipy/meshes/sphericalNonUniformGrid1D.py
from __future__ import unicode_literals
__docformat__ = 'restructuredtext'
from fipy.tools import numerix
from fipy.tools.dimensions.physicalField import PhysicalField
from fipy.tools import parallelComm
from fipy.meshes.nonUniformGrid1D import NonUniformGrid1D
__all__ = ["SphericalNonUniformGrid1D"]
from future.utils import text_to_native_str
__all__ = [text_to_native_str(n) for n in __all__]
class SphericalNonUniformGrid1D(NonUniformGrid1D):
"""
Creates a 1D spherical grid mesh.
>>> mesh = SphericalNonUniformGrid1D(nx = 3)
>>> print(mesh.cellCenters)
[[ 0.5 1.5 2.5]]
>>> mesh = SphericalNonUniformGrid1D(dx = (1, 2, 3))
>>> print(mesh.cellCenters)
[[ 0.5 2. 4.5]]
>>> print(numerix.allclose(mesh.cellVolumes, (0.5, 13., 94.5))) # doctest: +PROCESSOR_0
True
>>> mesh = SphericalNonUniformGrid1D(nx = 2, dx = (1, 2, 3))
Traceback (most recent call last):
...
IndexError: nx != len(dx)
>>> mesh = SphericalNonUniformGrid1D(nx=2, dx=(1., 2.)) + ((1.,),)
>>> print(mesh.cellCenters)
[[ 1.5 3. ]]
>>> print(numerix.allclose(mesh.cellVolumes, (3.5, 28))) # doctest: +PROCESSOR_0
True
"""
def __init__(self, dx=1., nx=None, origin=(0,), overlap=2, communicator=parallelComm, *args, **kwargs):
scale = PhysicalField(value=1, unit=PhysicalField(value=dx).unit)
self.origin = PhysicalField(value=origin)
self.origin /= scale
super(SphericalNonUniformGrid1D, self).__init__(dx=dx,
nx=nx,
overlap=overlap,
communicator=communicator,
*args,
**kwargs)
self.vertexCoords += origin
self.args['origin'] = origin
def _calcFaceCenters(self):
faceCenters = super(SphericalNonUniformGrid1D, self)._calcFaceCenters()
return faceCenters + self.origin
def _calcFaceAreas(self):
return self._calcFaceCenters()[0] * self._calcFaceCenters()[0]
def _calcCellVolumes(self):
return super(SphericalNonUniformGrid1D, self)._calcCellVolumes() / 2.
def _translate(self, vector):
return SphericalNonUniformGrid1D(dx=self.args['dx'], nx=self.args['nx'],
origin=numerix.array(self.args['origin']) + vector,
overlap=self.args['overlap'])
def __mul__(self, factor):
return SphericalNonUniformGrid1D(dx=self.args['dx'] * factor, nx=self.args['nx'],
origin=numerix.array(self.args['origin']) * factor,
overlap=self.args['overlap'])
def _test(self):
"""
These tests are not useful as documentation, but are here to ensure
everything works as expected. Fixed a bug where the following throws
an error on solve() when `nx` is a float.
>>> from fipy import CellVariable, DiffusionTerm
>>> mesh = SphericalNonUniformGrid1D(nx=3., dx=(1., 2., 3.))
>>> var = CellVariable(mesh=mesh)
>>> var.constrain(0., where=mesh.facesRight)
>>> DiffusionTerm().solve(var)
This test is for https://github.com/usnistgov/fipy/issues/372. Cell
volumes were being returned as `binOps` rather than arrays.
>>> m = SphericalNonUniformGrid1D(dx=(1., 2., 3., 4.), nx=4)
>>> print(isinstance(m.cellVolumes, numerix.ndarray))
True
>>> print(isinstance(m._faceAreas, numerix.ndarray))
True
If the above types aren't correct, the divergence operator's value can be a `binOp`
>>> print(isinstance(CellVariable(mesh=m).arithmeticFaceValue.divergence.value, numerix.ndarray))
True
"""
def _test():
import fipy.tests.doctestPlus
return fipy.tests.doctestPlus.testmod()
if __name__ == "__main__":
    _test()
# /FuzzingTool-3.14.0-py3-none-any.whl/fuzzingtool/utils/utils.py
from typing import List, Tuple, Union
from .consts import FUZZING_MARK, MAX_PAYLOAD_LENGTH_TO_OUTPUT
def get_indexes_to_parse(content: str,
search_for: str = FUZZING_MARK) -> List[int]:
"""Gets the indexes of the searched substring into a string content
@type content: str
@param content: The parameter content
@type search_for: str
@param search_for: The substring to be searched indexes
on the given content
@returns List[int]: The positions indexes of the searched substring
"""
return [i for i in range(len(content)) if content.startswith(search_for, i)]
def split_str_to_list(string: str,
separator: str = ',',
ignores: str = '\\') -> List[str]:
"""Split the given string into a list, using a separator
@type string: str
@param string: The string to be splited
@type separator: str
@param separator: A separator to split the string
@type ignores: str
    @param ignores: A string used to escape the separator
@returns List[str]: The splited string
"""
def split_with_ignores() -> List[str]:
"""Split the string with ignores and separator
@returns List[str]: The splited string
"""
final = []
buffer = ''
for substr in string.split(separator):
if substr and substr[-1] == ignores:
buffer += substr[:-1]+separator
else:
final.extend([buffer+substr])
buffer = ''
return final
if string:
if f'{ignores}{separator}' in string:
return split_with_ignores()
return string.split(separator)
return []
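# Illustrative examples (doctest-style, not executed):
#
#     >>> split_str_to_list("a,b,c")
#     ['a', 'b', 'c']
#     >>> split_str_to_list("a\\,b,c")   # '\\' escapes the separator
#     ['a,b', 'c']
#     >>> split_str_to_list("")
#     []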
def stringfy_list(one_list: list) -> str:
"""Stringfies a list
@type one_list: list
@param one_list: A list to be stringed
@returns str: The stringed list
"""
if not one_list:
return ''
output = ''
for i in range(len(one_list)-1):
output += f"{one_list[i]},"
output += one_list[-1]
return output
def parse_option_with_args(option: str) -> Tuple[str, str]:
"""Parse the option name into name and parameter
@type option: str
@param option: The option argument
@returns tuple[str, str]: The option name and parameter
"""
if '=' in option:
option, param = option.split('=', 1)
else:
param = ''
return (option, param)
def get_human_length(length: int) -> Tuple[Union[int, float], str]:
"""Get the human readable length from the result
@type length: int
@param length: The length of the response body
@returns Tuple[int|float, str]: The tuple with new length
and the readable order
"""
for order in ["B ", "KB", "MB", "GB"]:
if length < 1024:
return (length, order)
length /= 1024
return (length, "TB")
def get_formatted_rtt(rtt: float) -> Tuple[Union[int, float], str]:
"""Formats the rtt from the result to output
@type rtt: float
@param rtt: The elapsed time of a request
@returns Tuple[int|float, str]: The tuple with the formatted rtt
"""
if rtt < 1:
return (int(rtt*1000), "ms")
for order in ["s ", "m "]:
if rtt < 60:
return (rtt, order)
rtt /= 60
return (rtt, 'h ')
def fix_payload_to_output(payload: str) -> str:
"""Fix the payload's size
@type payload: str
@param payload: The payload used in the request
@returns str: The fixed payload to output
"""
if ' ' in payload:
payload = payload.replace(' ', ' ')
if len(payload) > MAX_PAYLOAD_LENGTH_TO_OUTPUT:
return f'{payload[:(MAX_PAYLOAD_LENGTH_TO_OUTPUT-3)]}...'
return payload
def check_range_list(content: str) -> List[Union[int, str]]:
"""Checks if the given content has a range list,
    and makes a list of the specified range
@type content: str
@param content: The string content to check for range
@returns List[int|str]: The list with the compiled content
"""
if '\\-' in content:
content = content.replace('\\-', '-')
elif '-' in content:
left, right = content.split('-', 1)
if not left or not right:
return [content]
try:
# Checks if the left and right digits from the mark are integers
int(left[-1])
int(right[0])
except ValueError:
content = _get_letter_range(left, right)
else:
content = _get_number_range(left, right)
return content
return [content]
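# Illustrative examples (doctest-style, not executed): numeric and alphabetic ranges are
# expanded in place, while an escaped dash is kept literal:
#
#     >>> check_range_list("a1-3b")
#     ['a1b', 'a2b', 'a3b']
#     >>> check_range_list("x-z")
#     ['x', 'y', 'z']
#     >>> check_range_list("no\\-range")
#     ['no-range']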
def _get_letter_range(left: str, right: str) -> List[str]:
"""Get the alphabet range list [a-z] [A-Z] [z-a] [Z-A]
@type left: str
@param left: The left string of the division mark
@type right: str
@param right: The right string of the division mark
@returns List[str]: The list with the range
"""
left_digit, left_str = left[-1], left[:-1]
right_digit, right_str = right[0], right[1:]
compiled_list = []
order_left_digit = ord(left_digit)
order_right_digit = ord(right_digit)
if order_left_digit <= order_right_digit:
range_list = range(order_left_digit, order_right_digit+1)
else:
range_list = range(order_left_digit, order_right_digit-1, -1)
for c in range_list:
compiled_list.append(
f"{left_str}{chr(c)}{right_str}"
)
return compiled_list
def _get_number_range(left: str, right: str) -> List[int]:
"""Get the number range list
@type left: str
@param left: The left string of the division mark
@type right: str
@param right: The right string of the division mark
@returns List[int]: The list with the range
"""
is_number = True
i = len(left)
while is_number and i > 0:
try:
int(left[i-1])
except ValueError:
is_number = False
else:
i -= 1
left_digit, left_str = int(left[i:]), left[:i]
is_number = True
i = 0
while is_number and i < (len(right)-1):
try:
int(right[i+1])
except ValueError:
is_number = False
else:
i += 1
right_digit, right_str = int(right[:(i+1)]), right[(i+1):]
compiled_list = []
if left_digit < right_digit:
range_list = range(left_digit, right_digit+1)
else:
range_list = range(left_digit, right_digit-1, -1)
for d in range_list:
compiled_list.append(
f"{left_str}{str(d)}{right_str}"
)
    return compiled_list
# /CAGMon-0.8.5-py3-none-any.whl/cagmon/melody.py
from cagmon.agrement import *
__author__ = 'Phil Jung <[email protected]>'
###------------------------------------------### Coefficients ###-------------------------------------------###
# PCC
def PCC(loaded_dataset, main_channel):
result_bin = dict()
aux_channels = [channel for channel in loaded_dataset['array'].keys()]
aux_channels.remove(main_channel)
hoft_data = loaded_dataset['array'][main_channel]
for aux_channel in aux_channels:
aux_data = loaded_dataset['array'][aux_channel]
if ((hoft_data**2).sum())**(0.5) == 0.0 or ((aux_data**2).sum())**(0.5) == 0.0:
R = 0.
print('PCC\n Channel: {0}\n Value: {1}'.format(aux_channel, R))
else:
hoft_data = hoft_data/((hoft_data**2).sum())**(0.5)
aux_data = aux_data/((aux_data**2).sum())**(0.5)
mx = aux_data.mean()
my = hoft_data.mean()
xm, ym = aux_data-mx, hoft_data-my
            if sum(xm)*sum(ym) != 0:
                R = abs(pearsonr(aux_data, hoft_data)[0])
            else:
                R = 0.
            if not np.isfinite(R):
                R = 0.
result_bin[aux_channel] = R
print('PCC\n Channel: {0}\n Value: {1}'.format(aux_channel, R))
return result_bin
# Kendall's tau
def Kendall(loaded_dataset, main_channel):
result_bin = dict()
aux_channels = [channel for channel in loaded_dataset['array'].keys()]
aux_channels.remove(main_channel)
hoft_data = loaded_dataset['array'][main_channel]
for aux_channel in aux_channels:
aux_data = loaded_dataset['array'][aux_channel]
tau = abs(kendalltau(aux_data, hoft_data)[0])
        if not np.isfinite(tau):
tau = 0.
result_bin[aux_channel] = tau
print('Kendall\n Channel: {0}\n Value: {1}'.format(aux_channel, tau))
return result_bin
# Estimate appropriate value of Alpha and c for MICe
def MICe_parameters(data_size):
NPOINTS_BINS = [1, 25, 50, 250, 500, 1000, 2500, 4000, 8000, 10000, 40000]
ALPHAS = [0.85, 0.80, 0.75, 0.70, 0.55, 0.5, 0.55, 0.55, 0.5, 0.45, 0.4]
CS = [5, 5, 5, 5, 7, 7, 6, 6, 0.7, 1, 1]
if data_size < 1:
raise ValueError("the number of data size must be >=1")
alpha = ALPHAS[np.digitize([data_size], NPOINTS_BINS)[0] - 1]
c = CS[np.digitize([data_size], NPOINTS_BINS)[0] - 1]
return alpha, c
# MICe for multiprocessing queue
def Queue_MIC(loaded_dataset, main_channel, aux_channel):
result_bin = list()
hoft_data = loaded_dataset['array'][main_channel]
aux_data = loaded_dataset['array'][aux_channel]
data_size = int(hoft_data.size)
alpha, c = MICe_parameters(data_size)
mine = minepy.MINE(alpha=alpha, c=c, est="mic_e")
mine.compute_score(aux_data, hoft_data)
mic_value = mine.mic()
print('MICe\n Channel: {0}\n Value: {1}'.format(aux_channel, mic_value))
result_bin.append([aux_channel, mic_value])
queue.put(result_bin)
# Calculate MICe parallely
def Parallel_MIC(loaded_dataset, main_channel):
number_of_cpus = cpu_count()
aux_channels = [channel for channel in loaded_dataset['array'].keys()]
aux_channels.remove(main_channel)
input_channel_list = list()
if len(aux_channels) <= number_of_cpus:
input_channel_list.append(aux_channels)
else:
for n in range(1+int(len(aux_channels)/number_of_cpus)):
if number_of_cpus*(n+1) < len(aux_channels):
input_channel_list.append(aux_channels[number_of_cpus*n : number_of_cpus*(n+1)])
elif number_of_cpus*(n+1) >= len(aux_channels):
input_channel_list.append(aux_channels[number_of_cpus*(n) : ])
data_list = list()
for channel_segment in input_channel_list:
procs = list()
for channel in channel_segment:
proc = Process(target=Queue_MIC, args=(loaded_dataset, main_channel, channel))
procs.append(proc)
for proc in procs:
proc.start()
for proc in procs:
gotten_data = queue.get()
data_list.extend(gotten_data)
for proc in procs:
proc.join()
result_bin = dict()
for row in data_list:
result_bin[row[0]] = row[1]
return result_bin
###------------------------------------------### Trend ###-------------------------------------------###
# Coefficient trend
def Coefficients_Trend(output_path, framefiles_path, aux_channels_file_path, gst, get, stride, sample_rate, preprocessing_options, main_channel):
if not output_path.split('/')[-1] == '':
output_path = output_path + '/'
dicts_bin = dict()
if sample_rate * stride < 1000:
        raise ValueError('These arguments are not available if the number of variables in a set is less than 1 000')
else:
segments = np.arange(gst, get, stride)
for segment in segments:
start = segment
end = start + stride
print('# Segment[{0}/{1}]: {2} - {3} (stride: {4} seconds)'.format(1+list(segments).index(segment), len(segments), start, end, stride))
cache = GWF_Glue(framefiles_path, start, end)
AuxChannels = Read_AuxChannels(main_channel, aux_channels_file_path)
loaded_dataset = Parallel_Load_data(cache, main_channel, AuxChannels, start, end, sample_rate, preprocessing_options)
print('Calculating PCC coefficients...')
PCC_dict = PCC(loaded_dataset, main_channel)
print('Calculating Kendall coefficients...')
Kendall_dict = Kendall(loaded_dataset, main_channel)
print('Calculating MICe coefficients...')
MIC_dict = Parallel_MIC(loaded_dataset, main_channel)
dicts_bin[start] = {'PCC_dict': PCC_dict,'Kendall_dict': Kendall_dict ,'MIC_dict': MIC_dict}
head = ['channel']
head.extend(sorted(dicts_bin.keys()))
PCC_trend_bin = [head]
Kendall_trend_bin = [head]
MIC_trend_bin = [head]
for row in AuxChannels:
aux_channel = row['name']
PCC_trend_row_bin = [aux_channel]
Kendall_trend_row_bin = [aux_channel]
MIC_trend_row_bin = [aux_channel]
for start in sorted(dicts_bin.keys()):
try:
PCC_value = dicts_bin[start]['PCC_dict'][aux_channel]
except KeyError:
PCC_value = 'nan'
try:
Kendall_value = dicts_bin[start]['Kendall_dict'][aux_channel]
except KeyError:
Kendall_value = 'nan'
try:
MIC_value = dicts_bin[start]['MIC_dict'][aux_channel]
except KeyError:
MIC_value = 'nan'
PCC_trend_row_bin.append(PCC_value)
Kendall_trend_row_bin.append(Kendall_value)
MIC_trend_row_bin.append(MIC_value)
PCC_trend_bin.append(PCC_trend_row_bin)
Kendall_trend_bin.append(Kendall_trend_row_bin)
MIC_trend_bin.append(MIC_trend_row_bin)
PCC_csv = open('{0}data/PCC_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel,int(stride)), 'w')
PCC_csvwriter = csv.writer(PCC_csv)
for row in PCC_trend_bin:
PCC_csvwriter.writerow(row)
PCC_csv.close()
Kendall_csv = open('{0}data/Kendall_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel, int(stride)), 'w')
Kendall_csvwriter = csv.writer(Kendall_csv)
for row in Kendall_trend_bin:
Kendall_csvwriter.writerow(row)
Kendall_csv.close()
MIC_csv = open('{0}data/MICe_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel, int(stride)), 'w')
MIC_csvwriter = csv.writer(MIC_csv)
for row in MIC_trend_bin:
MIC_csvwriter.writerow(row)
MIC_csv.close()
# Coefficient trend within active segments
def Coefficients_Trend_Segment(output_path, framefiles_path, aux_channels_file_path, segment, gst, get, stride, sample_rate, preprocessing_options, main_channel):
dicts_bin = dict()
flag = segment.active
flaged_segments = list()
if len(flag) == 1:
if float(gst) == float(flag[0][0]) and float(get) == float(flag[0][1]):
segments = np.arange(gst, get, stride)
for start in segments:
if flag[0][0] <= start and flag[0][1] >= start+stride and sample_rate * stride > 1000:
flaged_segments.append((start, start+stride, 'ON'))
elif sample_rate * stride < 1000:
raise ValueError('These arguments are not vailable if the number of valiables in a set is less than 1 000')
sys.exit()
else:
if float(gst) == flag[0][0]:
all_flag = [(flag[0][0],flag[0][1],'ON'),(flag[0][1],float(get),'OFF')]
elif float(get) == flag[0][1]:
all_flag = [(float(gst),flag[0][0],'OFF'),(flag[0][0],flag[0][1],'ON')]
else:
all_flag = [(float(gst),flag[0][0],'OFF'),(flag[0][0],flag[0][1],'ON'),(flag[0][1],float(get),'OFF')]
for item in all_flag:
segments = np.arange(item[0], item[1], stride)
status = item[2]
for start in segments:
if status == 'ON' and item[0] <= start and item[1] >= start+stride and sample_rate * stride > 1000:
flaged_segments.append((start, start+stride, 'ON'))
elif status == 'OFF' and item[0] <= start and item[1] >= start+stride and sample_rate * stride > 1000:
flaged_segments.append((start, start+stride, 'OFF'))
elif sample_rate * stride < 1000:
raise ValueError('These arguments are not vailable if the number of valiables in a set is less than 1 000')
elif len(flag) > 1:
all_flag = list()
if float(gst) == float(flag[0][0]) and float(get) == float(flag[-1][1]):
for i in range(len(flag)):
all_flag.append((flag[i][0], flag[i][1], 'ON'))
if i < len(flag)-1:
all_flag.append((flag[i][1], flag[i+1][0],'OFF'))
else:
if float(gst) != float(flag[0][0]):
all_flag.append((float(gst), flag[0][0],'OFF'))
for i in range(len(flag)):
all_flag.append((flag[i][0], flag[i][1], 'ON'))
if i < len(flag)-1:
all_flag.append((flag[i][1], flag[i+1][0],'OFF'))
if float(get) != float(flag[-1][1]):
all_flag.append((flag[-1][1], float(get),'OFF'))
for item in all_flag:
segments = np.arange(item[0], item[1], stride)
status = item[2]
for start in segments:
if status == 'ON' and item[0] <= start and item[1] >= start+stride and sample_rate * stride > 1000:
flaged_segments.append((start, start+stride, 'ON'))
elif status == 'OFF' and item[0] <= start and item[1] >= start+stride and sample_rate * stride > 1000:
flaged_segments.append((start, start+stride, 'OFF'))
elif sample_rate * stride < 1000:
raise ValueError('These arguments are not vailable if the number of valiables in a set is less than 1 000')
for flaged_segment in flaged_segments:
if flaged_segment[2] == 'ON':
start = flaged_segment[0]
end = flaged_segment[1]
print('# Segment[{0}/{1}]: {2} - {3} (stride: {4} seconds)'.format(1+list(flaged_segments).index(flaged_segment), len(flaged_segments), start, end, end-start))
print('# Flagged semgnet: active')
cache = GWF_Glue(framefiles_path, start, end)
AuxChannels = Read_AuxChannels(main_channel, aux_channels_file_path)
loaded_dataset = Parallel_Load_data(cache, main_channel, AuxChannels, start, end, sample_rate, preprocessing_options)
print('Calculating PCC coefficients...')
PCC_dict = PCC(loaded_dataset, main_channel)
print('Calculating Kendall coefficients...')
Kendall_dict = Kendall(loaded_dataset, main_channel)
print('Calculating MICe coefficients...')
MIC_dict = Parallel_MIC(loaded_dataset, main_channel)
dicts_bin[start] = {'PCC_dict': PCC_dict,'Kendall_dict': Kendall_dict ,'MIC_dict': MIC_dict}
elif flaged_segment[2] == 'OFF':
start = flaged_segment[0]
end = flaged_segment[1]
print('# Segment[{0}/{1}]: {2} - {3} (stride: {4} seconds)'.format(1+list(flaged_segments).index(flaged_segment), len(flaged_segments), start, end, end-start))
print('# Flaged semgnet: inactive')
PCC_dict = dict()
Kendall_dict = dict()
MIC_dict = dict()
AuxChannels = Read_AuxChannels(main_channel, aux_channels_file_path)
for AuxChannel in AuxChannels:
channel = AuxChannel['name']
PCC_dict[channel] = 'inactive'
Kendall_dict[channel] = 'inactive'
MIC_dict[channel] = 'inactive'
dicts_bin[start] = {'PCC_dict': PCC_dict,'Kendall_dict': Kendall_dict ,'MIC_dict': MIC_dict}
head = ['channel']
head.extend(sorted(dicts_bin.keys()))
PCC_trend_bin = [head]
Kendall_trend_bin = [head]
MIC_trend_bin = [head]
for row in AuxChannels:
aux_channel = row['name']
PCC_trend_row_bin = [aux_channel]
Kendall_trend_row_bin = [aux_channel]
MIC_trend_row_bin = [aux_channel]
for start in sorted(dicts_bin.keys()):
try:
PCC_value = dicts_bin[start]['PCC_dict'][aux_channel]
except KeyError:
PCC_value = 'nan'
try:
Kendall_value = dicts_bin[start]['Kendall_dict'][aux_channel]
except KeyError:
Kendall_value = 'nan'
try:
MIC_value = dicts_bin[start]['MIC_dict'][aux_channel]
except KeyError:
MIC_value = 'nan'
PCC_trend_row_bin.append(PCC_value)
Kendall_trend_row_bin.append(Kendall_value)
MIC_trend_row_bin.append(MIC_value)
PCC_trend_bin.append(PCC_trend_row_bin)
Kendall_trend_bin.append(Kendall_trend_row_bin)
MIC_trend_bin.append(MIC_trend_row_bin)
PCC_csv = open('{0}data/PCC_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel,int(stride)), 'w')
PCC_csvwriter = csv.writer(PCC_csv)
for row in PCC_trend_bin:
PCC_csvwriter.writerow(row)
PCC_csv.close()
Kendall_csv = open('{0}data/Kendall_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel, int(stride)), 'w')
Kendall_csvwriter = csv.writer(Kendall_csv)
for row in Kendall_trend_bin:
Kendall_csvwriter.writerow(row)
Kendall_csv.close()
MIC_csv = open('{0}data/MICe_trend_{1}-{2}_{3}-{4}.csv'.format(output_path, int(gst), int(get-gst), main_channel, int(stride)), 'w')
MIC_csvwriter = csv.writer(MIC_csv)
for row in MIC_trend_bin:
MIC_csvwriter.writerow(row)
    MIC_csv.close()
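# Illustrative sketch (all paths, times and the channel name are placeholders): computing
# the coefficient trends for one hour of data in 60-second strides; the PCC, Kendall and
# MICe trend CSVs are written under <output_path>/data/:
#
#     Coefficients_Trend(output_path='/home/user/cagmon_out',
#                        framefiles_path='/data/frames',
#                        aux_channels_file_path='/home/user/aux_channels.txt',
#                        gst=1238166018, get=1238169618, stride=60, sample_rate=512,
#                        preprocessing_options=options,  # whatever Parallel_Load_data expects
#                        main_channel='K1:MAIN_CHANNEL')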
# /DaMa_ML-1.0a0-py3-none-any.whl/dama/groups/postgres.py
from dama.abc.group import AbsGroup
from dama.utils.core import Shape
import numpy as np
from collections import OrderedDict
from dama.utils.decorators import cache
from dama.data.it import Iterator, BatchIterator
from psycopg2.extras import execute_values
import uuid
class Table(AbsGroup):
inblock = True
def __init__(self, conn, name=None, query_parts=None):
super(Table, self).__init__(conn)
self.name = name
if query_parts is None:
self.query_parts = {"columns": None, "slice": None}
else:
self.query_parts = query_parts
def __getitem__(self, item):
query_parts = self.query_parts.copy()
if isinstance(item, str):
query_parts["columns"] = [item]
return Table(self.conn, name=self.name, query_parts=query_parts)
elif isinstance(item, list) or isinstance(item, tuple):
it = Iterator(item)
if it.type_elem == int:
query_parts["slice"] = [slice(index, index + 1) for index in item]
elif it.type_elem == slice:
query_parts["slice"] = item
elif it.type_elem == str:
query_parts["columns"] = item
dtype = self.attrs.get("dtype", None)
return Table(self.conn, name=self.name, query_parts=query_parts).to_ndarray(dtype=dtype)
elif isinstance(item, int):
query_parts["slice"] = slice(item, item + 1)
dtype = self.attrs.get("dtype", None)
return Table(self.conn, name=self.name, query_parts=query_parts).to_ndarray(dtype=dtype)
elif isinstance(item, slice):
query_parts["slice"] = item
dtype = self.attrs.get("dtype", None)
return Table(self.conn, name=self.name, query_parts=query_parts).to_ndarray(dtype=dtype)
def __setitem__(self, item, value):
if hasattr(value, 'batch'):
value = value.batch
if isinstance(item, tuple):
if len(item) == 1:
stop = item[0].stop
start = item[0].start
else:
raise NotImplementedError
batch_size = abs(stop - start)
elif isinstance(item, slice):
stop = item.stop
start = item.start
batch_size = abs(stop - start)
elif isinstance(item, int):
stop = item + 1
if hasattr(value, '__len__'):
batch_size = len(value)
else:
batch_size = 1
last_id = self.last_id()
if last_id < stop:
self.insert(value, chunks=(batch_size, ))
else:
self.update(value, item)
def __iter__(self):
pass
def get_group(self, group):
return self[group]
def get_conn(self, group):
return self[group]
def insert(self, data, chunks=None):
if not isinstance(data, BatchIterator):
data = Iterator(data, dtypes=self.dtypes).batchs(chunks=chunks)
columns = "(" + ", ".join(self.groups) + ")"
insert_str = "INSERT INTO {name} {columns} VALUES".format(
name=self.name, columns=columns)
insert = insert_str + " %s"
cur = self.conn.cursor()
num_groups = len(data.groups)
for row in data:
shape = row.batch.shape.to_tuple()
if len(shape) == 1 and num_groups > 1:
value = row.batch.to_df.values # .to_ndarray().reshape(1, -1)
elif len(shape) == 1 and num_groups == 1:
value = row.batch.to_df().values # .to_ndarray().reshape(-1, 1)
else:
value = row.batch.to_df().values
execute_values(cur, insert, value, page_size=len(data))
self.conn.commit()
cur.close()
def update(self, value, item):
if isinstance(item, int):
columns_values = [[self.groups[0], value]]
columns_values = ["{col}={val}".format(col=col, val=val) for col, val in columns_values]
query = "UPDATE {name} SET {columns_val} WHERE ID = {id}".format(
name=self.name, columns_val=",".join(columns_values), id=item+1
)
cur = self.conn.cursor()
cur.execute(query)
self.conn.commit()
else:
raise NotImplementedError
def to_ndarray(self, dtype: np.dtype = None, chunksize=(258,)) -> np.ndarray:
if self.dtype is None:
return np.asarray([])
slice_item, _ = self.build_limit_info()
query, one_row = self.build_query()
cur = self.conn.cursor(uuid.uuid4().hex, scrollable=False, withhold=False)
cur.execute(query)
cur.itersize = chunksize[0]
if one_row:
cur.scroll(0)
else:
cur.scroll(slice_item.start)
array = np.empty(self.shape, dtype=self.dtype)
if len(self.groups) == 1:
for i, row in enumerate(cur):
array[i] = row[0]
else:
array[:] = cur.fetchall()
cur.close()
self.conn.commit()
if dtype is not None and self.dtype != dtype:
return array.astype(dtype)
else:
return array
def to_df(self):
pass
@property
@cache
def shape(self) -> Shape:
cur = self.conn.cursor()
slice_item, limit_txt = self.build_limit_info()
if limit_txt == "":
query = "SELECT COUNT(*) FROM {table_name}".format(table_name=self.name)
cur.execute(query)
length = cur.fetchone()[0]
else:
query = "SELECT Count(*) FROM (SELECT id FROM {table_name} LIMIT {limit} OFFSET {start}) as foo".format(
table_name=self.name, start=slice_item.start, limit=(abs(slice_item.stop - slice_item.start)))
cur.execute(query)
length = cur.fetchone()[0]
cur.close()
shape = OrderedDict([(group, (length,)) for group in self.groups])
return Shape(shape)
@property
@cache
def dtypes(self) -> np.dtype:
cur = self.conn.cursor()
query = "SELECT * FROM information_schema.columns WHERE table_name=%(table_name)s ORDER BY ordinal_position"
cur.execute(query, {"table_name": self.name})
dtypes = OrderedDict()
types = {"text": np.dtype("object"), "integer": np.dtype("int"),
"double precision": np.dtype("float"), "boolean": np.dtype("bool"),
"timestamp without time zone": np.dtype('datetime64[ns]')}
if self.query_parts["columns"] is not None:
for column in cur.fetchall():
if column[3] in self.query_parts["columns"]:
dtypes[column[3]] = types.get(column[7], np.dtype("object"))
else:
for column in cur.fetchall():
dtypes[column[3]] = types.get(column[7], np.dtype("object"))
cur.close()
if "id" in dtypes:
del dtypes["id"]
if len(dtypes) > 0:
return np.dtype(list(dtypes.items()))
def last_id(self):
cur = self.conn.cursor()
query = "SELECT last_value FROM {table_name}_id_seq".format(table_name=self.name)
cur.execute(query)
last_id = cur.fetchone()[0]
cur.close()
return last_id
def format_columns(self):
columns = self.query_parts["columns"]
if columns is None:
columns = [column for column, _ in self.dtypes]
return ",".join(columns)
def build_limit_info(self) -> tuple:
if isinstance(self.query_parts["slice"], list):
index_start = [index.start for index in self.query_parts["slice"]]
index_stop = [index.stop for index in self.query_parts["slice"]]
min_elem = min(index_start)
max_elem = max(index_stop)
return slice(min_elem, max_elem), "LIMIT {}".format(max_elem)
elif isinstance(self.query_parts["slice"], tuple):
item = self.query_parts["slice"][0]
else:
item = self.query_parts["slice"]
if item is None:
start = 0
stop = None
limit_txt = ""
else:
if item.start is None:
start = 0
else:
start = item.start
if item.stop is None:
limit_txt = ""
stop = None
else:
limit_txt = "LIMIT {}".format(item.stop)
stop = item.stop
return slice(start, stop), limit_txt
def build_query(self) -> tuple:
if isinstance(self.query_parts["slice"], list):
id_list = [index.start + 1 for index in self.query_parts["slice"]]
query = "SELECT {columns} FROM {table_name} WHERE ID IN ({id_list}) ORDER BY {order_by}".format(
columns=self.format_columns(), table_name=self.name, order_by="id",
id_list=",".join(map(str, id_list)))
one_row = True
else:
slice_item, limit_txt = self.build_limit_info()
query = "SELECT {columns} FROM {table_name} ORDER BY {order_by} {limit}".format(
columns=self.format_columns(), table_name=self.name, order_by="id",
limit=limit_txt)
one_row = False
        return query, one_row
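# Illustrative sketch (assumes an open psycopg2 connection `conn` and an existing table
# named 'train_data' with a serial 'id' column plus the data columns):
#
#     table = Table(conn, name='train_data')
#     table.insert(rows, chunks=(1000,))   # rows: anything Iterator() accepts
#     col = table['x0']                    # a single column name returns another Table view
#     block = table[0:100]                 # slicing materialises a numpy array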
# /AutoDiff-CS207-24-0.2.tar.gz/AutoDiff-CS207-24-0.2/AutoDiff/autodiff.py
import math
import numpy as np
#=====================================Elementary functions=====================================================#
def e(x):
#try:
return np.exp(x)
#except:
# return x.exp()
def sin(x):
#try:
return np.sin(x)
#except:
# return x.sin()
def arcsin(x):
#try:
return np.arcsin(x)
#except:
# return x.arcsin()
def sinh(x):
#try:
return np.sinh(x)
#except:
# return x.sinh()
def cos(x):
#try:
return np.cos(x)
#except:
# return x.cos()
def arccos(x):
#try:
return np.arccos(x)
#except:
# return x.arccos()
def cosh(x):
#try:
return np.cosh(x)
#except:
# return x.cosh()
def tan(x):
#try:
return np.tan(x)
#except:
# return x.tan()
def arctan(x):
#try:
return np.arctan(x)
#except:
# return x.arctan()
def tanh(x):
#try:
return np.tanh(x)
#except:
# return x.tanh()
#def ln(x):
# try:
# return np.log(x)
# except:
# return x.ln()
def log(x):
#try:
return np.log(x)
#except:
# return x.log()
def sigmoid(x, b_0=0, b_1=1):
#try:
return (1 / (1+np.exp(-(b_0 + b_1*x))))
#except:
# return x.sigmoid(b_0, b_1)
def sqrt(x):
#try:
return np.sqrt(x)
#except:
# return x.sqrt()
#=====================================AD_eval=====================================================#
class AD_eval():
def __init__(self, func_string, variable_label, init_value):
assert isinstance(func_string, str), "Input function must be a string"
multiple_variables = isinstance(variable_label, list)
# if we have multiple variables (x,y, ..etc)
if multiple_variables:
assert len(variable_label) == len(init_value), "Variable labels must be the same length as initial values"
for i in range(len(variable_label)):
assert isinstance(variable_label[i], str), "Variable label must be a string"
assert isinstance(init_value[i], (int, float)), "Input value must be numeric"
self.vars = {variable_label[i]: AD_Object(init_value[i], variable_label[i]) for i in range(len(variable_label))}
if 'exp(' in func_string:
raise NameError('Please use e(x) instead of exp(x) for exponential function')
for label in variable_label:
func_string = func_string.replace(label, "self.vars['%s']"%label)
# evaluate function using AD object
self.f = eval(func_string)
self.der = self.f.der
self.val = self.f.val
self.label = variable_label
else:
assert isinstance(variable_label, str), "Variable label must be a string"
assert isinstance(init_value, (int, float)), "Input value must be numeric"
self.x = AD_Object(init_value, variable_label)
if 'exp(' in func_string:
raise NameError('Please use e(x) instead of exp(x) for exponential function')
# evaluate function using AD object
self.f = eval(func_string.replace(variable_label, 'self.x'))
self.der = self.f.der
self.val = self.f.val
self.label = variable_label
def __repr__(self):
der_txt = ["d(%s)= %.3f ; "%(k, self.der[k]) for k in self.der]
return "AD Object: Value = %.3f, Derivative: %s"%(self.val, "".join(der_txt))
def derivative(self, label):
assert isinstance(label, str), "Input label must be string"
return self.der[label]
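# Illustrative sketch: evaluating a function given as a string at a point, for both the
# single-variable and the multi-variable form (the labels must match the variable names
# used inside the string):
#
#     f = AD_eval('3*x**2 + 5*x', 'x', 2.0)
#     f.val                 # 22.0
#     f.derivative('x')     # 17.0
#
#     g = AD_eval('x*y + x/y', ['x', 'y'], [1.0, 2.0])
#     g.derivative('y')     # partial derivative with respect to y at (1, 2)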
#=====================================AD_Vector=====================================================#
def AD_Vector(values, label): #Vector Input Values
assert hasattr(values, '__iter__'), "Input values must be iterable"
if type(label)==str :
return np.array([AD_Object(float(val), label) for val in values])
else :
return np.array([AD_Object(float(val), label[i]) for i,val in enumerate(values)])
def value(x):
if isinstance(x, AD_Object):
return x.val
elif hasattr(x, '__iter__'):
try: #for single function with vector input values
return np.array([k.val for k in x])
except: #for vector function with vector input values
temp = []
for k in x:
temp.append([l.val for l in k])
return temp
else:
raise TypeError ("Input must be AD_Object or array of AD_Objects")
def derivative(x, label):
assert isinstance(label, str), "Input label must be string"
if isinstance(x, AD_Object):
return x.der[label]
elif hasattr(x, '__iter__'):
try: #for single function with vector input values
return np.array([k.der[label] for k in x])
except: #for vector function with vector input values
temp = []
for k in x:
temp.append([l.der[label] for l in k])
return temp
else:
raise TypeError ("Input must be AD_Object or array of AD_Objects")
def jacobian(x,label):
if isinstance(x, AD_Object):
return np.array(list(x.der.values()))
elif hasattr(x, '__iter__'):
jacob=[]
for k in x :
if not isinstance(k, AD_Object):
raise TypeError ("Input must be AD_Object or array of AD_Objects")
df_i = []
for l in label :
try :
df_i.append(k.der[l])
except :
df_i.append(0)
jacob.append(np.array(df_i))
return np.array(jacob)
else:
raise TypeError ("Input must be AD_Object or array of AD_Objects")
def AD_FuncVector(func:list): #Vector Functions
assert hasattr(func, '__iter__'), "Input function must be iterable"
return [f for f in func]
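# Illustrative sketch: the vector helpers build numpy arrays of AD_Object and extract
# values, per-variable derivatives, or a Jacobian from them:
#
#     xs = AD_Vector([1.0, 2.0, 3.0], 'x')       # three AD_Objects sharing the label 'x'
#     fs = xs**2 + 2*xs                          # element-wise evaluation
#     value(fs)                                  # array of function values
#     derivative(fs, 'x')                        # array of df/dx values
#
#     x, y = AD_Vector([1.0, 2.0], ['x', 'y'])   # one AD_Object per label
#     jacobian(AD_FuncVector([x*y, x + y]), ['x', 'y'])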
class AD_Object():
def __init__(self, value, label, der_initial=1):
assert isinstance(value, (int, float, np.number)), "Input value must be numeric"
self.val = value
if isinstance(label, dict):
self.label = label
self.der = der_initial
elif isinstance(label, str):
if isinstance(der_initial, (float, int)):
self.label = {label: label}
self.der = {label: der_initial}
else:
raise TypeError("der_initial must be numerical")
else:
raise TypeError("label must be string")
def __repr__(self):
der_txt = ["d(%s)= %.3f ; "%(k, self.der[k]) for k in self.der]
return "AD Object: Value = %.3f, Derivative: %s"%(self.val, "".join(der_txt))
def derivative(self, label):
assert isinstance(label, str), "Input label must be string"
return self.der[label]
def __neg__(self):
return AD_Object(-1*self.val, self.label, {k: (-1*self.der[k]) for k in self.der})
def __radd__(self, other):
return AD_Object.__add__(self, other)
def __add__(self, other):
if isinstance(other, AD_Object):
value = self.val + other.val
der = dict()
label = dict()
for key in self.der:
der[key] = (self.der[key] + other.der[key]) if (key in other.der) else self.der[key]
label[key] = self.label[key]
for key in other.der:
if key not in der:
der[key] = other.der[key]
label[key] = other.label[key]
return AD_Object(value, label, der)
#-----
return AD_Object(self.val+other, self.label, self.der)
def __rsub__(self, other):
return AD_Object(other-self.val, self.label, {k: -1*self.der[k] for k in self.der})
def __sub__(self, other):
if isinstance(other, AD_Object):
value = self.val - other.val
der = dict()
label = dict()
for key in self.der:
der[key] = (self.der[key] - other.der[key]) if (key in other.der) else self.der[key]
label[key] = self.label[key]
for key in other.der:
if key not in der:
der[key] = other.der[key]
label[key] = other.label[key]
return AD_Object(value, label, der)
#-----
return AD_Object(self.val-other, self.label, self.der)
def productrule(self, other, key): # both self and other are autodiff objects
return (other.val*self.der[key] + self.val*other.der[key]) if (key in other.der) else (other.val*self.der[key])
def __rmul__(self, other):
return AD_Object.__mul__(self, other)
def __mul__(self, other):
if isinstance(other, AD_Object):
value = self.val * other.val
der = dict()
label = dict()
for key in self.der:
der[key] = self.productrule(other, key)
label[key] = self.label[key]
for key in other.der:
if key not in der:
der[key] = other.productrule(self, key)
label[key] = other.label[key]
return AD_Object(value, label, der)
#-----
return AD_Object(other*self.val, self.label, {k: other*self.der[k] for k in self.der})
def quotientrule(self, other, key): # both self and other are autodiff objects, and the function is self / other
return ((other.val*self.der[key] - self.val*other.der[key])/(other.val**2)) if (key in other.der) else (self.der[key]/other.val)
def __truediv__(self, other):
if isinstance(other, AD_Object):
if other.val == 0:
raise ValueError('Cannot divide by 0')
value = self.val/other.val
der = dict()
label = dict()
for key in self.der:
der[key] = self.quotientrule(other, key)
label[key] = self.label[key]
for key in other.der:
if key not in der:
der[key] = ((-self.val * other.der[key])/(other.val**2))
label[key] = other.label[key]
return AD_Object(value, label, der)
#-----
if other == 0:
raise ValueError('Cannot divide by 0')
return AD_Object(self.val/other, self.label, {k: self.der[k]/other for k in self.der})
def __rtruediv__(self, other):
#when other is a constant, e.g. f(x) = 2/x = 2*x^-1 -> f'(x) = -2/(x^-2)
if self.val == 0:
raise ValueError('Cannot divide by 0')
return AD_Object(other/self.val, self.label, {k: ((-other * self.der[k])/(self.val**2)) for k in self.der})
def powerrule(self, other, key):
# for when both self and other are autodiff objects
# in general, if f(x) = u(x)^v(x) -> f'(x) = u(x)^v(x) * [ln(u(x)) * v(x)]'
if self.val == 0:
return 0
return self.val**other.val * other.productrule(self.log(), key)
def __pow__(self, other):
# when both self and other are autodiff object, implement the powerrule
if isinstance(other, AD_Object):
value = self.val**other.val
der = dict()
label = dict()
for key in self.der:
if key in other.label:
der[key] = self.powerrule(other, key)
label[key] = self.label[key]
else:
                    der[key] = other.val * (self.val ** (other.val - 1)) * self.der[key]
                    label[key] = self.label[key]
for key in other.der:
if key in der:
continue # skip the variables already in der{}
# The following code will only be run when ohter.key not in self.key
# for example: f = x ** y
                der[key] = self.val**other.val * np.log(self.val) * other.der[key]  # u^v -> u^v * ln(u) * v'
label[key] = other.label[key]
return AD_Object(value, label, der)
# when the input for 'other' is a constant
return AD_Object(self.val**other, self.label, {k: (other * (self.val ** (other-1)) * self.der[k]) for k in self.der})
def __rpow__(self, other):
# when other is a constant, e.g. f(x) = 2^x -> f'(x) = 2^x * ln(2)
if other == 0:
return AD_Object(other**self.val, self.label, {k: 0*self.der[k] for k in self.der})
#------
return AD_Object(other**self.val, self.label, {k: (other**self.val * np.log(other) * self.der[k]) for k in self.der})
def sqrt(self):
return AD_Object(np.sqrt(self.val), self.label, {k: ( (1 / (2*np.sqrt(self.val)) ) * self.der[k]) for k in self.der})
def exp(self):
return AD_Object(np.exp(self.val), self.label, {k: (np.exp(self.val) * self.der[k]) for k in self.der})
def log(self):
if (self.val) <= 0:
raise ValueError('log only takes positive number')
return AD_Object(np.log(self.val), self.label, {k: ((1/self.val)*self.der[k]) for k in self.der})
# def log(self, base=math.e):
# if (self.val) <= 0:
# raise ValueError('log only takes positive number')
# if base <= 0:
# raise ValueError('log base must be a positive number')
# return AD_Object(math.log(self.val, base), self.label, {k: ((1/(self.val*math.log(base)))*self.der[k]) for k in self.der})
def sin(self):
return AD_Object(np.sin(self.val), self.label, {k: (np.cos(self.val) * self.der[k]) for k in self.der})
def arcsin(self):
return AD_Object(np.arcsin(self.val), self.label, {k: ((1 / np.sqrt(1 - self.val**2)) * self.der[k]) for k in self.der})
def sinh(self):
return AD_Object(np.sinh(self.val), self.label, {k: (np.cosh(self.val) * self.der[k]) for k in self.der})
def cos(self):
return AD_Object(np.cos(self.val), self.label, {k: (-1 * np.sin(self.val) * self.der[k]) for k in self.der})
def arccos(self):
return AD_Object(np.arccos(self.val), self.label, {k: ((-1 / np.sqrt(1 - self.val**2)) * self.der[k]) for k in self.der})
def cosh(self):
return AD_Object(np.cosh(self.val), self.label, {k: (np.sinh(self.val) * self.der[k]) for k in self.der})
def tan(self):
return AD_Object(np.tan(self.val), self.label, {k: (self.der[k] / np.cos(self.val)**2) for k in self.der})
def arctan(self):
return AD_Object(np.arctan(self.val), self.label, {k: ((1 / (1 + self.val**2)) * self.der[k]) for k in self.der})
def tanh(self):
return AD_Object(np.tanh(self.val), self.label, {k: ((2 / (1 + np.cosh(2*self.val))) * self.der[k]) for k in self.der})
def sigmoid(self, b_0=1, b_1=1):
def calc_s(x, b_0, b_1):
            # Sigmoid/Logistic = 1 / (1 + exp(-(b_0 + b_1*x)))
return (1 / (1+np.exp(-(b_0 + b_1*x))))
return AD_Object(calc_s(self.val, b_0, b_1), self.label, {k: ((calc_s(self.val, b_0, b_1)*(1-calc_s(self.val, b_0, b_1))) * self.der[k]) for k in self.der})
def __eq__(self, other):
assert isinstance(other, AD_Object), "Input must be an AD_object"
#check function value
if self.val != other.val:
return False
        #check input variable ('label')
        self_label = sorted(self.label.keys())
        other_label = sorted(other.label.keys())
        if self_label != other_label:
            return False
        #check derivative of each input variable
        for k in self_label:
            if self.der[k] != other.der[k]:
                return False
#if it passed all the checks above, return True
return True
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other): #this only compares the function value
assert isinstance(other, AD_Object), "Input must be an AD_object"
return (self.val < other.val)
def __gt__(self, other): #this only compares the function value
assert isinstance(other, AD_Object), "Input must be an AD_object"
return (self.val > other.val)
def __le__(self, other): #this only compares the function value
assert isinstance(other, AD_Object), "Input must be an AD_object"
return (self.val <= other.val)
def __ge__(self, other): #this only compares the function value
assert isinstance(other, AD_Object), "Input must be an AD_object"
return (self.val >= other.val) | PypiClean |
/GeCO-1.0.7.tar.gz/GeCO-1.0.7/geco/mips/loading/miplib.py | import tempfile
from urllib.request import urlretrieve, urlopen
from urllib.error import URLError
import pyscipopt as scip
import os
import pandas as pd
class Loader:
def __init__(self, persistent_directory=None):
"""
Initializes the MIPLIB loader object
Parameters
----------
persistent_directory: str or None
            Path of the directory to use for persistent files.
            If set to None, temporary files are used instead and are
            deleted after program execution.
"""
self.instances_cache = {}
self.dir = persistent_directory
if persistent_directory:
self._load_instances_cache()
def load_instance(self, instance_name, with_solution=False):
if not self._instance_cached(instance_name):
self._download_instance(instance_name)
problem_path = self._instance_path(instance_name)
model = scip.Model()
model.readProblem(problem_path)
if with_solution:
self._add_solution(model, instance_name)
return model
def _instance_path(self, instance_name):
return self.instances_cache[instance_name]
def _generate_path_for_instance(self, instance_name):
if self.dir:
return self.dir + instance_name
else:
extension = instance_name[instance_name.index(".") :]
return tempfile.NamedTemporaryFile(suffix=extension, delete=False).name
def _download_instance(self, instance_name):
path = self._generate_path_for_instance(instance_name)
url = self._look_for_working_url(self._instance_urls(instance_name))
if url:
urlretrieve(url, path)
self.instances_cache[instance_name] = path
else:
raise ValueError(
"Was not able to find the instance in any of the MIPLIB sources"
)
def _look_for_working_url(self, urls):
for url in urls:
try:
response = urlopen(url)
except URLError:
continue
if self._successful_response(response):
return url
return None
@staticmethod
def _successful_response(response):
return response.status == 200 and "not_found" not in response.url
def _instance_cached(self, instance_name):
return instance_name in self.instances_cache
def _load_instances_cache(self):
for path in os.listdir(self.dir):
if path.endswith(".mps.gz"):
instance_name = path.split("/")[-1]
self.instances_cache[instance_name] = self.dir + path
def _add_solution(self, model, instance_name):
url = self._look_for_working_url(self._solution_urls(instance_name))
if url:
with tempfile.NamedTemporaryFile(suffix=".sol.gz") as sol_file:
urlretrieve(url, sol_file.name)
model.readSol(sol_file.name)
else:
raise ValueError(
"Was not able to find the solution in any of the MIPLIB sources"
)
@staticmethod
def _instance_urls(instance_name):
return [
f"https://miplib.zib.de/WebData/instances/{instance_name}", # 2017 instances
f"http://miplib2010.zib.de/download/{instance_name}", # 2010 instances
f"http://miplib2010.zib.de/miplib2003/download/{instance_name}", # 2003 instance
]
@staticmethod
def _solution_urls(instance_name):
name = instance_name[: instance_name.index(".")]
return [
f"https://miplib.zib.de/downloads/solutions/{name}/1/{name}.sol.gz", # 2017 solutions
f"http://miplib2010.zib.de/download/{name}.sol.gz", # 2010 solutions
f"http://miplib2010.zib.de/miplib2003/download/{name}.sol.gz", # 2003 solutions
]
def __del__(self):
if self.dir is None:
for path in self.instances_cache.values():
os.unlink(path)
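# Minimal usage sketch for the Loader above (the instance name "30n20b8.mps.gz" is
# only illustrative; any MIPLIB instance name resolvable through the URLs in
# _instance_urls would work, and network access to the MIPLIB servers is assumed):
#
#   loader = Loader(persistent_directory="./miplib_cache/")
#   model = loader.load_instance("30n20b8.mps.gz", with_solution=False)
#   model.optimize()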
def benchmark_instances():
for instance in custom_list("https://miplib.zib.de/downloads/benchmark-v2.test"):
yield instance
def easy_instances():
for instance in custom_list("https://miplib.zib.de/downloads/easy-v9.test"):
yield instance
def hard_instances():
for instance in custom_list("https://miplib.zib.de/downloads/hard-v15.test"):
yield instance
def open_instances():
for instance in custom_list("https://miplib.zib.de/downloads/open-v14.test"):
yield instance
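# Sketch of iterating over one of the instance lists above; each yielded item is a
# pyscipopt Model read from the downloaded file (downloading every instance can be
# slow, so this sketch only touches the first one):
#
#   for model in benchmark_instances():
#       model.optimize()
#       break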
def custom_list(source, with_solution=False, loader=None):
"""
Returns a generator of instances from the given list
Parameters
----------
source: str
Path or URL for the instance list source
with_solution: bool
Whether to return the instance with the known solutions or not
loader: Loader
Loader object to download instances with
Returns
-------
A generator for the instances
"""
df = pd.read_csv(source, names=["instance"])
if loader is None:
loader = Loader()
for instance in df["instance"]:
yield loader.load_instance(instance, with_solution=with_solution) | PypiClean |
/Nipo-0.0.1.tar.gz/Nipo-0.0.1/markupsafe/_constants.py | HTML_ENTITIES = {
"AElig": 198,
"Aacute": 193,
"Acirc": 194,
"Agrave": 192,
"Alpha": 913,
"Aring": 197,
"Atilde": 195,
"Auml": 196,
"Beta": 914,
"Ccedil": 199,
"Chi": 935,
"Dagger": 8225,
"Delta": 916,
"ETH": 208,
"Eacute": 201,
"Ecirc": 202,
"Egrave": 200,
"Epsilon": 917,
"Eta": 919,
"Euml": 203,
"Gamma": 915,
"Iacute": 205,
"Icirc": 206,
"Igrave": 204,
"Iota": 921,
"Iuml": 207,
"Kappa": 922,
"Lambda": 923,
"Mu": 924,
"Ntilde": 209,
"Nu": 925,
"OElig": 338,
"Oacute": 211,
"Ocirc": 212,
"Ograve": 210,
"Omega": 937,
"Omicron": 927,
"Oslash": 216,
"Otilde": 213,
"Ouml": 214,
"Phi": 934,
"Pi": 928,
"Prime": 8243,
"Psi": 936,
"Rho": 929,
"Scaron": 352,
"Sigma": 931,
"THORN": 222,
"Tau": 932,
"Theta": 920,
"Uacute": 218,
"Ucirc": 219,
"Ugrave": 217,
"Upsilon": 933,
"Uuml": 220,
"Xi": 926,
"Yacute": 221,
"Yuml": 376,
"Zeta": 918,
"aacute": 225,
"acirc": 226,
"acute": 180,
"aelig": 230,
"agrave": 224,
"alefsym": 8501,
"alpha": 945,
"amp": 38,
"and": 8743,
"ang": 8736,
"apos": 39,
"aring": 229,
"asymp": 8776,
"atilde": 227,
"auml": 228,
"bdquo": 8222,
"beta": 946,
"brvbar": 166,
"bull": 8226,
"cap": 8745,
"ccedil": 231,
"cedil": 184,
"cent": 162,
"chi": 967,
"circ": 710,
"clubs": 9827,
"cong": 8773,
"copy": 169,
"crarr": 8629,
"cup": 8746,
"curren": 164,
"dArr": 8659,
"dagger": 8224,
"darr": 8595,
"deg": 176,
"delta": 948,
"diams": 9830,
"divide": 247,
"eacute": 233,
"ecirc": 234,
"egrave": 232,
"empty": 8709,
"emsp": 8195,
"ensp": 8194,
"epsilon": 949,
"equiv": 8801,
"eta": 951,
"eth": 240,
"euml": 235,
"euro": 8364,
"exist": 8707,
"fnof": 402,
"forall": 8704,
"frac12": 189,
"frac14": 188,
"frac34": 190,
"frasl": 8260,
"gamma": 947,
"ge": 8805,
"gt": 62,
"hArr": 8660,
"harr": 8596,
"hearts": 9829,
"hellip": 8230,
"iacute": 237,
"icirc": 238,
"iexcl": 161,
"igrave": 236,
"image": 8465,
"infin": 8734,
"int": 8747,
"iota": 953,
"iquest": 191,
"isin": 8712,
"iuml": 239,
"kappa": 954,
"lArr": 8656,
"lambda": 955,
"lang": 9001,
"laquo": 171,
"larr": 8592,
"lceil": 8968,
"ldquo": 8220,
"le": 8804,
"lfloor": 8970,
"lowast": 8727,
"loz": 9674,
"lrm": 8206,
"lsaquo": 8249,
"lsquo": 8216,
"lt": 60,
"macr": 175,
"mdash": 8212,
"micro": 181,
"middot": 183,
"minus": 8722,
"mu": 956,
"nabla": 8711,
"nbsp": 160,
"ndash": 8211,
"ne": 8800,
"ni": 8715,
"not": 172,
"notin": 8713,
"nsub": 8836,
"ntilde": 241,
"nu": 957,
"oacute": 243,
"ocirc": 244,
"oelig": 339,
"ograve": 242,
"oline": 8254,
"omega": 969,
"omicron": 959,
"oplus": 8853,
"or": 8744,
"ordf": 170,
"ordm": 186,
"oslash": 248,
"otilde": 245,
"otimes": 8855,
"ouml": 246,
"para": 182,
"part": 8706,
"permil": 8240,
"perp": 8869,
"phi": 966,
"pi": 960,
"piv": 982,
"plusmn": 177,
"pound": 163,
"prime": 8242,
"prod": 8719,
"prop": 8733,
"psi": 968,
"quot": 34,
"rArr": 8658,
"radic": 8730,
"rang": 9002,
"raquo": 187,
"rarr": 8594,
"rceil": 8969,
"rdquo": 8221,
"real": 8476,
"reg": 174,
"rfloor": 8971,
"rho": 961,
"rlm": 8207,
"rsaquo": 8250,
"rsquo": 8217,
"sbquo": 8218,
"scaron": 353,
"sdot": 8901,
"sect": 167,
"shy": 173,
"sigma": 963,
"sigmaf": 962,
"sim": 8764,
"spades": 9824,
"sub": 8834,
"sube": 8838,
"sum": 8721,
"sup": 8835,
"sup1": 185,
"sup2": 178,
"sup3": 179,
"supe": 8839,
"szlig": 223,
"tau": 964,
"there4": 8756,
"theta": 952,
"thetasym": 977,
"thinsp": 8201,
"thorn": 254,
"tilde": 732,
"times": 215,
"trade": 8482,
"uArr": 8657,
"uacute": 250,
"uarr": 8593,
"ucirc": 251,
"ugrave": 249,
"uml": 168,
"upsih": 978,
"upsilon": 965,
"uuml": 252,
"weierp": 8472,
"xi": 958,
"yacute": 253,
"yen": 165,
"yuml": 255,
"zeta": 950,
"zwj": 8205,
"zwnj": 8204,
} | PypiClean |
/MSM_PELE-1.1.1-py3-none-any.whl/AdaptivePELE/AdaptivePELE/analysis/backtrackAdaptiveTrajectory.py | from __future__ import print_function
import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
from AdaptivePELE.atomset import atomset
try:
basestring
except NameError:
basestring = str
def parseArguments():
"""
Parse the command-line options
    :returns: int, int, str, str, str, str -- trajectory number, snapshot
        number, epoch path, output path where to write the files, output
        file name and topology file
"""
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
parser.add_argument("--name", type=str, default="pathway.pdb", help="Name of the pdb to write the files")
parser.add_argument("--top", type=str, default=None, help="Name of the pdb topology for loading non-pdb trajectories")
args = parser.parse_args()
return args.trajectory, args.snapshot, args.epoch, args.o, args.name, args.top
def main(trajectory, snapshot, epoch, outputPath, out_filename, topology):
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
if topology is not None:
topology_contents = utilities.getTopologyFile(topology)
else:
topology_contents = None
if os.path.exists(outputPath+out_filename):
# If the specified name exists, append a number to distinguish the files
name, ext = os.path.splitext(out_filename)
out_filename = "".join([name, "_%d", ext])
i = 1
while os.path.exists(outputPath+out_filename % i):
i += 1
out_filename %= i
pathway = []
    # Strip out the trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while True:
filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.*" % trajectory))
snapshots = utilities.getSnapshots(filename[0], topology=topology)
if not isinstance(snapshots[0], basestring):
new_snapshots = []
for i in range(snapshot+1):
                frame = snapshots.slice(i, copy=False)
                PDB = atomset.PDB()
                PDB.initialise(frame, topology=topology_contents)
new_snapshots.append(PDB.get_pdb_string())
snapshots = new_snapshots
else:
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
if epoch == '0':
# Once we get to epoch 0, we just need to append the trajectory
# where the cluster was found and we can break out of the loop
break
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+out_filename, "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
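# Illustrative command line (paths and numbers are placeholders): backtrack trajectory 3,
# accepted step 10, starting from the epoch directory "simulation/4":
#
#   python backtrackAdaptiveTrajectory.py simulation/4 3 10 -o pathways --name pathway.pdb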
if __name__ == "__main__":
traj, num_snapshot, num_epoch, output_path, output_filename, top = parseArguments()
main(traj, num_snapshot, num_epoch, output_path, output_filename, top) | PypiClean |
/EOxServer-1.2.12-py3-none-any.whl/eoxserver/services/exceptions.py |
class HTTPMethodNotAllowedError(Exception):
""" This exception is raised in case of a HTTP requires with unsupported
HTTP method.
This exception should always lead to the 405 Method not allowed HTTP error.
The constructor takes two arguments, the error message ``mgs`` and the list
of the accepted HTTP methods ``allowed_methods``.
"""
def __init__(self, msg, allowed_methods):
super(HTTPMethodNotAllowedError, self).__init__(msg)
self.allowed_methods = allowed_methods
class InvalidRequestException(Exception):
"""
This exception indicates that the request was invalid and an exception
report shall be returned to the client.
The constructor takes three arguments, namely ``msg``, the error message,
``code``, the error code, and ``locator``, which is needed in OWS
exception reports for indicating which part of the request produced the
error.
How exactly the exception reports are constructed is not defined by the
exception, but by exception handlers.
"""
def __init__(self, msg, code=None, locator=None):
super(InvalidRequestException, self).__init__(msg)
self.code = code or "InvalidRequest"
self.locator = locator
def __str__(self):
return "Invalid Request: Code: %s; Locator: %s; Message: '%s'" % (
self.code, self.locator,
super(InvalidRequestException, self).__str__()
)
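# Usage sketch: operation handlers raise this to produce an OWS exception report
# (the code and locator values below are illustrative):
#
#   raise InvalidRequestException("Missing 'coverageid' parameter.",
#                                 code="MissingParameterValue", locator="coverageid")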
class VersionNegotiationException(Exception):
"""
    This exception indicates that version negotiation failed. Such errors can
    happen with OWS 2.0 compliant "new-style" version negotiation.
"""
code = "VersionNegotiationFailed"
def __str__(self):
return "Version negotiation failed."
class LocatorListException(Exception):
""" Base class for exceptions that report that a number of items are
missing or invalid
"""
def __init__(self, items):
self.items = items
@property
def locator(self):
"This property provides a list of all missing/invalid items."
return " ".join(self.items)
class InvalidAxisLabelException(Exception):
"""
This exception indicates that an invalid axis name was chosen in a WCS
2.0 subsetting parameter.
"""
code = "InvalidAxisLabel"
def __init__(self, axis_label):
super(InvalidAxisLabelException, self).__init__(
"Invalid axis label: '%s'." % axis_label
)
self.locator = axis_label
class InvalidSubsettingException(Exception):
"""
This exception indicates an invalid WCS 2.0 subsetting parameter was
submitted.
"""
code = "InvalidSubsetting"
locator = "subset"
class InvalidSubsettingCrsException(Exception):
"""
This exception indicates an invalid WCS 2.0 subsettingCrs parameter was
submitted.
"""
code = "SubsettingCrs-NotSupported"
locator = "subsettingCrs"
class InvalidOutputCrsException(Exception):
"""
This exception indicates an invalid WCS 2.0 outputCrs parameter was
submitted.
"""
code = "OutputCrs-NotSupported"
locator = "outputCrs"
class NoSuchCoverageException(LocatorListException):
""" This exception indicates that the requested coverage(s) do not
exist.
"""
code = "NoSuchCoverage"
def __str__(self):
return "No such Coverage%s with ID: %s" % (
"" if len(self.items) == 1 else "s",
", ".join(map(lambda i: "'%s'" % i, self.items))
)
class NoSuchDatasetSeriesOrCoverageException(LocatorListException):
""" This exception indicates that the requested coverage(s) or dataset
series do not exist.
"""
code = "NoSuchDatasetSeriesOrCoverage"
def __str__(self):
return "No such Coverage%s or Dataset Series with ID: %s" % (
" " if len(self.items) == 1 else "s",
", ".join(map(lambda i: "'%s'" % i, self.items))
)
class OperationNotSupportedException(Exception):
""" Exception to be thrown when some operations are not supported or
disabled.
"""
def __init__(self, message, operation=None):
super(OperationNotSupportedException, self).__init__(message)
self.operation = operation
@property
def locator(self):
return self.operation
code = "OperationNotSupported"
class ServiceNotSupportedException(OperationNotSupportedException):
""" Exception to be thrown when a specific OWS service is not enabled.
"""
def __init__(self, service):
self.service = service
def __str__(self):
if self.service:
return "Service '%s' is not supported." % self.service.upper()
else:
return "Service is not supported."
class VersionNotSupportedException(Exception):
""" Exception to be thrown when a specific OWS service version is not
supported.
"""
def __init__(self, service, version):
self.service = service
self.version = version
def __str__(self):
if self.service:
return "Service '%s' version '%s' is not supported." % (
self.service, self.version
)
else:
return "Version '%s' is not supported." % self.version
code = "InvalidParameterValue"
class InterpolationMethodNotSupportedException(Exception):
"""
This exception indicates a not supported interpolation method.
"""
code = "InterpolationMethodNotSupported"
locator = "interpolation"
class RenderException(Exception):
""" Rendering related exception.
"""
def __init__(self, message, locator, is_parameter=True):
super(RenderException, self).__init__(message)
self.locator = locator
self.is_parameter = is_parameter
@property
def code(self):
return (
"InvalidParameterValue" if self.is_parameter else "InvalidRequest"
)
class NoSuchFieldException(Exception):
""" Error in RangeSubsetting when band does not exist.
"""
code = "NoSuchField"
def __init__(self, msg, locator):
super(NoSuchFieldException, self).__init__(msg)
self.locator = locator
class InvalidFieldSequenceException(Exception):
""" Error in RangeSubsetting for illegal intervals.
"""
code = "InvalidFieldSequence"
def __init__(self, msg, locator):
        super(InvalidFieldSequenceException, self).__init__(msg)
self.locator = locator
class InvalidScaleFactorException(Exception):
""" Error in ScaleFactor and ScaleAxis operations
"""
code = "InvalidScaleFactor"
def __init__(self, scalefactor):
super(InvalidScaleFactorException, self).__init__(
"Scalefactor '%s' is not valid" % scalefactor
)
self.locator = scalefactor
class InvalidScaleExtentException(Exception):
""" Error in ScaleExtent operations
"""
code = "InvalidExtent"
def __init__(self, low, high):
super(InvalidScaleExtentException, self).__init__(
"ScaleExtent '%s:%s' is not valid" % (low, high)
)
self.locator = high
class ScaleAxisUndefinedException(Exception):
""" Error in all scaling operations involving an axis
"""
code = "ScaleAxisUndefined"
def __init__(self, axis):
super(ScaleAxisUndefinedException, self).__init__(
"Scale axis '%s' is undefined" % axis
)
self.locator = axis | PypiClean |
/Hcl.py-0.8.2.tar.gz/Hcl.py-0.8.2/README.md | <h1 align="center">
<br><a href="https://discord.gg/2ZKDxFRk4Y"><img src="https://cdn.discordapp.com/attachments/914247542114500638/915324335407890565/PicsArt_11-23-01.13.37.jpg" alt="Hcl.py" width="1000"></a>
<br>Hcl.py<br>
</h1>
[![PyPI - Python Version](https://img.shields.io/badge/python-3.7%2C%203.8-blue)](https://github.com/Oustex/Hcl.py)
### Hcl.py
Hcl.py is an Amino client for Python. It provides access to the [aminoapps](https://aminoapps.com) web, app and socket servers. Developed by Kapidev and upgraded by Oustex.
### Installation
You can use either `python3 setup.py install` or `pip3 install Hcl.py` to install.
- **Note** This Python module has been tested on `python3.10` and `python3.9`.
### Documentation
This project's documentation is not available right now.
| PypiClean |
/Gallery-0.1.0.tar.gz/Gallery-0.1.0/gallery/help_strings.py | convert="""usage: %(program)s [PROGRAM_OPTIONS] thumb SOURCE DEST
Recursively convert all the pictures and videos in SOURCE into a directory
structure in DEST
Arguments:
SOURCE: The source directory for the images and videos
DEST: An empty directory which will be populated with a converted
data structure
Options:
-t, --thumbnail-size=THUMBNAIL_SIZE
The width in pixels of the thumbnails
-y, --video-overlay-file=OVERLAY_FILE
A transparent PNG file the same size as the
thumbnails to overlay on video thumbnails to
distinguish them from picture thumbnails
--gallery-exif Generate EXIF data files
--gallery-stills
Generate video stills
--gallery-reduced
Generate reduced size photos (1024x1024 max)
--gallery-thumb
Generate thumbnails from video stills and
reduced sized photos (150x150) and apply the
                        video overlay to the video thumbnails
    --gallery-h264 Generate compressed and resized h264 video for
use in flash players
All PROGRAM_OPTIONS (see `%(program)s --help')
"""
csv="""usage: %(program)s [PROGRAM_OPTIONS] metadata SOURCE DEST [META]
Generate a gallery (-F gallery) or photo metadata file (-F photo) from a
CSV file.
If generating a gallery the file can contain multiple columns but must
contain the following:
Path
The name to use for the gallery
Title
The title of the gallery
Description
A description of the gallery
Index
The relative path from the root to a thumbnail to represent the
gallery
If generating photo metadata the file can contain multiple columns but
must contain a Filename column with the path to the photo. Optionally it
can contain a Category column specifying the name of the gallery it is to
appear in. All other columns will just be added with their column headings
as field names.
In either case the first line in the CSV file will be treated as the
column headings.
Arguments:
SOURCE: The path to the CSV file
DEST: The path to the gallery or photo metadata folder to contain
the output from this command.
META: The path to the meta directory (only used with -F gallery)
Options:
-F, --format=FORMAT
The type of CSV file we are using, photo or gallery
All PROGRAM_OPTIONS (see `%(program)s --help')
Note, only photos can be generated from the CSV file, not videos.
"""
autogen="""usage: %(program)s [PROGRAM_OPTIONS] gallery META DEST
Automatically build galleries based on the file structure of the meta
directory specified as META and put them in DEST. It needs the h264 and
1024 directories too to create the links.
The order of files in a gallery is determined by stripping all characters
which aren't numbers from the filename and then numbering the files in
order
Arguments:
META: The path to the photo and video metadata directory
DEST: An empty directory within which the galleries will be placed
All PROGRAM_OPTIONS (see `%(program)s --help')
"""
__program__="""usage: %(program)s [PROGRAM_OPTIONS] COMMAND [OPTIONS] ARGS
Commands (aliases):
convert: convert pictures and videos to web format
csv: extract metadata from CSV files
autogen: generate galleries automatically from folders of converted
pics and vids
Try `%(program)s COMMAND --help' for help on a specific command.
""" | PypiClean |
/MegEngine-1.13.1-cp37-cp37m-macosx_10_14_x86_64.whl/megengine/data/dataset/vision/voc.py | import collections.abc
import os
import xml.etree.ElementTree as ET
import cv2
import numpy as np
from .meta_vision import VisionDataset
class PascalVOC(VisionDataset):
r"""`Pascal VOC <http://host.robots.ox.ac.uk/pascal/VOC/>`_ Dataset."""
supported_order = (
"image",
"boxes",
"boxes_category",
"mask",
"info",
)
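    # Minimal usage sketch (the dataset root and split name are illustrative; ``order``
    # must be drawn from ``supported_order`` and cannot mix boxes with masks):
    #
    #   dataset = PascalVOC("/data/VOCdevkit/VOC2012", "train",
    #                       order=("image", "boxes", "boxes_category", "info"))
    #   image, boxes, boxes_category, info = dataset[0]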
def __init__(self, root, image_set, *, order=None):
if ("boxes" in order or "boxes_category" in order) and "mask" in order:
raise ValueError(
"PascalVOC only supports boxes & boxes_category or mask, not both."
)
super().__init__(root, order=order, supported_order=self.supported_order)
if not os.path.isdir(self.root):
raise RuntimeError("Dataset not found or corrupted.")
self.image_set = image_set
image_dir = os.path.join(self.root, "JPEGImages")
if "boxes" in order or "boxes_category" in order:
annotation_dir = os.path.join(self.root, "Annotations")
splitdet_dir = os.path.join(self.root, "ImageSets/Main")
split_f = os.path.join(splitdet_dir, image_set.rstrip("\n") + ".txt")
with open(os.path.join(split_f), "r") as f:
self.file_names = [x.strip() for x in f.readlines()]
self.images = [os.path.join(image_dir, x + ".jpg") for x in self.file_names]
self.annotations = [
os.path.join(annotation_dir, x + ".xml") for x in self.file_names
]
assert len(self.images) == len(self.annotations)
elif "mask" in order:
if "aug" in image_set:
mask_dir = os.path.join(self.root, "SegmentationClass_aug")
else:
mask_dir = os.path.join(self.root, "SegmentationClass")
splitmask_dir = os.path.join(self.root, "ImageSets/Segmentation")
split_f = os.path.join(splitmask_dir, image_set.rstrip("\n") + ".txt")
with open(os.path.join(split_f), "r") as f:
self.file_names = [x.strip() for x in f.readlines()]
self.images = [os.path.join(image_dir, x + ".jpg") for x in self.file_names]
self.masks = [os.path.join(mask_dir, x + ".png") for x in self.file_names]
assert len(self.images) == len(self.masks)
else:
raise NotImplementedError
self.img_infos = dict()
def __getitem__(self, index):
target = []
for k in self.order:
if k == "image":
image = cv2.imread(self.images[index], cv2.IMREAD_COLOR)
target.append(image)
elif k == "boxes":
anno = self.parse_voc_xml(ET.parse(self.annotations[index]).getroot())
boxes = [obj["bndbox"] for obj in anno["annotation"]["object"]]
# boxes type xyxy
boxes = [
(bb["xmin"], bb["ymin"], bb["xmax"], bb["ymax"]) for bb in boxes
]
boxes = np.array(boxes, dtype=np.float32).reshape(-1, 4)
target.append(boxes)
elif k == "boxes_category":
anno = self.parse_voc_xml(ET.parse(self.annotations[index]).getroot())
boxes_category = [obj["name"] for obj in anno["annotation"]["object"]]
boxes_category = [
self.class_names.index(bc) + 1 for bc in boxes_category
]
boxes_category = np.array(boxes_category, dtype=np.int32)
target.append(boxes_category)
elif k == "mask":
if "aug" in self.image_set:
mask = cv2.imread(self.masks[index], cv2.IMREAD_GRAYSCALE)
else:
mask = cv2.imread(self.masks[index], cv2.IMREAD_COLOR)
mask = self._trans_mask(mask)
mask = mask[:, :, np.newaxis]
target.append(mask)
elif k == "info":
info = self.get_img_info(index, image)
info = [info["height"], info["width"], info["file_name"]]
target.append(info)
else:
raise NotImplementedError
return tuple(target)
def __len__(self):
return len(self.images)
def get_img_info(self, index, image=None):
if index not in self.img_infos:
if image is None:
image = cv2.imread(self.images[index], cv2.IMREAD_COLOR)
self.img_infos[index] = dict(
height=image.shape[0],
width=image.shape[1],
file_name=self.file_names[index],
)
return self.img_infos[index]
def _trans_mask(self, mask):
label = np.ones(mask.shape[:2]) * 255
for i in range(len(self.class_colors)):
b, g, r = self.class_colors[i]
label[
(mask[:, :, 0] == b) & (mask[:, :, 1] == g) & (mask[:, :, 2] == r)
] = i
return label.astype(np.uint8)
def parse_voc_xml(self, node):
voc_dict = {}
children = list(node)
if children:
def_dic = collections.defaultdict(list)
for dc in map(self.parse_voc_xml, children):
for ind, v in dc.items():
def_dic[ind].append(v)
if node.tag == "annotation":
def_dic["object"] = [def_dic["object"]]
voc_dict = {
node.tag: {
ind: v[0] if len(v) == 1 else v for ind, v in def_dic.items()
}
}
if node.text:
text = node.text.strip()
if not children:
voc_dict[node.tag] = text
return voc_dict
class_names = (
"aeroplane",
"bicycle",
"bird",
"boat",
"bottle",
"bus",
"car",
"cat",
"chair",
"cow",
"diningtable",
"dog",
"horse",
"motorbike",
"person",
"pottedplant",
"sheep",
"sofa",
"train",
"tvmonitor",
)
class_colors = [
[0, 0, 0], # background
[0, 0, 128],
[0, 128, 0],
[0, 128, 128],
[128, 0, 0],
[128, 0, 128],
[128, 128, 0],
[128, 128, 128],
[0, 0, 64],
[0, 0, 192],
[0, 128, 64],
[0, 128, 192],
[128, 0, 64],
[128, 0, 192],
[128, 128, 64],
[128, 128, 192],
[0, 64, 0],
[0, 64, 128],
[0, 192, 0],
[0, 192, 128],
[128, 64, 0],
] | PypiClean |
/GuangTestBeat-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl/econml/iv/dml/_dml.py | import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression, LogisticRegressionCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from itertools import product
from ..._ortho_learner import _OrthoLearner
from ..._cate_estimator import LinearModelFinalCateEstimatorMixin, StatsModelsCateEstimatorMixin, LinearCateEstimator
from ...inference import StatsModelsInference, GenericSingleTreatmentModelFinalInference
from ...sklearn_extensions.linear_model import StatsModels2SLS, StatsModelsLinearRegression, WeightedLassoCVWrapper
from ...sklearn_extensions.model_selection import WeightedStratifiedKFold
from ...utilities import (_deprecate_positional, get_feature_names_or_default, filter_none_kwargs, add_intercept,
cross_product, broadcast_unit_treatments, reshape_treatmentwise_effects, shape,
parse_final_model_params, deprecated, Summary)
from ...dml.dml import _FirstStageWrapper, _FinalWrapper
from ...dml._rlearner import _ModelFinal
from ..._shap import _shap_explain_joint_linear_model_cate, _shap_explain_model_cate
class _OrthoIVModelNuisance:
def __init__(self, model_y_xw, model_t_xw, model_z, projection):
self._model_y_xw = model_y_xw
self._model_t_xw = model_t_xw
self._projection = projection
if self._projection:
self._model_t_xwz = model_z
else:
self._model_z_xw = model_z
def _combine(self, W, Z, n_samples):
if Z is not None:
Z = Z.reshape(n_samples, -1)
return Z if W is None else np.hstack([W, Z])
return None if W is None else W
def fit(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
self._model_y_xw.fit(X=X, W=W, Target=Y, sample_weight=sample_weight, groups=groups)
self._model_t_xw.fit(X=X, W=W, Target=T, sample_weight=sample_weight, groups=groups)
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
self._model_t_xwz.fit(X=X, W=WZ, Target=T, sample_weight=sample_weight, groups=groups)
else:
self._model_z_xw.fit(X=X, W=W, Target=Z, sample_weight=sample_weight, groups=groups)
return self
def score(self, Y, T, X=None, W=None, Z=None, sample_weight=None, group=None):
if hasattr(self._model_y_xw, 'score'):
Y_X_score = self._model_y_xw.score(X=X, W=W, Target=Y, sample_weight=sample_weight)
else:
Y_X_score = None
if hasattr(self._model_t_xw, 'score'):
T_X_score = self._model_t_xw.score(X=X, W=W, Target=T, sample_weight=sample_weight)
else:
T_X_score = None
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
if hasattr(self._model_t_xwz, 'score'):
T_XZ_score = self._model_t_xwz.score(X=X, W=WZ, Target=T, sample_weight=sample_weight)
else:
T_XZ_score = None
return Y_X_score, T_X_score, T_XZ_score
else:
if hasattr(self._model_z_xw, 'score'):
Z_X_score = self._model_z_xw.score(X=X, W=W, Target=Z, sample_weight=sample_weight)
else:
Z_X_score = None
return Y_X_score, T_X_score, Z_X_score
def predict(self, Y, T, X=None, W=None, Z=None, sample_weight=None, group=None):
Y_pred = self._model_y_xw.predict(X=X, W=W)
T_pred = self._model_t_xw.predict(X=X, W=W)
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
T_proj = self._model_t_xwz.predict(X, WZ)
else:
Z_pred = self._model_z_xw.predict(X=X, W=W)
if (X is None) and (W is None): # In this case predict above returns a single row
Y_pred = np.tile(Y_pred.reshape(1, -1), (Y.shape[0], 1))
T_pred = np.tile(T_pred.reshape(1, -1), (T.shape[0], 1))
if not self._projection:
Z_pred = np.tile(Z_pred.reshape(1, -1), (Z.shape[0], 1))
Y_res = Y - Y_pred.reshape(Y.shape)
T_res = T - T_pred.reshape(T.shape)
if self._projection:
Z_res = T_proj.reshape(T.shape) - T_pred.reshape(T.shape)
else:
Z_res = Z - Z_pred.reshape(Z.shape)
return Y_res, T_res, Z_res
class _OrthoIVModelFinal:
def __init__(self, model_final, featurizer, fit_cate_intercept):
self._model_final = clone(model_final, safe=False)
self._original_featurizer = clone(featurizer, safe=False)
self._fit_cate_intercept = fit_cate_intercept
if self._fit_cate_intercept:
add_intercept_trans = FunctionTransformer(add_intercept,
validate=True)
if featurizer:
self._featurizer = Pipeline([('featurize', self._original_featurizer),
('add_intercept', add_intercept_trans)])
else:
self._featurizer = add_intercept_trans
else:
self._featurizer = self._original_featurizer
def _combine(self, X, T, fitting=True):
if X is not None:
if self._featurizer is not None:
F = self._featurizer.fit_transform(X) if fitting else self._featurizer.transform(X)
else:
F = X
else:
if not self._fit_cate_intercept:
raise AttributeError("Cannot have X=None and also not allow for a CATE intercept!")
F = np.ones((T.shape[0], 1))
return cross_product(F, T)
def fit(self, Y, T, X=None, W=None, Z=None, nuisances=None,
sample_weight=None, freq_weight=None, sample_var=None, groups=None):
Y_res, T_res, Z_res = nuisances
# Track training dimensions to see if Y or T is a vector instead of a 2-dimensional array
self._d_t = shape(T_res)[1:]
self._d_y = shape(Y_res)[1:]
XT_res = self._combine(X, T_res)
XZ_res = self._combine(X, Z_res)
filtered_kwargs = filter_none_kwargs(sample_weight=sample_weight,
freq_weight=freq_weight, sample_var=sample_var)
self._model_final.fit(XZ_res, XT_res, Y_res, **filtered_kwargs)
return self
def predict(self, X=None):
X2, T = broadcast_unit_treatments(X if X is not None else np.empty((1, 0)),
self._d_t[0] if self._d_t else 1)
XT = self._combine(None if X is None else X2, T, fitting=False)
prediction = self._model_final.predict(XT)
return reshape_treatmentwise_effects(prediction,
self._d_t, self._d_y)
def score(self, Y, T, X=None, W=None, Z=None, nuisances=None, sample_weight=None, groups=None):
Y_res, T_res, Z_res = nuisances
if Y_res.ndim == 1:
Y_res = Y_res.reshape((-1, 1))
if T_res.ndim == 1:
T_res = T_res.reshape((-1, 1))
effects = self.predict(X).reshape((-1, Y_res.shape[1], T_res.shape[1]))
Y_res_pred = np.einsum('ijk,ik->ij', effects, T_res).reshape(Y_res.shape)
if sample_weight is not None:
return np.linalg.norm(np.average(cross_product(Z_res, Y_res - Y_res_pred), weights=sample_weight, axis=0),
ord=2)
else:
return np.linalg.norm(np.mean(cross_product(Z_res, Y_res - Y_res_pred), axis=0), ord=2)
class OrthoIV(LinearModelFinalCateEstimatorMixin, _OrthoLearner):
"""
Implementation of the orthogonal/double ml method for CATE estimation with
IV as described in section 4.2:
Double/Debiased Machine Learning for Treatment and Causal Parameters
Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins
https://arxiv.org/abs/1608.00060
Solve the following moment equation:
.. math::
\\E[(Y-\\E[Y|X]-\\theta(X) * (T-\\E[T|X]))(Z-\\E[Z|X])] = 0
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_z_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Z | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete instrument,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous instrument.
projection: bool, optional, default False
If True, we fit a slight variant of OrthoIV where we use E[T|X, W, Z] as the instrument as opposed to Z,
        in which case model_z_xw will be disabled; if False, model_t_xwz will be disabled.
featurizer : :term:`transformer`, optional, default None
Must support fit_transform and transform. Used to create composite features in the final CATE regression.
It is ignored if X is None. The final CATE will be trained on the outcome of featurizer.fit_transform(X).
If featurizer=None, then CATE is trained on X.
fit_cate_intercept : bool, optional, default False
Whether the linear CATE model should have a constant term.
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
Examples
--------
A simple example with the default models:
.. testcode::
:hide:
import numpy as np
import scipy.special
np.set_printoptions(suppress=True)
.. testcode::
from econml.iv.dml import OrthoIV
# Define the data generation functions
def dgp(n, p, true_fn):
X = np.random.normal(0, 1, size=(n, p))
Z = np.random.binomial(1, 0.5, size=(n,))
nu = np.random.uniform(0, 10, size=(n,))
coef_Z = 0.8
C = np.random.binomial(
1, coef_Z * scipy.special.expit(0.4 * X[:, 0] + nu)
) # Compliers when recomended
C0 = np.random.binomial(
1, 0.06 * np.ones(X.shape[0])
) # Non-compliers when not recommended
T = C * Z + C0 * (1 - Z)
y = true_fn(X) * T + 2 * nu + 5 * (X[:, 3] > 0) + 0.1 * np.random.uniform(0, 1, size=(n,))
return y, T, Z, X
def true_heterogeneity_function(X):
return 5 * X[:, 0]
np.random.seed(123)
y, T, Z, X = dgp(1000, 5, true_heterogeneity_function)
est = OrthoIV(discrete_treatment=True, discrete_instrument=True)
est.fit(Y=y, T=T, Z=Z, X=X)
>>> est.effect(X[:3])
array([-4.57086..., 6.06523..., -3.02513...])
>>> est.effect_interval(X[:3])
(array([-7.45472..., 1.85334..., -5.47322...]),
array([-1.68700... , 10.27712..., -0.57704...]))
>>> est.coef_
array([ 5.11260... , 0.71353..., 0.38242..., -0.23891..., -0.07036...])
>>> est.coef__interval()
(array([ 3.76773..., -0.42532..., -0.78145..., -1.36996..., -1.22505...]),
array([6.45747..., 1.85239..., 1.54631..., 0.89213..., 1.08432...]))
>>> est.intercept_
-0.24090...
>>> est.intercept__interval()
(-1.39053..., 0.90872...)
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_z_xw="auto",
projection=False,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_z_xw = clone(model_z_xw, safe=False)
self.projection = projection
self.featurizer = clone(featurizer, safe=False)
self.fit_cate_intercept = fit_cate_intercept
super().__init__(discrete_instrument=discrete_instrument,
discrete_treatment=discrete_treatment,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_final(self):
return StatsModels2SLS(cov_type="HC0")
def _gen_ortho_learner_model_final(self):
return _OrthoIVModelFinal(self._gen_model_final(), self._gen_featurizer(), self.fit_cate_intercept)
def _gen_ortho_learner_model_nuisance(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
if self.projection:
# train E[T|X,W,Z]
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _OrthoIVModelNuisance(_FirstStageWrapper(clone(model_y_xw, safe=False), True,
self._gen_featurizer(), False, False),
_FirstStageWrapper(clone(model_t_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
_FirstStageWrapper(clone(model_t_xwz, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
self.projection)
else:
# train [Z|X,W]
if self.model_z_xw == "auto":
if self.discrete_instrument:
model_z_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_z_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_z_xw = clone(self.model_z_xw, safe=False)
return _OrthoIVModelNuisance(_FirstStageWrapper(clone(model_y_xw, safe=False), True,
self._gen_featurizer(), False, False),
_FirstStageWrapper(clone(model_t_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
_FirstStageWrapper(clone(model_z_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_instrument),
self.projection)
def fit(self, Y, T, *, Z, X=None, W=None, sample_weight=None, freq_weight=None, sample_var=None, groups=None,
cache_values=False, inference="auto"):
"""
Estimate the counterfactual model from data, i.e. estimates function :math:`\\theta(\\cdot)`.
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight : (n,) array like, default None
Individual weights for each sample. If None, it assumes equal weight.
freq_weight: (n,) array like of integers, default None
Weight for the observation. Observation i is treated as the mean
outcome of freq_weight[i] independent observations.
When ``sample_var`` is not None, this should be provided.
sample_var : {(n,), (n, d_y)} nd array like, default None
Variance of the outcome(s) of the original freq_weight[i] observations that were used to
compute the mean outcome represented by observation i.
groups: (n,) vector, optional
All rows corresponding to the same group will be kept together during splitting.
If groups is not None, the `cv` argument passed to this class's initializer
must support a 'groups' argument to its split method.
cache_values: bool, default False
Whether to cache inputs and first stage results, which will allow refitting a different final model
inference: string,:class:`.Inference` instance, or None
Method for performing inference. This estimator supports 'bootstrap'
(or an instance of:class:`.BootstrapInference`) and 'auto'
(or an instance of :class:`.LinearModelFinalInference`)
Returns
-------
self: OrthoIV instance
"""
if self.projection:
assert self.model_z_xw == "auto", ("In the case of projection=True, model_z_xw will not be fitted, "
"please leave it when initializing the estimator!")
else:
assert self.model_t_xwz == "auto", ("In the case of projection=False, model_t_xwz will not be fitted, "
"please leave it when initializing the estimator!")
# Replacing fit from _OrthoLearner, to reorder arguments and improve the docstring
return super().fit(Y, T, X=X, W=W, Z=Z,
sample_weight=sample_weight, freq_weight=freq_weight, sample_var=sample_var, groups=groups,
cache_values=cache_values, inference=inference)
def refit_final(self, *, inference='auto'):
return super().refit_final(inference=inference)
refit_final.__doc__ = _OrthoLearner.refit_final.__doc__
def score(self, Y, T, Z, X=None, W=None, sample_weight=None):
"""
Score the fitted CATE model on a new data set. Generates nuisance parameters
for the new data set based on the fitted residual nuisance models created at fit time.
It uses the mean prediction of the models fitted by the different crossfit folds.
Then calculates the MSE of the final residual Y on residual T regression.
If model_final does not have a score method, then it raises an :exc:`.AttributeError`
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: optional(n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight: optional(n,) vector or None (Default=None)
Weights for each samples
Returns
-------
score: float
The MSE of the final CATE model on the new data.
"""
# Replacing score from _OrthoLearner, to enforce Z to be required and improve the docstring
return super().score(Y, T, X=X, W=W, Z=Z, sample_weight=sample_weight)
@property
def featurizer_(self):
"""
Get the fitted featurizer.
Returns
-------
featurizer: object of type(`featurizer`)
An instance of the fitted featurizer that was used to preprocess X in the final CATE model training.
Available only when featurizer is not None and X is not None.
"""
return self.ortho_learner_model_final_._featurizer
@property
def original_featurizer(self):
# NOTE: important to use the ortho_learner_model_final_ attribute instead of the
# attribute so that the trained featurizer will be passed through
return self.ortho_learner_model_final_._original_featurizer
def cate_feature_names(self, feature_names=None):
"""
Get the output feature names.
Parameters
----------
feature_names: list of strings of length X.shape[1] or None
The names of the input features. If None and X is a dataframe, it defaults to the column names
from the dataframe.
Returns
-------
out_feature_names: list of strings or None
The names of the output features :math:`\\phi(X)`, i.e. the features with respect to which the
final CATE model for each treatment is linear. It is the names of the features that are associated
with each entry of the :meth:`coef_` parameter. Available only when the featurizer is not None and has
a method: `get_feature_names(feature_names)`. Otherwise None is returned.
"""
if self._d_x is None:
# Handles the corner case when X=None but featurizer might be not None
return None
if feature_names is None:
feature_names = self._input_names["feature_names"]
if self.original_featurizer is None:
return feature_names
return get_feature_names_or_default(self.original_featurizer, feature_names)
@property
def model_final_(self):
# NOTE This is used by the inference methods and is more for internal use to the library
return self.ortho_learner_model_final_._model_final
@property
def model_cate(self):
"""
Get the fitted final CATE model.
Returns
-------
model_cate: object of type(model_final)
An instance of the model_final object that was fitted after calling fit which corresponds
to the constant marginal CATE model.
"""
return self.ortho_learner_model_final_._model_final
@property
def models_y_xw(self):
"""
Get the fitted models for :math:`\\E[Y | X]`.
Returns
-------
models_y_xw: nested list of objects of type(`model_y_xw`)
A nested list of instances of the `model_y_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_y_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xw(self):
"""
Get the fitted models for :math:`\\E[T | X]`.
Returns
-------
models_t_xw: nested list of objects of type(`model_t_xw`)
A nested list of instances of the `model_t_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_z_xw(self):
"""
Get the fitted models for :math:`\\E[Z | X]`.
Returns
-------
models_z_xw: nested list of objects of type(`model_z_xw`)
A nested list of instances of the `model_z_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
if self.projection:
raise AttributeError("Projection model is fitted for instrument! Use models_t_xwz.")
return [[mdl._model_z_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xwz(self):
"""
Get the fitted models for :math:`\\E[T | X, Z]`.
Returns
-------
models_t_xwz: nested list of objects of type(`model_t_xwz`)
A nested list of instances of the `model_t_xwz` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
if not self.projection:
raise AttributeError("Direct model is fitted for instrument! Use models_z_xw.")
return [[mdl._model_t_xwz._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def nuisance_scores_y_xw(self):
"""
Get the scores for y_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[0]
@property
def nuisance_scores_t_xw(self):
"""
Get the scores for t_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[1]
@property
def nuisance_scores_z_xw(self):
"""
Get the scores for z_xw model on the out-of-sample training data
"""
if self.projection:
raise AttributeError("Projection model is fitted for instrument! Use nuisance_scores_t_xwz.")
return self.nuisance_scores_[2]
@property
def nuisance_scores_t_xwz(self):
"""
Get the scores for t_xwz model on the out-of-sample training data
"""
if not self.projection:
raise AttributeError("Direct model is fitted for instrument! Use nuisance_scores_z_xw.")
return self.nuisance_scores_[2]
@property
def fit_cate_intercept_(self):
return self.ortho_learner_model_final_._fit_cate_intercept
@property
def bias_part_of_coef(self):
return self.ortho_learner_model_final_._fit_cate_intercept
@property
def model_final(self):
return self._gen_model_final()
@model_final.setter
def model_final(self, model):
if model is not None:
raise ValueError("Parameter `model_final` cannot be altered for this estimator!")
@property
def residuals_(self):
"""
        A tuple (y_res, T_res, Z_res, X, W, Z) of the residuals from the first stage estimation
along with the associated X, W and Z. Samples are not guaranteed to be in the same
order as the input order.
"""
if not hasattr(self, '_cached_values'):
raise AttributeError("Estimator is not fitted yet!")
if self._cached_values is None:
raise AttributeError("`fit` was called with `cache_values=False`. "
"Set to `True` to enable residual storage.")
Y_res, T_res, Z_res = self._cached_values.nuisances
return Y_res, T_res, Z_res, self._cached_values.X, self._cached_values.W, self._cached_values.Z
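# Sketch of retrieving the cached first-stage residuals from a fitted OrthoIV estimator
# (requires fit(..., cache_values=True); the variable names follow the class docstring
# example above):
#
#   est = OrthoIV(discrete_treatment=True, discrete_instrument=True)
#   est.fit(Y=y, T=T, Z=Z, X=X, cache_values=True)
#   Y_res, T_res, Z_res, X_cached, W_cached, Z_cached = est.residuals_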
class _BaseDMLIVModelNuisance:
"""
Nuisance model fits the three models at fit time and at predict time
returns :math:`Y-\\E[Y|X]` and :math:`\\E[T|X,Z]-\\E[T|X]` as residuals.
"""
def __init__(self, model_y_xw, model_t_xw, model_t_xwz):
self._model_y_xw = clone(model_y_xw, safe=False)
self._model_t_xw = clone(model_t_xw, safe=False)
self._model_t_xwz = clone(model_t_xwz, safe=False)
def _combine(self, W, Z, n_samples):
if Z is not None:
Z = Z.reshape(n_samples, -1)
return Z if W is None else np.hstack([W, Z])
return None if W is None else W
def fit(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
self._model_y_xw.fit(X, W, Y, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
self._model_t_xw.fit(X, W, T, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
self._model_t_xwz.fit(X, WZ, T, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
return self
def score(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
# note that groups are not passed to score because they are only used for fitting
if hasattr(self._model_y_xw, 'score'):
Y_X_score = self._model_y_xw.score(X, W, Y, **filter_none_kwargs(sample_weight=sample_weight))
else:
Y_X_score = None
if hasattr(self._model_t_xw, 'score'):
T_X_score = self._model_t_xw.score(X, W, T, **filter_none_kwargs(sample_weight=sample_weight))
else:
T_X_score = None
if hasattr(self._model_t_xwz, 'score'):
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
T_XZ_score = self._model_t_xwz.score(X, WZ, T, **filter_none_kwargs(sample_weight=sample_weight))
else:
T_XZ_score = None
return Y_X_score, T_X_score, T_XZ_score
def predict(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
# note that sample_weight and groups are not passed to predict because they are only used for fitting
Y_pred = self._model_y_xw.predict(X, W)
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
TXZ_pred = self._model_t_xwz.predict(X, WZ)
TX_pred = self._model_t_xw.predict(X, W)
if (X is None) and (W is None): # In this case predict above returns a single row
Y_pred = np.tile(Y_pred.reshape(1, -1), (Y.shape[0], 1))
TX_pred = np.tile(TX_pred.reshape(1, -1), (T.shape[0], 1))
Y_res = Y - Y_pred.reshape(Y.shape)
T_res = TXZ_pred.reshape(T.shape) - TX_pred.reshape(T.shape)
return Y_res, T_res
class _BaseDMLIVModelFinal(_ModelFinal):
"""
Final model at fit time, fits a residual on residual regression with a heterogeneous coefficient
that depends on X, i.e.
.. math ::
Y - \\E[Y | X] = \\theta(X) \\cdot (\\E[T | X, Z] - \\E[T | X]) + \\epsilon
and at predict time returns :math:`\\theta(X)`. The score method returns the MSE of this final
residual on residual regression.
"""
pass
class _BaseDMLIV(_OrthoLearner):
# A helper class that access all the internal fitted objects of a DMLIV Cate Estimator.
# Used by both Parametric and Non Parametric DMLIV.
# override only so that we can enforce Z to be required
def fit(self, Y, T, *, Z, X=None, W=None, sample_weight=None, freq_weight=None, sample_var=None, groups=None,
cache_values=False, inference=None):
"""
Estimate the counterfactual model from data, i.e. estimates function :math:`\\theta(\\cdot)`.
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional (n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight : (n,) array like, default None
Individual weights for each sample. If None, it assumes equal weight.
freq_weight: (n,) array like of integers, default None
Weight for the observation. Observation i is treated as the mean
outcome of freq_weight[i] independent observations.
When ``sample_var`` is not None, this should be provided.
sample_var : {(n,), (n, d_y)} nd array like, default None
Variance of the outcome(s) of the original freq_weight[i] observations that were used to
compute the mean outcome represented by observation i.
groups: (n,) vector, optional
All rows corresponding to the same group will be kept together during splitting.
If groups is not None, the `cv` argument passed to this class's initializer
must support a 'groups' argument to its split method.
cache_values: bool, default False
Whether to cache inputs and first stage results, which will allow refitting a different final model
inference: string, :class:`.Inference` instance, or None
Method for performing inference. This estimator supports 'bootstrap'
(or an instance of :class:`.BootstrapInference`)
Returns
-------
self
"""
return super().fit(Y, T, X=X, W=W, Z=Z,
sample_weight=sample_weight, freq_weight=freq_weight, sample_var=sample_var, groups=groups,
cache_values=cache_values, inference=inference)
def score(self, Y, T, Z, X=None, W=None, sample_weight=None):
"""
Score the fitted CATE model on a new data set. Generates nuisance parameters
for the new data set based on the fitted residual nuisance models created at fit time.
It uses the mean prediction of the models fitted by the different crossfit folds.
Then calculates the MSE of the final residual Y on residual T regression.
If model_final does not have a score method, then it raises an :exc:`.AttributeError`
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight: optional(n,) vector or None (Default=None)
Weights for each samples
Returns
-------
score: float
The MSE of the final CATE model on the new data.
"""
# Replacing score from _OrthoLearner, to enforce Z to be required and improve the docstring
return super().score(Y, T, X=X, W=W, Z=Z, sample_weight=sample_weight)
@property
def original_featurizer(self):
return self.ortho_learner_model_final_._model_final._original_featurizer
@property
def featurizer_(self):
# NOTE This is used by the inference methods and has to be the overall featurizer. Intended
# for internal use by the library
return self.ortho_learner_model_final_._model_final._featurizer
@property
def model_final_(self):
# NOTE This is used by the inference methods and is intended for internal use by the library
return self.ortho_learner_model_final_._model_final._model
@property
def model_cate(self):
"""
Get the fitted final CATE model.
Returns
-------
model_cate: object of type(model_final)
An instance of the model_final object that was fitted after calling fit which corresponds
to the constant marginal CATE model.
"""
return self.ortho_learner_model_final_._model_final._model
@property
def models_y_xw(self):
"""
Get the fitted models for :math:`\\E[Y | X, W]`.
Returns
-------
models_y_xw: nested list of objects of type(`model_y_xw`)
A nested list of instances of the `model_y_xw` object. The number of sublists equals the number of
Monte Carlo iterations; each element in a sublist corresponds to a cross-fitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_y_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xw(self):
"""
Get the fitted models for :math:`\\E[T | X, W]`.
Returns
-------
models_t_xw: nested list of objects of type(`model_t_xw`)
A nested list of instances of the `model_t_xw` object. The number of sublists equals the number of
Monte Carlo iterations; each element in a sublist corresponds to a cross-fitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xwz(self):
"""
Get the fitted models for :math:`\\E[T | X, W, Z]`.
Returns
-------
models_t_xwz: nested list of objects of type(`model_t_xwz`)
A nested list of instances of the `model_t_xwz` object. The number of sublists equals the number of
Monte Carlo iterations; each element in a sublist corresponds to a cross-fitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xwz._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def nuisance_scores_y_xw(self):
"""
Get the scores for y_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[0]
@property
def nuisance_scores_t_xw(self):
"""
Get the scores for t_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[1]
@property
def nuisance_scores_t_xwz(self):
"""
Get the scores for t_xwz model on the out-of-sample training data
"""
return self.nuisance_scores_[2]
@property
def residuals_(self):
"""
A tuple (Y_res, T_res, X, W, Z) of the residuals from the first stage estimation
along with the associated X, W and Z. Samples are not guaranteed to be in the same
order as the input order.
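Only available when `fit` was called with ``cache_values=True``. A hedged usage sketch
(variable names are illustrative)::
    est.fit(Y, T, Z=Z, X=X, cache_values=True)
    Y_res, T_res, X, W, Z = est.residuals_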
"""
if not hasattr(self, '_cached_values'):
raise AttributeError("Estimator is not fitted yet!")
if self._cached_values is None:
raise AttributeError("`fit` was called with `cache_values=False`. "
"Set to `True` to enable residual storage.")
Y_res, T_res = self._cached_values.nuisances
return Y_res, T_res, self._cached_values.X, self._cached_values.W, self._cached_values.Z
def cate_feature_names(self, feature_names=None):
"""
Get the output feature names.
Parameters
----------
feature_names: list of strings of length X.shape[1] or None
The names of the input features. If None and X is a dataframe, it defaults to the column names
from the dataframe.
Returns
-------
out_feature_names: list of strings or None
The names of the output features :math:`\\phi(X)`, i.e. the features with respect to which the
final constant marginal CATE model is linear. It is the names of the features that are associated
with each entry of the :meth:`coef_` parameter. Not available when the featurizer is not None and
does not have a `get_feature_names(feature_names)` method; in that case None is returned.
"""
if self._d_x is None:
# Handles the corner case when X=None but featurizer might be not None
return None
if feature_names is None:
feature_names = self._input_names["feature_names"]
if self.original_featurizer is None:
return feature_names
return get_feature_names_or_default(self.original_featurizer, feature_names)
class DMLIV(_BaseDMLIV):
"""
The base class for parametric DMLIV estimators to estimate a CATE. It accepts three generic machine
learning models as nuisance functions:
1) model_y_xw that estimates :math:`\\E[Y | X, W]`
2) model_t_xw that estimates :math:`\\E[T | X, W]`
3) model_t_xwz that estimates :math:`\\E[T | X, W, Z]`
These are estimated in a cross-fitting manner for each sample in the training set.
Then it minimizes the square loss:
.. math::
\\sum_i (Y_i - \\E[Y|X_i] - \\theta(X) * (\\E[T|X_i, Z_i] - \\E[T|X_i]))^2
This loss is minimized by the model_final class, which is passed as an input.
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_final : estimator (default is :class:`.StatsModelsLinearRegression`)
final model that at fit time takes as input :math:`(Y-\\E[Y|X])`, :math:`(\\E[T|X,Z]-\\E[T|X])` and X
and supports method predict(X) that produces the CATE at X
featurizer: transformer
The transformer used to featurize the raw features when fitting the final model. Must implement
a `fit_transform` method.
fit_cate_intercept : bool, optional, default True
Whether the linear CATE model should have a constant term.
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
Examples
--------
A simple example with the default models:
.. testcode::
:hide:
import numpy as np
import scipy.special
np.set_printoptions(suppress=True)
.. testcode::
from econml.iv.dml import DMLIV
# Define the data generation functions
def dgp(n, p, true_fn):
X = np.random.normal(0, 1, size=(n, p))
Z = np.random.binomial(1, 0.5, size=(n,))
nu = np.random.uniform(0, 10, size=(n,))
coef_Z = 0.8
C = np.random.binomial(
1, coef_Z * scipy.special.expit(0.4 * X[:, 0] + nu)
) # Compliers when recommended
C0 = np.random.binomial(
1, 0.06 * np.ones(X.shape[0])
) # Non-compliers when not recommended
T = C * Z + C0 * (1 - Z)
y = true_fn(X) * T + 2 * nu + 5 * (X[:, 3] > 0) + 0.1 * np.random.uniform(0, 1, size=(n,))
return y, T, Z, X
def true_heterogeneity_function(X):
return 5 * X[:, 0]
np.random.seed(123)
y, T, Z, X = dgp(1000, 5, true_heterogeneity_function)
est = DMLIV(discrete_treatment=True, discrete_instrument=True)
est.fit(Y=y, T=T, Z=Z, X=X)
>>> est.effect(X[:3])
array([-4.47392..., 5.74626..., -3.08471...])
>>> est.coef_
array([ 5.00993..., 0.86981..., 0.35110..., -0.11390... , -0.17933...])
>>> est.intercept_
-0.27719...
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_final=StatsModelsLinearRegression(fit_intercept=False),
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_final = clone(model_final, safe=False)
self.featurizer = clone(featurizer, safe=False)
self.fit_cate_intercept = fit_cate_intercept
super().__init__(discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_y_xw(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
return _FirstStageWrapper(model_y_xw, True, self._gen_featurizer(),
False, False)
def _gen_model_t_xw(self):
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
return _FirstStageWrapper(model_t_xw, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_t_xwz(self):
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _FirstStageWrapper(model_t_xwz, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_final(self):
return clone(self.model_final, safe=False)
def _gen_ortho_learner_model_nuisance(self):
return _BaseDMLIVModelNuisance(self._gen_model_y_xw(), self._gen_model_t_xw(), self._gen_model_t_xwz())
def _gen_ortho_learner_model_final(self):
return _BaseDMLIVModelFinal(_FinalWrapper(self._gen_model_final(),
self.fit_cate_intercept,
self._gen_featurizer(),
False))
@property
def bias_part_of_coef(self):
return self.ortho_learner_model_final_._model_final._fit_cate_intercept
@property
def fit_cate_intercept_(self):
return self.ortho_learner_model_final_._model_final._fit_cate_intercept
def shap_values(self, X, *, feature_names=None, treatment_names=None, output_names=None, background_samples=100):
if hasattr(self, "featurizer_") and self.featurizer_ is not None:
X = self.featurizer_.transform(X)
feature_names = self.cate_feature_names(feature_names)
return _shap_explain_joint_linear_model_cate(self.model_final_, X, self._d_t, self._d_y,
self.bias_part_of_coef,
feature_names=feature_names, treatment_names=treatment_names,
output_names=output_names,
input_names=self._input_names,
background_samples=background_samples)
shap_values.__doc__ = LinearCateEstimator.shap_values.__doc__
@property
def coef_(self):
""" The coefficients in the linear model of the constant marginal treatment
effect.
Returns
-------
coef: (n_x,) or (n_t, n_x) or (n_y, n_t, n_x) array like
Where n_x is the number of features that enter the final model (either the
dimension of X or the dimension of featurizer.fit_transform(X) if the CATE
estimator has a featurizer.), n_t is the number of treatments, n_y is
the number of outcomes. Dimensions are omitted if the original input was
a vector and not a 2D array. For binary treatment the n_t dimension is
also omitted.
"""
return parse_final_model_params(self.model_final_.coef_, self.model_final_.intercept_,
self._d_y, self._d_t, self._d_t_in, self.bias_part_of_coef,
self.fit_cate_intercept_)[0]
@property
def intercept_(self):
""" The intercept in the linear model of the constant marginal treatment
effect.
Returns
-------
intercept: float or (n_y,) or (n_y, n_t) array like
Where n_t is the number of treatments, n_y is
the number of outcomes. Dimensions are omitted if the original input was
a vector and not a 2D array. For binary treatment the n_t dimension is
also omitted.
"""
if not self.fit_cate_intercept_:
raise AttributeError("No intercept was fitted!")
return parse_final_model_params(self.model_final_.coef_, self.model_final_.intercept_,
self._d_y, self._d_t, self._d_t_in, self.bias_part_of_coef,
self.fit_cate_intercept_)[1]
def summary(self, decimals=3, feature_names=None, treatment_names=None, output_names=None):
""" The summary of coefficient and intercept in the linear model of the constant marginal treatment
effect.
Parameters
----------
decimals: optional int (default=3)
Number of decimal places to round each column to.
feature_names: optional list of strings or None (default is None)
The input of the feature names
treatment_names: optional list of strings or None (default is None)
The names of the treatments
output_names: optional list of strings or None (default is None)
The names of the outputs
Returns
-------
smry : Summary instance
this holds the summary tables and text, which can be printed or
converted to various output formats.
"""
# Get input names
treatment_names = self.cate_treatment_names(treatment_names)
output_names = self.cate_output_names(output_names)
feature_names = self.cate_feature_names(feature_names)
# Summary
smry = Summary()
smry.add_extra_txt(["<sub>A linear parametric conditional average treatment effect (CATE) model was fitted:",
"$Y = \\Theta(X)\\cdot T + g(X, W) + \\epsilon$",
"where for every outcome $i$ and treatment $j$ the CATE $\\Theta_{ij}(X)$ has the form:",
"$\\Theta_{ij}(X) = \\phi(X)' coef_{ij} + cate\\_intercept_{ij}$",
"where $\\phi(X)$ is the output of the `featurizer` or $X$ if `featurizer`=None. "
"Coefficient Results table portrays the $coef_{ij}$ parameter vector for "
"each outcome $i$ and treatment $j$. "
"Intercept Results table portrays the $cate\\_intercept_{ij}$ parameter.</sub>"])
d_t = self._d_t[0] if self._d_t else 1
d_y = self._d_y[0] if self._d_y else 1
def _reshape_array(arr, type):
if np.isscalar(arr):
arr = np.array([arr])
if type == 'coefficient':
arr = np.moveaxis(arr, -1, 0)
arr = arr.reshape(-1, 1)
return arr
# coefficient
try:
if self.coef_.size == 0: # X is None
raise AttributeError("X is None, please call intercept_inference to learn the constant!")
else:
coef_array = np.round(_reshape_array(self.coef_, "coefficient"), decimals)
coef_headers = ["point_estimate"]
if d_t > 1 and d_y > 1:
index = list(product(feature_names, output_names, treatment_names))
elif d_t > 1:
index = list(product(feature_names, treatment_names))
elif d_y > 1:
index = list(product(feature_names, output_names))
else:
index = list(product(feature_names))
coef_stubs = ["|".join(ind_value) for ind_value in index]
coef_title = 'Coefficient Results'
smry.add_table(coef_array, coef_headers, coef_stubs, coef_title)
except Exception as e:
print("Coefficient Results: ", str(e))
# intercept
try:
if not self.fit_cate_intercept:
raise AttributeError("No intercept was fitted!")
else:
intercept_array = np.round(_reshape_array(self.intercept_, "intercept"), decimals)
intercept_headers = ["point_estimate"]
if d_t > 1 and d_y > 1:
index = list(product(["cate_intercept"], output_names, treatment_names))
elif d_t > 1:
index = list(product(["cate_intercept"], treatment_names))
elif d_y > 1:
index = list(product(["cate_intercept"], output_names))
else:
index = list(product(["cate_intercept"]))
intercept_stubs = ["|".join(ind_value) for ind_value in index]
intercept_title = 'CATE Intercept Results'
smry.add_table(intercept_array, intercept_headers, intercept_stubs, intercept_title)
except Exception as e:
print("CATE Intercept Results: ", str(e))
if len(smry.tables) > 0:
return smry
class NonParamDMLIV(_BaseDMLIV):
"""
The base class for non-parametric DMLIV that allows for an arbitrary square loss based ML
method in the final stage of the DMLIV algorithm. The method has to support
sample weights and the fit method has to take as input sample_weights (e.g. random forests), i.e.
fit(X, y, sample_weight=None)
It achieves this by re-writing the final stage square loss of the DMLIV algorithm as:
.. math ::
\\sum_i (\\E[T|X_i, Z_i] - \\E[T|X_i])^2 * ((Y_i - \\E[Y|X_i])/(\\E[T|X_i, Z_i] - \\E[T|X_i]) - \\theta(X))^2
Then this can be viewed as a weighted square loss regression, where the target label is
.. math ::
\\tilde{Y}_i = (Y_i - \\E[Y|X_i])/(\\E[T|X_i, Z_i] - \\E[T|X_i])
and each sample has a weight of
.. math ::
V(X_i) = (\\E[T|X_i, Z_i] - \\E[T|X_i])^2
Thus we can call any regression model with inputs:
fit(X, :math:`\\tilde{Y}_i`, sample_weight= :math:`V(X_i)`)
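As a hedged illustration only (the residual arrays below are hypothetical placeholders for the
internally computed nuisances, not part of the public API)::
    T_res = ET_XZ - ET_X                          # E[T|X,Z] - E[T|X]
    Y_tilde = (Y - EY_X) / T_res                  # transformed target
    V = T_res ** 2                                # per-sample weights
    model_final.fit(X, Y_tilde, sample_weight=V)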
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and either `predict` or `predict_proba` methods,
depending on whether the treatment is discrete.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and either `predict` or `predict_proba`
methods, depending on whether the treatment is discrete.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_final : estimator
final model for predicting :math:`\\tilde{Y}` from X with sample weights V(X)
featurizer: transformer
The transformer used to featurize the raw features when fitting the final model. Must implement
a `fit_transform` method.
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
Examples
--------
A simple example:
.. testcode::
:hide:
import numpy as np
import scipy.special
np.set_printoptions(suppress=True)
.. testcode::
from econml.iv.dml import NonParamDMLIV
from econml.sklearn_extensions.linear_model import StatsModelsLinearRegression
# Define the data generation functions
def dgp(n, p, true_fn):
X = np.random.normal(0, 1, size=(n, p))
Z = np.random.binomial(1, 0.5, size=(n,))
nu = np.random.uniform(0, 10, size=(n,))
coef_Z = 0.8
C = np.random.binomial(
1, coef_Z * scipy.special.expit(0.4 * X[:, 0] + nu)
) # Compliers when recommended
C0 = np.random.binomial(
1, 0.06 * np.ones(X.shape[0])
) # Non-compliers when not recommended
T = C * Z + C0 * (1 - Z)
y = true_fn(X) * T + 2 * nu + 5 * (X[:, 3] > 0) + 0.1 * np.random.uniform(0, 1, size=(n,))
return y, T, Z, X
def true_heterogeneity_function(X):
return 5 * X[:, 0]
np.random.seed(123)
y, T, Z, X = dgp(1000, 5, true_heterogeneity_function)
est = NonParamDMLIV(
model_final=StatsModelsLinearRegression(),
discrete_treatment=True, discrete_instrument=True,
cv=5
)
est.fit(Y=y, T=T, Z=Z, X=X)
>>> est.effect(X[:3])
array([-5.52240..., 7.86930..., -3.57966...])
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_final,
discrete_treatment=False,
discrete_instrument=False,
featurizer=None,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_final = clone(model_final, safe=False)
self.featurizer = clone(featurizer, safe=False)
super().__init__(discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_y_xw(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
return _FirstStageWrapper(model_y_xw, True, self._gen_featurizer(),
False, False)
def _gen_model_t_xw(self):
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
return _FirstStageWrapper(model_t_xw, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_t_xwz(self):
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _FirstStageWrapper(model_t_xwz, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_final(self):
return clone(self.model_final, safe=False)
def _gen_ortho_learner_model_nuisance(self):
return _BaseDMLIVModelNuisance(self._gen_model_y_xw(), self._gen_model_t_xw(), self._gen_model_t_xwz())
def _gen_ortho_learner_model_final(self):
return _BaseDMLIVModelFinal(_FinalWrapper(self._gen_model_final(),
False,
self._gen_featurizer(),
True))
def shap_values(self, X, *, feature_names=None, treatment_names=None, output_names=None, background_samples=100):
return _shap_explain_model_cate(self.const_marginal_effect, self.model_cate, X, self._d_t, self._d_y,
featurizer=self.featurizer_,
feature_names=feature_names,
treatment_names=treatment_names,
output_names=output_names,
input_names=self._input_names,
background_samples=background_samples)
shap_values.__doc__ = LinearCateEstimator.shap_values.__doc__
@deprecated("The DMLATEIV class has been deprecated by OrthoIV class with parameter `projection=False`, "
"an upcoming release will remove support for the old name")
def DMLATEIV(model_Y_W,
model_T_W,
model_Z_W,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
return OrthoIV(model_y_xw=model_Y_W,
model_t_xw=model_T_W,
model_z_xw=model_Z_W,
projection=False,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
@deprecated("The DMLATEIV class has been deprecated by OrthoIV class with parameter `projection=True`, "
"an upcoming release will remove support for the old name")
def ProjectedDMLATEIV(model_Y_W,
model_T_W,
model_T_WZ,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
return OrthoIV(model_y_xw=model_Y_W,
model_t_xw=model_T_W,
model_t_xwz=model_T_WZ,
projection=True,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state) | PypiClean |
# /BuildStream-2.0.1-cp39-cp39-manylinux_2_28_x86_64.whl/buildstream/_stream.py
import itertools
import os
import sys
import stat
import shlex
import shutil
import tarfile
import tempfile
from contextlib import contextmanager, suppress
from collections import deque
from typing import List, Tuple, Optional, Iterable, Callable
from ._artifactelement import verify_artifact_ref, ArtifactElement
from ._artifactproject import ArtifactProject
from ._exceptions import StreamError, ImplError, BstError, ArtifactElementError, ArtifactError
from ._scheduler import (
Scheduler,
SchedStatus,
TrackQueue,
CacheQueryQueue,
FetchQueue,
SourcePushQueue,
BuildQueue,
PullQueue,
ArtifactPushQueue,
)
from .element import Element
from ._profile import Topics, PROFILER
from ._project import ProjectRefStorage
from ._remotespec import RemoteSpec
from ._state import State
from .types import _KeyStrength, _PipelineSelection, _Scope, _HostMount
from .plugin import Plugin
from . import utils, node, _yaml, _site, _pipeline
# Stream()
#
# This is the main, toplevel calling interface in BuildStream core.
#
# Args:
# context (Context): The Context object
# session_start (datetime): The time when the session started
# session_start_callback (callable): A callback to invoke when the session starts
# interrupt_callback (callable): A callback to invoke when we get interrupted
# ticker_callback (callable): Invoked every second while running the scheduler
#
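# A minimal usage sketch (hedged; assumes the frontend has already constructed
# a Context and a Project, and "element.bst" is a hypothetical element name):
#
#     stream = Stream(context, session_start)
#     stream.init()
#     stream.set_project(project)
#     elements = stream.load_selection(("element.bst",))
#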
class Stream:
def __init__(
self, context, session_start, *, session_start_callback=None, interrupt_callback=None, ticker_callback=None
):
#
# Public members
#
self.targets = [] # Resolved target elements
self.session_elements = [] # List of elements being processed this session
self.total_elements = [] # Total list of elements based on targets
self.queues = [] # Queue objects
#
# Private members
#
self._context = context
self._artifacts = None
self._elementsourcescache = None
self._sourcecache = None
self._project = None
self._state = State(session_start) # Owned by Stream, used by Core to set state
self._notification_queue = deque()
context.messenger.set_state(self._state)
self._scheduler = Scheduler(context, session_start, self._state, interrupt_callback, ticker_callback)
self._session_start_callback = session_start_callback
self._running = False
self._terminated = False
self._suspended = False
# init()
#
# Initialization of Stream that has side-effects that require it to be
# performed after the Stream is created.
#
def init(self):
self._artifacts = self._context.artifactcache
self._elementsourcescache = self._context.elementsourcescache
self._sourcecache = self._context.sourcecache
# cleanup()
#
# Cleans up application state
#
def cleanup(self):
# Reset the element loader state
Element._reset_load_state()
# Reset global state in node.pyx, this is for the sake of
# test isolation.
node._reset_global_state()
# set_project()
#
# Set the top-level project.
#
# Args:
# project (Project): The Project object
#
def set_project(self, project):
assert self._project is None
self._project = project
if self._project:
self._project.load_context.set_fetch_subprojects(self._fetch_subprojects)
# load_selection()
#
# An all purpose method for loading a selection of elements, this
# is primarily useful for the frontend to implement `bst show`
# and `bst shell`.
#
# Args:
# targets: Targets to pull
# selection: The selection mode for the specified targets (_PipelineSelection)
# except_targets: Specified targets to except from fetching
# load_artifacts (bool): Whether to load artifacts with artifact names
# connect_artifact_cache: Whether to try to contact remote artifact caches
# connect_source_cache: Whether to try to contact remote source caches
# artifact_remotes: Artifact cache remotes specified on the command line
# source_remotes: Source cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
# Returns:
# (list of Element): The selected elements
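# Example (hedged sketch; the element name is hypothetical):
#
#     elements = stream.load_selection(("element.bst",), selection=_PipelineSelection.ALL)
#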
def load_selection(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
except_targets: Iterable[str] = (),
load_artifacts: bool = False,
connect_artifact_cache: bool = False,
connect_source_cache: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
ignore_project_source_remotes: bool = False,
):
with PROFILER.profile(Topics.LOAD_SELECTION, "_".join(t.replace(os.sep, "-") for t in targets)):
target_objects = self._load(
targets,
selection=selection,
except_targets=except_targets,
load_artifacts=load_artifacts,
connect_artifact_cache=connect_artifact_cache,
connect_source_cache=connect_source_cache,
artifact_remotes=artifact_remotes,
source_remotes=source_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
return target_objects
# query_cache()
#
# Query the artifact and source caches to determine the cache status
# of the specified elements.
#
# Args:
# elements (list of Element): The elements to check
# sources_of_cached_elements (bool): True to query the source cache for elements with a cached artifact
# only_sources (bool): True to only query the source cache
#
def query_cache(self, elements, *, sources_of_cached_elements=False, only_sources=False):
# It doesn't make sense to combine these flags
assert not sources_of_cached_elements or not only_sources
with self._context.messenger.simple_task("Query cache", silent_nested=True) as task:
# Enqueue complete build plan as this is required to determine `buildable` status.
plan = list(_pipeline.dependencies(elements, _Scope.ALL))
if self._context.remote_cache_spec:
# Parallelize cache queries if a remote cache is configured
self._reset()
self._add_queue(
CacheQueryQueue(
self._scheduler, sources=only_sources, sources_if_cached=sources_of_cached_elements
),
track=True,
)
self._enqueue_plan(plan)
self._run()
else:
task.set_maximum_progress(len(plan))
for element in plan:
if element._can_query_cache():
# Cache status already available.
# This is the case for artifact elements, which load the
# artifact early on.
pass
elif not only_sources and element._get_cache_key(strength=_KeyStrength.WEAK):
element._load_artifact(pull=False)
if (
sources_of_cached_elements
or not element._can_query_cache()
or not element._cached_success()
):
element._query_source_cache()
if not element._pull_pending():
element._load_artifact_done()
elif element._has_all_sources_resolved():
element._query_source_cache()
task.add_current_progress()
# shell()
#
# Run a shell
#
# Args:
# target: The name of the element to run the shell for
# scope: The scope for the shell, only BUILD or RUN are valid (_Scope)
# prompt: A function to return the prompt to display in the shell
# unique_id: (str): A unique_id to use to lookup an Element instance
# mounts: Additional directories to mount into the sandbox
# isolate (bool): Whether to isolate the environment like we do in builds
# command (list): An argv to launch in the sandbox, or None
# usebuildtree (bool): Whether to use a buildtree as the source, given cli option
# artifact_remotes: Artifact cache remotes specified on the command line
# source_remotes: Source cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
# Returns:
# (int): The exit code of the launched shell
#
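# Example (hedged sketch; the element name and prompt function are hypothetical):
#
#     exit_code = stream.shell("element.bst", _Scope.RUN, lambda element: "[bst] $ ")
#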
def shell(
self,
target: str,
scope: int,
prompt: Callable[[Element], str],
*,
unique_id: Optional[str] = None,
mounts: Optional[List[_HostMount]] = None,
isolate: bool = False,
command: Optional[List[str]] = None,
usebuildtree: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
ignore_project_source_remotes: bool = False,
):
element: Element
# Load the Element via the unique_id if given
if unique_id and target is None:
element = Plugin._lookup(unique_id)
else:
if usebuildtree:
selection = _PipelineSelection.NONE
elif scope == _Scope.BUILD:
selection = _PipelineSelection.BUILD
else:
selection = _PipelineSelection.RUN
try:
elements = self.load_selection(
(target,),
selection=selection,
load_artifacts=True,
connect_artifact_cache=True,
connect_source_cache=True,
artifact_remotes=artifact_remotes,
source_remotes=source_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
except StreamError as e:
if e.reason == "deps-not-supported":
raise StreamError(
"Only buildtrees are supported with artifact names",
detail="Use the --build and --use-buildtree options to shell into a cached build tree",
reason="only-buildtrees-supported",
) from e
raise
# Get element to stage from `targets` list.
# If scope is BUILD, it will not be in the `elements` list.
assert len(self.targets) == 1
element = self.targets[0]
element._set_required(scope)
if scope == _Scope.BUILD:
pull_elements = [element] + elements
else:
pull_elements = elements
# Check whether the required elements are cached, and then
# try to pull them if they are not already cached.
#
self.query_cache(pull_elements)
self._pull_missing_artifacts(pull_elements)
# We don't need dependency artifacts to shell into a cached build tree
if not usebuildtree:
missing_deps = [dep for dep in _pipeline.dependencies([element], scope) if not dep._cached()]
if missing_deps:
raise StreamError(
"Elements need to be built or downloaded before staging a shell environment",
detail="\n".join(list(map(lambda x: x._get_full_name(), missing_deps))),
reason="shell-missing-deps",
)
# Check if we require a pull queue attempt, with given artifact state and context
if usebuildtree:
if not element._cached_buildroot():
if not element._cached():
message = "Artifact not cached locally or in available remotes"
reason = "missing-buildtree-artifact-not-cached"
elif element._buildroot_exists():
message = "Buildtree is not cached locally or in available remotes"
reason = "missing-buildtree-artifact-buildtree-not-cached"
else:
message = "Artifact was created without buildtree"
reason = "missing-buildtree-artifact-created-without-buildtree"
raise StreamError(message, reason=reason)
# Raise warning if the element is cached in a failed state
if element._cached_failure():
self._context.messenger.warn("using a buildtree from a failed build.")
# Ensure we have our sources if we are launching a build shell
if scope == _Scope.BUILD and not usebuildtree:
self.query_cache([element], only_sources=True)
self._fetch([element])
_pipeline.assert_sources_cached(self._context, [element])
return element._shell(
scope, mounts=mounts, isolate=isolate, prompt=prompt(element), command=command, usebuildtree=usebuildtree
)
# build()
#
# Builds (assembles) elements in the pipeline.
#
# Args:
# targets: Targets to build
# selection: The selection mode for the specified targets (_PipelineSelection)
# ignore_junction_targets: Whether junction targets should be filtered out
# artifact_remotes: Artifact cache remotes specified on the command line
# source_remotes: Source cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
# If no artifact remotes are specified on the command line, then the regular
# configuration will be used to determine where to push artifacts.
#
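# Example (hedged sketch; the element name is hypothetical):
#
#     stream.build(("element.bst",), selection=_PipelineSelection.NONE)
#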
def build(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
ignore_junction_targets: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
ignore_project_source_remotes: bool = False,
):
# Flag the build state
self._context.build = True
elements = self._load(
targets,
selection=selection,
ignore_junction_targets=ignore_junction_targets,
dynamic_plan=True,
connect_artifact_cache=True,
connect_source_cache=True,
artifact_remotes=artifact_remotes,
source_remotes=source_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
# Assert that the elements are consistent
_pipeline.assert_consistent(self._context, elements)
source_push_enabled = self._sourcecache.has_push_remotes()
# If source push is enabled, the source cache status of all elements
# is required, independent of whether the artifact is already available.
self.query_cache(elements, sources_of_cached_elements=source_push_enabled)
# Now construct the queues
#
self._reset()
if self._artifacts.has_fetch_remotes():
self._add_queue(PullQueue(self._scheduler))
self._add_queue(FetchQueue(self._scheduler, skip_cached=True))
self._add_queue(BuildQueue(self._scheduler, imperative=True))
if self._artifacts.has_push_remotes():
self._add_queue(ArtifactPushQueue(self._scheduler, skip_uncached=True))
if source_push_enabled:
self._add_queue(SourcePushQueue(self._scheduler))
# Enqueue elements
self._enqueue_plan(elements)
self._run(announce_session=True)
# fetch()
#
# Fetches sources on the pipeline.
#
# Args:
# targets: Targets to fetch
# selection: The selection mode for the specified targets (_PipelineSelection)
# except_targets: Specified targets to except from fetching
# source_remotes: Source cache remotes specified on the command line
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
def fetch(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
except_targets: Iterable[str] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_source_remotes: bool = False,
):
if self._context.remote_cache_spec:
self._context.messenger.warn(
"Cache Storage Service is configured, fetched sources may not be available in the local cache"
)
elements = self._load(
targets,
selection=selection,
except_targets=except_targets,
connect_source_cache=True,
source_remotes=source_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
self.query_cache(elements, only_sources=True)
# Delegated to a shared fetch method
self._fetch(elements, announce_session=True)
# track()
#
# Tracks all the sources of the selected elements.
#
# Args:
# targets (list of str): Targets to track
# selection (_PipelineSelection): The selection mode for the specified targets
# except_targets (list of str): Specified targets to except from tracking
# cross_junctions (bool): Whether tracking should cross junction boundaries
#
# If no error is encountered while tracking, then the project files
# are rewritten inline.
#
def track(self, targets, *, selection=_PipelineSelection.REDIRECT, except_targets=None, cross_junctions=False):
elements = self._load_tracking(
targets, selection=selection, except_targets=except_targets, cross_junctions=cross_junctions
)
# Note: We do not currently need to initialize the state of an
# element before it is tracked, since tracking can be done
# irrespective of source/artifact condition. Once an element
# is tracked, its state must be fully updated in either case,
# and we anyway don't do anything else with it.
self._reset()
track_queue = TrackQueue(self._scheduler)
self._add_queue(track_queue, track=True)
self._enqueue_plan(elements, queue=track_queue)
self._run(announce_session=True)
# source_push()
#
# Push sources.
#
# Args:
# targets (list of str): Targets to push
# selection (_PipelineSelection): The selection mode for the specified targets
# except_targets: Specified targets to except from pushing
# source_remotes: Source cache remotes specified on the command line
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
# If no source remotes are specified on the command line, then the regular
# configuration will be used to determine where to push sources.
#
# If any of the given targets are missing their expected sources,
# a fetch queue will be created if user context and available remotes allow for
# attempting to fetch them.
#
def source_push(
self,
targets,
*,
selection=_PipelineSelection.NONE,
except_targets: Iterable[str] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_source_remotes: bool = False,
):
elements = self._load(
targets,
selection=selection,
except_targets=except_targets,
load_artifacts=True,
connect_source_cache=True,
source_remotes=source_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
self.query_cache(elements, only_sources=True)
if not self._sourcecache.has_push_remotes():
raise StreamError("No source caches available for pushing sources")
_pipeline.assert_consistent(self._context, elements)
self._add_queue(FetchQueue(self._scheduler))
self._add_queue(SourcePushQueue(self._scheduler, imperative=True))
self._enqueue_plan(elements)
self._run(announce_session=True)
# pull()
#
# Pulls artifacts from remote artifact server(s)
#
# Args:
# targets: Targets to pull
# selection: The selection mode for the specified targets (_PipelineSelection)
# ignore_junction_targets: Whether junction targets should be filtered out
# artifact_remotes: Artifact cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
#
def pull(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
ignore_junction_targets: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
):
if self._context.remote_cache_spec:
self._context.messenger.warn(
"Cache Storage Service is configured, pulled artifacts may not be available in the local cache"
)
elements = self._load(
targets,
selection=selection,
ignore_junction_targets=ignore_junction_targets,
load_artifacts=True,
attempt_artifact_metadata=True,
connect_artifact_cache=True,
artifact_remotes=artifact_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
)
if not self._artifacts.has_fetch_remotes():
raise StreamError("No artifact caches available for pulling artifacts")
_pipeline.assert_consistent(self._context, elements)
self.query_cache(elements)
self._reset()
self._add_queue(PullQueue(self._scheduler))
self._enqueue_plan(elements)
self._run(announce_session=True)
# push()
#
# Pushes artifacts to remote artifact server(s), pulling them first if necessary,
# possibly from different remotes.
#
# Args:
# targets (list of str): Targets to push
# selection (_PipelineSelection): The selection mode for the specified targets
# ignore_junction_targets (bool): Whether junction targets should be filtered out
# artifact_remotes: Artifact cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
#
# If any of the given targets are missing their expected buildtree artifact,
# a pull queue will be created if user context and available remotes allow for
# attempting to fetch them.
#
def push(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
ignore_junction_targets: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
):
elements = self._load(
targets,
selection=selection,
ignore_junction_targets=ignore_junction_targets,
load_artifacts=True,
connect_artifact_cache=True,
artifact_remotes=artifact_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
)
if not self._artifacts.has_push_remotes():
raise StreamError("No artifact caches available for pushing artifacts")
_pipeline.assert_consistent(self._context, elements)
self.query_cache(elements)
self._reset()
self._add_queue(PullQueue(self._scheduler))
self._add_queue(ArtifactPushQueue(self._scheduler, imperative=True))
self._enqueue_plan(elements)
self._run(announce_session=True)
# checkout()
#
# Checkout target artifact to the specified location
#
# Args:
# target: Target to checkout
# location: Location to checkout the artifact to
# force: Whether files can be overwritten if necessary
# selection: The selection mode for the specified targets (_PipelineSelection)
# integrate: Whether to run integration commands
# hardlinks: Whether checking out files hardlinked to
# their artifacts is acceptable
# tar: If true, a tarball from the artifact contents will
# be created, otherwise the file tree of the artifact
# will be placed at the given location. If true and
# location is '-', the tarball will be dumped on the
# standard output.
# artifact_remotes: Artifact cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
#
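# Example (hedged sketch; the element name and location are hypothetical):
#
#     stream.checkout("element.bst", location="./checkout", hardlinks=False)
#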
def checkout(
self,
target: str,
*,
location: Optional[str] = None,
force: bool = False,
selection: str = _PipelineSelection.RUN,
integrate: bool = True,
hardlinks: bool = False,
compression: str = "",
tar: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
):
elements = self._load(
(target,),
selection=selection,
load_artifacts=True,
attempt_artifact_metadata=True,
connect_artifact_cache=True,
artifact_remotes=artifact_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
)
# self.targets contains a list of the loaded target objects
# if we specify --deps build, Stream._load() will return a list
# of build dependency objects, however, we need to prepare a sandbox
# with the target (which has had its appropriate dependencies loaded)
element: Element = self.targets[0]
self._check_location_writable(location, force=force, tar=tar)
# Check whether the required elements are cached, and then
# try to pull them if they are not already cached.
#
self.query_cache(elements)
self._pull_missing_artifacts(elements)
try:
scope = {
_PipelineSelection.RUN: _Scope.RUN,
_PipelineSelection.BUILD: _Scope.BUILD,
_PipelineSelection.NONE: _Scope.NONE,
_PipelineSelection.ALL: _Scope.ALL,
}
with element._prepare_sandbox(scope=scope[selection], integrate=integrate) as sandbox:
# Copy or move the sandbox to the target directory
virdir = sandbox.get_virtual_directory()
self._export_artifact(tar, location, compression, element, hardlinks, virdir)
except BstError as e:
raise StreamError(
"Error while staging dependencies into a sandbox" ": '{}'".format(e), detail=e.detail, reason=e.reason
) from e
# _export_artifact()
#
# Export the files of the artifact/a tarball to a virtual directory
#
# Args:
# tar (bool): Whether we want to create a tarfile
# location (str): The name of the directory/the tarfile we want to export to/create
# compression (str): The type of compression for the tarball
# target (Element/ArtifactElement): The Element/ArtifactElement we want to checkout
# hardlinks (bool): Whether to checkout hardlinks instead of copying
# virdir (Directory): The sandbox's root directory as a virtual directory
#
def _export_artifact(self, tar, location, compression, target, hardlinks, virdir):
if not tar:
with target.timed_activity("Checking out files in '{}'".format(location)):
try:
if hardlinks:
try:
utils.safe_remove(location)
except OSError as e:
raise StreamError("Failed to remove checkout directory: {}".format(e)) from e
virdir._export_files(location, can_link=True, can_destroy=True)
else:
virdir._export_files(location)
except OSError as e:
raise StreamError("Failed to checkout files: '{}'".format(e)) from e
else:
to_stdout = location == "-"
mode = _handle_compression(compression, to_stream=to_stdout)
with target.timed_activity("Creating tarball"):
if to_stdout:
# Save the stdout FD to restore later
saved_fd = os.dup(sys.stdout.fileno())
try:
with os.fdopen(sys.stdout.fileno(), "wb") as fo:
with tarfile.open(fileobj=fo, mode=mode) as tf:
virdir.export_to_tar(tf, ".")
finally:
# No matter what, restore stdout for further use
os.dup2(saved_fd, sys.stdout.fileno())
os.close(saved_fd)
else:
with tarfile.open(location, mode=mode) as tf:
virdir.export_to_tar(tf, ".")
# artifact_show()
#
# Show cached artifacts
#
# Args:
# targets (str): Targets to show the cached state of
#
def artifact_show(self, targets, *, selection=_PipelineSelection.NONE):
# Obtain list of Element and/or ArtifactElement objects
target_objects = self.load_selection(
targets, selection=selection, connect_artifact_cache=True, load_artifacts=True
)
self.query_cache(target_objects)
if self._artifacts.has_fetch_remotes():
self._resolve_cached_remotely(target_objects)
return target_objects
# artifact_log()
#
# Show the full log of an artifact
#
# Args:
# targets (list of str): Targets to view the logs of
#
# Returns:
# artifact_logs (dict): A dictionary mapping element names to their artifact logs
#
def artifact_log(self, targets):
# Return list of Element and/or ArtifactElement objects
target_objects = self.load_selection(targets, selection=_PipelineSelection.NONE, load_artifacts=True)
self.query_cache(target_objects)
artifact_logs = {}
for obj in target_objects:
ref = obj.get_artifact_name()
if not obj._cached():
self._context.messenger.warn("{} is not cached".format(ref))
continue
if not obj._cached_logs():
self._context.messenger.warn("{} is cached without log files".format(ref))
continue
artifact_logs[obj.name] = obj._get_logs()
return artifact_logs
# artifact_list_contents()
#
# Show a list of content of an artifact
#
# Args:
# targets (list of str): Targets to view the contents of
#
# Returns:
# elements_to_files (Dict[str, Directory]): A dictionary mapping each artifact name to its contents
#
def artifact_list_contents(self, targets):
# Return list of Element and/or ArtifactElement objects
target_objects = self.load_selection(targets, selection=_PipelineSelection.NONE, load_artifacts=True)
self.query_cache(target_objects)
elements_to_files = {}
for obj in target_objects:
ref = obj.get_artifact_name()
if not obj._cached():
self._context.messenger.warn("{} is not cached".format(ref))
obj.name = {ref: "No artifact cached"}
continue
if isinstance(obj, ArtifactElement):
obj.name = ref
# Just hand over a Directory here
artifact = obj._get_artifact()
files = artifact.get_files()
elements_to_files[obj.name] = files
return elements_to_files
# artifact_delete()
#
# Remove artifacts from the local cache
#
# Args:
# targets (list of str): Targets to remove
#
def artifact_delete(self, targets, *, selection=_PipelineSelection.NONE):
# Return list of Element and/or ArtifactElement objects
target_objects = self.load_selection(targets, selection=selection, load_artifacts=True)
self.query_cache(target_objects)
# Some of the targets may refer to the same key, so first obtain a
# set of the refs to be removed.
remove_refs = set()
for obj in target_objects:
for key_strength in [_KeyStrength.STRONG, _KeyStrength.WEAK]:
key = obj._get_cache_key(strength=key_strength)
remove_refs.add(obj.get_artifact_name(key=key))
ref_removed = False
for ref in remove_refs:
try:
self._artifacts.remove(ref)
except ArtifactError as e:
self._context.messenger.warn(str(e))
continue
self._context.messenger.info("Removed: {}".format(ref))
ref_removed = True
if not ref_removed:
self._context.messenger.info("No artifacts were removed")
# source_checkout()
#
# Checkout sources of the target element to the specified location
#
# Args:
# target: The target element whose sources to checkout
# location: Location to checkout the sources to
# force: Whether to overwrite existing directories/tarfiles
# deps: The selection mode for the specified targets (_PipelineSelection)
# except_targets: List of targets to except from staging
# tar: Whether to write a tarfile holding the checkout contents
# compression: The type of compression for tarball
# include_build_scripts: Whether to include build scripts in the checkout
#    source_remotes: Source cache remotes specified on the command line
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
def source_checkout(
self,
target: str,
*,
location: Optional[str] = None,
force: bool = False,
deps=_PipelineSelection.NONE,
except_targets: Iterable[str] = (),
tar: bool = False,
compression: Optional[str] = None,
include_build_scripts: bool = False,
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_source_remotes: bool = False,
):
self._check_location_writable(location, force=force, tar=tar)
elements = self._load(
(target,),
selection=deps,
except_targets=except_targets,
connect_source_cache=True,
source_remotes=source_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
# Assert all sources are cached in the source dir
self.query_cache(elements, only_sources=True)
self._fetch(elements)
_pipeline.assert_sources_cached(self._context, elements)
# Stage all sources determined by scope
try:
self._source_checkout(elements, location, force, deps, tar, compression, include_build_scripts)
except BstError as e:
raise StreamError(
"Error while writing sources" ": '{}'".format(e), detail=e.detail, reason=e.reason
) from e
self._context.messenger.info("Checked out sources to '{}'".format(location))
# workspace_open
#
# Open a project workspace
#
# Args:
# targets (list): List of target elements to open workspaces for
# no_checkout (bool): Whether to skip checking out the source
# force (bool): Whether to ignore contents in an existing directory
# custom_dir (str): Custom location to create a workspace or false to use default location.
#    source_remotes: Source cache remotes specified on the command line
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
def workspace_open(
self,
targets: Iterable[str],
*,
no_checkout: bool = False,
force: bool = False,
custom_dir: Optional[str] = None,
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_source_remotes: bool = False,
):
# This function is a little funny but it is trying to be as atomic as possible.
elements = self._load(
targets,
selection=_PipelineSelection.REDIRECT,
connect_source_cache=True,
source_remotes=source_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
workspaces = self._context.get_workspaces()
# If we're going to checkout, we need at least a fetch,
#
if not no_checkout:
self.query_cache(elements, only_sources=True)
self._fetch(elements, fetch_original=True)
expanded_directories = []
# To try to be more atomic, loop through the elements and raise any errors we can early
for target in elements:
if not list(target.sources()):
build_depends = [x.name for x in target._dependencies(_Scope.BUILD, recurse=False)]
if not build_depends:
raise StreamError("The element {} has no sources".format(target.name))
detail = "Try opening a workspace on one of its dependencies instead:\n"
detail += " \n".join(build_depends)
raise StreamError("The element {} has no sources".format(target.name), detail=detail)
# Check for workspace config
workspace = workspaces.get_workspace(target._get_full_name())
if workspace:
if not force:
raise StreamError(
"Element '{}' already has an open workspace defined at: {}".format(
target.name, workspace.get_absolute_path()
)
)
if not no_checkout:
target.warn(
"Replacing existing workspace for element '{}' defined at: {}".format(
target.name, workspace.get_absolute_path()
)
)
self.workspace_close(target._get_full_name(), remove_dir=not no_checkout)
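# When no custom directory is given, default the workspace directory to <workspacedir>/<element name>, stripping a trailing ".bst"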
if not custom_dir:
directory = os.path.abspath(os.path.join(self._context.workspacedir, target.name))
if directory[-4:] == ".bst":
directory = directory[:-4]
expanded_directories.append(directory)
if custom_dir:
if len(elements) != 1:
raise StreamError(
"Exactly one element can be given if --directory is used",
reason="directory-with-multiple-elements",
)
directory = os.path.abspath(custom_dir)
expanded_directories = [
directory,
]
else:
# If this fails it is a bug in whatever calls this, usually cli.py, and so cannot be tested for via the
# run bst test mechanism.
assert len(elements) == len(expanded_directories)
for target, directory in zip(elements, expanded_directories):
if os.path.exists(directory):
if not os.path.isdir(directory):
raise StreamError(
"For element '{}', Directory path is not a directory: {}".format(target.name, directory),
reason="bad-directory",
)
if not (no_checkout or force) and os.listdir(directory):
raise StreamError(
"For element '{}', Directory path is not empty: {}".format(target.name, directory),
reason="bad-directory",
)
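# If the directory is non-empty and sources will be checked out, force clears it first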
if os.listdir(directory):
if force and not no_checkout:
utils._force_rmtree(directory)
# So far this function has tried to catch as many issues as possible without making any changes.
# Now it does the bits that cannot be made atomic.
targetGenerator = zip(elements, expanded_directories)
for target, directory in targetGenerator:
self._context.messenger.status("Creating workspace for element {}".format(target.name))
workspace = workspaces.get_workspace(target._get_full_name())
if workspace and not no_checkout:
workspaces.delete_workspace(target._get_full_name())
workspaces.save_config()
utils._force_rmtree(directory)
try:
os.makedirs(directory, exist_ok=True)
except OSError as e:
todo_elements = " ".join([str(target.name) for target, directory_dict in targetGenerator])
if todo_elements:
# This output should make creating the remaining workspaces as easy as possible.
todo_elements = "\nDid not try to create workspaces for " + todo_elements
raise StreamError(
"Failed to create workspace directory: {}".format(e),
reason="workspace-directory-failure",
detail=todo_elements,
) from e
workspaces.create_workspace(target, directory, checkout=not no_checkout)
self._context.messenger.info("Created a workspace for element {}".format(target._get_full_name()))
# workspace_close
#
# Close a project workspace
#
# Args:
# element_name (str): The element name to close the workspace for
# remove_dir (bool): Whether to remove the associated directory
#
def workspace_close(self, element_name, *, remove_dir):
self._assert_project("Unable to locate workspaces")
workspaces = self._context.get_workspaces()
workspace = workspaces.get_workspace(element_name)
# Remove workspace directory if prompted
if remove_dir:
with self._context.messenger.timed_activity(
"Removing workspace directory {}".format(workspace.get_absolute_path())
):
try:
shutil.rmtree(workspace.get_absolute_path())
except OSError as e:
raise StreamError("Could not remove '{}': {}".format(workspace.get_absolute_path(), e)) from e
# Delete the workspace and save the configuration
workspaces.delete_workspace(element_name)
workspaces.save_config()
self._context.messenger.info("Closed workspace for {}".format(element_name))
# workspace_reset
#
# Reset a workspace to its original state, discarding any user
# changes.
#
# Args:
# targets (list of str): The target elements to reset the workspace for
# soft (bool): Only set the workspace state to not prepared
#
def workspace_reset(self, targets, *, soft):
self._assert_project("Unable to locate workspaces")
elements = self._load(targets, selection=_PipelineSelection.REDIRECT)
nonexisting = []
for element in elements:
if not self.workspace_exists(element.name):
nonexisting.append(element.name)
if nonexisting:
raise StreamError("Workspace does not exist", detail="\n".join(nonexisting))
workspaces = self._context.get_workspaces()
for element in elements:
workspace = workspaces.get_workspace(element._get_full_name())
workspace_path = workspace.get_absolute_path()
if soft:
workspace.last_build = None
self._context.messenger.info(
"Reset workspace state for {} at: {}".format(element.name, workspace_path)
)
continue
self.workspace_close(element._get_full_name(), remove_dir=True)
workspaces.save_config()
self.workspace_open([element._get_full_name()], no_checkout=False, force=True, custom_dir=workspace_path)
# workspace_exists
#
# Check if a workspace exists
#
# Args:
#    element_name (str): The element name to check for a workspace, or None
#
# Returns:
# (bool): True if the workspace exists
#
# If None is specified for `element_name`, then this will return
# True if there are any existing workspaces.
#
def workspace_exists(self, element_name=None):
self._assert_project("Unable to locate workspaces")
workspaces = self._context.get_workspaces()
if element_name:
workspace = workspaces.get_workspace(element_name)
if workspace:
return True
elif any(workspaces.list()):
return True
return False
# workspace_list
#
# Serializes the workspaces and dumps them in YAML to stdout.
#
def workspace_list(self):
self._assert_project("Unable to locate workspaces")
workspaces = []
for element_name, workspace_ in self._context.get_workspaces().list():
workspace_detail = {
"element": element_name,
"directory": workspace_.get_absolute_path(),
}
workspaces.append(workspace_detail)
_yaml.roundtrip_dump({"workspaces": workspaces})
# redirect_element_names()
#
# Takes a list of element names and returns a list where elements have been
# redirected to their source elements if the element file exists, and just
# the name, if not.
#
# Args:
# elements (list of str): The element names to redirect
#
# Returns:
# (list of str): The element names after redirecting
#
def redirect_element_names(self, elements):
element_dir = self._project.element_path
load_elements = []
output_elements = set()
for e in elements:
element_path = os.path.join(element_dir, e)
if os.path.exists(element_path):
load_elements.append(e)
else:
output_elements.add(e)
if load_elements:
loaded_elements = self._load(load_elements, selection=_PipelineSelection.REDIRECT)
for e in loaded_elements:
output_elements.add(e.name)
return list(output_elements)
# get_state()
#
# Get the State object owned by Stream
#
# Returns:
# State: The State object
def get_state(self):
return self._state
#############################################################
# Scheduler API forwarding #
#############################################################
# running
#
# Whether the scheduler is running
#
@property
def running(self):
return self._running
# suspended
#
# Whether the scheduler is currently suspended
#
@property
def suspended(self):
return self._suspended
# terminated
#
# Whether the scheduler is currently terminated
#
@property
def terminated(self):
return self._terminated
# terminate()
#
# Terminate jobs
#
def terminate(self):
self._scheduler.terminate()
self._terminated = True
# quit()
#
# Quit the session. This will allow any ongoing jobs to
# complete; use Stream.terminate() instead to cancel
# ongoing jobs.
#
def quit(self):
self._scheduler.stop()
# suspend()
#
# Context manager to suspend ongoing jobs
#
@contextmanager
def suspend(self):
self._scheduler.suspend()
self._suspended = True
yield
self._suspended = False
self._scheduler.resume()
# retry_job()
#
# Retry the indicated job
#
# Args:
# action_name: The unique identifier of the task
# unique_id: A unique_id to load an Element instance
#
def retry_job(self, action_name: str, unique_id: str) -> None:
element = Plugin._lookup(unique_id)
#
# Update the state task group, remove the failed element
#
group = self._state.task_groups[action_name]
group.failed_tasks.remove(element._get_full_name())
#
# Find the queue for this action name and requeue the element
#
queue = None
for q in self.queues:
if q.action_name == action_name:
queue = q
assert queue
queue.enqueue([element])
#############################################################
# Private Methods #
#############################################################
# _assert_project()
#
# Raises an assertion if a project was not loaded
#
# Args:
# message: The user facing error message, e.g. "Unable to load elements"
#
# Raises:
# A StreamError with reason "project-not-loaded" is raised if no project was loaded
#
def _assert_project(self, message: str) -> None:
if not self._project:
raise StreamError(
message, detail="No project.conf or active workspace was located", reason="project-not-loaded"
)
# _fetch_subprojects()
#
# Fetch subprojects as part of the project and element loading process.
#
# Args:
# junctions (list of Element): The junctions to fetch
#
def _fetch_subprojects(self, junctions):
self._reset()
queue = FetchQueue(self._scheduler)
queue.enqueue(junctions)
self.queues = [queue]
self._run()
# _load_artifacts()
#
# Loads artifacts from target artifact refs
#
# Args:
# artifact_names (list): List of target artifact names to load
#
# Returns:
# (list): A list of loaded ArtifactElement
#
def _load_artifacts(self, artifact_names):
with self._context.messenger.simple_task("Loading artifacts") as task:
# Use a set here to avoid duplicates.
#
# ArtifactElement.new_from_artifact_name() will take care of ensuring
# uniqueness of multiple artifact names which refer to the same artifact
# (e.g., if both weak and strong names happen to be requested), here we
# still need to ensure we generate a list that does not contain duplicates.
#
artifacts = set()
for artifact_name in artifact_names:
artifact = ArtifactElement.new_from_artifact_name(artifact_name, self._context, task)
artifacts.add(artifact)
ArtifactElement.clear_artifact_name_cache()
ArtifactProject.clear_project_cache()
return list(artifacts)
# _load_elements()
#
# Loads elements from target names.
#
# This function is called with a list of lists, such that multiple
# target groups may be specified. Element names specified in `targets`
# are allowed to be redundant.
#
# Args:
# target_groups (list of lists): Groups of toplevel targets to load
#
# Returns:
# (tuple of lists): A tuple of Element object lists, grouped corresponding to target_groups
#
def _load_elements(self, target_groups):
# First concatenate all the lists for the loader's sake
targets = list(itertools.chain(*target_groups))
with PROFILER.profile(Topics.LOAD_PIPELINE, "_".join(t.replace(os.sep, "-") for t in targets)):
elements = self._project.load_elements(targets)
# Now create element groups to match the input target groups
elt_iter = iter(elements)
element_groups = [[next(elt_iter) for i in range(len(group))] for group in target_groups]
return tuple(element_groups)
# _load_elements_from_targets
#
# Given the usual set of target element names/artifact refs, load
# the `Element` objects required to describe the selection.
#
# The result is returned as a tuple - firstly the loaded normal
# elements, secondly the loaded "excepting" elements and lastly
# the loaded artifact elements.
#
# Args:
# targets - The target element names/artifact refs
# except_targets - The names of elements to except
# rewritable - Whether to load the elements in re-writable mode
# valid_artifact_names: Whether artifact names are valid
#
# Returns:
# ([elements], [except_elements], [artifact_elements])
#
def _load_elements_from_targets(
self,
targets: Iterable[str],
except_targets: Iterable[str],
*,
rewritable: bool = False,
valid_artifact_names: bool = False,
) -> Tuple[List[Element], List[Element], List[Element]]:
# First determine which of the user specified targets are artifact
# names and which are element names.
element_names, artifact_names = self._expand_and_classify_targets(
targets, valid_artifact_names=valid_artifact_names
)
# We need a project in order to load elements
if element_names:
self._assert_project("Unable to load elements: {}".format(", ".join(element_names)))
if self._project:
self._project.load_context.set_rewritable(rewritable)
# Load elements and except elements
if element_names:
elements, except_elements = self._load_elements([element_names, except_targets])
else:
elements, except_elements = [], []
# Load artifacts
if artifact_names:
artifacts = self._load_artifacts(artifact_names)
else:
artifacts = []
return elements, except_elements, artifacts
# _resolve_cached_remotely()
#
# Checks whether the listed elements are currently cached in
# any of their respectively configured remotes.
#
# Args:
# targets (list [Element]): The list of element targets
#
def _resolve_cached_remotely(self, targets):
with self._context.messenger.simple_task("Querying remotes for cached status", silent_nested=True) as task:
task.set_maximum_progress(len(targets))
for element in targets:
element._cached_remotely()
task.add_current_progress()
# _pull_missing_artifacts()
#
# Pull missing artifacts from available remotes, this runs the scheduler
# just to pull the artifacts if any of the artifacts are missing locally,
# and is used in commands which need to use the artifacts.
#
# This function requires Stream.query_cache() to be called in advance
# in order to determine which artifacts to try and pull.
#
# Args:
# elements (list [Element]): The selected list of required elements
#
def _pull_missing_artifacts(self, elements):
uncached_elts = [elt for elt in elements if elt._pull_pending()]
if uncached_elts:
self._context.messenger.info("Attempting to fetch missing or incomplete artifact(s)")
self._reset()
self._add_queue(PullQueue(self._scheduler))
self._enqueue_plan(uncached_elts)
self._run(announce_session=True)
# _load_tracking()
#
# A variant of _load() to be used when the elements should be used
# for tracking
#
# If `targets` is not empty, the project configuration in use will be
# fully loaded.
#
# Args:
# targets (list of str): Targets to load
# selection (_PipelineSelection): The selection mode for the specified targets
# except_targets (list of str): Specified targets to except
# cross_junctions (bool): Whether tracking should cross junction boundaries
#
# Returns:
# (list of Element): The tracking element selection
#
def _load_tracking(self, targets, *, selection=_PipelineSelection.NONE, except_targets=(), cross_junctions=False):
elements, except_elements, artifacts = self._load_elements_from_targets(
targets, except_targets, rewritable=True
)
# We can't track artifact refs, since they have no underlying
# elements or sources to interact with. Abort if the user asks
# us to do that.
if artifacts:
detail = "\n".join(artifact.get_artifact_name() for artifact in artifacts)
raise ArtifactElementError("Cannot perform this operation with artifact refs:", detail=detail)
# Hold on to the targets
self.targets = elements
track_projects = {}
for element in elements:
project = element._get_project()
if project not in track_projects:
track_projects[project] = [element]
else:
track_projects[project].append(element)
track_selected = []
for project, project_elements in track_projects.items():
selected = _pipeline.get_selection(self._context, project_elements, selection)
selected = self._track_cross_junction_filter(project, selected, cross_junctions)
track_selected.extend(selected)
return _pipeline.except_elements(elements, track_selected, except_elements)
# _track_cross_junction_filter()
#
# Filters out elements which are across junction boundaries,
# otherwise asserts that there are no such elements.
#
# This is currently assumed to be only relevant for element
# lists targeted at tracking.
#
# Args:
# project (Project): Project used for cross_junction filtering.
# All elements are expected to belong to that project.
# elements (list of Element): The list of elements to filter
# cross_junction_requested (bool): Whether the user requested
# cross junction tracking
#
# Returns:
# (list of Element): The filtered or asserted result
#
def _track_cross_junction_filter(self, project, elements, cross_junction_requested):
# First filter out cross junctioned elements
if not cross_junction_requested:
elements = [element for element in elements if element._get_project() is project]
# We can track anything if the toplevel project uses project.refs
#
if self._project.ref_storage == ProjectRefStorage.PROJECT_REFS:
return elements
# Ideally, we would want to report every cross junction element but not
# their dependencies, unless those cross junction elements' dependencies
# were also explicitly requested on the command line.
#
# But this is too hard, let's shoot for a simple error.
for element in elements:
element_project = element._get_project()
if element_project is not self._project:
detail = (
"Requested to track sources across junction boundaries\n"
+ "in a project which does not use project.refs ref-storage."
)
raise StreamError("Untrackable sources", detail=detail, reason="untrackable-sources")
return elements
# _load()
#
# A convenience method for loading element lists
#
# If `targets` is not empty, the project configuration in use will be
# fully loaded.
#
# Args:
# targets: Main targets to load
# selection: The selection mode for the specified targets (_PipelineSelection)
# except_targets: Specified targets to except from fetching
# ignore_junction_targets (bool): Whether junction targets should be filtered out
# dynamic_plan: Require artifacts as needed during the build
# load_artifacts: Whether to load artifacts with artifact names
# attempt_artifact_metadata: Whether to attempt to download artifact metadata in
# order to deduce build dependencies and reload.
# connect_artifact_cache: Whether to try to contact remote artifact caches
# connect_source_cache: Whether to try to contact remote source caches
#    artifact_remotes: Artifact cache remotes specified on the command line
#    source_remotes: Source cache remotes specified on the command line
# ignore_project_artifact_remotes: Whether to ignore artifact remotes specified by projects
# ignore_project_source_remotes: Whether to ignore source remotes specified by projects
#
# Returns:
# (list of Element): The primary element selection
#
def _load(
self,
targets: Iterable[str],
*,
selection: str = _PipelineSelection.NONE,
except_targets: Iterable[str] = (),
ignore_junction_targets: bool = False,
dynamic_plan: bool = False,
load_artifacts: bool = False,
attempt_artifact_metadata: bool = False,
connect_artifact_cache: bool = False,
connect_source_cache: bool = False,
artifact_remotes: Iterable[RemoteSpec] = (),
source_remotes: Iterable[RemoteSpec] = (),
ignore_project_artifact_remotes: bool = False,
ignore_project_source_remotes: bool = False,
):
elements, except_elements, artifacts = self._load_elements_from_targets(
targets, except_targets, rewritable=False, valid_artifact_names=load_artifacts
)
if artifacts:
if selection in (_PipelineSelection.ALL, _PipelineSelection.RUN):
raise StreamError(
"Error: '--deps {}' is not supported for artifact names".format(selection),
reason="deps-not-supported",
)
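# Optionally filter junction elements out of the toplevel targets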
if ignore_junction_targets:
elements = [e for e in elements if e.get_kind() != "junction"]
# Hold on to the targets
self.targets = elements
# Connect to remote caches, this needs to be done before resolving element state
self._context.initialize_remotes(
connect_artifact_cache,
connect_source_cache,
artifact_remotes,
source_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
# In some cases we need to have an actualized artifact, with all of
# its metadata, such that we can derive attributes about the artifact
# like its build dependencies.
if artifacts and attempt_artifact_metadata:
#
# FIXME: We need a semantic here to download only the metadata
#
for element in artifacts:
element._set_required(_Scope.NONE)
self.query_cache(artifacts)
self._reset()
self._add_queue(PullQueue(self._scheduler))
self._enqueue_plan(artifacts)
self._run()
#
# After obtaining the metadata for the toplevel specified artifact
# targets, we need to reload just the artifacts.
#
artifact_targets = [e.get_artifact_name() for e in artifacts]
_, _, artifacts = self._load_elements_from_targets(
artifact_targets, [], rewritable=False, valid_artifact_names=True
)
# It can be that new remotes have been added by way of loading new
# projects referenced by the new artifact elements, so we need to
# ensure those remotes are also initialized.
#
self._context.initialize_remotes(
connect_artifact_cache,
connect_source_cache,
artifact_remotes,
source_remotes,
ignore_project_artifact_remotes=ignore_project_artifact_remotes,
ignore_project_source_remotes=ignore_project_source_remotes,
)
self.targets += artifacts
# Now move on to loading primary selection.
#
selected = _pipeline.get_selection(
self._context, self.targets, selection, silent=False, depth_sort=dynamic_plan
)
selected = _pipeline.except_elements(self.targets, selected, except_elements)
# Mark the appropriate required elements
#
required_elements: List[Element] = []
if dynamic_plan:
#
# In a dynamic build plan, we only require the top-level targets and
# rely on state changes during processing to determine which elements
# must be processed.
#
if selection == _PipelineSelection.NONE:
required_elements = elements
elif selection == _PipelineSelection.BUILD:
required_elements = list(_pipeline.dependencies(elements, _Scope.BUILD, recurse=False))
# Without a dynamic build plan, or if `all` selection was made, then everything is required
if not required_elements:
required_elements = selected
for element in required_elements:
element._set_required()
return selected
# _reset()
#
# Resets the internal state related to a given scheduler run.
#
# Invocations to the scheduler should start with a _reset() and end
# with _run() like so:
#
# self._reset()
# self._add_queue(...)
# self._add_queue(...)
# self._enqueue_plan(...)
# self._run()
#
def _reset(self):
self._scheduler.clear_queues()
self.session_elements = []
self.total_elements = []
# _add_queue()
#
# Adds a queue to the stream
#
# Args:
# queue (Queue): Queue to add to the pipeline
#
def _add_queue(self, queue, *, track=False):
if not track and not self.queues:
# First non-track queue
queue.set_required_element_check()
self.queues.append(queue)
# _enqueue_plan()
#
# Enqueues planned elements to the specified queue.
#
# Args:
# plan (list of Element): The list of elements to be enqueued
# queue (Queue): The target queue, defaults to the first queue
#
def _enqueue_plan(self, plan, *, queue=None):
queue = queue or self.queues[0]
queue.enqueue(plan)
self.session_elements += plan
# _run()
#
# Common function for running the scheduler
#
# Args:
# announce_session (bool): Whether to announce the session in the frontend.
#
def _run(self, *, announce_session: bool = False):
# Inform the frontend of the full list of elements
# and the list of elements which will be processed in this run
#
self.total_elements = list(_pipeline.dependencies(self.targets, _Scope.ALL))
if announce_session and self._session_start_callback is not None:
self._session_start_callback()
self._running = True
status = self._scheduler.run(self.queues, self._context.get_cascache().get_casd_process_manager())
self._running = False
if status == SchedStatus.ERROR:
raise StreamError()
if status == SchedStatus.TERMINATED:
raise StreamError(terminated=True)
# _fetch()
#
# Performs the fetch job, the body of this function is here because
# it is shared between a few internals.
#
# Args:
# elements (list of Element): Elements to fetch
#    fetch_original (bool): Whether to fetch the original unstaged sources
# announce_session (bool): Whether to announce the session in the frontend
#
def _fetch(self, elements: List[Element], *, fetch_original: bool = False, announce_session: bool = False):
# Assert consistency for the fetch elements
_pipeline.assert_consistent(self._context, elements)
# Construct queues, enqueue and run
#
self._reset()
self._add_queue(FetchQueue(self._scheduler, fetch_original=fetch_original))
self._enqueue_plan(elements)
self._run(announce_session=announce_session)
# _check_location_writable()
#
# Check if given location is writable.
#
# Args:
# location (str): Destination path
# force (bool): Allow files to be overwritten
# tar (bool): Whether destination is a tarball
#
# Raises:
# (StreamError): If the destination is not writable
#
def _check_location_writable(self, location, force=False, tar=False):
if not tar:
try:
os.makedirs(location, exist_ok=True)
except OSError as e:
raise StreamError("Failed to create destination directory: '{}'".format(e)) from e
if not os.access(location, os.W_OK):
raise StreamError("Destination directory '{}' not writable".format(location))
if not force and os.listdir(location):
raise StreamError("Destination directory '{}' not empty".format(location))
elif os.path.exists(location) and location != "-":
if not os.access(location, os.W_OK):
raise StreamError("Output file '{}' not writable".format(location))
if not force and os.path.exists(location):
raise StreamError("Output file '{}' already exists".format(location))
# Helper function for source_checkout()
def _source_checkout(
self,
elements,
location=None,
force=False,
deps="none",
tar=False,
compression=None,
include_build_scripts=False,
):
location = os.path.abspath(location)
# Stage all our sources in a temporary directory. This
# directory can then either be used to construct a tarball or be moved
# to the final desired location.
temp_source_dir = tempfile.TemporaryDirectory(dir=self._context.tmpdir) # pylint: disable=consider-using-with
try:
self._write_element_sources(temp_source_dir.name, elements)
if include_build_scripts:
self._write_build_scripts(temp_source_dir.name, elements)
if tar:
self._create_tarball(temp_source_dir.name, location, compression)
else:
self._move_directory(temp_source_dir.name, location, force)
except OSError as e:
raise StreamError("Failed to checkout sources to {}: {}".format(location, e)) from e
finally:
with suppress(FileNotFoundError):
temp_source_dir.cleanup()
# Move a directory src to dest. This will work across devices and
# may optionally overwrite existing files.
def _move_directory(self, src, dest, force=False):
def is_empty_dir(path):
return os.path.isdir(path) and not os.listdir(path)
try:
os.rename(src, dest)
return
except OSError:
pass
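# The rename failed (e.g. src and dest are on different devices), so fall back to linking the files into place when allowed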
if force or is_empty_dir(dest):
try:
utils.link_files(src, dest)
except utils.UtilError as e:
raise StreamError("Failed to move directory: {}".format(e)) from e
# Write the element build script to the given directory
def _write_element_script(self, directory, element):
try:
element._write_script(directory)
except ImplError:
return False
return True
# Write all source elements to the given directory
def _write_element_sources(self, directory, elements):
for element in elements:
element_source_dir = self._get_element_dirname(directory, element)
if list(element.sources()):
os.makedirs(element_source_dir)
element._stage_sources_at(element_source_dir)
# Create a tarball from the content of directory
def _create_tarball(self, directory, tar_name, compression):
if compression is None:
compression = ""
mode = _handle_compression(compression)
try:
with utils.save_file_atomic(tar_name, mode="wb") as f, tarfile.open(fileobj=f, mode=mode) as tarball:
for item in os.listdir(str(directory)):
file_to_add = os.path.join(directory, item)
tarball.add(file_to_add, arcname=item)
except OSError as e:
raise StreamError("Failed to create tar archive: {}".format(e)) from e
# Write all the build_scripts for elements in the directory location
def _write_build_scripts(self, location, elements):
for element in elements:
self._write_element_script(location, element)
self._write_master_build_script(location, elements)
# Write a master build script to the sandbox
def _write_master_build_script(self, directory, elements):
module_string = ""
for element in elements:
module_string += shlex.quote(element.normal_name) + " "
script_path = os.path.join(directory, "build.sh")
with open(_site.build_all_template, "r", encoding="utf-8") as f:
script_template = f.read()
with utils.save_file_atomic(script_path, "w") as script:
script.write(script_template.format(modules=module_string))
os.chmod(script_path, stat.S_IEXEC | stat.S_IREAD)
# _get_element_dirname()
#
# Get path to directory for an element based on its normal name.
#
# For cross-junction elements, the path will be prefixed with the name
# of the junction element.
#
# Args:
# directory (str): path to base directory
# element (Element): the element
#
# Returns:
# (str): Path to directory for this element
#
def _get_element_dirname(self, directory, element):
parts = [element.normal_name]
while element._get_project() != self._project:
element = element._get_project().junction
parts.append(element.normal_name)
return os.path.join(directory, *reversed(parts))
# _expand_and_classify_targets()
#
# Takes the user provided targets, expands any glob patterns, and
# returns a new list of targets.
#
# If valid_artifact_names is specified, then glob patterns will
# also be checked for locally existing artifact names, and the
# targets will be classified into separate lists, any targets
# which are found to be an artifact name will be returned in
# the list of artifact names.
#
# Args:
# targets: A list of targets
# valid_artifact_names: Whether artifact names are valid
#
# Returns:
# (list): element names present in the targets
# (list): artifact names present in the targets
#
def _expand_and_classify_targets(
self, targets: Iterable[str], valid_artifact_names: bool = False
) -> Tuple[List[str], List[str]]:
#
# We use dicts here instead of sets, in order to deduplicate any possibly duplicate
# entries, while also retaining the original order of element specification/discovery,
# (which we cannot do with sets).
#
element_names = {}
artifact_names = {}
element_globs = {}
artifact_globs = {}
# First sort out globs and targets
for target in targets:
if any(c in "*?[" for c in target):
if target.endswith(".bst"):
element_globs[target] = True
else:
artifact_globs[target] = True
elif target.endswith(".bst"):
element_names[target] = True
else:
artifact_names[target] = True
# Bail out in commands which don't support artifacts if any of the targets
# or globs did not end with the expected '.bst' suffix.
#
if (artifact_names or artifact_globs) and not valid_artifact_names:
raise StreamError(
"Invalid element names or element glob patterns were specified: {}".format(
", ".join(list(artifact_names) + list(artifact_globs))
),
reason="invalid-element-names",
detail="Element names and element glob expressions must end in '.bst'",
)
# Verify targets which were not classified as elements
for artifact_name in artifact_names:
try:
verify_artifact_ref(artifact_name)
except ArtifactElementError as e:
raise StreamError(
"Specified target does not appear to be an artifact or element name: {}".format(artifact_name),
reason="unrecognized-target-format",
detail="Element names and element glob expressions must end in '.bst'",
) from e
# Expand globs for elements
if element_globs:
# Bail out if an element glob is specified without providing a project directory
if not self._project:
raise StreamError(
"Element glob expressions were specified without any project directory: {}".format(
", ".join(element_globs)
),
reason="glob-elements-without-project",
)
# Collect a list of `all_elements` in the project, stripping out the leading
# project directory and element path prefix, to produce only element names.
#
all_elements = []
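# The +1 strips the path separator that follows the element path prefix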
element_path_length = len(self._project.element_path) + 1
for dirpath, _, filenames in os.walk(self._project.element_path):
for filename in filenames:
if filename.endswith(".bst"):
element_name = os.path.join(dirpath, filename)
element_name = element_name[element_path_length:]
all_elements.append(element_name)
# Glob the elements and add the results to the set
#
for glob in element_globs:
glob_results = list(utils.glob(all_elements, glob))
for element_name in glob_results:
element_names[element_name] = True
if not glob_results:
self._context.messenger.warn("No elements matched the glob expression: {}".format(glob))
# Glob the artifact names and add the results to the set
#
for glob in artifact_globs:
glob_results = self._artifacts.list_artifacts(glob=glob)
for artifact_name in glob_results:
artifact_names[artifact_name] = True
if not glob_results:
self._context.messenger.warn("No artifact names matched the glob expression: {}".format(glob))
return list(element_names), list(artifact_names)
# _handle_compression()
#
# Return the tarfile mode str to be used when creating a tarball
#
# Args:
# compression (str): The type of compression (either 'gz', 'xz' or 'bz2')
#    to_stream (bool): Whether we want to open a stream for writing
#
# Returns:
# (str): The tarfile mode string
#
def _handle_compression(compression, *, to_stream=False):
mode_prefix = "w|" if to_stream else "w:"
return mode_prefix + compression

/Catnap-0.4.5.tar.gz/Catnap-0.4.5/catnap/models.py

from __future__ import absolute_import, division, print_function, with_statement, unicode_literals
import functools
import json
import sys
import base64
import requests
import requests.auth
from .compat import *
class ParseException(Exception):
"""An exception that occurrs while parsing a test specification"""
def __init__(self, data_type, data_name, message):
"""
:arg string data_type: The type of item that was being parsed when the
error occurred - 'test' or 'testcase'
:arg string data_name: The name of the test or testcase that was being
parsed when the error occurred
:arg string message: The error message
"""
super(ParseException, self).__init__("%s %s: %s" % (data_type, data_name, message))
def _get_field(data_type, data, field_name, parser, required=False):
"""
Gets/parses a field from a test/testcase
:arg string data_type: The type of item that is being parsed - 'test' or
'testcase'
:arg dict data: The item that contains the field
:arg string field_name: The name of the field to extract
:arg function parser: The function used to parse the raw value
:arg bool required: Whether to raise a ParseException if the field is missing
"""
if field_name in data:
# Get the field value if it exists
value = data[field_name]
if parser:
# Parse the field value or throw an error
try:
value = parser(value)
except Exception as e:
data_name = data.get("name", "unknown")
raise ParseException(data_type, data_name, "Could not parse field %s: %s" % (field_name, str(e)))
return value
elif required:
# Throw an error if the field is required and does not exist
data_name = data.get("name", "unknown")
raise ParseException(data_type, data_name, "Missing required field %s" % field_name)
else:
# Return nothing if the field is not required and does not exist
return None
def _auth_config_parser(config):
"""
Parses an auth configuration. Two forms are allowed:
* 'basic user pass' - Performs HTTP basic authentication with the
specified username and password
* 'digest user pass' - Performs HTTP digest authentication with the
specified username and password
:arg string config: The auth config string
"""
parts = config.split()
if len(parts) != 3:
raise Exception("Invalid auth config. Must specify an auth method (basic or digest) followed by the auth parameters for that method.")
if parts[0] == "basic":
return requests.auth.HTTPBasicAuth(parts[1], parts[2])
elif parts[0] == "digest":
return requests.auth.HTTPDigestAuth(parts[1], parts[2])
else:
raise Exception("Unknown auth method: %s" % parts[0])
def _get_file_contents(path):
"""
Gets the contents of a specified file, ensuring that the file is properly
closed when the function exits
"""
with open(path, "r") as f:
return f.read()
class TestcaseResult(object):
"""
Model for the result of a testcase execution. This can be used in a `with`
block so that the model catches exceptions and temporarily replaces
stdout/stderr with a string buffer for output capture.
"""
def __init__(self):
self._old_stdout = None
self._old_stderr = None
self.response = None
def __enter__(self):
# Temporarily replace stdout/stderr with a string buffer
self._old_stdout = sys.stdout
self._old_stderr = sys.stderr
sys.stdout = self._captured_stdout = StringIO()
sys.stderr = self._captured_stderr = StringIO()
return self
def __exit__(self, type, value, traceback):
# Save the error
self.error_type = type
self.error = value
self.error_traceback = traceback
# Capture the stdout/stderr results
self.stdout = self._captured_stdout.getvalue()
self._captured_stdout.close()
self.stderr = self._captured_stderr.getvalue()
self._captured_stderr.close()
# Set stdout/stderr back to their old values
sys.stdout = self._old_stdout
sys.stderr = self._old_stderr
return True
@property
def failed(self):
"""Returns whether the testcase failed"""
return self.error is not None
class Testcase(object):
"""Model that represents a testcase specification"""
def __init__(self, name):
"""
Creates a new testcase
:arg string name: The name of the testcase
"""
self.name = name
@classmethod
def _choose_field(cls, testcase, data, field_type_name, **fields):
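"""Selects at most one of a set of mutually exclusive fields (raising if more than one is set) and stores the chosen field's name and parsed value on the testcase"""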
all_fields = ((field, _get_field("testcase", data, field, parser)) for (field, parser) in fields.items())
set_fields = [(field, value) for (field, value) in all_fields if value is not None]
if len(set_fields) > 1:
raise ParseException("testcase", testcase.name, "More than one %s defined" % field_type_name)
else:
setattr(testcase, "%s_type" % field_type_name, set_fields[0][0] if set_fields else None)
setattr(testcase, field_type_name, set_fields[0][1] if set_fields else None)
@classmethod
def parse(cls, data):
"""
Parses a testcase into a model
:arg dict data: The data to parse into a model
"""
# Create a shortcut for extracting a field value
field = functools.partial(_get_field, "testcase", data)
# Get the request fields
t = cls(field("name", str, required=True))
t.method = field("method", lambda m: str(m).upper()) or "GET"
t.url = field("url", str, required=True)
t.query_params = field("query_params", dict) or {}
t.headers = field("headers", dict) or {}
t.auth = field("auth", _auth_config_parser)
# Get the request body payload
cls._choose_field(t, data, "body",
body=lambda b: b,
form_body=lambda b: urllib.urlencode(dict(b)),
base64_body=lambda b: base64.b64decode(bytes(b)),
file_body=_get_file_contents
)
# Set the response fields
t.code = field("code", int)
t.response_url = field("response_url", str)
t.response_headers = field("response_headers", dict) or {}
# Set the expected response body
cls._choose_field(t, data, "response_body",
response_body=lambda b: b,
base64_response_body=lambda b: base64.b64decode(bytes(b)),
file_response_body=_get_file_contents,
json_response_body=json.loads
)
# Set the testcase-specified python executable code
create_compiler = lambda field_name: functools.partial(compile, filename="<%s field of %s>" % (field_name, t.name), mode="exec")
t.on_request = field("on_request", create_compiler("on_request"))
t.on_response = field("on_response", create_compiler("on_response"))
return t
class Test(object):
"""Model that represents a test"""
def __init__(self, name):
"""
Creates a new test
:arg string name: The name of the test
"""
self.name = name
self.testcases = []
@classmethod
def parse(cls, data):
"""
Parses a test into a model
:arg dict data: The data to parse into a model
"""
field = functools.partial(_get_field, "test", data)
test = cls(field("name", str, required=True))
for testcase_data in field("testcases", list, required=True):
test.testcases.append(Testcase.parse(testcase_data))
return test | PypiClean |
/ApiLogicServer-9.2.18-py3-none-any.whl/api_logic_server_cli/create_from_model/safrs-react-admin-npm-build/static/js/1431.d02260cd.chunk.js | "use strict";(self.webpackChunkreact_admin_upgrade=self.webpackChunkreact_admin_upgrade||[]).push([[1431],{51431:function(t,e,i){i.r(e),i.d(e,{conf:function(){return r},language:function(){return m}});var r={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\$\^\&\*\(\)\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\s]+)/g,comments:{blockComment:["{#","#}"]},brackets:[["{#","#}"],["{%","%}"],["{{","}}"],["(",")"],["[","]"],["\x3c!--","--\x3e"],["<",">"]],autoClosingPairs:[{open:"{# ",close:" #}"},{open:"{% ",close:" %}"},{open:"{{ ",close:" }}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:'"',close:'"'},{open:"'",close:"'"},{open:"<",close:">"}]},m={defaultToken:"",tokenPostfix:"",ignoreCase:!0,keywords:["apply","autoescape","block","deprecated","do","embed","extends","flush","for","from","if","import","include","macro","sandbox","set","use","verbatim","with","endapply","endautoescape","endblock","endembed","endfor","endif","endmacro","endsandbox","endset","endwith","true","false"],tokenizer:{root:[[/\s+/],[/{#/,"comment.twig","@commentState"],[/{%[-~]?/,"delimiter.twig","@blockState"],[/{{[-~]?/,"delimiter.twig","@variableState"],[/<!DOCTYPE/,"metatag.html","@doctype"],[/<!--/,"comment.html","@comment"],[/(<)((?:[\w\-]+:)?[\w\-]+)(\s*)(\/>)/,["delimiter.html","tag.html","","delimiter.html"]],[/(<)(script)/,["delimiter.html",{token:"tag.html",next:"@script"}]],[/(<)(style)/,["delimiter.html",{token:"tag.html",next:"@style"}]],[/(<)((?:[\w\-]+:)?[\w\-]+)/,["delimiter.html",{token:"tag.html",next:"@otherTag"}]],[/(<\/)((?:[\w\-]+:)?[\w\-]+)/,["delimiter.html",{token:"tag.html",next:"@otherTag"}]],[/</,"delimiter.html"],[/[^<]+/]],commentState:[[/#}/,"comment.twig","@pop"],[/./,"comment.twig"]],blockState:[[/[-~]?%}/,"delimiter.twig","@pop"],[/\s+/],[/(verbatim)(\s*)([-~]?%})/,["keyword.twig","",{token:"delimiter.twig",next:"@rawDataState"}]],{include:"expression"}],rawDataState:[[/({%[-~]?)(\s*)(endverbatim)(\s*)([-~]?%})/,["delimiter.twig","","keyword.twig","",{token:"delimiter.twig",next:"@popall"}]],[/./,"string.twig"]],variableState:[[/[-~]?}}/,"delimiter.twig","@pop"],{include:"expression"}],stringState:[[/"/,"string.twig","@pop"],[/#{\s*/,"string.twig","@interpolationState"],[/[^#"\\]*(?:(?:\\.|#(?!\{))[^#"\\]*)*/,"string.twig"]],interpolationState:[[/}/,"string.twig","@pop"],{include:"expression"}],expression:[[/\s+/],[/\+|-|\/{1,2}|%|\*{1,2}/,"operators.twig"],[/(and|or|not|b-and|b-xor|b-or)(\s+)/,["operators.twig",""]],[/==|!=|<|>|>=|<=/,"operators.twig"],[/(starts with|ends with|matches)(\s+)/,["operators.twig",""]],[/(in)(\s+)/,["operators.twig",""]],[/(is)(\s+)/,["operators.twig",""]],[/\||~|:|\.{1,2}|\?{1,2}/,"operators.twig"],[/[^\W\d][\w]*/,{cases:{"@keywords":"keyword.twig","@default":"variable.twig"}}],[/\d+(\.\d+)?/,"number.twig"],[/\(|\)|\[|\]|{|}|,/,"delimiter.twig"],[/"([^#"\\]*(?:\\.[^#"\\]*)*)"|\'([^\'\\]*(?:\\.[^\'\\]*)*)\'/,"string.twig"],[/"/,"string.twig","@stringState"],[/=>/,"operators.twig"],[/=/,"operators.twig"]],doctype:[[/[^>]+/,"metatag.content.html"],[/>/,"metatag.html","@pop"]],comment:[[/-->/,"comment.html","@pop"],[/[^-]+/,"comment.content.html"],[/./,"comment.content.html"]],otherTag:[[/\/?>/,"delimiter.html","@pop"],[/"([^"]*)"/,"attribute.value.html"],[/'([^']*)'/,"attribute.value.html"],[/[\w\-]+/,"attribute.name.html"],[/=/,"delimiter.html"],[/[ 
\t\r\n]+/]],script:[[/type/,"attribute.name.html","@scriptAfterType"],[/"([^"]*)"/,"attribute.value.html"],[/'([^']*)'/,"attribute.value.html"],[/[\w\-]+/,"attribute.name.html"],[/=/,"delimiter.html"],[/>/,{token:"delimiter.html",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/(<\/)(script\s*)(>)/,["delimiter.html","tag.html",{token:"delimiter.html",next:"@pop"}]]],scriptAfterType:[[/=/,"delimiter.html","@scriptAfterTypeEquals"],[/>/,{token:"delimiter.html",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptAfterTypeEquals:[[/"([^"]*)"/,{token:"attribute.value.html",switchTo:"@scriptWithCustomType.$1"}],[/'([^']*)'/,{token:"attribute.value.html",switchTo:"@scriptWithCustomType.$1"}],[/>/,{token:"delimiter.html",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptWithCustomType:[[/>/,{token:"delimiter.html",next:"@scriptEmbedded.$S2",nextEmbedded:"$S2"}],[/"([^"]*)"/,"attribute.value.html"],[/'([^']*)'/,"attribute.value.html"],[/[\w\-]+/,"attribute.name.html"],[/=/,"delimiter.html"],[/[ \t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptEmbedded:[[/<\/script/,{token:"@rematch",next:"@pop",nextEmbedded:"@pop"}],[/[^<]+/,""]],style:[[/type/,"attribute.name.html","@styleAfterType"],[/"([^"]*)"/,"attribute.value.html"],[/'([^']*)'/,"attribute.value.html"],[/[\w\-]+/,"attribute.name.html"],[/=/,"delimiter.html"],[/>/,{token:"delimiter.html",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/(<\/)(style\s*)(>)/,["delimiter.html","tag.html",{token:"delimiter.html",next:"@pop"}]]],styleAfterType:[[/=/,"delimiter.html","@styleAfterTypeEquals"],[/>/,{token:"delimiter.html",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleAfterTypeEquals:[[/"([^"]*)"/,{token:"attribute.value.html",switchTo:"@styleWithCustomType.$1"}],[/'([^']*)'/,{token:"attribute.value.html",switchTo:"@styleWithCustomType.$1"}],[/>/,{token:"delimiter.html",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleWithCustomType:[[/>/,{token:"delimiter.html",next:"@styleEmbedded.$S2",nextEmbedded:"$S2"}],[/"([^"]*)"/,"attribute.value.html"],[/'([^']*)'/,"attribute.value.html"],[/[\w\-]+/,"attribute.name.html"],[/=/,"delimiter.html"],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleEmbedded:[[/<\/style/,{token:"@rematch",next:"@pop",nextEmbedded:"@pop"}],[/[^<]+/,""]]}}}}]);
//# sourceMappingURL=1431.d02260cd.chunk.js.map | PypiClean |